Search (277 results, page 1 of 14)

  • year_i:[2000 TO 2010} (Lucene range syntax: inclusive lower bound, exclusive upper bound)
  • theme_ss:"Internet"
  1. Van der Walt, M.: South African search engines, directories and portals : a survey and evaluation (2000) 0.14
    0.14464335 = product of:
      0.21696502 = sum of:
        0.1207787 = weight(_text_:search in 136) [ClassicSimilarity], result of:
          0.1207787 = score(doc=136,freq=18.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.691221 = fieldWeight in 136, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.046875 = fieldNorm(doc=136)
        0.09618632 = product of:
          0.19237264 = sum of:
            0.19237264 = weight(_text_:engines in 136) [ClassicSimilarity], result of:
              0.19237264 = score(doc=136,freq=10.0), product of:
                0.25542772 = queryWeight, product of:
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.05027291 = queryNorm
                0.75313926 = fieldWeight in 136, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.046875 = fieldNorm(doc=136)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The purpose of this paper is to identify, describe, evaluate and compare South African search engines, directories and portals. The comparative evaluation entailed analysis of six search engines by means of a checklist of desirable features, as well as a performance test by means of sample searches. The following aspects and features are covered in the checklist: database characteristics, search facilities and techniques, search results and portal services. In the performance test the local search engines were also compared with three international ones. Aardvark was rated the best local search engine judging by its performance in the sample searches, but it was outperformed by two of the international engines, Alta Vista and FAST, with regard to the total number of relevant hits retrieved. The results of the investigation will be of use to searchers in their selection of appropriate search tools and to search engine developers in the process of improving their systems.
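
The indented blocks under each hit are Lucene "explain" traces for the ClassicSimilarity (TF-IDF) scorer. A minimal sketch of that arithmetic, reproducing the figures from entry 1 above; the helper names are our own, not Lucene's API:

```python
import math

# Building blocks of Lucene's ClassicSimilarity scoring, mirroring the
# explain output above; function names are ours, not Lucene's.
def tf(freq):
    return math.sqrt(freq)

def idf(doc_freq, max_docs):
    return 1.0 + math.log(max_docs / (doc_freq + 1))

# Figures from the explain trace of entry 1 (term "search", doc 136).
query_norm = 0.05027291
idf_search = idf(3718, 44218)                    # ~3.475677
query_weight = idf_search * query_norm           # ~0.1747324 = queryWeight
field_weight = tf(18.0) * idf_search * 0.046875  # ~0.691221  = fieldWeight
print(query_weight * field_weight)               # ~0.1207787 = weight(_text_:search)

# The hit's score sums the matching clause weights and scales by the
# coordination factor coord(matching clauses / total clauses):
print((0.1207787 + 0.09618632) * (2 / 3))        # ~0.14464335, shown as 0.14
```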
  2. Drabenstott, K.M.: Web search strategies (2000) 0.14
    0.14373547 = product of:
      0.2156032 = sum of:
        0.08901726 = weight(_text_:search in 1188) [ClassicSimilarity], result of:
          0.08901726 = score(doc=1188,freq=22.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.50944906 = fieldWeight in 1188, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.03125 = fieldNorm(doc=1188)
        0.12658595 = sum of:
          0.099340804 = weight(_text_:engines in 1188) [ClassicSimilarity], result of:
            0.099340804 = score(doc=1188,freq=6.0), product of:
              0.25542772 = queryWeight, product of:
                5.080822 = idf(docFreq=746, maxDocs=44218)
                0.05027291 = queryNorm
              0.38891944 = fieldWeight in 1188, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                5.080822 = idf(docFreq=746, maxDocs=44218)
                0.03125 = fieldNorm(doc=1188)
          0.027245143 = weight(_text_:22 in 1188) [ClassicSimilarity], result of:
            0.027245143 = score(doc=1188,freq=2.0), product of:
              0.17604718 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05027291 = queryNorm
              0.15476047 = fieldWeight in 1188, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=1188)
      0.6666667 = coord(2/3)
    
    Abstract
    Surfing the World Wide Web used to be cool, dude, real cool. But things have gotten hot - so hot that finding something useful on the Web is no longer cool. It is suffocating Web searchers in the smoke and debris of mountain-sized lists of hits, decisions about which search engines they should use, whether they will get lost in the dizzying maze of a subject directory, use the right syntax for the search engine at hand, enter keywords that are likely to retrieve hits on the topics they have in mind, or enlist a browser that has sufficient functionality to display the most promising hits. When it comes to Web searching, in a few short years we have gone from the cool image of surfing the Web into the frying pan of searching the Web. We can turn down the heat by rethinking what Web searchers are doing and introduce some order into the chaos. Web search strategies that are tool-based, oriented to specific Web searching tools such as search engines, subject directories, and metasearch engines, have been widely promoted, and these strategies are just not working. It is time to dissect what Web searching tools expect from searchers and adjust our search strategies to these new tools. This discussion offers Web searchers help in the form of search strategies that are based on strategies that librarians have been using for a long time to search commercial information retrieval systems like Dialog, NEXIS, Wilsonline, FirstSearch, and Data-Star.
    Content
    "Web searching is different from searching commercial IR systems. We can learn from search strategies recommended for searching IR systems, but most won't be effective for Web searching. Web searchers need strategies that let search engines do the job they were designed to do. This article presents six new Web searching strategies that do just that."
    Date
    22. 9.1997 19:16:05
  3. Garnsey, M.R.: What distance learners should know about information retrieval on the World Wide Web (2002) 0.12
    0.12413964 = product of:
      0.18620946 = sum of:
        0.09002314 = weight(_text_:search in 1626) [ClassicSimilarity], result of:
          0.09002314 = score(doc=1626,freq=10.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.51520574 = fieldWeight in 1626, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.046875 = fieldNorm(doc=1626)
        0.09618632 = product of:
          0.19237264 = sum of:
            0.19237264 = weight(_text_:engines in 1626) [ClassicSimilarity], result of:
              0.19237264 = score(doc=1626,freq=10.0), product of:
                0.25542772 = queryWeight, product of:
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.05027291 = queryNorm
                0.75313926 = fieldWeight in 1626, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1626)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The Internet can be a valuable tool allowing distance learners to access information not available locally. Search engines are the most common means of locating relevant information on the Internet, but to use them efficiently students should be taught the basics of searching and how to evaluate the results. This article briefly reviews how search engines work, studies comparing search engines, and criteria useful in evaluating the quality of returned Web pages. Research indicates there are statistical differences in the precision of search engines, with AltaVista ranking high in several studies. When evaluating the quality of Web pages, standard criteria used in evaluating print resources are appropriate, as well as additional criteria which relate to the Web site itself. Giving distance learners training in how to use search engines and how to evaluate the results will allow them to access relevant information efficiently while ensuring that it is of adequate quality.
  4. Kim, K.-S.; Allen, B.: Cognitive and task influences on Web searching behavior (2002) 0.12
    0.11630317 = product of:
      0.17445475 = sum of:
        0.12426962 = weight(_text_:search in 199) [ClassicSimilarity], result of:
          0.12426962 = score(doc=199,freq=14.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.71119964 = fieldWeight in 199, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0546875 = fieldNorm(doc=199)
        0.05018513 = product of:
          0.10037026 = sum of:
            0.10037026 = weight(_text_:engines in 199) [ClassicSimilarity], result of:
              0.10037026 = score(doc=199,freq=2.0), product of:
                0.25542772 = queryWeight, product of:
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.05027291 = queryNorm
                0.39294976 = fieldWeight in 199, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=199)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Users' individual differences and tasks are important factors that influence the use of information systems. Two independent investigations were conducted to study the impact of differences in users' cognition and search tasks on Web search activities and outcomes. Strong task effects were found on search activities and outcomes, whereas interactions between cognitive and task variables were found on search activities only. These results imply that the flexibility of the Web and Web search engines allows different users to complete different search tasks successfully. However, the search techniques used and the efficiency of the searches appear to depend on how well the individual searcher fits with the specific task.
  5. Butler, D.: Souped-up search engines (2000) 0.11
    0.11105718 = product of:
      0.16658576 = sum of:
        0.09489272 = weight(_text_:search in 2139) [ClassicSimilarity], result of:
          0.09489272 = score(doc=2139,freq=4.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.54307455 = fieldWeight in 2139, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.078125 = fieldNorm(doc=2139)
        0.07169304 = product of:
          0.14338608 = sum of:
            0.14338608 = weight(_text_:engines in 2139) [ClassicSimilarity], result of:
              0.14338608 = score(doc=2139,freq=2.0), product of:
                0.25542772 = queryWeight, product of:
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.05027291 = queryNorm
                0.5613568 = fieldWeight in 2139, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2139)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    For scientists, finding the information they want on the WWW is a hit-and-miss affair. But, as Declan Butler reports, more sophisticated and specialized search technologies are promising to change all that.
  6. Chau, M.; Fang, X.; Rittman, C.C.: Web searching in Chinese : a study of a search engine in Hong Kong (2007) 0.11
    0.10797747 = product of:
      0.1619662 = sum of:
        0.11127157 = weight(_text_:search in 336) [ClassicSimilarity], result of:
          0.11127157 = score(doc=336,freq=22.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.6368113 = fieldWeight in 336, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=336)
        0.05069464 = product of:
          0.10138928 = sum of:
            0.10138928 = weight(_text_:engines in 336) [ClassicSimilarity], result of:
              0.10138928 = score(doc=336,freq=4.0), product of:
                0.25542772 = queryWeight, product of:
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.05027291 = queryNorm
                0.39693922 = fieldWeight in 336, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=336)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The number of non-English resources has been increasing rapidly on the Web. Although many studies have been conducted on the query logs in search engines that are primarily English-based (e.g., Excite and AltaVista), only a few of them have studied the information-seeking behavior on the Web in non-English languages. In this article, we report the analysis of the search-query logs of a search engine that focused on Chinese. Three months of search-query logs of Timway, a search engine based in Hong Kong, were collected and analyzed. Metrics on sessions, queries, search topics, and character usage are reported. N-gram analysis also has been applied to perform character-based analysis. Our analysis suggests that some characteristics identified in the search log, such as search topics and the mean number of queries per session, are similar to those in English search engines; however, other characteristics, such as the use of operators in query formulation, are significantly different. The analysis also shows that only a very small number of unique Chinese characters are used in search queries. We believe the findings from this study have provided some insights into further research in non-English Web searching.
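
The character-level n-gram analysis mentioned in the abstract reduces to counting overlapping character windows. A minimal sketch, with an invented three-query toy log standing in for the Timway data:

```python
from collections import Counter

def char_ngrams(query, n=2):
    """Overlapping character n-grams of a query string."""
    return [query[i:i + n] for i in range(len(query) - n + 1)]

# Toy stand-in for a query log; the study itself used three months of
# Timway search-query logs.
log = ["香港天氣", "天氣預報", "香港酒店"]
bigrams = Counter(g for q in log for g in char_ngrams(q, 2))
print(bigrams.most_common(3))   # e.g. [('香港', 2), ('天氣', 2), ...]
```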
  7. Lucas, W.; Topi, H.: Form and function : the impact of query term and operator usage on Web search results (2002) 0.10
    0.10452528 = product of:
      0.15678792 = sum of:
        0.10609328 = weight(_text_:search in 198) [ClassicSimilarity], result of:
          0.10609328 = score(doc=198,freq=20.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.60717577 = fieldWeight in 198, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=198)
        0.05069464 = product of:
          0.10138928 = sum of:
            0.10138928 = weight(_text_:engines in 198) [ClassicSimilarity], result of:
              0.10138928 = score(doc=198,freq=4.0), product of:
                0.25542772 = queryWeight, product of:
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.05027291 = queryNorm
                0.39693922 = fieldWeight in 198, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=198)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Conventional wisdom holds that queries to information retrieval systems will yield more relevant results if they contain multiple topic-related terms and use Boolean and phrase operators to enhance interpretation. Although studies have shown that the users of Web-based search engines typically enter short, term-based queries and rarely use search operators, little information exists concerning the effects of term and operator usage on the relevancy of search results. In this study, search engine users formulated queries on eight search topics. Each query was submitted to the user-specified search engine, and relevancy ratings for the retrieved pages were assigned. Expert-formulated queries were also submitted and provided a basis for comparing relevancy ratings across search engines. Data analysis based on our research model of the term and operator factors affecting relevancy was then conducted. The results show that the difference in the number of terms between expert and nonexpert searches, the percentage of matching terms between those searches, and the erroneous use of nonsupported operators in nonexpert searches explain most of the variation in the relevancy of search results. These findings highlight the need for designing search engine interfaces that provide greater support in the areas of term selection and operator usage.
  8. Andricik, M.: Metasearch engine for Austrian research information (2002) 0.10
    0.10155071 = product of:
      0.15232606 = sum of:
        0.08135357 = weight(_text_:search in 3600) [ClassicSimilarity], result of:
          0.08135357 = score(doc=3600,freq=6.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.46558946 = fieldWeight in 3600, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3600)
        0.070972495 = product of:
          0.14194499 = sum of:
            0.14194499 = weight(_text_:engines in 3600) [ClassicSimilarity], result of:
              0.14194499 = score(doc=3600,freq=4.0), product of:
                0.25542772 = queryWeight, product of:
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.05027291 = queryNorm
                0.5557149 = fieldWeight in 3600, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3600)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The majority of Austrian research-relevant information available on the Web these days can be indexed by Web full-text search engines. But there are still several sources of valuable information that cannot be indexed directly. One effective way of getting this information to end users is the metasearch technique. For better understanding, it is important to say that a metasearch engine does not use its own index: it collects search results provided by other search engines and builds a common hit list for end users. Our prototype provides access to five sources of research-relevant information available on the Austrian Web.
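
As the abstract notes, a metasearch engine keeps no index of its own but merges the hit lists returned by other engines. A sketch of one plausible merge step (round-robin with deduplication); the engine callables are hypothetical stand-ins, not the Andricik prototype:

```python
def metasearch(query, engines):
    """Forward `query` to every engine and build one deduplicated,
    round-robin-merged hit list. `engines` maps names to callables
    returning ranked URL lists (hypothetical stand-ins here)."""
    results = {name: engine(query) for name, engine in engines.items()}
    merged, seen = [], set()
    for rank in range(max(map(len, results.values()), default=0)):
        for hits in results.values():
            if rank < len(hits) and hits[rank] not in seen:
                seen.add(hits[rank])
                merged.append(hits[rank])
    return merged

engines = {"a": lambda q: ["u1", "u2"], "b": lambda q: ["u2", "u3"]}
print(metasearch("austrian research", engines))   # ['u1', 'u2', 'u3']
```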
  9. Sherman, C.; Price, G.: ¬The invisible Web : uncovering sources search engines can't see (2004) 0.10
    0.09615815 = product of:
      0.14423722 = sum of:
        0.06973162 = weight(_text_:search in 20) [ClassicSimilarity], result of:
          0.06973162 = score(doc=20,freq=6.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.39907667 = fieldWeight in 20, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.046875 = fieldNorm(doc=20)
        0.074505605 = product of:
          0.14901121 = sum of:
            0.14901121 = weight(_text_:engines in 20) [ClassicSimilarity], result of:
              0.14901121 = score(doc=20,freq=6.0), product of:
                0.25542772 = queryWeight, product of:
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.05027291 = queryNorm
                0.58337915 = fieldWeight in 20, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.046875 = fieldNorm(doc=20)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The paradox of the Invisible Web is that it's easy to understand why it exists, but it's very hard to actually define in concrete, specific terms. In a nutshell, the Invisible Web consists of content that's been excluded from general-purpose search engines and Web directories such as Lycos and LookSmart, and yes, even Google. There's nothing inherently "invisible" about this content. But since this content is not easily located with the information-seeking tools used by most Web users, it's effectively invisible because it's so difficult to find unless you know exactly where to look. In this paper, we define the Invisible Web and delve into the reasons search engines can't "see" its content. We also discuss the four different "types" of invisibility, ranging from the "opaque" Web, which is relatively accessible to the searcher, to the truly invisible Web, which requires specialized finding aids to access effectively.
  10. Internet searching and indexing : the subject approach (2000) 0.09
    0.088845745 = product of:
      0.13326861 = sum of:
        0.075914174 = weight(_text_:search in 1468) [ClassicSimilarity], result of:
          0.075914174 = score(doc=1468,freq=4.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.43445963 = fieldWeight in 1468, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0625 = fieldNorm(doc=1468)
        0.057354435 = product of:
          0.11470887 = sum of:
            0.11470887 = weight(_text_:engines in 1468) [ClassicSimilarity], result of:
              0.11470887 = score(doc=1468,freq=2.0), product of:
                0.25542772 = queryWeight, product of:
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.05027291 = queryNorm
                0.44908544 = fieldWeight in 1468, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1468)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This comprehensive volume offers usable information for people at all levels of Internet savvy. It can teach librarians, students, and patrons how to search the Internet more systematically. It also helps information professionals design more efficient, effective search engines and Web pages.
  11. Pu, H.-T.; Chuang, S.-L.; Yang, C.: Subject categorization of query terms for exploring Web users' search interests (2002) 0.09
    0.0871595 = product of:
      0.13073924 = sum of:
        0.09489272 = weight(_text_:search in 587) [ClassicSimilarity], result of:
          0.09489272 = score(doc=587,freq=16.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.54307455 = fieldWeight in 587, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=587)
        0.03584652 = product of:
          0.07169304 = sum of:
            0.07169304 = weight(_text_:engines in 587) [ClassicSimilarity], result of:
              0.07169304 = score(doc=587,freq=2.0), product of:
                0.25542772 = queryWeight, product of:
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.05027291 = queryNorm
                0.2806784 = fieldWeight in 587, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=587)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Subject content analysis of Web query terms is essential to understand Web searching interests. Such analysis includes exploring search topics and observing changes in their frequency distributions with time. To provide a basis for in-depth analysis of users' search interests on a larger scale, this article presents a query categorization approach to automatically classifying Web query terms into broad subject categories. Because a query is short in length and simple in structure, its intended subject(s) of search is difficult to judge. Our approach, therefore, combines the search processes of real-world search engines to obtain highly ranked Web documents based on each unknown query term. These documents are used to extract cooccurring terms and to create a feature set. An effective ranking function has also been developed to find the most appropriate categories. Three search engine logs in Taiwan were collected and tested. They contained over 5 million queries from different periods of time. The achieved performance is quite encouraging compared with that of human categorization. The experimental results demonstrate that the approach is efficient in dealing with large numbers of queries and adaptable to the dynamic Web environment. Through good integration of human and machine efforts, the frequency distributions of subject categories in response to changes in users' search interests can be systematically observed in real time. The approach has also shown potential for use in various information retrieval applications, and provides a basis for further Web searching studies.
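
The categorization pipeline this abstract describes (fetch top-ranked documents for an unknown query, extract co-occurring terms, score candidate categories) can be outlined in a few lines. This is a sketch of the pipeline's shape only; `search_top_docs` and the category profiles are hypothetical stand-ins, and the paper's actual ranking function is not reproduced:

```python
from collections import Counter

def categorize(query, search_top_docs, category_profiles, k=10):
    """Assign a broad subject category to a short query using terms
    that co-occur in its top-ranked search results."""
    docs = search_top_docs(query, k)          # hypothetical engine call
    features = Counter(w for d in docs for w in d.lower().split())
    # Crude overlap score per category; a stand-in for the paper's
    # (unreproduced) ranking function.
    scores = {cat: sum(features[t] for t in terms)
              for cat, terms in category_profiles.items()}
    return max(scores, key=scores.get)

profiles = {"weather": ["rain", "forecast"], "travel": ["hotel", "flight"]}
print(categorize("taipei rain", lambda q, k: ["rain forecast for taipei"],
                 profiles))                   # -> "weather"
```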
  12. Jepsen, E.T.; Seiden, P.; Ingwersen, P.; Björneborn, L.; Borlund, P.: Characteristics of scientific Web publications : preliminary data gathering and analysis (2004) 0.08
    0.08380928 = product of:
      0.12571391 = sum of:
        0.07501928 = weight(_text_:search in 3091) [ClassicSimilarity], result of:
          0.07501928 = score(doc=3091,freq=10.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.4293381 = fieldWeight in 3091, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3091)
        0.05069464 = product of:
          0.10138928 = sum of:
            0.10138928 = weight(_text_:engines in 3091) [ClassicSimilarity], result of:
              0.10138928 = score(doc=3091,freq=4.0), product of:
                0.25542772 = queryWeight, product of:
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.05027291 = queryNorm
                0.39693922 = fieldWeight in 3091, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3091)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Because of the increasing presence of scientific publications on the Web, combined with the existing difficulties in easily verifying and retrieving these publications, research on techniques and methods for retrieval of scientific Web publications is called for. In this article, we report on the initial steps taken toward the construction of a test collection of scientific Web publications within the subject domain of plant biology. The steps reported are those of data gathering and data analysis aiming at identifying characteristics of scientific Web publications. The data used in this article were generated based on specifically selected domain topics that are searched for in three publicly accessible search engines (Google, AllTheWeb, and AltaVista). A sample of the retrieved hits was analyzed with regard to how various publication attributes correlated with the scientific quality of the content and whether this information could be employed to harvest, filter, and rank Web publications. The attributes analyzed were inlinks, outlinks, bibliographic references, file format, language, search engine overlap, structural position (according to site structure), and the occurrence of various types of metadata. As could be expected, the ranked output differs between the three search engines. Apparently, this is caused by differences in ranking algorithms rather than the databases themselves. In fact, because scientific Web content in this subject domain receives few inlinks, both AltaVista and AllTheWeb retrieved a higher degree of accessible scientific content than Google. Because of the search engine cutoffs of accessible URLs, the feasibility of using search engine output for Web content analysis is also discussed.
  13. Lewandowski, D.; Mayr, P.: Exploring the academic invisible Web (2006) 0.08
    0.0801318 = product of:
      0.12019769 = sum of:
        0.058109686 = weight(_text_:search in 3752) [ClassicSimilarity], result of:
          0.058109686 = score(doc=3752,freq=6.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.33256388 = fieldWeight in 3752, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3752)
        0.062088005 = product of:
          0.12417601 = sum of:
            0.12417601 = weight(_text_:engines in 3752) [ClassicSimilarity], result of:
              0.12417601 = score(doc=3752,freq=6.0), product of:
                0.25542772 = queryWeight, product of:
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.05027291 = queryNorm
                0.4861493 = fieldWeight in 3752, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3752)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Purpose: To provide a critical review of Bergman's 2001 study on the Deep Web. In addition, we bring a new concept into the discussion, the Academic Invisible Web (AIW). We define the Academic Invisible Web as consisting of all databases and collections relevant to academia but not searchable by the general-purpose internet search engines. Indexing this part of the Invisible Web is central to scientific search engines. We provide an overview of approaches followed thus far. Design/methodology/approach: Discussion of measures and calculations, estimation based on informetric laws. Literature review on approaches for uncovering information from the Invisible Web. Findings: Bergman's size estimate of the Invisible Web is highly questionable. We demonstrate some major errors in the conceptual design of the Bergman paper. A new (raw) size estimate is given. Research limitations/implications: The precision of our estimate is limited due to a small sample size and lack of reliable data. Practical implications: We can show that no single library alone will be able to index the Academic Invisible Web. We suggest collaboration to accomplish this task. Originality/value: Provides library managers and those interested in developing academic search engines with data on the size and attributes of the Academic Invisible Web.
  14. Thelwall, M.: Results from a web impact factor crawler (2001) 0.08
    0.0801318 = product of:
      0.12019769 = sum of:
        0.058109686 = weight(_text_:search in 4490) [ClassicSimilarity], result of:
          0.058109686 = score(doc=4490,freq=6.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.33256388 = fieldWeight in 4490, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4490)
        0.062088005 = product of:
          0.12417601 = sum of:
            0.12417601 = weight(_text_:engines in 4490) [ClassicSimilarity], result of:
              0.12417601 = score(doc=4490,freq=6.0), product of:
                0.25542772 = queryWeight, product of:
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.05027291 = queryNorm
                0.4861493 = fieldWeight in 4490, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4490)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Web impact factors, the proposed web equivalent of impact factors for journals, can be calculated by using search engines. It has been found that the results are problematic because of the variable coverage of search engines as well as their ability to give significantly different results over short periods of time. The fundamental problem is that although some search engines provide a functionality that is capable of being used for impact calculations, this is not their primary task and therefore they do not give guarantees as to performance in this respect. In this paper, a bespoke web crawler designed specifically for the calculation of reliable WIFs is presented. This crawler was used to calculate WIFs for a number of UK universities, and the results of these calculations are discussed. The principal findings were that with certain restrictions, WIFs can be calculated reliably, but do not correlate with accepted research rankings owing to the variety of material hosted on university servers. Changes to the calculations to improve the fit of the results to research rankings are proposed, but there are still inherent problems undermining the reliability of the calculation. These problems still apply if the WIF scores are taken on their own as indicators of the general impact of any area of the Internet, but with care would not apply to online journals.
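
For reference, a Web impact factor is conventionally a ratio of the pages linking to a site to the pages hosted there; obtaining reliable counts is what the paper's bespoke crawler addresses. A minimal sketch, with hypothetical counts:

```python
def web_impact_factor(inlinking_pages, site_pages):
    """WIF in the usual sense: pages linking to a site divided by the
    pages hosted at that site. Reliable counting is the crawler's job."""
    return inlinking_pages / site_pages if site_pages else 0.0

# Hypothetical counts for illustration only.
print(web_impact_factor(inlinking_pages=1500, site_pages=12000))   # 0.125
```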
  15. Espadas, J.; Calero, C.; Piattini, M.: Web site visibility evaluation (2008) 0.08
    0.0801318 = product of:
      0.12019769 = sum of:
        0.058109686 = weight(_text_:search in 2353) [ClassicSimilarity], result of:
          0.058109686 = score(doc=2353,freq=6.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.33256388 = fieldWeight in 2353, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2353)
        0.062088005 = product of:
          0.12417601 = sum of:
            0.12417601 = weight(_text_:engines in 2353) [ClassicSimilarity], result of:
              0.12417601 = score(doc=2353,freq=6.0), product of:
                0.25542772 = queryWeight, product of:
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.05027291 = queryNorm
                0.4861493 = fieldWeight in 2353, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2353)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    In recent years, the Internet has experienced a boom as an information source. The use of search engines is the most common way of finding this information. This means that content which is less visible to search engines is increasingly difficult or even almost impossible to find. Thus, Web users are forced to accept alternative services or contents only because they are visible and offered to users by search engines. If a company's Web site is not visible, that company is losing clients. Therefore, it is fundamental to assure that one's Web site will be indexed and, consequently, visible to as many Web users as possible. To quantitatively evaluate the visibility of a Web site, this article introduces a method that Web administrators may use. The method consists of four activities and several tasks. Most of the tasks are accompanied by a set of defined measures that can help the Web administrator determine where the Web design is failing (from the positioning point of view). Some tools that can be used for the determination of the measure values also are referenced in the description of the method. The method is furthermore accompanied by examples to help in understanding how to apply it.
  16. Sherman, C.; Price, G.: ¬The invisible Web : uncovering information sources search engines can't see (2001) 0.08
    0.07852928 = product of:
      0.11779392 = sum of:
        0.06709928 = weight(_text_:search in 62) [ClassicSimilarity], result of:
          0.06709928 = score(doc=62,freq=8.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.3840117 = fieldWeight in 62, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=62)
        0.05069464 = product of:
          0.10138928 = sum of:
            0.10138928 = weight(_text_:engines in 62) [ClassicSimilarity], result of:
              0.10138928 = score(doc=62,freq=4.0), product of:
                0.25542772 = queryWeight, product of:
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.05027291 = queryNorm
                0.39693922 = fieldWeight in 62, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=62)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Enormous expanses of the Internet are unreachable with standard Web search engines. This book provides the key to finding these hidden resources by identifying how to uncover and use invisible Web resources. Mapping the invisible Web, when and how to use it, assessing the validity of the information, and the future of Web searching are topics covered in detail. Only 16 percent of Net-based information can be located using a general search engine. The other 84 percent is what is referred to as the invisible Web, made up of information stored in databases. Unlike pages on the visible Web, information in databases is generally inaccessible to the software spiders and crawlers that compile search engine indexes. As Web technology improves, more and more information is being stored in databases that feed into dynamically generated Web pages. The tips provided in this resource will ensure that those databases are exposed and Net-based research will be conducted in the most thorough and effective manner. Discusses the use of online information resources and problems caused by dynamically generated Web pages, paying special attention to information mapping, assessing the validity of information, and the future of Web searching.
  17. Chau, M.; Shiu, B.; Chan, M.; Chen, H.: Redips: backlink search and analysis on the Web for business intelligence analysis (2007) 0.08
    0.07852928 = product of:
      0.11779392 = sum of:
        0.06709928 = weight(_text_:search in 142) [ClassicSimilarity], result of:
          0.06709928 = score(doc=142,freq=8.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.3840117 = fieldWeight in 142, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=142)
        0.05069464 = product of:
          0.10138928 = sum of:
            0.10138928 = weight(_text_:engines in 142) [ClassicSimilarity], result of:
              0.10138928 = score(doc=142,freq=4.0), product of:
                0.25542772 = queryWeight, product of:
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.05027291 = queryNorm
                0.39693922 = fieldWeight in 142, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=142)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The World Wide Web presents significant opportunities for business intelligence analysis as it can provide information about a company's external environment and its stakeholders. Traditional business intelligence analysis on the Web has focused on simple keyword searching. Recently, it has been suggested that the incoming links, or backlinks, of a company's Web site (i.e., other Web pages that have a hyperlink pointing to the company of interest) can provide important insights about the company's "online communities." Although analysis of these communities can provide useful signals for a company and information about its stakeholder groups, the manual analysis process can be very time-consuming for business analysts and consultants. In this article, we present a tool called Redips that automatically integrates backlink meta-searching and text-mining techniques to facilitate users in performing such business intelligence analysis on the Web. The architectural design and implementation of the tool are presented in the article. To evaluate the effectiveness, efficiency, and user satisfaction of Redips, an experiment was conducted to compare the tool with two popular business intelligence analysis methods: using backlink search engines and manual browsing. The experiment results showed that Redips was statistically more effective than both benchmark methods (in terms of Recall and F-measure) but required more time in search tasks. In terms of user satisfaction, Redips scored statistically higher than backlink search engines in all five measures used, and also statistically higher than manual browsing in three measures.
  18. Sperber, W.; Dalitz, W.: Portale, Search Engines and Math-Net (2000) 0.08
    0.0785128 = product of:
      0.1177692 = sum of:
        0.056935627 = weight(_text_:search in 5237) [ClassicSimilarity], result of:
          0.056935627 = score(doc=5237,freq=4.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.3258447 = fieldWeight in 5237, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.046875 = fieldNorm(doc=5237)
        0.060833566 = product of:
          0.12166713 = sum of:
            0.12166713 = weight(_text_:engines in 5237) [ClassicSimilarity], result of:
              0.12166713 = score(doc=5237,freq=4.0), product of:
                0.25542772 = queryWeight, product of:
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.05027291 = queryNorm
                0.47632706 = fieldWeight in 5237, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5237)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    In Math-Net, individuals and institutions make their mathematics-relevant information available on their own Web servers, but the information is to be indexed in a uniform way. To that end there are recommendations for structuring both the servers and the documents. The local information is collected, evaluated, and indexed by automatic procedures. These indexes are the basis for the Math-Net services: search engines and portals that offer qualified and efficient access to the information in Math-Net. Unlike the universal search engines, these services cover only the part of the Web that is relevant to mathematics. Math-Net is also an information and communication system as well as a publication medium for mathematics. The development of Math-Net is carried by a broad consensus among mathematicians that access to mathematics-relevant information should be eased and improved.
  19. Warnick, W.L.; Leberman, A.; Scott, R.L.; Spence, K.J.; Johnson, L.A.; Allen, V.S.: Searching the deep Web : directed query engine applications at the Department of Energy (2001) 0.08
    0.07651012 = product of:
      0.114765175 = sum of:
        0.04025957 = weight(_text_:search in 1215) [ClassicSimilarity], result of:
          0.04025957 = score(doc=1215,freq=2.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.230407 = fieldWeight in 1215, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.046875 = fieldNorm(doc=1215)
        0.074505605 = product of:
          0.14901121 = sum of:
            0.14901121 = weight(_text_:engines in 1215) [ClassicSimilarity], result of:
              0.14901121 = score(doc=1215,freq=6.0), product of:
                0.25542772 = queryWeight, product of:
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.05027291 = queryNorm
                0.58337915 = fieldWeight in 1215, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1215)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Directed Query Engines, an emerging class of search engine specifically designed to access distributed resources on the deep web, offer the opportunity to create inexpensive digital libraries. Already, one such engine, Distributed Explorer, has been used to select and assemble high quality information resources and incorporate them into publicly available systems for the physical sciences. By nesting Directed Query Engines so that one query launches several other engines in a cascading fashion, enormous virtual collections may soon be assembled to form a comprehensive information infrastructure for the physical sciences. Once a Directed Query Engine has been configured for a set of information resources, distributed alerts tools can provide patrons with personalized, profile-based notices of recent additions to any of the selected resources. Due to the potentially enormous size and scope of Directed Query Engine applications, consideration must be given to issues surrounding the representation of large quantities of information from multiple, heterogeneous sources.
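
The nesting the abstract describes, in which one query launches engines that in turn launch further engines, amounts to a recursive fan-out. A sketch under that reading; the engine structure below is invented for illustration:

```python
def cascade(query, engine):
    """Run `query` against an engine, then recurse into any engines it
    delegates to, pooling all hits. The {"search": ..., "children": ...}
    shape is invented for illustration, not the Distributed Explorer API."""
    hits = list(engine["search"](query))
    for child in engine.get("children", []):
        hits.extend(cascade(query, child))
    return hits

# Two nested hypothetical engines.
leaf = {"search": lambda q: ["preprint-42"]}
root = {"search": lambda q: ["report-7"], "children": [leaf]}
print(cascade("neutron flux", root))   # ['report-7', 'preprint-42']
```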
  20. Müller, T.: Wort-Schnüffler : Kochrezepte kostenlos: in den USA erlaubt Amazon online das Recherchieren in Büchern (2004) 0.08
    0.07646762 = product of:
      0.11470142 = sum of:
        0.040676784 = weight(_text_:search in 4826) [ClassicSimilarity], result of:
          0.040676784 = score(doc=4826,freq=6.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.23279473 = fieldWeight in 4826, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4826)
        0.07402463 = sum of:
          0.05018513 = weight(_text_:engines in 4826) [ClassicSimilarity], result of:
            0.05018513 = score(doc=4826,freq=2.0), product of:
              0.25542772 = queryWeight, product of:
                5.080822 = idf(docFreq=746, maxDocs=44218)
                0.05027291 = queryNorm
              0.19647488 = fieldWeight in 4826, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.080822 = idf(docFreq=746, maxDocs=44218)
                0.02734375 = fieldNorm(doc=4826)
          0.0238395 = weight(_text_:22 in 4826) [ClassicSimilarity], result of:
            0.0238395 = score(doc=4826,freq=2.0), product of:
              0.17604718 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05027291 = queryNorm
              0.1354154 = fieldWeight in 4826, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.02734375 = fieldNorm(doc=4826)
      0.6666667 = coord(2/3)
    
    Content
    "Hobby cooks are thrilled, teenagers who otherwise only stick to their computer screens are suddenly interested in books, and authors are getting nervous: with a novel Internet search engine, the online bookseller Amazon.com is causing a stir in the USA and shaking up the contested market for search engines. The search engine "Search Inside the Book" (English information at http://www.amazon.com/exec/obidos/tg/browse/-/10197041/002-3913532-0581613), introduced in October, pushes into a new dimension. While most search engines so far stopped at a title being searched for, Amazon's search engine literally opens the book and allows entire works to be combed through digitally, at least those that Amazon has scanned, and that already amounts to 120,000 books with 33 million pages. If, for example, "Oliver Twist" is entered as a search term, the pages on which the hero of Charles Dickens's novel appears come up. From these pages, customers registered with a password can even browse onward and thus skim up to 20 percent of a book on screen. A new incentive to buy? Amazon leaves open whether and when the search engine will be introduced on the German market. "We do not speculate about that," said a spokeswoman for the company in Seattle. Above all, Amazon hopes the new service will be an incentive to buy. Initial figures seem to prove Jeff Bezos's company right: books covered by the search engine sold noticeably better than other works, at least in the first days after launch. So far Amazon has concluded agreements with 190 publishers and made their works electronically retrievable. Only a few companies refused to have their books scanned, out of concern about lost sales or a possible infringement of copyright. However, 15 authors demanded that the online giant exclude their books from the search engine. Amazon's invention particularly worries some non-fiction publishers. In the USA, hobby cooks, among others, are using the new search engine enthusiastically, because they can take their favorite recipes from often expensive cookbooks and then forgo the purchase. "Cookbooks are often bought for one particular recipe," Nach Waxman, the owner of a cookbook store in New York, told the "Washington Post." But once they have the recipe, they could easily say "I have everything I need," Waxman notes with concern. The same could apply to encyclopedias and other expensive non-fiction books that pupils or college students search for their assignments.
    In the meantime, the book retailer has taken some precautions. Among other things, an electronic barrier was put in place, so that the pages can now only be copied or printed by skilled computer users. With "Search Inside the Book," the online giant has taken its first step into the hotly contested search engine market. According to American media reports, Amazon is already planning another search engine to make electronic shopping easier for customers. Known under the code name A9, this search engine is intended, among other things, to compare product prices and determine the cheapest offer. With it, Amazon is pushing into a market that in the USA has so far been dominated by the online portal Yahoo and the super search engine Google. Google has already launched a counterattack. According to the trade magazine "Publishers Weekly," the company is already negotiating with publishers in order to push into the new dimension of book content as well."
    Date
    3. 5.1997 8:44:22

Languages

  • e 153
  • d 122
  • el 1
  • f 1

Types

  • a 233
  • m 33
  • s 12
  • el 11
  • b 1
  • x 1
