Search (14 results, page 1 of 1)

  • author_ss:"Qin, J."
  1. Chen, M.; Liu, X.; Qin, J.: Semantic relation extraction from socially-generated tags : a methodology for metadata generation (2008) 0.08
    0.07649208 = product of:
      0.1274868 = sum of:
        0.023397226 = weight(_text_:retrieval in 2648) [ClassicSimilarity], result of:
          0.023397226 = score(doc=2648,freq=2.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.16710453 = fieldWeight in 2648, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2648)
        0.0884113 = weight(_text_:semantic in 2648) [ClassicSimilarity], result of:
          0.0884113 = score(doc=2648,freq=8.0), product of:
            0.19245663 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.04628742 = queryNorm
            0.45938298 = fieldWeight in 2648, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2648)
        0.015678266 = product of:
          0.031356532 = sum of:
            0.031356532 = weight(_text_:22 in 2648) [ClassicSimilarity], result of:
              0.031356532 = score(doc=2648,freq=2.0), product of:
                0.16209066 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04628742 = queryNorm
                0.19345059 = fieldWeight in 2648, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2648)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
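    The explain tree above is standard Lucene ClassicSimilarity (TF-IDF) output. A minimal sketch of how one weight(_text_:term) clause is computed from the numbers shown (the function name and parameters are illustrative, not Lucene's API):

    ```python
    import math

    def clause_weight(freq, doc_freq, max_docs, query_norm, field_norm):
        """One weight(_text_:term) clause, as in the tree above:
        queryWeight * fieldWeight, with tf = sqrt(freq) and
        idf = 1 + ln(maxDocs / (docFreq + 1))."""
        tf = math.sqrt(freq)
        idf = 1.0 + math.log(max_docs / (doc_freq + 1))
        query_weight = idf * query_norm
        field_weight = tf * idf * field_norm
        return query_weight * field_weight

    # The 'retrieval' clause for doc 2648:
    w = clause_weight(freq=2.0, doc_freq=5836, max_docs=44218,
                      query_norm=0.04628742, field_norm=0.0390625)
    ```

    The final document score then multiplies the summed clause weights by coord(3/5), the fraction of query clauses that matched this document.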
    
    Abstract
    The growing predominance of social semantics in the form of tagging presents the metadata community with both opportunities and challenges for leveraging this new form of information content representation for retrieval. One key challenge is the absence of contextual information associated with these tags. This paper presents an experiment working with Flickr tags as an example of utilizing social semantics sources for enriching subject metadata. The procedure included four steps: 1) collecting a sample of Flickr tags, 2) calculating co-occurrences between tags through mutual information, 3) tracing contextual information of tag pairs via Google search results, and 4) applying natural language processing and machine learning techniques to extract semantic relations between tags. The experiment helped us build a collection of context sentences from the Google search results, which was then processed by natural language processing and machine learning algorithms. This new approach achieved a reasonably good rate of accuracy in assigning semantic relations to tag pairs. The paper also explores the implications of this approach for using social semantics to enrich subject metadata.
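    Step 2 of the procedure above, scoring tag co-occurrence with mutual information, can be sketched as follows. The pointwise-MI form and the toy tag sets are assumptions for illustration, not the authors' exact formulation:

    ```python
    import math
    from collections import Counter
    from itertools import combinations

    def tag_pmi(photos):
        """Pointwise mutual information between co-occurring tags.
        `photos` is a list of per-photo tag sets, a hypothetical
        stand-in for the Flickr sample described in the abstract."""
        n = len(photos)
        tag_counts = Counter(t for p in photos for t in p)
        pair_counts = Counter(frozenset(pair)
                              for p in photos
                              for pair in combinations(sorted(p), 2))
        pmi = {}
        for pair, c in pair_counts.items():
            a, b = tuple(pair)
            # log of observed joint probability over independence baseline
            pmi[pair] = math.log((c / n) /
                                 ((tag_counts[a] / n) * (tag_counts[b] / n)))
        return pmi
    ```

    Tag pairs scoring above a threshold would then be passed on to step 3 for context retrieval.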
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  2. Qin, J.; Chen, J.: A multi-layered, multi-dimensional representation of digital educational resources (2003) 0.04
    0.041238464 = product of:
      0.10309616 = sum of:
        0.028076671 = weight(_text_:retrieval in 3818) [ClassicSimilarity], result of:
          0.028076671 = score(doc=3818,freq=2.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.20052543 = fieldWeight in 3818, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=3818)
        0.075019486 = weight(_text_:semantic in 3818) [ClassicSimilarity], result of:
          0.075019486 = score(doc=3818,freq=4.0), product of:
            0.19245663 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.04628742 = queryNorm
            0.38979942 = fieldWeight in 3818, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.046875 = fieldNorm(doc=3818)
      0.4 = coord(2/5)
    
    Abstract
    Semantic mapping between controlled vocabulary and keywords is the first step towards knowledge-based subject access. This study reports the preliminary results of a semantic mapping experiment for the Gateway to Educational Materials (GEM). A total of 3,555 keywords were mapped to 322 concept names in the GEM controlled vocabulary. A preliminary test on 10,000 metadata records produced widely varied sets of results between the mapped and non-mapped data. The paper discusses linguistic and technical problems encountered in the mapping process and raises issues in representation technologies and methods, which will lead to future study of knowledge-based access to networked information resources.
    Source
    Subject retrieval in a networked environment: Proceedings of the IFLA Satellite Meeting held in Dublin, OH, 14-16 August 2001 and sponsored by the IFLA Classification and Indexing Section, the IFLA Information Technology Section and OCLC. Ed.: I.C. McIlwaine
  3. Qin, J.; Zhou, Y.; Chau, M.; Chen, H.: Multilingual Web retrieval : an experiment in English-Chinese business intelligence (2006) 0.03
    0.032059558 = product of:
      0.08014889 = sum of:
        0.04679445 = weight(_text_:retrieval in 5054) [ClassicSimilarity], result of:
          0.04679445 = score(doc=5054,freq=8.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.33420905 = fieldWeight in 5054, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5054)
        0.03335444 = product of:
          0.06670888 = sum of:
            0.06670888 = weight(_text_:web in 5054) [ClassicSimilarity], result of:
              0.06670888 = score(doc=5054,freq=12.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.4416067 = fieldWeight in 5054, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5054)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    As increasing numbers of non-English resources have become available on the Web, the interesting and important issue of how Web users can retrieve documents in different languages has arisen. Cross-language information retrieval (CLIR), the study of retrieving information in one language by queries expressed in another language, is a promising approach to the problem. Cross-language information retrieval has attracted much attention in recent years. Most research systems have achieved satisfactory performance on standard Text REtrieval Conference (TREC) collections such as news articles, but CLIR techniques have not been widely studied and evaluated for applications such as Web portals. In this article, the authors present their research in developing and evaluating a multilingual English-Chinese Web portal that incorporates various CLIR techniques for use in the business domain. A dictionary-based approach was adopted that combines phrasal translation, co-occurrence analysis, and pre- and posttranslation query expansion. The portal was evaluated by domain experts using a set of queries in both English and Chinese. The experimental results showed that co-occurrence-based phrasal translation achieved a 74.6% improvement in precision over simple word-by-word translation. When used together, pre- and posttranslation query expansion improved the performance slightly, achieving a 78.0% improvement over the baseline word-by-word translation approach. In general, applying CLIR techniques in Web applications shows promise.
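    The co-occurrence-based disambiguation idea in the abstract, picking the dictionary candidate that co-occurs most often with the rest of the translated query, can be sketched roughly. The terms and counts below are invented for illustration; this is not the authors' system:

    ```python
    def pick_translation(candidates, context_terms, cooccur):
        """Disambiguate among dictionary translations by how often each
        candidate co-occurs with the other translated query terms in a
        target-language corpus (counts here are hypothetical)."""
        def score(cand):
            return sum(cooccur.get(frozenset({cand, ctx}), 0)
                       for ctx in context_terms)
        return max(candidates, key=score)
    ```

    Pre- and posttranslation query expansion would then add further terms around the selected translations.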
  4. Qin, J.: Discovering semantic patterns in bibliographically coupled documents (1999) 0.02
    0.024755163 = product of:
      0.12377582 = sum of:
        0.12377582 = weight(_text_:semantic in 6279) [ClassicSimilarity], result of:
          0.12377582 = score(doc=6279,freq=2.0), product of:
            0.19245663 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.04628742 = queryNorm
            0.64313614 = fieldWeight in 6279, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.109375 = fieldNorm(doc=6279)
      0.2 = coord(1/5)
    
  5. Qin, J.: Semantic patterns in bibliographically coupled documents (2002) 0.02
    0.018713146 = product of:
      0.09356573 = sum of:
        0.09356573 = weight(_text_:semantic in 4266) [ClassicSimilarity], result of:
          0.09356573 = score(doc=4266,freq=14.0), product of:
            0.19245663 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.04628742 = queryNorm
            0.4861653 = fieldWeight in 4266, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.03125 = fieldNorm(doc=4266)
      0.2 = coord(1/5)
    
    Abstract
    Different research fields have different definitions for semantic patterns. For knowledge discovery and representation, semantic patterns represent the distribution of occurrences of words in documents and/or citations. In the broadest sense, the term semantic patterns may also refer to the distribution of occurrences of subjects or topics as reflected in documents. The semantic pattern in a set of documents or a group of topics therefore implies quantitative indicators that describe the subject characteristics of the documents being examined. These characteristics are often described by frequencies of keyword occurrences, the number of co-occurring keywords, co-word occurrences, and the number of co-citations. There are many ways to analyze and derive semantic patterns in documents and citations. A typical example is text mining in full-text documents, a research topic that studies how to extract useful associations and patterns through clustering, categorizing, and summarizing words in texts. One unique way in library and information science is to discover semantic patterns through bibliographically coupled citations. The history of bibliographic coupling goes back to the early 1960s, when Kessler investigated associations among technical reports and technical information flow patterns. A number of definitions may facilitate our understanding of bibliographic coupling: (1) bibliographic coupling determines meaningful relations between papers by a study of each paper's bibliography; (2) a unit of coupling is the functional bond between papers when they share a single reference item; (3) coupling strength shows the order of combinations of units of coupling into a graded scale between groups of papers; and (4) a coupling criterion is the way by which the coupling units are combined between two or more papers.
Kessler's classic paper on bibliographic coupling between scientific papers proposes the following two graded criteria: Criterion A: a number of papers constitute a related group GA if each member of the group has at least one coupling unit to a given test paper P0. The coupling strength between P0 and any member of GA is measured by the number of coupling units n between them. GA^n is that portion of GA that is linked to P0 through n coupling units. Criterion B: a number of papers constitute a related group GB if each member of the group has at least one coupling unit to every other member of the group.
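Kessler's coupling unit and Criterion A translate directly into set operations on bibliographies. A minimal sketch, with hypothetical paper IDs and reference lists:

```python
def coupling_strength(bib_a, bib_b):
    """Coupling units between two papers: the references they share."""
    return len(set(bib_a) & set(bib_b))

def related_group_a(p0_bib, corpus):
    """Criterion A: every paper with at least one coupling unit to the
    test paper P0, keyed by its coupling strength n."""
    return {pid: coupling_strength(p0_bib, bib)
            for pid, bib in corpus.items()
            if coupling_strength(p0_bib, bib) >= 1}
```

Partitioning the result by n yields the graded groups GA^n; Criterion B additionally requires every pair within the group to share at least one reference.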
  6. Qin, J.: Semantic similarities between a keyword database and a controlled vocabulary database : an investigation in the antibiotic resistance literature (2000) 0.02
    0.01768226 = product of:
      0.0884113 = sum of:
        0.0884113 = weight(_text_:semantic in 4386) [ClassicSimilarity], result of:
          0.0884113 = score(doc=4386,freq=8.0), product of:
            0.19245663 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.04628742 = queryNorm
            0.45938298 = fieldWeight in 4386, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4386)
      0.2 = coord(1/5)
    
    Abstract
    The 'KeyWords Plus' field in the Science Citation Index database represents an approach to combining citation and semantic indexing in describing document content. This paper explores the similarities and dissimilarities between citation-semantic and analytic indexing. The dataset consisted of over 400 matching records in the SCI and MEDLINE databases on antibiotic resistance in pneumonia. The degree of similarity in indexing terms was found to vary on a scale from completely different to completely identical, with various levels in between. The within-document similarity in the two databases was measured by a variation on the Jaccard coefficient, the Inclusion Index. The average inclusion coefficient was 0.4134 for SCI and 0.3371 for MEDLINE. The 20 terms occurring most frequently in each database were identified. The two groups of terms shared the same terms that constitute the 'intellectual base' for the subject. Conceptual similarity was analyzed through scatterplots of matching and non-matching terms vs. partially identical and broader/narrower terms. The study also found that the two databases differed in assigning terms in various semantic categories. Implications of this research and further studies are suggested.
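    The Inclusion Index can be read as an asymmetric variant of Jaccard: the share of one record's index terms that also occur in the matching record from the other database. A sketch under that reading (the abstract does not give the exact formula, so this definition is an assumption):

    ```python
    def inclusion_index(own_terms, other_terms):
        """Fraction of a record's index terms also present in the matching
        record from the other database (assumed reading of the abstract's
        Inclusion Index)."""
        own = set(own_terms)
        return len(own & set(other_terms)) / len(own) if own else 0.0
    ```

    Unlike Jaccard, this measure differs depending on which database's record supplies the denominator, which is consistent with the abstract reporting separate averages for SCI and MEDLINE.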
  7. Qin, J.; Creticos, P.; Hsiao, W.Y.: Adaptive modeling of workforce domain knowledge (2006) 0.02
    0.015003897 = product of:
      0.075019486 = sum of:
        0.075019486 = weight(_text_:semantic in 2519) [ClassicSimilarity], result of:
          0.075019486 = score(doc=2519,freq=4.0), product of:
            0.19245663 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.04628742 = queryNorm
            0.38979942 = fieldWeight in 2519, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.046875 = fieldNorm(doc=2519)
      0.2 = coord(1/5)
    
    Abstract
    Workforce development is a multidisciplinary domain in which policy, laws and regulations, social services, training and education, and information technology and systems are heavily involved. It is essential to have a semantic base accepted by the workforce development community for knowledge sharing and exchange. This paper describes how such a semantic base, the Workforce Open Knowledge Exchange (WOKE) Ontology, was built using the adaptive modeling approach. The focus of this paper is to address questions such as how ontology designers should extract and model concepts obtained from different sources and what methodologies are useful along the steps of ontology development. The paper proposes a methodology framework, "adaptive modeling," and explains the methodology through examples and some lessons learned from the process of developing the WOKE ontology.
  8. Qin, J.: Evolving paradigms of knowledge representation and organization : a comparative study of classification, XML/DTD and ontology (2003) 0.01
    0.012504158 = product of:
      0.031260394 = sum of:
        0.01871778 = weight(_text_:retrieval in 2763) [ClassicSimilarity], result of:
          0.01871778 = score(doc=2763,freq=2.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.13368362 = fieldWeight in 2763, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=2763)
        0.012542613 = product of:
          0.025085226 = sum of:
            0.025085226 = weight(_text_:22 in 2763) [ClassicSimilarity], result of:
              0.025085226 = score(doc=2763,freq=2.0), product of:
                0.16209066 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04628742 = queryNorm
                0.15476047 = fieldWeight in 2763, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2763)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The different points of view on knowledge representation and organization from various research communities reflect underlying philosophies and paradigms in these communities. This paper reviews differences and relations in knowledge representation and organization and generalizes four paradigms: integrative and disintegrative pragmatism, and integrative and disintegrative epistemologism. Examples such as classification, XML schemas, and ontologies are compared based on how they specify concepts, build data models, and encode knowledge organization structures. 1. Introduction Knowledge representation (KR) is a term that several research communities use to refer to somewhat different aspects of the same research area. The artificial intelligence (AI) community considers KR as simply "something to do with writing down, in some language or communications medium, descriptions or pictures that correspond in some salient way to the world or a state of the world" (Duce & Ringland, 1988, p. 3). It emphasizes the ways in which knowledge can be encoded in a computer program (Bench-Capon, 1990). For the library and information science (LIS) community, KR is literally a synonym of knowledge organization, i.e., KR is referred to as the process of organizing knowledge into classifications, thesauri, or subject heading lists. KR has another meaning in LIS: it "encompasses every type and method of indexing, abstracting, cataloguing, classification, records management, bibliography and the creation of textual or bibliographic databases for information retrieval" (Anderson, 1996, p. 336). Adding the social dimension to knowledge organization, Hjoerland (1997) states that knowledge is a part of human activities and tied to the division of labor in society, which should be the primary organization of knowledge. Knowledge organization in LIS is secondary or derived, because knowledge is organized in learned institutions and publications.
These different points of view on KR suggest that an essential difference in the understanding of KR between AI and LIS lies in the source of representation: whether KR targets human activities or derivatives (knowledge produced) from human activities. This difference also determines their difference in purpose: in AI, KR is mainly computer-application oriented, or pragmatic, and the result of representation is used to support decisions on human activities, while in LIS, KR is conceptually oriented, or abstract, and the result of representation is used for access to derivatives from human activities.
    Date
    12. 9.2004 17:22:35
  9. Chen, H.; Chung, W.; Qin, J.; Reid, E.; Sageman, M.; Weimann, G.: Uncovering the dark Web : a case study of Jihad on the Web (2008) 0.01
    0.009804162 = product of:
      0.049020812 = sum of:
        0.049020812 = product of:
          0.098041624 = sum of:
            0.098041624 = weight(_text_:web in 1880) [ClassicSimilarity], result of:
              0.098041624 = score(doc=1880,freq=18.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.64902663 = fieldWeight in 1880, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1880)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    While the Web has become a worldwide platform for communication, terrorists share their ideology and communicate with members on the Dark Web, the reverse side of the Web used by terrorists. Currently, the problems of information overload and the difficulty of obtaining a comprehensive picture of terrorist activities hinder effective and efficient analysis of terrorist information on the Web. To improve understanding of terrorist activities, we have developed a novel methodology for collecting and analyzing Dark Web information. The methodology incorporates information collection, analysis, and visualization techniques, and exploits various Web information sources. We applied it to collecting and analyzing information from 39 Jihad Web sites and developed visualizations of their site contents, relationships, and activity levels. An expert evaluation showed that the methodology is very useful and promising, having a high potential to assist in the investigation and understanding of terrorist activities by producing results that could potentially help guide both policymaking and intelligence research.
  10. Qin, J.; Paling, S.: Converting a controlled vocabulary into an ontology : the case of GEM (2001) 0.01
    0.007525568 = product of:
      0.03762784 = sum of:
        0.03762784 = product of:
          0.07525568 = sum of:
            0.07525568 = weight(_text_:22 in 3895) [ClassicSimilarity], result of:
              0.07525568 = score(doc=3895,freq=2.0), product of:
                0.16209066 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04628742 = queryNorm
                0.46428138 = fieldWeight in 3895, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3895)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Date
    24. 8.2005 19:20:22
  11. Qin, J.; Wesley, K.: Web indexing with meta fields : a survey of Web objects in polymer chemistry (1998) 0.01
    0.0073075923 = product of:
      0.03653796 = sum of:
        0.03653796 = product of:
          0.07307592 = sum of:
            0.07307592 = weight(_text_:web in 3589) [ClassicSimilarity], result of:
              0.07307592 = score(doc=3589,freq=10.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.48375595 = fieldWeight in 3589, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3589)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    Reports the results of a study of four WWW search engines, AltaVista, Lycos, Excite, and WebCrawler, to collect data on Web objects in polymer chemistry. 1,037 Web objects were examined for data in four categories: document information, use of meta fields, use of images, and use of chemical names. Issues raised included: whether to provide metadata elements for parts of entities or whole entities only, the use of metasyntax, problems in the representation of special types of objects, and whether links should be considered when encoding metadata. Use of meta fields was not widespread in the sample, and knowledge of meta fields in HTML varied greatly among Web object creators. The study formed part of a metadata project funded by the OCLC Library and Information Science Research Grant Program.
  12. Qin, J.: Representation and organization of information in the Web space : from MARC to XML (2000) 0.01
    0.0065361084 = product of:
      0.03268054 = sum of:
        0.03268054 = product of:
          0.06536108 = sum of:
            0.06536108 = weight(_text_:web in 3918) [ClassicSimilarity], result of:
              0.06536108 = score(doc=3918,freq=2.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.43268442 = fieldWeight in 3918, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3918)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
  13. Liu, X.; Qin, J.: An interactive metadata model for structural, descriptive, and referential representation of scholarly output (2014) 0.00
    0.004679445 = product of:
      0.023397226 = sum of:
        0.023397226 = weight(_text_:retrieval in 1253) [ClassicSimilarity], result of:
          0.023397226 = score(doc=1253,freq=2.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.16710453 = fieldWeight in 1253, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1253)
      0.2 = coord(1/5)
    
    Abstract
    The scientific metadata model proposed in this article encompasses both classical descriptive metadata, such as those defined in the Dublin Core Metadata Element Set (DC), and innovative structural and referential metadata properties that go beyond the classical model. Structural metadata capture the structural vocabulary in research publications; referential metadata include not only citations but also data about other types of scholarly output based on or related to the same publication. The article describes the structural, descriptive, and referential (SDR) elements of the metadata model and explains the underlying assumptions and justifications for each major component in the model. ScholarWiki, an experimental system developed as a proof of concept, was built on the wiki platform to allow user interaction with the metadata and the editing, deleting, and adding of metadata. By allowing and encouraging scholars (both as authors and as users) to participate in the knowledge and metadata editing and enhancing process, the larger community will benefit from more accurate and effective information retrieval. The ScholarWiki system utilizes machine-learning techniques that can automatically produce self-enhanced metadata by learning from the structural metadata that scholars contribute, adding intelligence that automatically enhances and updates the metadata wiki pages.
  14. Chau, M.; Wong, C.H.; Zhou, Y.; Qin, J.; Chen, H.: Evaluating the use of search engine development tools in IT education (2010) 0.00
    0.003851439 = product of:
      0.019257195 = sum of:
        0.019257195 = product of:
          0.03851439 = sum of:
            0.03851439 = weight(_text_:web in 3325) [ClassicSimilarity], result of:
              0.03851439 = score(doc=3325,freq=4.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.25496176 = fieldWeight in 3325, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3325)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    It is important for education in computer science and information systems to keep up to date with the latest developments in technology. With the rapid development of the Internet and the Web, many schools have included Internet-related technologies, such as Web search engines and e-commerce, as part of their curricula. Previous research has shown that it is effective to use search engine development tools to facilitate students' learning. However, the effectiveness of these tools in the classroom has not been evaluated. In this article, we review the design of three search engine development tools, SpidersRUs, Greenstone, and Alkaline, followed by an evaluation study that compared the three tools in the classroom. In the study, 33 students were divided into 13 groups, and each group used the three tools to develop three independent search engines in a class project. Our evaluation results showed that SpidersRUs performed better than the other two tools in overall satisfaction and in the level of knowledge gained from the learning experience when using the tools for a class project on Internet applications development.