Search (14 results, page 1 of 1)

  • author_ss:"Qin, J."
  1. Qin, J.; Wesley, K.: Web indexing with meta fields : a survey of Web objects in polymer chemistry (1998) 0.02
    0.018666606 = product of:
      0.074666426 = sum of:
        0.046849765 = weight(_text_:web in 3589) [ClassicSimilarity], result of:
          0.046849765 = score(doc=3589,freq=10.0), product of:
            0.096845865 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029675366 = queryNorm
            0.48375595 = fieldWeight in 3589, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3589)
        0.027816659 = weight(_text_:data in 3589) [ClassicSimilarity], result of:
          0.027816659 = score(doc=3589,freq=4.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.29644224 = fieldWeight in 3589, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=3589)
      0.25 = coord(2/8)
    
    Abstract
    Reports results of a study of 4 WWW search engines (AltaVista, Lycos, Excite, and WebCrawler) used to collect data on Web objects in polymer chemistry. 1,037 Web objects were examined for data in 4 categories: document information, use of meta fields, use of images, and use of chemical names. Issues raised included: whether to provide metadata elements for parts of entities or whole entities only; the use of metasyntax; problems in the representation of special types of objects; and whether links should be considered when encoding metadata. Use of meta fields was not widespread in the sample, and knowledge of meta fields in HTML varied greatly among Web object creators. The study formed part of a metadata project funded by the OCLC Library and Information Science Research Grant Program
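The score breakdowns shown for these results follow Lucene's ClassicSimilarity (TF-IDF). A minimal sketch reproducing the top score from the explain tree above, plugging in the idf, queryNorm, and fieldNorm values as reported (the helper function is illustrative, not Lucene API):

```python
from math import sqrt

def term_score(freq, idf, query_norm, field_norm):
    """One clause of a Lucene ClassicSimilarity score."""
    query_weight = idf * query_norm        # queryWeight = idf * queryNorm
    tf = sqrt(freq)                        # tf(freq) = sqrt(freq)
    field_weight = tf * idf * field_norm   # fieldWeight = tf * idf * fieldNorm
    return query_weight * field_weight

QUERY_NORM = 0.029675366
web = term_score(freq=10.0, idf=3.2635105, query_norm=QUERY_NORM, field_norm=0.046875)
data = term_score(freq=4.0, idf=3.1620505, query_norm=QUERY_NORM, field_norm=0.046875)

# coord(2/8) = 0.25: two of the eight query clauses matched this document
score = 0.25 * (web + data)
print(f"{score:.9f}")  # close to the reported 0.018666606
```

The two per-term weights multiply out to the 0.046849765 and 0.027816659 shown in the tree, and the coord factor scales the sum down because only two query clauses matched.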
  2. Chen, H.; Chung, W.; Qin, J.; Reid, E.; Sageman, M.; Weimann, G.: Uncovering the dark Web : a case study of Jihad on the Web (2008) 0.01
    0.007856944 = product of:
      0.06285555 = sum of:
        0.06285555 = weight(_text_:web in 1880) [ClassicSimilarity], result of:
          0.06285555 = score(doc=1880,freq=18.0), product of:
            0.096845865 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029675366 = queryNorm
            0.64902663 = fieldWeight in 1880, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1880)
      0.125 = coord(1/8)
    
    Abstract
    While the Web has become a worldwide platform for communication, terrorists share their ideology and communicate with members on the Dark Web - the reverse side of the Web used by terrorists. Currently, the problems of information overload and the difficulty of obtaining a comprehensive picture of terrorist activities hinder effective and efficient analysis of terrorist information on the Web. To improve understanding of terrorist activities, we have developed a novel methodology for collecting and analyzing Dark Web information. The methodology incorporates information collection, analysis, and visualization techniques, and exploits various Web information sources. We applied it to collecting and analyzing information on 39 Jihad Web sites and developed visualizations of their site contents, relationships, and activity levels. An expert evaluation showed that the methodology is very useful and promising, with a high potential to assist in the investigation and understanding of terrorist activities by producing results that could help guide both policymaking and intelligence research.
  3. Qin, J.; Zhou, Y.; Chau, M.; Chen, H.: Multilingual Web retrieval : an experiment in English-Chinese business intelligence (2006) 0.01
    0.0053459727 = product of:
      0.04276778 = sum of:
        0.04276778 = weight(_text_:web in 5054) [ClassicSimilarity], result of:
          0.04276778 = score(doc=5054,freq=12.0), product of:
            0.096845865 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029675366 = queryNorm
            0.4416067 = fieldWeight in 5054, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5054)
      0.125 = coord(1/8)
    
    Abstract
    As increasing numbers of non-English resources have become available on the Web, the interesting and important issue of how Web users can retrieve documents in different languages has arisen. Cross-language information retrieval (CLIR), the study of retrieving information in one language by queries expressed in another language, is a promising approach to the problem and has attracted much attention in recent years. Most research systems have achieved satisfactory performance on standard Text REtrieval Conference (TREC) collections such as news articles, but CLIR techniques have not been widely studied and evaluated for applications such as Web portals. In this article, the authors present their research in developing and evaluating a multilingual English-Chinese Web portal that incorporates various CLIR techniques for use in the business domain. A dictionary-based approach was adopted that combines phrasal translation, co-occurrence analysis, and pre- and posttranslation query expansion. The portal was evaluated by domain experts using a set of queries in both English and Chinese. The experimental results showed that co-occurrence-based phrasal translation achieved a 74.6% improvement in precision over simple word-by-word translation. When used together, pre- and posttranslation query expansion improved performance slightly, achieving a 78.0% improvement over the baseline word-by-word translation approach. In general, applying CLIR techniques in Web applications shows promise.
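The contrast between word-by-word translation and co-occurrence-based phrasal translation can be illustrated with a toy sketch. The dictionary entries and co-occurrence counts below are entirely hypothetical; the actual portal used large bilingual lexicons and corpus statistics:

```python
from itertools import product

# Hypothetical English-Chinese dictionary: each term maps to candidate translations.
EN_ZH = {
    "business": ["商业", "生意"],
    "intelligence": ["情报", "智能"],
}

# Hypothetical co-occurrence counts from a target-language corpus.
COOCCUR = {("商业", "情报"): 120, ("商业", "智能"): 15,
           ("生意", "情报"): 8, ("生意", "智能"): 2}

def word_by_word(query):
    # Baseline: translate each term independently, taking the first sense.
    return [EN_ZH[t][0] for t in query.lower().split() if t in EN_ZH]

def cooccurrence_phrasal(query):
    # Disambiguate jointly: among all combinations of candidate translations,
    # pick the one whose terms co-occur most often in the target corpus.
    candidates = [EN_ZH[t] for t in query.lower().split() if t in EN_ZH]
    return max(product(*candidates), key=lambda c: COOCCUR.get(tuple(c), 0))

print(cooccurrence_phrasal("business intelligence"))  # ('商业', '情报')
```

The baseline picks each word's first sense in isolation, while the co-occurrence method scores whole translation combinations, which is what drove the reported precision gain over word-by-word translation.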
  4. Qin, J.: Evolving paradigms of knowledge representation and organization : a comparative study of classification, XML/DTD and ontology (2003) 0.01
    0.0052885255 = product of:
      0.021154102 = sum of:
        0.013112898 = weight(_text_:data in 2763) [ClassicSimilarity], result of:
          0.013112898 = score(doc=2763,freq=2.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.1397442 = fieldWeight in 2763, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03125 = fieldNorm(doc=2763)
        0.008041205 = product of:
          0.01608241 = sum of:
            0.01608241 = weight(_text_:22 in 2763) [ClassicSimilarity], result of:
              0.01608241 = score(doc=2763,freq=2.0), product of:
                0.103918076 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029675366 = queryNorm
                0.15476047 = fieldWeight in 2763, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2763)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    The different points of view on knowledge representation and organization from various research communities reflect underlying philosophies and paradigms in these communities. This paper reviews differences and relations in knowledge representation and organization and generalizes four paradigms: integrative and disintegrative pragmatism, and integrative and disintegrative epistemologism. Examples such as classification, XML schemas, and ontologies are compared based on how they specify concepts, build data models, and encode knowledge organization structures. 1. Introduction Knowledge representation (KR) is a term that several research communities use to refer to somewhat different aspects of the same research area. The artificial intelligence (AI) community considers KR as simply "something to do with writing down, in some language or communications medium, descriptions or pictures that correspond in some salient way to the world or a state of the world" (Duce & Ringland, 1988, p. 3). It emphasizes the ways in which knowledge can be encoded in a computer program (Bench-Capon, 1990). For the library and information science (LIS) community, KR is literally the synonym of knowledge organization, i.e., KR is referred to as the process of organizing knowledge into classifications, thesauri, or subject heading lists. KR has another meaning in LIS: it "encompasses every type and method of indexing, abstracting, cataloguing, classification, records management, bibliography and the creation of textual or bibliographic databases for information retrieval" (Anderson, 1996, p. 336). Adding the social dimension to knowledge organization, Hjoerland (1997) states that knowledge is a part of human activities and tied to the division of labor in society, which should be the primary organization of knowledge. Knowledge organization in LIS is secondary or derived, because knowledge is organized in learned institutions and publications.
    These different points of view on KR suggest that an essential difference in the understanding of KR between AI and LIS lies in the source of representation: whether KR targets human activities or derivatives (knowledge produced) from human activities. This difference also determines their difference in purpose: in AI, KR is mainly computer-application oriented, or pragmatic, and the result of representation is used to support decisions on human activities, while in LIS, KR is conceptually oriented, or abstract, and the result of representation is used for access to derivatives from human activities.
    Date
    12. 9.2004 17:22:35
  5. Qin, J.: Representation and organization of information in the Web space : from MARC to XML (2000) 0.01
    0.005237962 = product of:
      0.041903697 = sum of:
        0.041903697 = weight(_text_:web in 3918) [ClassicSimilarity], result of:
          0.041903697 = score(doc=3918,freq=2.0), product of:
            0.096845865 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029675366 = queryNorm
            0.43268442 = fieldWeight in 3918, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.09375 = fieldNorm(doc=3918)
      0.125 = coord(1/8)
    
  6. Qin, J.: Controlled semantics versus social semantics : an epistemological analysis (2008) 0.00
    0.0034770824 = product of:
      0.027816659 = sum of:
        0.027816659 = weight(_text_:data in 2269) [ClassicSimilarity], result of:
          0.027816659 = score(doc=2269,freq=4.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.29644224 = fieldWeight in 2269, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=2269)
      0.125 = coord(1/8)
    
    Content
    Social semantics is more than just tags or vocabularies. It involves the users who contribute the tags, their perceptions of the world, and the intentions for which the tags are created. Whilst social semantics is a valuable, massive data source for developing new knowledge systems or validating existing ones, there are also pitfalls and uncertainties. The epistemological analysis presented in this paper is an attempt to explain the differences and connections between social and controlled semantics from the perspective of knowledge theory. The epistemological connection between social and controlled semantics is particularly important: empirical knowledge can provide a data source for testing rational knowledge, and rational knowledge can provide reliability and predictability. Such a connection will have significant implications for future research on social and controlled semantics.
  7. Qin, J.: ¬A relation typology in knowledge organization systems : case studies in the research data management domain (2018) 0.00
    0.0032782245 = product of:
      0.026225796 = sum of:
        0.026225796 = weight(_text_:data in 4773) [ClassicSimilarity], result of:
          0.026225796 = score(doc=4773,freq=2.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.2794884 = fieldWeight in 4773, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0625 = fieldNorm(doc=4773)
      0.125 = coord(1/8)
    
  8. Chau, M.; Wong, C.H.; Zhou, Y.; Qin, J.; Chen, H.: Evaluating the use of search engine development tools in IT education (2010) 0.00
    0.003086499 = product of:
      0.024691992 = sum of:
        0.024691992 = weight(_text_:web in 3325) [ClassicSimilarity], result of:
          0.024691992 = score(doc=3325,freq=4.0), product of:
            0.096845865 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029675366 = queryNorm
            0.25496176 = fieldWeight in 3325, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3325)
      0.125 = coord(1/8)
    
    Abstract
    It is important for education in computer science and information systems to keep up to date with the latest developments in technology. With the rapid development of the Internet and the Web, many schools have included Internet-related technologies, such as Web search engines and e-commerce, as part of their curricula. Previous research has shown that it is effective to use search engine development tools to facilitate students' learning. However, the effectiveness of these tools in the classroom has not been evaluated. In this article, we review the design of three search engine development tools, SpidersRUs, Greenstone, and Alkaline, followed by an evaluation study that compared the three tools in the classroom. In the study, 33 students were divided into 13 groups, and each group used the three tools to develop three independent search engines in a class project. Our evaluation results showed that SpidersRUs performed better than the other two tools in overall satisfaction and in the level of knowledge gained when using the tools for a class project on Internet applications development.
  9. Qin, J.; Paling, S.: Converting a controlled vocabulary into an ontology : the case of GEM (2001) 0.00
    0.0030154518 = product of:
      0.024123615 = sum of:
        0.024123615 = product of:
          0.04824723 = sum of:
            0.04824723 = weight(_text_:22 in 3895) [ClassicSimilarity], result of:
              0.04824723 = score(doc=3895,freq=2.0), product of:
                0.103918076 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029675366 = queryNorm
                0.46428138 = fieldWeight in 3895, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3895)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Date
    24. 8.2005 19:20:22
  10. Qin, J.: Semantic patterns in bibliographically coupled documents (2002) 0.00
    0.0026096138 = product of:
      0.02087691 = sum of:
        0.02087691 = product of:
          0.04175382 = sum of:
            0.04175382 = weight(_text_:mining in 4266) [ClassicSimilarity], result of:
              0.04175382 = score(doc=4266,freq=2.0), product of:
                0.16744171 = queryWeight, product of:
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.029675366 = queryNorm
                0.24936332 = fieldWeight in 4266, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4266)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Abstract
    Different research fields have different definitions for semantic patterns. For knowledge discovery and representation, semantic patterns represent the distribution of occurrences of words in documents and/or citations. In the broadest sense, the term semantic patterns may also refer to the distribution of occurrences of subjects or topics as reflected in documents. The semantic pattern in a set of documents or a group of topics therefore implies quantitative indicators that describe the subject characteristics of the documents being examined. These characteristics are often described by frequencies of keyword occurrences, numbers of co-occurring keywords, co-word occurrences, and numbers of cocitations. There are many ways to analyze and derive semantic patterns in documents and citations. A typical example is text mining in full-text documents, a research topic that studies how to extract useful associations and patterns through clustering, categorizing, and summarizing words in texts. One unique way in library and information science is to discover semantic patterns through bibliographically coupled citations. The history of bibliographic coupling goes back to the early 1960s, when Kessler investigated associations among technical reports and technical information flow patterns. A number of definitions may facilitate our understanding of bibliographic coupling: (1) bibliographic coupling determines meaningful relations between papers by a study of each paper's bibliography; (2) a unit of coupling is the functional bond between papers when they share a single reference item; (3) coupling strength shows the order of combinations of units of coupling into a graded scale between groups of papers; and (4) a coupling criterion is the way by which the coupling units are combined between two or more papers.
    Kessler's classic paper on bibliographic coupling between scientific papers proposes the following two graded criteria: Criterion A: a number of papers constitute a related group G_A if each member of the group has at least one coupling unit to a given test paper P_0. The coupling strength between P_0 and any member of G_A is measured by the number of coupling units n between them. G_A^n is that portion of G_A that is linked to P_0 through n coupling units. Criterion B: a number of papers constitute a related group G_B if each member of the group has at least one coupling unit to every other member of the group.
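Kessler's definitions translate directly into set operations; a minimal sketch with hypothetical paper IDs and reference lists:

```python
def coupling_strength(bib_a, bib_b):
    # A coupling unit is one reference item shared by two papers' bibliographies;
    # coupling strength n is the number of such shared items.
    return len(set(bib_a) & set(bib_b))

def related_group_A(p0, bibliographies, n=1):
    # Criterion A: papers with at least n coupling units to the test paper P0.
    return {p for p, bib in bibliographies.items()
            if p != p0 and coupling_strength(bibliographies[p0], bib) >= n}

# Hypothetical bibliographies keyed by paper ID.
bibs = {
    "P0": ["r1", "r2", "r3"],
    "P1": ["r1", "r4"],        # shares r1 with P0
    "P2": ["r2", "r3", "r5"],  # shares r2 and r3 with P0
    "P3": ["r6"],              # no coupling to P0
}
print(sorted(related_group_A("P0", bibs)))       # ['P1', 'P2']
print(sorted(related_group_A("P0", bibs, n=2)))  # ['P2']
```

Raising n walks down Kessler's graded scale, keeping only the more strongly coupled portion of the group.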
  11. Qin, J.; Chen, J.: ¬A multi-layered, multi-dimensional representation of digital educational resources (2003) 0.00
    0.0024586683 = product of:
      0.019669347 = sum of:
        0.019669347 = weight(_text_:data in 3818) [ClassicSimilarity], result of:
          0.019669347 = score(doc=3818,freq=2.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.2096163 = fieldWeight in 3818, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=3818)
      0.125 = coord(1/8)
    
    Abstract
    Semantic mapping between a controlled vocabulary and keywords is the first step towards knowledge-based subject access. This study reports the preliminary results of a semantic mapping experiment for the Gateway to Educational Materials (GEM). A total of 3,555 keywords were mapped to 322 concept names in the GEM controlled vocabulary. A preliminary test on 10,000 metadata records showed widely varying results between the mapped and non-mapped data. The paper discusses linguistic and technical problems encountered in the mapping process and raises issues in representation technologies and methods that will lead to future study of knowledge-based access to networked information resources.
  12. Qin, J.; Hernández, N.: Building interoperable vocabulary and structures for learning objects : an empirical study (2006) 0.00
    0.0020488903 = product of:
      0.016391123 = sum of:
        0.016391123 = weight(_text_:data in 4926) [ClassicSimilarity], result of:
          0.016391123 = score(doc=4926,freq=2.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.17468026 = fieldWeight in 4926, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4926)
      0.125 = coord(1/8)
    
    Abstract
    The structural, functional, and production views on learning objects influence metadata structure and vocabulary. The authors drew on these views and conducted a literature review and an in-depth analysis of 14 learning objects and over 500 components in these learning objects to model the knowledge framework for a learning object ontology. The learning object ontology reported in this article consists of 8 top-level classes, 28 classes at the second level, and 34 at the third level. Except for the class Learning object, all classes have the three properties of preferred term, related term, and synonym. To validate the ontology, we conducted a query log analysis focused on discovering what terms users have used at both the conceptual and word levels. The findings show that the main classes in the ontology are either conceptually or linguistically similar to the top terms in the query log data. The authors built an "Exercise Editor" as an informal experiment to test the ontology's adoptability in authoring tools. The main contribution of this project is the framework for the learning object domain and the methodology used to develop and validate an ontology.
  13. Liu, X.; Qin, J.: ¬An interactive metadata model for structural, descriptive, and referential representation of scholarly output (2014) 0.00
    0.0020488903 = product of:
      0.016391123 = sum of:
        0.016391123 = weight(_text_:data in 1253) [ClassicSimilarity], result of:
          0.016391123 = score(doc=1253,freq=2.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.17468026 = fieldWeight in 1253, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1253)
      0.125 = coord(1/8)
    
    Abstract
    The scientific metadata model proposed in this article encompasses both classical descriptive metadata, such as those defined in the Dublin Core Metadata Element Set (DC), and innovative structural and referential metadata properties that go beyond the classical model. Structural metadata capture the structural vocabulary in research publications; referential metadata include not only citations but also data about other types of scholarly output that are based on or related to the same publication. The article describes the structural, descriptive, and referential (SDR) elements of the metadata model and explains the underlying assumptions and justifications for each major component in the model. ScholarWiki, an experimental system developed as a proof of concept, was built on the wiki platform to allow user interaction with the metadata and the editing, deleting, and adding of metadata. By allowing and encouraging scholars (both as authors and as users) to participate in the knowledge and metadata editing and enhancement process, the larger community will benefit from more accurate and effective information retrieval. The ScholarWiki system utilizes machine-learning techniques that can automatically produce self-enhanced metadata by learning from the structural metadata that scholars contribute, adding intelligence that automatically enhances and updates the publication metadata Wiki pages.
  14. Chen, M.; Liu, X.; Qin, J.: Semantic relation extraction from socially-generated tags : a methodology for metadata generation (2008) 0.00
    0.0012564383 = product of:
      0.010051507 = sum of:
        0.010051507 = product of:
          0.020103013 = sum of:
            0.020103013 = weight(_text_:22 in 2648) [ClassicSimilarity], result of:
              0.020103013 = score(doc=2648,freq=2.0), product of:
                0.103918076 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029675366 = queryNorm
                0.19345059 = fieldWeight in 2648, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2648)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas