Search (12 results, page 1 of 1)

  • author_ss:"Qin, J."
  1. Qin, J.; Hernández, N.: Building interoperable vocabulary and structures for learning objects : an empirical study (2006) 0.01
    0.007351031 = product of:
      0.10291443 = sum of:
        0.10291443 = weight(_text_:log in 4926) [ClassicSimilarity], result of:
          0.10291443 = score(doc=4926,freq=4.0), product of:
            0.205552 = queryWeight, product of:
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.0320743 = queryNorm
            0.5006735 = fieldWeight in 4926, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4926)
      0.071428575 = coord(1/14)
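
    The indented breakdown above appears to be Lucene's ClassicSimilarity "explain" output: the listed score is the coordination factor times the sum of per-term weights, where each weight is queryWeight (idf × queryNorm) multiplied by fieldWeight (sqrt(tf) × idf × fieldNorm). A minimal sketch of that arithmetic, reproducing result 1 (the function name and argument layout are illustrative, not Lucene's API):

    import math

    def classic_similarity_score(clauses, coord):
        """Recompute a ClassicSimilarity explain tree.

        Each clause is (term_freq, idf, query_norm, field_norm); the
        score is coord times the sum of per-term weights, where
        weight = queryWeight * fieldWeight
               = (idf * query_norm) * (sqrt(tf) * idf * field_norm).
        """
        total = 0.0
        for tf, idf, query_norm, field_norm in clauses:
            query_weight = idf * query_norm                   # "queryWeight, product of:"
            field_weight = math.sqrt(tf) * idf * field_norm   # "fieldWeight in <doc>"
            total += query_weight * field_weight              # "weight(_text_:term ...)"
        return total * coord                                  # "coord(matching/total)"

    # Term "log" in doc 4926: freq=4.0, idf=6.4086204,
    # queryNorm=0.0320743, fieldNorm=0.0390625, coord(1/14).
    print(classic_similarity_score(
        [(4.0, 6.4086204, 0.0320743, 0.0390625)], coord=1 / 14))
    # -> roughly 0.0073510, matching the 0.007351031 shown above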
    
    Abstract
    The structural, functional, and production views on learning objects influence metadata structure and vocabulary. The authors drew on these views and conducted a literature review and an in-depth analysis of 14 learning objects and over 500 components in these learning objects to model the knowledge framework for a learning object ontology. The learning object ontology reported in this article consists of 8 top-level classes, 28 classes at the second level, and 34 at the third level. Except for the class Learning object, all other classes have the three properties of preferred term, related term, and synonym. To validate the ontology, we conducted a query log analysis that focused on discovering what terms users have used at both the conceptual and word levels. The findings show that the main classes in the ontology are either conceptually or linguistically similar to the top terms in the query log data. The authors built an "Exercise Editor" as an informal experiment to test how readily the ontology could be adopted in authoring tools. The main contribution of this project is the framework for the learning object domain and the methodology used to develop and validate an ontology.
  2. Qin, J.: Evolving paradigms of knowledge representation and organization : a comparative study of classification, XML/DTD and ontology (2003) 0.01
    0.006218502 = product of:
      0.04352951 = sum of:
        0.034838263 = weight(_text_:source in 2763) [ClassicSimilarity], result of:
          0.034838263 = score(doc=2763,freq=2.0), product of:
            0.15900996 = queryWeight, product of:
              4.9575505 = idf(docFreq=844, maxDocs=44218)
              0.0320743 = queryNorm
            0.21909484 = fieldWeight in 2763, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.9575505 = idf(docFreq=844, maxDocs=44218)
              0.03125 = fieldNorm(doc=2763)
        0.008691249 = product of:
          0.017382499 = sum of:
            0.017382499 = weight(_text_:22 in 2763) [ClassicSimilarity], result of:
              0.017382499 = score(doc=2763,freq=2.0), product of:
                0.11231873 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0320743 = queryNorm
                0.15476047 = fieldWeight in 2763, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2763)
          0.5 = coord(1/2)
      0.14285715 = coord(2/14)
    
    Abstract
    The different points of view on knowledge representation and organization from various research communities reflect underlying philosophies and paradigms in these communities. This paper reviews differences and relations in knowledge representation and organization and generalizes four paradigms: integrative and disintegrative pragmatism, and integrative and disintegrative epistemologism. Examples such as classification, XML schemas, and ontologies are compared based on how they specify concepts, build data models, and encode knowledge organization structures.
    1. Introduction
    Knowledge representation (KR) is a term that several research communities use to refer to somewhat different aspects of the same research area. The artificial intelligence (AI) community considers KR as simply "something to do with writing down, in some language or communications medium, descriptions or pictures that correspond in some salient way to the world or a state of the world" (Duce & Ringland, 1988, p. 3). It emphasizes the ways in which knowledge can be encoded in a computer program (Bench-Capon, 1990). For the library and information science (LIS) community, KR is literally a synonym of knowledge organization, i.e., KR is referred to as the process of organizing knowledge into classifications, thesauri, or subject heading lists. KR has another meaning in LIS: it "encompasses every type and method of indexing, abstracting, cataloguing, classification, records management, bibliography and the creation of textual or bibliographic databases for information retrieval" (Anderson, 1996, p. 336). Adding the social dimension to knowledge organization, Hjoerland (1997) states that knowledge is a part of human activities and tied to the division of labor in society, which should be the primary organization of knowledge. Knowledge organization in LIS is secondary or derived, because knowledge is organized in learned institutions and publications. These different points of view on KR suggest that an essential difference in the understanding of KR between AI and LIS lies in the source of representation: whether KR targets human activities or derivatives (knowledge produced) from human activities. This difference also determines their difference in purpose: in AI, KR is mainly computer-application oriented or pragmatic, and the result of representation is used to support decisions on human activities, while in LIS, KR is conceptually oriented or abstract, and the result of representation is used for access to derivatives from human activities.
    Date
    12. 9.2004 17:22:35
  3. Qin, J.: Controlled semantics versus social semantics : an epistemological analysis (2008) 0.01
    0.0052787946 = product of:
      0.07390312 = sum of:
        0.07390312 = weight(_text_:source in 2269) [ClassicSimilarity], result of:
          0.07390312 = score(doc=2269,freq=4.0), product of:
            0.15900996 = queryWeight, product of:
              4.9575505 = idf(docFreq=844, maxDocs=44218)
              0.0320743 = queryNorm
            0.46477038 = fieldWeight in 2269, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.9575505 = idf(docFreq=844, maxDocs=44218)
              0.046875 = fieldNorm(doc=2269)
      0.071428575 = coord(1/14)
    
    Content
    Social semantics is more than just tags or vocabularies. It involves the users who contribute the tags, their perceptions of the world, and the intentions for which the tags are created. Whilst social semantics is a valuable, massive data source for developing new knowledge systems or validating existing ones, there are also pitfalls and uncertainties. The epistemological analysis presented in this paper is an attempt to explain the differences and connections between social and controlled semantics from the perspective of knowledge theory. The epistemological connection between social and controlled semantics is particularly important: empirical knowledge can provide a data source for testing rational knowledge, and rational knowledge can provide reliability and predictability. Such a connection will have significant implications for future research on social and controlled semantics.
  4. Chen, H.; Chung, W.; Qin, J.; Reid, E.; Sageman, M.; Weimann, G.: Uncovering the dark Web : a case study of Jihad on the Web (2008) 0.00
    0.0048526246 = product of:
      0.06793674 = sum of:
        0.06793674 = weight(_text_:web in 1880) [ClassicSimilarity], result of:
          0.06793674 = score(doc=1880,freq=18.0), product of:
            0.10467481 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0320743 = queryNorm
            0.64902663 = fieldWeight in 1880, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1880)
      0.071428575 = coord(1/14)
    
    Abstract
    While the Web has become a worldwide platform for communication, terrorists share their ideology and communicate with members on the Dark Web - the reverse side of the Web used by terrorists. Currently, the problems of information overload and the difficulty of obtaining a comprehensive picture of terrorist activities hinder effective and efficient analysis of terrorist information on the Web. To improve understanding of terrorist activities, we have developed a novel methodology for collecting and analyzing Dark Web information. The methodology incorporates information collection, analysis, and visualization techniques, and exploits various Web information sources. We applied it to collecting and analyzing information from 39 Jihad Web sites and developed visualizations of their site contents, relationships, and activity levels. An expert evaluation showed that the methodology is very useful and promising, with a high potential to assist in the investigation and understanding of terrorist activities by producing results that could help guide both policymaking and intelligence research.
  5. Liu, X.; Qin, J.: ¬An interactive metadata model for structural, descriptive, and referential representation of scholarly output (2014) 0.00
    0.0043470273 = product of:
      0.060858376 = sum of:
        0.060858376 = product of:
          0.12171675 = sum of:
            0.12171675 = weight(_text_:wiki in 1253) [ClassicSimilarity], result of:
              0.12171675 = score(doc=1253,freq=4.0), product of:
                0.22354181 = queryWeight, product of:
                  6.9694996 = idf(docFreq=112, maxDocs=44218)
                  0.0320743 = queryNorm
                0.5444921 = fieldWeight in 1253, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  6.9694996 = idf(docFreq=112, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1253)
          0.5 = coord(1/2)
      0.071428575 = coord(1/14)
    
    Abstract
    The scientific metadata model proposed in this article encompasses both classical descriptive metadata, such as those defined in the Dublin Core Metadata Element Set (DC), and innovative structural and referential metadata properties that go beyond the classical model. Structural metadata capture the structural vocabulary in research publications; referential metadata include not only citations but also data about other types of scholarly output based on or related to the same publication. The article describes the structural, descriptive, and referential (SDR) elements of the metadata model and explains the underlying assumptions and justifications for each major component in the model. ScholarWiki, an experimental system developed as a proof of concept, was built on a wiki platform to allow users to interact with the metadata and to edit, delete, and add metadata. By allowing and encouraging scholars (both as authors and as users) to participate in editing and enhancing the knowledge and metadata, the larger community will benefit from more accurate and effective information retrieval. The ScholarWiki system utilizes machine-learning techniques that can automatically produce self-enhanced metadata by learning from the structural metadata that scholars contribute, thereby adding intelligence to automatically enhance and update the publication metadata Wiki pages.
  6. Qin, J.; Wesley, K.: Web indexing with meta fields : a survey of Web objects in polymer chemistry (1998) 0.00
    0.0036169332 = product of:
      0.050637063 = sum of:
        0.050637063 = weight(_text_:web in 3589) [ClassicSimilarity], result of:
          0.050637063 = score(doc=3589,freq=10.0), product of:
            0.10467481 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0320743 = queryNorm
            0.48375595 = fieldWeight in 3589, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3589)
      0.071428575 = coord(1/14)
    
    Abstract
    Reports results of a study of 4 WWW search engines - AltaVista, Lycos, Excite, and WebCrawler - to collect data on Web objects in polymer chemistry. 1,037 Web objects were examined for data in 4 categories: document information, use of meta fields, use of images, and use of chemical names. Issues raised included: whether to provide metadata elements for parts of entities or whole entities only, the use of metasyntax, problems in the representation of special types of objects, and whether links should be considered when encoding metadata. Use of meta fields was not widespread in the sample, and knowledge of meta fields in HTML varied greatly among Web object creators. The study formed part of a metadata project funded by the OCLC Library and Information Science Research Grant Program.
  7. Qin, J.; Zhou, Y.; Chau, M.; Chen, H.: Multilingual Web retrieval : an experiment in English-Chinese business intelligence (2006) 0.00
    0.0033017928 = product of:
      0.046225097 = sum of:
        0.046225097 = weight(_text_:web in 5054) [ClassicSimilarity], result of:
          0.046225097 = score(doc=5054,freq=12.0), product of:
            0.10467481 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0320743 = queryNorm
            0.4416067 = fieldWeight in 5054, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5054)
      0.071428575 = coord(1/14)
    
    Abstract
    As increasing numbers of non-English resources have become available on the Web, the interesting and important issue of how Web users can retrieve documents in different languages has arisen. Cross-language information retrieval (CLIR), the study of retrieving information in one language by queries expressed in another language, is a promising approach to the problem. Cross-language information retrieval has attracted much attention in recent years. Most research systems have achieved satisfactory performance on standard Text REtrieval Conference (TREC) collections such as news articles, but CLIR techniques have not been widely studied and evaluated for applications such as Web portals. In this article, the authors present their research in developing and evaluating a multilingual English-Chinese Web portal that incorporates various CLIR techniques for use in the business domain. A dictionary-based approach was adopted that combines phrasal translation, co-occurrence analysis, and pre- and post-translation query expansion. The portal was evaluated by domain experts, using a set of queries in both English and Chinese. The experimental results showed that co-occurrence-based phrasal translation achieved a 74.6% improvement in precision over simple word-by-word translation. When used together, pre- and post-translation query expansion improved the performance slightly, achieving a 78.0% improvement over the baseline word-by-word translation approach. In general, applying CLIR techniques in Web applications shows promise.
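
    The abstract names co-occurrence analysis as the device for choosing among dictionary translation candidates. The sketch below is only a generic illustration of that idea under simple assumptions - it is not the authors' portal, every name (disambiguate_by_cooccurrence, bilingual_dict, cooccurrence) is hypothetical, and the score is a plain pairwise co-occurrence count rather than whatever statistic the paper uses:

    from itertools import product

    def disambiguate_by_cooccurrence(source_terms, bilingual_dict, cooccurrence):
        """Pick one translation per source term so that the chosen
        target terms co-occur most often in a target-language corpus.

        bilingual_dict maps each source term to candidate translations;
        cooccurrence maps term pairs to corpus co-occurrence counts.
        """
        candidates = [bilingual_dict.get(t, [t]) for t in source_terms]

        def pair_count(a, b):
            return cooccurrence.get((a, b), 0) + cooccurrence.get((b, a), 0)

        best, best_score = None, -1
        for combo in product(*candidates):        # every combination of candidates
            score = sum(pair_count(a, b)
                        for i, a in enumerate(combo)
                        for b in combo[i + 1:])
            if score > best_score:
                best, best_score = combo, score
        return list(best)

    # Toy example (all data invented): two candidate translations per word;
    # the pair seen together more often in the corpus wins.
    print(disambiguate_by_cooccurrence(
        ["market", "analysis"],
        {"market": ["shichang", "jishi"], "analysis": ["fenxi", "jiexi"]},
        {("shichang", "fenxi"): 120, ("jishi", "jiexi"): 3}))
    # -> ['shichang', 'fenxi']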
  8. Qin, J.: Representation and organization of information in the Web space : from MARC to XML (2000) 0.00
    0.0032350828 = product of:
      0.04529116 = sum of:
        0.04529116 = weight(_text_:web in 3918) [ClassicSimilarity], result of:
          0.04529116 = score(doc=3918,freq=2.0), product of:
            0.10467481 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0320743 = queryNorm
            0.43268442 = fieldWeight in 3918, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.09375 = fieldNorm(doc=3918)
      0.071428575 = coord(1/14)
    
  9. Qin, J.; Creticos, P.; Hsiao, W.Y.: Adaptive modeling of workforce domain knowledge (2006) 0.00
    0.00307984 = product of:
      0.043117758 = sum of:
        0.043117758 = weight(_text_:open in 2519) [ClassicSimilarity], result of:
          0.043117758 = score(doc=2519,freq=2.0), product of:
            0.14443703 = queryWeight, product of:
              4.5032015 = idf(docFreq=1330, maxDocs=44218)
              0.0320743 = queryNorm
            0.2985229 = fieldWeight in 2519, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.5032015 = idf(docFreq=1330, maxDocs=44218)
              0.046875 = fieldNorm(doc=2519)
      0.071428575 = coord(1/14)
    
    Abstract
    Workforce development is a multidisciplinary domain in which policy, laws and regulations, social services, training and education, and information technology and systems are heavily involved. It is essential to have a semantic base accepted by the workforce development community for knowledge sharing and exchange. This paper describes how such a semantic base - the Workforce Open Knowledge Exchange (WOKE) Ontology - was built by using the adaptive modeling approach. The focus of this paper is to address questions such as how ontology designers should extract and model concepts obtained from different sources and what methodologies are useful along the steps of ontology development. The paper proposes a methodology framework, "adaptive modeling," and explains the methodology through examples and some lessons learned from the process of developing the WOKE ontology.
  10. Chau, M.; Wong, C.H.; Zhou, Y.; Qin, J.; Chen, H.: Evaluating the use of search engine development tools in IT education (2010) 0.00
    0.001906291 = product of:
      0.026688073 = sum of:
        0.026688073 = weight(_text_:web in 3325) [ClassicSimilarity], result of:
          0.026688073 = score(doc=3325,freq=4.0), product of:
            0.10467481 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0320743 = queryNorm
            0.25496176 = fieldWeight in 3325, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3325)
      0.071428575 = coord(1/14)
    
    Abstract
    It is important for education in computer science and information systems to keep up to date with the latest developments in technology. With the rapid development of the Internet and the Web, many schools have included Internet-related technologies, such as Web search engines and e-commerce, as part of their curricula. Previous research has shown that it is effective to use search engine development tools to facilitate students' learning; however, the effectiveness of these tools in the classroom has not been evaluated comparatively. In this article, we review the design of three search engine development tools, SpidersRUs, Greenstone, and Alkaline, followed by an evaluation study that compared the three tools in the classroom. In the study, 33 students were divided into 13 groups, and each group used the three tools to develop three independent search engines in a class project. Our evaluation results showed that SpidersRUs performed better than the other two tools in overall satisfaction and in the level of knowledge gained when the tools were used for a class project on Internet applications development.
  11. Qin, J.; Paling, S.: Converting a controlled vocabulary into an ontology : the case of GEM (2001) 0.00
    0.0018624107 = product of:
      0.026073748 = sum of:
        0.026073748 = product of:
          0.052147496 = sum of:
            0.052147496 = weight(_text_:22 in 3895) [ClassicSimilarity], result of:
              0.052147496 = score(doc=3895,freq=2.0), product of:
                0.11231873 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0320743 = queryNorm
                0.46428138 = fieldWeight in 3895, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3895)
          0.5 = coord(1/2)
      0.071428575 = coord(1/14)
    
    Date
    24. 8.2005 19:20:22
  12. Chen, M.; Liu, X.; Qin, J.: Semantic relation extraction from socially-generated tags : a methodology for metadata generation (2008) 0.00
    7.7600445E-4 = product of:
      0.010864062 = sum of:
        0.010864062 = product of:
          0.021728124 = sum of:
            0.021728124 = weight(_text_:22 in 2648) [ClassicSimilarity], result of:
              0.021728124 = score(doc=2648,freq=2.0), product of:
                0.11231873 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0320743 = queryNorm
                0.19345059 = fieldWeight in 2648, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2648)
          0.5 = coord(1/2)
      0.071428575 = coord(1/14)
    
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas