This database contains over 40,000 documents on topics from the fields of descriptive cataloging, subject indexing, and information retrieval.
© 2015 W. Gödert, TH Köln, Institut für Informationswissenschaft / Powered by litecat, BIS Oldenburg (as of: 4 June 2021)
1. Wong, K. ; Walton, G. ; Bailey, G.: Using information science to enhance educational preventing violent extremism programs.
In: Journal of the Association for Information Science and Technology. 72(2021) no.3, S.362-376.
Abstract: Educational preventing violent extremism (EPVE) programs have had (to date) little if any theoretical underpinning. Given their proliferation in jurisdictions such as Canada, Australia, the United Kingdom, and other European countries, such an absence is notable but not unexpected given the political sensitivities attached to them. These programs remain an emerging policy area which is still "finding its feet," around which their legitimacy and efficacy are keenly debated. This paper argues for adopting theoretical principles drawn from information science research based upon information behavior models to provide a framework for the design and development of such programs and against which their efficacy can be tested. We demonstrate how this approach can be applied through thematic analysis of the theory of change models of EPVE programs implemented in England and Wales, designed to increase awareness and understanding of radicalization among young people, their carers, and professionals. This article is groundbreaking and of international significance, being the first to apply learning from information science to practice in furthering policy goals around countering radicalization and extremism in the United Kingdom and other jurisdictions.
Content: Cf. https://asistdl.onlinelibrary.wiley.com/toc/23301643/2021/72/3.
2. Ren, Y. ; Tomko, M. ; Salim, F.D. ; Ong, K. ; Sanderson, M.: Analyzing Web behavior in indoor retail spaces.
In: Journal of the Association for Information Science and Technology. 68(2017) no.1, S.62-76.
Abstract: We analyze 18 million rows of Wi-Fi access logs collected over a 1-year period from over 120,000 anonymized users at an inner city shopping mall. The anonymized data set gathered from an opt-in system provides users' approximate physical location as well as web browsing and some search history. Such data provide a unique opportunity to analyze the interaction between people's behavior in physical retail spaces and their web behavior, serving as a proxy to their information needs. We found that (a) there is a weekly periodicity in users' visits to the mall; (b) people tend to visit similar mall locations and web content during their repeated visits to the mall; (c) around 60% of registered Wi-Fi users actively browse the web, and around 10% of them use Wi-Fi for accessing web search engines; (d) people are likely to spend a relatively constant amount of time browsing the web while the duration of their visit may vary; (e) the physical spatial context has a small, but significant, influence on the web content that indoor users browse; and (f) accompanying users tend to access resources from the same web domains.
Content: Cf. http://onlinelibrary.wiley.com/doi/10.1002/asi.23587/full.
3. Long, K. ; Thompson, S. ; Potvin, S. ; Rivero, M.: The "wicked problem" of neutral description : toward a documentation approach to metadata standards.
In: Cataloging and classification quarterly. 55(2017) no.3, S.107-128.
Abstract: Increasingly, metadata standards have been recognized as constructed rather than neutral. In this article, we argue for the importance of a documentation approach to metadata standards creation as a codification of this growing recognition. By making design decisions explicit, the documentation approach dispels presumptions of neutrality and, drawing on the "wicked problems" theoretical framework, acknowledges the constructed nature of standards as "clumsy solutions."
Content: Cf. https://doi.org/10.1080/01639374.2016.1278419.
4. Khoo, C.S.G. ; Teng, T.B.-R. ; Ng, H.-C. ; Wong, K.-P.: Developing a taxonomy to support user browsing and learning in a digital heritage portal with crowd-sourced content.
In: Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik. Würzburg : Ergon Verlag, 2014. S.266-273.
(Advances in knowledge organization; vol. 14)
Abstract: A taxonomy is being developed to organize the content of a cultural heritage portal called the Singapore Memory Portal, which provides access to a collection of memory postings about Singapore's history, culture, society, life/lifestyle, and landscape/architecture. The taxonomy is divided into an upper-level taxonomy to support user browsing of topics, and a lower-level taxonomy to represent the types of information available on specific topics, to support user learning and information synthesis. The initial version of the upper-level taxonomy was developed based on potential users' expectations of the content coverage of the portal. The categories are centered on the themes of daily life/lifestyle, historically significant events, disasters and crises, festivals, a variety of cultural elements, and national issues. The lower-level taxonomy was derived from attributes and relations extracted from essays and mindmaps produced by coders after reading memory postings for a sample of topics.
Content: Cf. http://www.ergon-verlag.de/isko_ko/downloads/aiko_vol_14_2014_37.pdf.
5. Yang, P. ; Gao, W. ; Tan, Q. ; Wong, K.-F.: A link-bridged topic model for cross-domain document classification.
In: Information processing and management. 49(2013) no.6, S.1181-1193.
Abstract: Transfer learning utilizes labeled data available from some related domain (source domain) for achieving effective knowledge transfer to the target domain. However, most state-of-the-art cross-domain classification methods treat documents as plain text and ignore the hyperlink (or citation) relationship existing among the documents. In this paper, we propose a novel cross-domain document classification approach called Link-Bridged Topic model (LBT). LBT consists of two key steps. Firstly, LBT utilizes an auxiliary link network to discover the direct or indirect co-citation relationship among documents by embedding the background knowledge into a graph kernel. The mined co-citation relationship is leveraged to bridge the gap across different domains. Secondly, LBT simultaneously combines the content information and link structures into a unified latent topic model. The model is based on an assumption that the documents of source and target domains share some common topics from the point of view of both content information and link structure. By mapping both domains' data into the latent topic spaces, LBT encodes the knowledge about domain commonality and difference as the shared topics with associated differential probabilities. The learned latent topics must be consistent with the source and target data, as well as content and link statistics. Then the shared topics act as the bridge to facilitate knowledge transfer from the source to the target domains. Experiments on different types of datasets show that our algorithm significantly improves the generalization performance of cross-domain document classification.
Content: Cf. https://doi.org/10.1016/j.ipm.2013.05.002.
Topic area: Automatic classification
6. Wu, H.C. ; Luk, R.W.P. ; Wong, K.F. ; Kwok, K.L.: A retrospective study of a hybrid document-context based retrieval model.
In: Information processing and management. 43(2007) no.5, S.1308-1331.
Abstract: This paper describes our novel retrieval model that is based on contexts of query terms in documents (i.e., document contexts). Our model is novel because it explicitly takes the document contexts into account instead of implicitly using them to find query expansion terms. Our model is based on simulating a user making relevance decisions, and it is a hybrid of various existing effective models and techniques. It estimates the relevance decision preference of a document context as the log-odds and uses smoothing techniques as found in language models to solve the problem of zero probabilities. It combines these estimated preferences of document contexts using different types of aggregation operators that comply with different relevance decision principles (e.g., aggregate relevance principle). Our model is evaluated using retrospective experiments (i.e., with full relevance information), because such experiments can (a) reveal the potential of our model, (b) isolate the problems of the model from those of the parameter estimation, (c) provide information about the major factors affecting the retrieval effectiveness of the model, and (d) show whether the model obeys the probability ranking principle. Our model is promising as its mean average precision is 60-80% in our experiments using different TREC ad hoc English collections and the NTCIR-5 ad hoc Chinese collection. Our experiments showed that (a) the operators that are consistent with the aggregate relevance principle were effective in combining the estimated preferences, and (b) that estimating probabilities using the contexts in the relevant documents can produce better retrieval effectiveness than using the entire relevant documents.
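The combination the abstract describes, scoring a document context by log-odds of relevance with smoothing to avoid zero probabilities, can be illustrated with a minimal sketch. The function name, the additive (Laplace-style) smoothing constant, and the toy term counts below are assumptions for illustration, not the paper's actual estimator.

```python
import math
from collections import Counter

def logodds_preference(context_terms, rel_counts, nonrel_counts, mu=0.5):
    """Score a document context by smoothed log-odds of relevance.

    rel_counts / nonrel_counts are hypothetical term-frequency Counters
    from relevant and non-relevant training contexts; mu is an additive
    smoothing constant that keeps every probability nonzero.
    """
    rel_total = sum(rel_counts.values())
    nonrel_total = sum(nonrel_counts.values())
    vocab = len(set(rel_counts) | set(nonrel_counts)) or 1
    score = 0.0
    for t in context_terms:
        p_rel = (rel_counts[t] + mu) / (rel_total + mu * vocab)
        p_non = (nonrel_counts[t] + mu) / (nonrel_total + mu * vocab)
        score += math.log(p_rel / p_non)  # positive contributions favour relevance
    return score

# Toy data: contexts about retrieval are "relevant", others are not.
rel = Counter({"retrieval": 5, "context": 3})
non = Counter({"weather": 4, "retrieval": 1})
print(logodds_preference(["retrieval", "context"], rel, non))
```

In the full model such per-context preferences would then be combined with aggregation operators; here only the per-context log-odds step is shown.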
7. Wong, K.: Early traces of the human mind.
In: Spektrum der Wissenschaft. 2005, H.12, S.38-46.
Abstract: Our exceptional intellect, and with it our creativity, may be much older than previously thought. In Africa, humans were probably already using symbols more than 70,000 years ago.
8. Wyatt, A.M. ; Wong, K.: The University of Oklahoma Library's digitization of title pages project.
In: Cataloging and classification quarterly. 38(2004) no.1, S.55-63.
Abstract: In this article the relationship between classification/indexing and retrieval is discussed. In library and information science, classification and retrieval have always been closely associated with each other. But in certain ages, because of a lack of interest in applying knowledge, it was thought that libraries were just places for gathering and keeping books and other documents as assets. People therefore thought that classification was simply for arrangement, in order to have a kind of system for objects that they considered to be luxuries. The reason for this lies in their static view of things, including libraries. Changing attitudes and taking a dynamic view of the world of reality will change everything. Thus, if we define the library not only as a place for book collections but as a place where people fulfill their information needs, and librarianship not mainly as classification but as a discipline by which we retrieve information and receive knowledge, we may see a great change in the retrieval process.
9. Bodoff, D. ; Wu, B. ; Wong, K.Y.M.: Relevance data for language models using maximum likelihood.
In: Journal of the American Society for Information Science and Technology. 54(2003) no.11, S.1050-1061.
Abstract: We present a preliminary empirical test of a maximum likelihood approach to using relevance data for training information retrieval (IR) parameters. Similar to language models, our method uses explicitly hypothesized distributions for documents and queries, but we add to this an explicitly hypothesized distribution for relevance judgments. The method unifies document-oriented and query-oriented views. Performance is better than the Rocchio heuristic for document and/or query modification. The maximum likelihood methodology also motivates a heuristic estimate of the MLE optimization. The method can be used to test competing hypotheses regarding the processes of authors' term selection, searchers' term selection, and assessors' relevancy judgments.
10. Li, W. ; Wong, K.-F. ; Yuan, C.: Toward automatic Chinese temporal information extraction.
In: Journal of the American Society for Information Science and Technology. 52(2001) no.9, S.748-762.
Abstract: Over the past few years, temporal information processing and temporal database management have increasingly become hot topics. Nevertheless, only a few researchers have investigated these areas in the Chinese language. This lays down the objective of our research: to exploit Chinese language processing techniques for temporal information extraction and concept reasoning. In this article, we first study the mechanism for expressing time in Chinese. On the basis of the study, we then design a general frame structure for maintaining the extracted temporal concepts and propose a system for extracting time-dependent information from Hong Kong financial news. In the system, temporal knowledge is represented by different types of temporal concepts (TTC) and different temporal relations, including absolute and relative relations, which are used to correlate action times with reference times. In analyzing a sentence, the algorithm first determines the situation related to the verb. This in turn will identify the type of temporal concept associated with the verb. After that, the relevant temporal information is extracted and the temporal relations are derived. These relations link relevant concept frames together in chronological order, which in turn provide the knowledge to fulfill users' queries, e.g., for question-answering (Q&A) applications.
Topic area: Automatic indexing ; Computational linguistics
11. Goodrum, A.A. ; Rorvig, M.E. ; Jeong, K.-T. ; Suresh, C.: An open source agenda for research linking text and image content features.
In: Journal of the American Society for Information Science and Technology. 52(2001) no.11, S.948-953.
Abstract: The use of primitive content features of images for classification and retrieval has matured over the past decade. However, human beings often prefer to locate images using words. This article proposes a number of methods to utilize image primitives to support term assignment for image classification. Further, the authors propose to release code for image analysis in a common tool set for other researchers to use. Of particular interest to the authors is the expansion of work by researchers in image indexing to include image content-based feature extraction capabilities in their work.
Form covered: Images
12. Lam, W. ; Wong, K.-F. ; Wong, C.-Y.: Chinese document indexing based on new partitioned signature file : model and evaluation.
In: Journal of the American Society for Information Science and Technology. 52(2001) no.7, S.584-597.
Abstract: In this article we investigate the use of signature files in Chinese information retrieval systems and propose a new partitioning method for Chinese signature files based on the characteristics of Chinese words. Our partitioning method, called Partitioned Signature File for Chinese (PSFC), offers faster search efficiency than the traditional single signature file approach. We devise a general scheme for controlling the trade-off between the false drop and storage overhead while maintaining the search space reduction in PSFC. An analytical study is presented to support the claims of our method. We also propose two new hashing methods for Chinese signature files so that the signature file will be more suitable for a dynamic environment while the retrieval performance is maintained. Furthermore, we have implemented PSFC and the new hashing methods, and we evaluated them using a large-scale real-world Chinese document corpus, namely, the TREC-5 (Text REtrieval Conference) Chinese collection. The experimental results confirm the features of PSFC and demonstrate its superiority over the traditional single signature file method.
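The PSFC partitioning and the article's hashing schemes are not reproduced here, but the basic signature-file mechanism they build on can be sketched: each word is hashed to a few bit positions, the patterns are OR-ed (superimposed) into a fixed-width document signature, and a query word can then be ruled out quickly, at the cost of occasional false drops. The signature width, hash count, and sample words below are illustrative assumptions.

```python
import hashlib

SIG_BITS = 64   # signature width (illustrative)
HASHES = 3      # bit positions set per word (illustrative)

def word_bits(word):
    """Map a word to HASHES bit positions via hashing."""
    digest = hashlib.sha256(word.encode("utf-8")).digest()
    return {digest[i] % SIG_BITS for i in range(HASHES)}

def signature(words):
    """Superimpose (OR) the bit patterns of all words in a document."""
    sig = 0
    for w in words:
        for b in word_bits(w):
            sig |= 1 << b
    return sig

def maybe_contains(sig, word):
    """True if the signature cannot rule the word out.

    May return a false drop (false positive), never a false negative.
    """
    return all(sig & (1 << b) for b in word_bits(word))

doc = signature(["香港", "金融", "新闻"])
print(maybe_contains(doc, "金融"))   # always True for indexed words
```

A partitioned scheme like PSFC additionally groups signatures so that a query only probes a subset of the file, which is where its search-space reduction comes from.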
14. Cheng, K.-S. ; Young, G.H. ; Wong, K.-F.: A study on word-based and integral-bit Chinese text compression algorithms.
In: Journal of the American Society for Information Science. 50(1999) no.3, S.218-228.
Abstract: Experimental results show that a word-based arithmetic coding scheme can achieve a higher compression performance for Chinese text. However, an arithmetic coding scheme is a fractional-bit compression algorithm which is known to be time-consuming. In this article, we instead study how to cascade the word segmentation model with a faster alternative, an integral-bit compression algorithm. It is shown that the cascaded algorithm is more suitable for practical use.
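The cascade the abstract describes, a word segmentation model feeding an integral-bit coder, can be sketched with a plain Huffman code, which assigns whole-bit codewords (unlike fractional-bit arithmetic coding). The pre-segmented sample words are hypothetical; the article's actual segmentation model and coder are not reproduced.

```python
import heapq
from collections import Counter

def huffman_code(freqs):
    """Build a Huffman code (integral-bit) for a symbol-frequency map."""
    # Heap entries: (frequency, tiebreaker, {symbol: partial codeword}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    if len(heap) == 1:  # degenerate single-symbol case
        (_, _, code), = heap
        return {s: "0" for s in code}
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

# Hypothetical output of a word segmenter: words, not characters,
# are the symbols handed to the integral-bit coder.
words = ["香港", "金融", "新闻", "金融", "市场", "金融"]
code = huffman_code(Counter(words))
bits = sum(len(code[w]) for w in words)  # total coded length in bits
```

Coding at the word level lets frequent multi-character words receive one short codeword, which is the source of the compression gain the article reports for Chinese text.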
15. Furlong, K. ; Roberts, F.D.: If you teach it, will they learn? : Information literacy and reference services in a college library.
In: Computers in libraries. 18(1998) no.5, S.22-25.
Abstract: Describes the development, funding, and staffing of the Information Literacy Program (ILP) at the Mantor Library at the University of Maine at Farmington (UMF). The programme aims at helping both UMF students and community patrons to understand better how and where to look for information. Instruction takes place in an electronic classroom equipped with 21 computers running campus-standard Web browsers and word processing; the instructor's station can control all of the computers in the classroom, or the instructor may pass or share control with students. Discusses issues relating to campus politics, the positioning of the programme in the college experience, the necessity of teaching evaluation skills, and the programme's impact on reference services. Gives advice to other libraries considering a similar project.
Topic area: Information services ; Training
16. Day, P.A. ; Armstrong, K.L.: Librarians, faculty, and the Internet : developing a new information partnership.
In: Computers in libraries. 16(1996) no.5, S.56-58.
Abstract: Describes the work of librarians at Milner Library, Illinois State University, USA, in teaching faculty about the Internet. Two projects were executed: development of home pages for individual departments, and a demonstration of discipline-specific sources selected for teaching faculty about the Internet. Describes the progress, successes, and failures of these projects, and future plans.
17. Fong, K.Y.: Interpretive object-oriented facility which can access precompiled classes.
(US patent; no. 5,459,868)