Search (255 results, page 12 of 13)

  • language_ss:"e"
  • theme_ss:"Literaturübersicht"
  1. Downie, J.S.: Music information retrieval (2002) 0.00
    0.0044597755 = product of:
      0.017839102 = sum of:
        0.017839102 = weight(_text_:information in 4287) [ClassicSimilarity], result of:
          0.017839102 = score(doc=4287,freq=6.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.20156369 = fieldWeight in 4287, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=4287)
      0.25 = coord(1/4)
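    Note: the indented tree above is Lucene ClassicSimilarity "explain" output, and the same pattern repeats for every hit below. Its arithmetic is tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)), fieldWeight = tf * idf * fieldNorm, queryWeight = idf * queryNorm, and score = coord * queryWeight * fieldWeight. A minimal Python sketch reproducing the numbers for this first hit (the variable names mirror the tree; they are illustrative, not Lucene's actual API):

      import math

      # Values copied from the explain tree above (doc 4287, term "information").
      freq, docFreq, maxDocs = 6.0, 20772, 44218
      queryNorm, fieldNorm, coord = 0.050415643, 0.046875, 0.25  # coord = 1 of 4 terms matched

      tf  = math.sqrt(freq)                          # 2.4494898
      idf = 1.0 + math.log(maxDocs / (docFreq + 1))  # 1.7554779

      queryWeight = idf * queryNorm                  # 0.08850355
      fieldWeight = tf * idf * fieldNorm             # 0.20156369
      print(coord * queryWeight * fieldWeight)       # 0.0044597755, as shown above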
    
    Abstract
    Imagine a world where you walk up to a computer and sing the song fragment that has been plaguing you since breakfast. The computer accepts your off-key singing, corrects your request, and promptly suggests to you that "Camptown Races" is the cause of your irritation. You confirm the computer's suggestion by listening to one of the many MP3 files it has found. Satisfied, you kindly decline the offer to retrieve all extant versions of the song, including a recently released Italian rap rendition and an orchestral score featuring a bagpipe duet. Does such a system exist today? No. Will it in the future? Yes. Will such a system be easy to produce? Most decidedly not. Myriad difficulties remain to be overcome before the creation, deployment, and evaluation of robust, large-scale, and content-based Music Information Retrieval (MIR) systems become reality. The dizzyingly complex interaction of music's pitch, temporal, harmonic, timbral, editorial, textual, and bibliographic "facets," for example, demonstrates just one of MIR's perplexing problems. The choice of music representation, whether symbol-based, audio-based, or both, further compounds matters, as each choice determines bandwidth, computation, storage, retrieval, and interface requirements and capabilities.
    Source
    Annual review of information science and technology. 37(2003), S.295-342
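    The query-by-humming scenario in the abstract above is classically approximated by matching pitch contours rather than exact pitches, e.g. with the Parsons code (up/down/repeat steps), which survives off-key singing. A minimal sketch, assuming a toy index of tunes; the pitch sequences are invented approximations, not real transcriptions:

      def parsons(pitches):
          """Reduce a pitch sequence to up/down/repeat contour steps."""
          return "".join(
              "u" if b > a else "d" if b < a else "r"
              for a, b in zip(pitches, pitches[1:])
          )

      # Toy index of known tunes as MIDI-style pitch sequences (invented).
      tunes = {
          "Camptown Races (fragment)": [67, 67, 64, 67, 64, 62, 60],
          "Ascending scale":           [60, 62, 64, 65, 67, 69, 71],
      }

      def match(sung):
          """Return tunes whose contour contains the sung fragment's contour."""
          query = parsons(sung)
          return [name for name, p in tunes.items() if query in parsons(p)]

      # Off-key singing shifts the pitches but preserves the contour.
      print(match([66, 66, 62, 66, 62]))  # ['Camptown Races (fragment)']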
  2. Borgman, C.L.; Furner, J.: Scholarly communication and bibliometrics (2002) 0.00
    0.0044597755 = product of:
      0.017839102 = sum of:
        0.017839102 = weight(_text_:information in 4291) [ClassicSimilarity], result of:
          0.017839102 = score(doc=4291,freq=6.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.20156369 = fieldWeight in 4291, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=4291)
      0.25 = coord(1/4)
    
    Abstract
    Why devote an ARIST chapter to scholarly communication and bibliometrics, and why now? Bibliometrics already is a frequently covered ARIST topic, with chapters such as that by White and McCain (1989) on bibliometrics generally, White and McCain (1997) on visualization of literatures, Wilson and Hood (2001) on informetric laws, and Tabah (2001) on literature dynamics. Similarly, scholarly communication has been addressed in other ARIST chapters such as Bishop and Star (1996) on social informatics and digital libraries, Schamber (1994) on relevance and information behavior, and many earlier chapters on information needs and uses. More than a decade ago, the first author addressed the intersection of scholarly communication and bibliometrics with a journal special issue and an edited book (Borgman, 1990; Borgman & Paisley, 1989), and she recently examined interim developments (Borgman, 2000a, 2000c). This review covers the decade (1990-2000) since the comprehensive 1990 volume, citing earlier works only when necessary to explain the foundation for recent developments.
    Source
    Annual review of information science and technology. 36(2002), S.3-72
  3. Benoit, G.: Data mining (2002) 0.00
    0.0044597755 = product of:
      0.017839102 = sum of:
        0.017839102 = weight(_text_:information in 4296) [ClassicSimilarity], result of:
          0.017839102 = score(doc=4296,freq=6.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.20156369 = fieldWeight in 4296, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=4296)
      0.25 = coord(1/4)
    
    Abstract
    Data mining (DM) is a multistaged process of extracting previously unanticipated knowledge from large databases, and applying the results to decision making. Data mining tools detect patterns from the data and infer associations and rules from them. The extracted information may then be applied to prediction or classification models by identifying relations within the data records or between databases. Those patterns and rules can then guide decision making and forecast the effects of those decisions. However, this definition may be applied equally to "knowledge discovery in databases" (KDD). Indeed, in the recent literature of DM and KDD, a source of confusion has emerged, making it difficult to determine the exact parameters of both. KDD is sometimes viewed as the broader discipline, of which data mining is merely a component, specifically pattern extraction, evaluation, and cleansing methods (Raghavan, Deogun, & Sever, 1998, p. 397). Thuraisingham (1999, p. 2) remarked that "knowledge discovery," "pattern discovery," "data dredging," "information extraction," and "knowledge mining" are all employed as synonyms for DM. Trybula, in his ARIST chapter on text mining, observed that the "existing work [in KDD] is confusing because the terminology is inconsistent and poorly defined."
    Source
    Annual review of information science and technology. 36(2002), S.265-312
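    The "associations and rules" inferred by data mining tools, as described in the abstract above, are conventionally scored by support and confidence; a minimal sketch over an invented toy transaction database:

      from itertools import combinations

      # Toy transaction database (invented for illustration).
      transactions = [
          {"bread", "milk"},
          {"bread", "butter", "milk"},
          {"bread", "butter"},
          {"milk", "butter"},
      ]

      def support(itemset):
          """Fraction of transactions containing every item in the set."""
          return sum(itemset <= t for t in transactions) / len(transactions)

      # Rule x -> y: confidence = support({x, y}) / support({x}).
      for x, y in combinations(["bread", "milk", "butter"], 2):
          s = support({x, y})
          if s >= 0.5:  # minimum-support threshold
              print(f"{x} -> {y}: support={s:.2f}, "
                    f"confidence={s / support({x}):.2f}")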
  4. Candela, L.; Castelli, D.; Manghi, P.; Tani, A.: Data journals : a survey (2015) 0.00
    0.0044597755 = product of:
      0.017839102 = sum of:
        0.017839102 = weight(_text_:information in 2156) [ClassicSimilarity], result of:
          0.017839102 = score(doc=2156,freq=6.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.20156369 = fieldWeight in 2156, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=2156)
      0.25 = coord(1/4)
    
    Abstract
    Data occupy a key role in our information society. However, although the amount of published data continues to grow and terms such as data deluge and big data today characterize numerous (research) initiatives, much work is still needed in the direction of publishing data in order to make them effectively discoverable, available, and reusable by others. Several barriers hinder data publishing, from lack of attribution and rewards, vague citation practices, and quality issues to a rather general lack of a data-sharing culture. Lately, data journals have overcome some of these barriers. In this study of more than 100 currently existing data journals, we describe the approaches they promote for data set description, availability, citation, quality, and open access. We close by identifying ways to expand and strengthen the data journals approach as a means to promote data set access and exploitation.
    Series
    Advances in information science
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.9, S.1747-1762
  5. Efthimiadis, E.N.; Neilson, C.: ¬A classified bibliography on online public access catalogues (1989) 0.00
    0.0042914203 = product of:
      0.017165681 = sum of:
        0.017165681 = weight(_text_:information in 509) [ClassicSimilarity], result of:
          0.017165681 = score(doc=509,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.19395474 = fieldWeight in 509, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.078125 = fieldNorm(doc=509)
      0.25 = coord(1/4)
    
    Series
    British Library information guide; 10
  6. Amba, S.: Expert systems : a literature review (1988) 0.00
    0.0042914203 = product of:
      0.017165681 = sum of:
        0.017165681 = weight(_text_:information in 1099) [ClassicSimilarity], result of:
          0.017165681 = score(doc=1099,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.19395474 = fieldWeight in 1099, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.078125 = fieldNorm(doc=1099)
      0.25 = coord(1/4)
    
    Abstract
    This review covers the literature published in 1985, 1986, and 1987; it is not comprehensive, although two papers published in 1983 and 1984 have been included. It covers only library and information science literature, and it does not include descriptions of commercial software packages.
  7. Drabenstott, K.M.; Burman, C.M.: Analytical review of the library of the future (1994) 0.00
    0.0042914203 = product of:
      0.017165681 = sum of:
        0.017165681 = weight(_text_:information in 3658) [ClassicSimilarity], result of:
          0.017165681 = score(doc=3658,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.19395474 = fieldWeight in 3658, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.078125 = fieldNorm(doc=3658)
      0.25 = coord(1/4)
    
    Footnote
    Rez. in: Journal of librarianship and information science. 28(1996) no.1, S.60-61 (C. Oppenheim)
  8. Ding, Y.: Scholarly communication and bibliometrics : Part 1: The scholarly communication model: literature review (1998) 0.00
    0.0042914203 = product of:
      0.017165681 = sum of:
        0.017165681 = weight(_text_:information in 3995) [ClassicSimilarity], result of:
          0.017165681 = score(doc=3995,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.19395474 = fieldWeight in 3995, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.078125 = fieldNorm(doc=3995)
      0.25 = coord(1/4)
    
    Source
    International forum on information and documentation. 23(1998) no.2, S.20-29
  9. Khoo, S.G.; Na, J.-C.: Semantic relations in information science (2006) 0.00
    0.0042699096 = product of:
      0.017079638 = sum of:
        0.017079638 = weight(_text_:information in 1978) [ClassicSimilarity], result of:
          0.017079638 = score(doc=1978,freq=22.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.19298252 = fieldWeight in 1978, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1978)
      0.25 = coord(1/4)
    
    Abstract
    This chapter examines the nature of semantic relations and their main applications in information science. The nature and types of semantic relations are discussed from the perspectives of linguistics and psychology. An overview of the semantic relations used in knowledge structures such as thesauri and ontologies is provided, as well as the main techniques used in the automatic extraction of semantic relations from text. The chapter then reviews the use of semantic relations in information extraction, information retrieval, question-answering, and automatic text summarization applications. Concepts and relations are the foundation of knowledge and thought. When we look at the world, we perceive not a mass of colors but objects to which we automatically assign category labels. Our perceptual system automatically segments the world into concepts and categories. Concepts are the building blocks of knowledge; relations act as the cement that links concepts into knowledge structures. We spend much of our lives identifying regular associations and relations between objects, events, and processes so that the world has an understandable structure and predictability. Our lives and work depend on the accuracy and richness of this knowledge structure and its web of relations. Relations are needed for reasoning and inferencing. Chaffin and Herrmann (1988b, p. 290) noted that "relations between ideas have long been viewed as basic to thought, language, comprehension, and memory." Aristotle's Metaphysics (Aristotle, 1961; McKeon) expounded on several types of relations. The majority of the 30 entries in a section of the Metaphysics known today as the Philosophical Lexicon referred to relations and attributes, including cause, part-whole, same and opposite, quality (i.e., attribute) and kind-of, and defined different types of each relation. Hume (1955) pointed out that there is a connection between successive ideas in our minds, even in our dreams, and that the introduction of an idea in our mind automatically recalls an associated idea. He argued that all the objects of human reasoning are divided into relations of ideas and matters of fact and that factual reasoning is founded on the cause-effect relation. His Treatise of Human Nature identified seven kinds of relations: resemblance, identity, relations of time and place, proportion in quantity or number, degrees in quality, contrariety, and causation. Mill (1974, pp. 989-1004) discoursed on several types of relations, claiming that all things are either feelings, substances, or attributes, and that attributes can be a quality (which belongs to one object) or a relation to other objects.
    Linguists in the structuralist tradition (e.g., Lyons, 1977; Saussure, 1959) have asserted that concepts cannot be defined on their own but only in relation to other concepts. Semantic relations appear to reflect a logical structure in the fundamental nature of thought (Caplan & Herrmann, 1993). Green, Bean, and Myaeng (2002) noted that semantic relations play a critical role in how we represent knowledge psychologically, linguistically, and computationally, and that many systems of knowledge representation start with a basic distinction between entities and relations. Green (2001, p. 3) said that "relationships are involved as we combine simple entities to form more complex entities, as we compare entities, as we group entities, as one entity performs a process on another entity, and so forth. Indeed, many things that we might initially regard as basic and elemental are revealed upon further examination to involve internal structure, or in other words, internal relationships." Concepts and relations are often expressed in language and text. Language is used not just for communicating concepts and relations, but also for representing, storing, and reasoning with concepts and relations. We shall examine the nature of semantic relations from a linguistic and psychological perspective, with an emphasis on relations expressed in text. The usefulness of semantic relations in information science, especially in ontology construction, information extraction, information retrieval, question-answering, and text summarization is discussed. Research and development in information science have focused on concepts and terms, but the focus will increasingly shift to the identification, processing, and management of relations to achieve greater effectiveness and refinement in information science techniques. Previous chapters in ARIST on natural language processing (Chowdhury, 2003), text mining (Trybula, 1999), information retrieval and the philosophy of language (Blair, 2003), and query expansion (Efthimiadis, 1996) provide a background for this discussion, as semantic relations are an important part of these applications.
    Source
    Annual review of information science and technology. 40(2006), S.157-228
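    One of the main techniques for the "automatic extraction of semantic relations from text" surveyed in this chapter is matching lexico-syntactic patterns (Hearst-style), where a phrase like "X such as A and B" signals that A and B stand in a kind-of relation to X. A minimal sketch; the pattern and the sample sentence are illustrative only:

      import re

      # One classic Hearst pattern: "X such as A, B, and C" signals that
      # A, B, and C are kinds of X (hyponymy, the IS-A relation).
      PATTERN = re.compile(
          r"(\w+(?:\s\w+)?)"                         # hypernym: one or two words
          r"\s+such\s+as\s+"
          r"(\w+(?:(?:,\s*|\s+(?:and|or)\s+)\w+)*)"  # hyponym list
      )

      def extract_is_a(text):
          pairs = []
          for m in PATTERN.finditer(text):
              for hyponym in re.split(r",\s*|\s+(?:and|or)\s+", m.group(2)):
                  pairs.append((hyponym, m.group(1)))
          return pairs

      sentence = ("Semantic relations are encoded in knowledge "
                  "structures such as thesauri and ontologies.")
      print(extract_is_a(sentence))
      # [('thesauri', 'knowledge structures'), ('ontologies', 'knowledge structures')]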
  10. Hogan, D.R.: Cooperative reference service and the referred reference question : an annotated bibliography (1995) 0.00
    0.00424829 = product of:
      0.01699316 = sum of:
        0.01699316 = weight(_text_:information in 5347) [ClassicSimilarity], result of:
          0.01699316 = score(doc=5347,freq=4.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.1920054 = fieldWeight in 5347, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5347)
      0.25 = coord(1/4)
    
    Abstract
    Reference question referral may be as simple as a telephone call by the librarian to another library to obtain the answer to the patron's inquiry while the patron waits. It may also be a formal arrangement for the referral of questions, with specific goals and objectives, protocols, and procedures. Hogan's annotated bibliography of articles about reference question referral covers 1983 to 1994. Included is information on defining cooperative reference and the referred reference question, establishing networks and policies, a historical view of successes and failures, managing and evaluating cooperative systems, and describing methods of transferring information. Academic, public, and government libraries are discussed
  11. Trybula, W.J.: Data mining and knowledge discovery (1997) 0.00
    0.00424829 = product of:
      0.01699316 = sum of:
        0.01699316 = weight(_text_:information in 2300) [ClassicSimilarity], result of:
          0.01699316 = score(doc=2300,freq=4.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.1920054 = fieldWeight in 2300, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2300)
      0.25 = coord(1/4)
    
    Abstract
    State of the art review of the recently developed concepts of data mining (defined as the automated process of evaluating data and finding relationships) and knowledge discovery (defined as the automated process of extracting information, especially unpredicted relationships or previously unknown patterns among the data) with particular reference to numerical data. Includes: the knowledge acquisition process; data mining; evaluation methods; and knowledge discovery. Concludes that existing work in the field is confusing because the terminology is inconsistent and poorly defined. Although methods are available for analyzing and cleaning databases, better coordinated efforts should be directed toward providing users with improved means of structuring search mechanisms to explore the data for relationships
    Source
    Annual review of information science and technology. 32(1997), S.197-229
  12. Dumais, S.T.: Latent semantic analysis (2003) 0.00
    0.0040711993 = product of:
      0.016284797 = sum of:
        0.016284797 = weight(_text_:information in 2462) [ClassicSimilarity], result of:
          0.016284797 = score(doc=2462,freq=20.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.18400162 = fieldWeight in 2462, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0234375 = fieldNorm(doc=2462)
      0.25 = coord(1/4)
    
    Abstract
    Latent Semantic Analysis (LSA) was first introduced in Dumais, Furnas, Landauer, and Deerwester (1988) and Deerwester, Dumais, Furnas, Landauer, and Harshman (1990) as a technique for improving information retrieval. The key insight in LSA was to reduce the dimensionality of the information retrieval problem. Most approaches to retrieving information depend on a lexical match between words in the user's query and those in documents. Indeed, this lexical matching is the way that the popular Web and enterprise search engines work. Such systems are, however, far from ideal. We are all aware of the tremendous amount of irrelevant information that is retrieved when searching. We also fail to find much of the existing relevant material. LSA was designed to address these retrieval problems, using dimension reduction techniques. Fundamental characteristics of human word usage underlie these retrieval failures. People use a wide variety of words to describe the same object or concept (synonymy). Furnas, Landauer, Gomez, and Dumais (1987) showed that people generate the same keyword to describe well-known objects only 20 percent of the time. Poor agreement was also observed in studies of inter-indexer consistency (e.g., Chan, 1989; Tarr & Borko, 1974), in the generation of search terms (e.g., Fidel, 1985; Bates, 1986), and in the generation of hypertext links (Furner, Ellis, & Willett, 1999). Because searchers and authors often use different words, relevant materials are missed. Someone looking for documents on "human-computer interaction" will not find articles that use only the phrase "man-machine studies" or "human factors." People also use the same word to refer to different things (polysemy). Words like "saturn," "jaguar," or "chip" have several different meanings. A short query like "saturn" will thus return many irrelevant documents. The query "Saturn car" will return fewer irrelevant items, but it will miss some documents that use only the terms "Saturn automobile." In searching, there is a constant tension between being overly specific and missing relevant information, and being more general and returning irrelevant information.
    A number of approaches have been developed in information retrieval to address the problems caused by the variability in word usage. Stemming is a popular technique used to normalize some kinds of surface-level variability by converting words to their morphological root. For example, the words "retrieve," "retrieval," "retrieved," and "retrieving" would all be converted to their root form, "retrieve." The root form is used for both document and query processing. Stemming sometimes helps retrieval, although not much (Harman, 1991; Hull, 1996). And, it does not address cases where related words are not morphologically related (e.g., physician and doctor). Controlled vocabularies have also been used to limit variability by requiring that query and index terms belong to a pre-defined set of terms. Documents are indexed by a specified or authorized list of subject headings or index terms, called the controlled vocabulary. Library of Congress Subject Headings, Medical Subject Headings, Association for Computing Machinery (ACM) keywords, and Yellow Pages headings are examples of controlled vocabularies. If searchers can find the right controlled vocabulary terms, they do not have to think of all the morphologically related or synonymous terms that authors might have used. However, assigning controlled vocabulary terms in a consistent and thorough manner is a time-consuming and usually manual process. A good deal of research has been published about the effectiveness of controlled vocabulary indexing compared to full text indexing (e.g., Bates, 1998; Lancaster, 1986; Svenonius, 1986). The combination of both full text and controlled vocabularies is often better than either alone, although the size of the advantage is variable (Lancaster, 1986; Markey, Atherton, & Newton, 1982; Srinivasan, 1996). Richer thesauri have also been used to provide synonyms, generalizations, and specializations of users' search terms (see Srinivasan, 1992, for a review). Controlled vocabularies and thesaurus entries can be generated either manually or by the automatic analysis of large collections of texts.
    With the advent of large-scale collections of full text, statistical approaches are being used more and more to analyze the relationships among terms and documents. LSA takes this approach. LSA induces knowledge about the meanings of documents and words by analyzing large collections of texts. The approach simultaneously models the relationships among documents based on their constituent words, and the relationships between words based on their occurrence in documents. By using fewer dimensions for representation than there are unique words, LSA induces similarities among terms that are useful in solving the information retrieval problems described earlier. LSA is a fully automatic statistical approach to extracting relations among words by means of their contexts of use in documents, passages, or sentences. It makes no use of natural language processing techniques for analyzing morphological, syntactic, or semantic relations. Nor does it use humanly constructed resources like dictionaries, thesauri, lexical reference systems (e.g., WordNet), semantic networks, or other knowledge representations. Its only input is large amounts of texts. LSA is an unsupervised learning technique. It starts with a large collection of texts, builds a term-document matrix, and tries to uncover some similarity structures that are useful for information retrieval and related text-analysis problems. Several recent ARIST chapters have focused on text mining and discovery (Benoit, 2002; Solomon, 2002; Trybula, 2000). These chapters provide complementary coverage of the field of text analysis.
    Source
    Annual review of information science and technology. 38(2004), S.189-230
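    The core of LSA as described above, building a term-document matrix and keeping only a few dimensions of its singular value decomposition, fits in a few lines of numpy; a minimal sketch over an invented toy corpus:

      import numpy as np

      # Toy corpus (invented): "car" and "automobile" never co-occur in a
      # document, so a purely lexical match cannot relate them.
      docs = [
          "car engine repair",
          "automobile engine repair",
          "car dealer price",
          "automobile dealer price",
          "banana bread recipe",
      ]
      vocab = sorted({w for d in docs for w in d.split()})

      # Term-document count matrix A (terms x documents).
      A = np.array([[d.split().count(w) for d in docs] for w in vocab], dtype=float)

      # Truncated SVD: keep k latent dimensions, fewer than the unique words.
      U, s, Vt = np.linalg.svd(A, full_matrices=False)
      k = 3
      term_vecs = U[:, :k] * s[:k]  # rows = terms in the latent space

      def sim(w1, w2):
          """Cosine similarity of two terms in the latent space."""
          a, b = term_vecs[vocab.index(w1)], term_vecs[vocab.index(w2)]
          return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

      print(sim("car", "automobile"))  # high: related through shared contexts
      print(sim("car", "banana"))      # low: unrelated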
  13. Saracevic, T.: Relevance: a review of the literature and a framework for thinking on the notion in information science. Part II : nature and manifestations of relevance (2007) 0.00
    0.0038383633 = product of:
      0.015353453 = sum of:
        0.015353453 = weight(_text_:information in 612) [ClassicSimilarity], result of:
          0.015353453 = score(doc=612,freq=10.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.1734784 = fieldWeight in 612, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=612)
      0.25 = coord(1/4)
    
    Abstract
    Relevance is a, if not even the, key notion in information science in general and information retrieval in particular. This two-part critical review traces and synthesizes the scholarship on relevance over the past 30 years and provides an updated framework within which the still widely dissonant ideas and works about relevance might be interpreted and related. It is a continuation and update of a similar review that appeared in 1975 under the same title, considered here as being Part I. The present review is organized into two parts: Part II addresses the questions related to nature and manifestations of relevance, and Part III addresses questions related to relevance behavior and effects. In Part II, the nature of relevance is discussed in terms of meaning ascribed to relevance, theories used or proposed, and models that have been developed. The manifestations of relevance are classified as to several kinds of relevance that form an interdependent system of relevances. In Part III, relevance behavior and effects are synthesized using experimental and observational works that incorporate data. In both parts, each section concludes with a summary that in effect provides an interpretation and synthesis of contemporary thinking on the topic treated or suggests hypotheses for future research. Analyses of some of the major trends that shape relevance work are offered in conclusions.
    Content
    Relevant: Having significant and demonstrable bearing on the matter at hand. Relevance: The ability (as of an information retrieval system) to retrieve material that satisfies the needs of the user. - Merriam-Webster Dictionary 2005. [A version of this article has been published in 2006 as a chapter in E.G. Abels & D.A. Nitecki (Eds.), Advances in Librarianship (Vol. 30, pp. 3-71). San Diego: Academic Press (Saracevic, 2006).]
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.13, S.1915-1933
  14. Legg, C.: Ontologies on the Semantic Web (2007) 0.00
    0.0038383633 = product of:
      0.015353453 = sum of:
        0.015353453 = weight(_text_:information in 1979) [ClassicSimilarity], result of:
          0.015353453 = score(doc=1979,freq=10.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.1734784 = fieldWeight in 1979, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=1979)
      0.25 = coord(1/4)
    
    Abstract
    As an informational technology, the World Wide Web has enjoyed spectacular success. In just ten years it has transformed the way information is produced, stored, and shared in arenas as diverse as shopping, family photo albums, and high-level academic research. The "Semantic Web" is touted by its developers as equally revolutionary, although it has not yet achieved anything like the Web's exponential uptake. It seeks to transcend a current limitation of the Web - that it largely requires indexing to be accomplished merely on specific character strings. Thus, a person searching for information about "turkey" (the bird) receives from current search engines many irrelevant pages about "Turkey" (the country) and nothing about the Spanish "pavo" even if he or she is a Spanish-speaker able to understand such pages. The Semantic Web vision is to develop technology to facilitate retrieval of information via meanings, not just spellings. For this to be possible, most commentators believe, Semantic Web applications will have to draw on some kind of shared, structured, machine-readable conceptual scheme. Thus, there has been a convergence between the Semantic Web research community and an older tradition with roots in classical Artificial Intelligence (AI) research (sometimes referred to as "knowledge representation") whose goal is to develop a formal ontology. A formal ontology is a machine-readable theory of the most fundamental concepts or "categories" required in order to understand information pertaining to any knowledge domain. A review of the attempts that have been made to realize this goal provides an opportunity to reflect in interestingly concrete ways on various research questions such as the following: - How explicit a machine-understandable theory of meaning is it possible or practical to construct? - How universal a machine-understandable theory of meaning is it possible or practical to construct? - How much (and what kind of) inference support is required to realize a machine-understandable theory of meaning? - What is it for a theory of meaning to be machine-understandable anyway?
    Source
    Annual review of information science and technology. 41(2007), S.407-451
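    The "shared, structured, machine-readable conceptual scheme" discussed above reduces, at its simplest, to giving each concept its own identifier and typing it against a common vocabulary, so retrieval can go via meanings rather than character strings. A minimal sketch with the Python rdflib library; the URIs and classes are invented for illustration, not taken from any real ontology:

      from rdflib import Graph, Namespace, Literal, RDF, RDFS

      EX = Namespace("http://example.org/")
      g = Graph()

      # Two distinct resources behind the one ambiguous string "turkey".
      g.add((EX.Turkey_bird, RDF.type, EX.Bird))
      g.add((EX.Turkey_bird, RDFS.label, Literal("turkey", lang="en")))
      g.add((EX.Turkey_bird, RDFS.label, Literal("pavo", lang="es")))
      g.add((EX.Turkey_country, RDF.type, EX.Country))
      g.add((EX.Turkey_country, RDFS.label, Literal("Turkey", lang="en")))

      # Retrieval by meaning: ask for birds, in any language, never the country.
      q = """
          PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
          SELECT ?label WHERE {
              ?thing a <http://example.org/Bird> ;
                     rdfs:label ?label .
          }
      """
      for row in g.query(q):
          print(row.label)  # "turkey"@en, "pavo"@es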
  15. Smeaton, A.F.: Indexing, browsing, and searching of digital video (2003) 0.00
    0.0037164795 = product of:
      0.014865918 = sum of:
        0.014865918 = weight(_text_:information in 4274) [ClassicSimilarity], result of:
          0.014865918 = score(doc=4274,freq=6.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.16796975 = fieldWeight in 4274, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4274)
      0.25 = coord(1/4)
    
    Abstract
    Video is a communications medium that normally brings together moving pictures with a synchronized audio track into a discrete piece or pieces of information. A "piece" of video is variously referred to as a frame, a shot, a scene, a clip, a program, or an episode; these pieces are distinguished by their length and by their composition. We shall return to the definition of each of these in the section on automatically structuring and indexing digital video. In modern society, video is commonplace and is usually equated with television, movies, or home video produced by a video camera or camcorder. We also accept video recorded from closed circuit TVs for security and surveillance as part of our daily lives. In short, video is ubiquitous. Digital video is, as the name suggests, the creation or capture of video information in digital format. Most video produced today, commercial, surveillance, or domestic, is produced in digital form, although the medium of video predates the development of digital computing by several decades. The essential nature of video has not changed with the advent of digital computing. It is still moving pictures and synchronized audio. However, the production methods and the end product have gone through significant evolution, in the last decade especially.
    Source
    Annual review of information science and technology. 38(2004), S.371-409
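    The automatic structuring the abstract refers to (grouping frames into shots) typically starts with detecting shot boundaries, e.g. by flagging large differences between consecutive frames' intensity histograms. A minimal sketch over synthetic frames; the threshold and the data are invented for illustration:

      import numpy as np

      def shot_boundaries(frames, threshold=0.5):
          """Flag frame indices where the intensity histogram jumps sharply."""
          cuts, prev = [], None
          for i, frame in enumerate(frames):
              hist, _ = np.histogram(frame, bins=16, range=(0, 256))
              hist = hist / hist.sum()  # normalize to a distribution
              if prev is not None and np.abs(hist - prev).sum() / 2 > threshold:
                  cuts.append(i)        # a cut between frame i-1 and frame i
              prev = hist
          return cuts

      # Synthetic "video": ten dark frames, then ten bright ones (one hard cut).
      rng = np.random.default_rng(0)
      dark = [rng.integers(0, 64, (32, 32)) for _ in range(10)]
      bright = [rng.integers(192, 256, (32, 32)) for _ in range(10)]
      print(shot_boundaries(dark + bright))  # [10]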
  16. Winget, M.A.: Videogame preservation and massively multiplayer online role-playing games : a review of the literature (2011) 0.00
    0.0036413912 = product of:
      0.014565565 = sum of:
        0.014565565 = weight(_text_:information in 4760) [ClassicSimilarity], result of:
          0.014565565 = score(doc=4760,freq=4.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.16457605 = fieldWeight in 4760, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=4760)
      0.25 = coord(1/4)
    
    Series
    Advances in information science
    Source
    Journal of the American Society for Information Science and Technology. 62(2011) no.10, S.1869-1883
  17. Brooks, D.: System-system interaction in computerized indexing of visual materials : a selected review (1988) 0.00
    0.0034331365 = product of:
      0.013732546 = sum of:
        0.013732546 = weight(_text_:information in 656) [ClassicSimilarity], result of:
          0.013732546 = score(doc=656,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.1551638 = fieldWeight in 656, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=656)
      0.25 = coord(1/4)
    
    Source
    Information technology and libraries. 7(1988), S.111-123
  18. Taylor, A.G.: Enhancing subject access in online systems : the year's work in subject analysis, 1991 (1992) 0.00
    0.0034331365 = product of:
      0.013732546 = sum of:
        0.013732546 = weight(_text_:information in 1504) [ClassicSimilarity], result of:
          0.013732546 = score(doc=1504,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.1551638 = fieldWeight in 1504, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=1504)
      0.25 = coord(1/4)
    
    Abstract
    The research literature published in 1991 in the following categories is examined: users and subject searching, subject access in online catalogs, subject cataloging and indexing, information retrieval, thesaurus and indexing approaches, classification, and specialized subjects and materials. The preponderance of the research dealt with improving subject access in online systems. This seems to have been the result of acceptance by many researchers of a number of previously researched hypotheses that, taken together, indicate that improving online systems holds more promise than trying to perfect the processes of subject analysis
  19. Stone, A.T.: That elusive concept of 'aboutness' : the year's work in subject analysis, 1992 (1993) 0.00
    0.0034331365 = product of:
      0.013732546 = sum of:
        0.013732546 = weight(_text_:information in 5353) [ClassicSimilarity], result of:
          0.013732546 = score(doc=5353,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.1551638 = fieldWeight in 5353, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=5353)
      0.25 = coord(1/4)
    
    Abstract
    Interest in classification theory and in facet-based systems was more evident during 1992, the year that marked the one hundredth anniversary of the birth of Ranganathan. Efforts to simplify subject cataloging routines include exploration of automatic and semiautomatic methods. Solutions to online subject searching problems might be shifting to the domains of information-retrieval experts. The 1992 subject analysis literature is examined and described using the following categories: theoretical foundations, cataloging practices, subject analysis in online environments, and specialized materials and topics
  20. Lowry, A.K.: Electronic texts in the humanities : a selected bibliography (1994) 0.00
    0.0034331365 = product of:
      0.013732546 = sum of:
        0.013732546 = weight(_text_:information in 8743) [ClassicSimilarity], result of:
          0.013732546 = score(doc=8743,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.1551638 = fieldWeight in 8743, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=8743)
      0.25 = coord(1/4)
    
    Source
    Information technology and libraries. 13(1994) no.1, S.43-49

Types

  • a 226
  • b 36
  • m 18
  • s 7
  • el 2
  • r 2