Search (482 results, page 1 of 25)

  • Filter: type_ss:"m"
  1. Manning, C.D.; Raghavan, P.; Schütze, H.: Introduction to information retrieval (2008) 0.19
    0.18575847 = product of:
      0.24767795 = sum of:
        0.15110183 = weight(_text_:vector in 4041) [ClassicSimilarity], result of:
          0.15110183 = score(doc=4041,freq=6.0), product of:
            0.30654848 = queryWeight, product of:
              6.439392 = idf(docFreq=191, maxDocs=44218)
              0.047605187 = queryNorm
            0.4929133 = fieldWeight in 4041, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              6.439392 = idf(docFreq=191, maxDocs=44218)
              0.03125 = fieldNorm(doc=4041)
        0.081022434 = weight(_text_:space in 4041) [ClassicSimilarity], result of:
          0.081022434 = score(doc=4041,freq=4.0), product of:
            0.24842183 = queryWeight, product of:
              5.2183776 = idf(docFreq=650, maxDocs=44218)
              0.047605187 = queryNorm
            0.3261486 = fieldWeight in 4041, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.2183776 = idf(docFreq=650, maxDocs=44218)
              0.03125 = fieldNorm(doc=4041)
        0.015553676 = product of:
          0.031107351 = sum of:
            0.031107351 = weight(_text_:model in 4041) [ClassicSimilarity], result of:
              0.031107351 = score(doc=4041,freq=2.0), product of:
                0.1830527 = queryWeight, product of:
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.047605187 = queryNorm
                0.16993658 = fieldWeight in 4041, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4041)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Content
    Contents: Boolean retrieval - The term vocabulary & postings lists - Dictionaries and tolerant retrieval - Index construction - Index compression - Scoring, term weighting & the vector space model - Computing scores in a complete search system - Evaluation in information retrieval - Relevance feedback & query expansion - XML retrieval - Probabilistic information retrieval - Language models for information retrieval - Text classification & Naive Bayes - Vector space classification - Support vector machines & machine learning on documents - Flat clustering - Hierarchical clustering - Matrix decompositions & latent semantic indexing - Web search basics - Web crawling and indexes - Link analysis. See the digital edition at: http://nlp.stanford.edu/IR-book/pdf/irbookprint.pdf.
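
The indented breakdown beneath each hit is Lucene's ClassicSimilarity (TF-IDF) explain output. As a minimal sketch of how those leaf values compose, the following Python snippet recomputes the top-level score of hit 1 from the tf, idf, fieldNorm and queryNorm values shown above; the formula (per-term score = queryWeight × fieldWeight, combined via the coord factors) is the standard ClassicSimilarity one, and the variable names are my own, not part of the search system.

```python
import math

# Leaf values copied from the explain tree for hit 1 (doc 4041); queryNorm
# and fieldNorm are taken as given rather than re-derived.
QUERY_NORM = 0.047605187
FIELD_NORM = 0.03125

def term_score(freq, idf):
    """ClassicSimilarity per-term score: queryWeight * fieldWeight."""
    query_weight = idf * QUERY_NORM                    # idf * queryNorm
    field_weight = math.sqrt(freq) * idf * FIELD_NORM  # tf * idf * fieldNorm
    return query_weight * field_weight

vector = term_score(6.0, 6.439392)        # ~0.15110183
space = term_score(4.0, 5.2183776)        # ~0.08102243
model = term_score(2.0, 3.845226) * 0.5   # the "model" clause carries coord(1/2)

total = (vector + space + model) * 0.75   # coord(3/4): 3 of 4 top-level clauses match
print(total)                              # ~0.18575847, the score shown for hit 1
```
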
  2. Berry, M.W.; Browne, M.: Understanding search engines : mathematical modeling and text retrieval (2005) 0.13
    0.12774989 = product of:
      0.25549978 = sum of:
        0.17447734 = weight(_text_:vector in 7) [ClassicSimilarity], result of:
          0.17447734 = score(doc=7,freq=8.0), product of:
            0.30654848 = queryWeight, product of:
              6.439392 = idf(docFreq=191, maxDocs=44218)
              0.047605187 = queryNorm
            0.5691672 = fieldWeight in 7, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              6.439392 = idf(docFreq=191, maxDocs=44218)
              0.03125 = fieldNorm(doc=7)
        0.081022434 = weight(_text_:space in 7) [ClassicSimilarity], result of:
          0.081022434 = score(doc=7,freq=4.0), product of:
            0.24842183 = queryWeight, product of:
              5.2183776 = idf(docFreq=650, maxDocs=44218)
              0.047605187 = queryNorm
            0.3261486 = fieldWeight in 7, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.2183776 = idf(docFreq=650, maxDocs=44218)
              0.03125 = fieldNorm(doc=7)
      0.5 = coord(2/4)
    
    Content
    Contents: Introduction Document File Preparation - Manual Indexing - Information Extraction - Vector Space Modeling - Matrix Decompositions - Query Representations - Ranking and Relevance Feedback - Searching by Link Structure - User Interface - Book Format Document File Preparation Document Purification and Analysis - Text Formatting - Validation - Manual Indexing - Automatic Indexing - Item Normalization - Inverted File Structures - Document File - Dictionary List - Inversion List - Other File Structures Vector Space Models Construction - Term-by-Document Matrices - Simple Query Matching - Design Issues - Term Weighting - Sparse Matrix Storage - Low-Rank Approximations Matrix Decompositions QR Factorization - Singular Value Decomposition - Low-Rank Approximations - Query Matching - Software - Semidiscrete Decomposition - Updating Techniques Query Management Query Binding - Types of Queries - Boolean Queries - Natural Language Queries - Thesaurus Queries - Fuzzy Queries - Term Searches - Probabilistic Queries Ranking and Relevance Feedback Performance Evaluation - Precision - Recall - Average Precision - Genetic Algorithms - Relevance Feedback Searching by Link Structure HITS Method - HITS Implementation - HITS Summary - PageRank Method - PageRank Adjustments - PageRank Implementation - PageRank Summary User Interface Considerations General Guidelines - Search Engine Interfaces - Form Fill-in - Display Considerations - Progress Indication - No Penalties for Error - Results - Test and Retest - Final Considerations Further Reading
    LCSH
    Vector spaces
    Subject
    Vector spaces
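
The chapter list above revolves around the vector space model: documents and queries represented as tf-idf weighted term vectors and ranked by cosine similarity. The following is a minimal sketch of that idea only, not of the book's own software; the toy documents, query, and smoothing choice are invented for illustration.

```python
import math
from collections import Counter

# Toy documents and query invented for illustration.
docs = {
    "d1": "vector space model for automatic indexing",
    "d2": "probabilistic model for information retrieval",
    "d3": "latent semantic indexing with matrix decompositions",
}

term_freqs = {d: Counter(text.split()) for d, text in docs.items()}
doc_freq = Counter(term for tf in term_freqs.values() for term in tf)
n_docs = len(docs)
idf = {t: math.log(n_docs / df) + 1.0 for t, df in doc_freq.items()}  # smoothed idf

def weight(tf):
    """tf-idf vector (as a dict) restricted to the collection vocabulary."""
    return {t: c * idf[t] for t, c in tf.items() if t in idf}

def cosine(a, b):
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

doc_vectors = {d: weight(tf) for d, tf in term_freqs.items()}
query_vector = weight(Counter("vector space model".split()))
ranking = sorted(docs, key=lambda d: cosine(query_vector, doc_vectors[d]), reverse=True)
print(ranking)  # 'd1' should rank first for this query
```
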
  3. Ceri, S.; Bozzon, A.; Brambilla, M.; Della Valle, E.; Fraternali, P.; Quarteroni, S.: Web Information Retrieval (2013) 0.12
    0.118072405 = product of:
      0.15742987 = sum of:
        0.08723867 = weight(_text_:vector in 1082) [ClassicSimilarity], result of:
          0.08723867 = score(doc=1082,freq=2.0), product of:
            0.30654848 = queryWeight, product of:
              6.439392 = idf(docFreq=191, maxDocs=44218)
              0.047605187 = queryNorm
            0.2845836 = fieldWeight in 1082, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.439392 = idf(docFreq=191, maxDocs=44218)
              0.03125 = fieldNorm(doc=1082)
        0.05729151 = weight(_text_:space in 1082) [ClassicSimilarity], result of:
          0.05729151 = score(doc=1082,freq=2.0), product of:
            0.24842183 = queryWeight, product of:
              5.2183776 = idf(docFreq=650, maxDocs=44218)
              0.047605187 = queryNorm
            0.23062189 = fieldWeight in 1082, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.2183776 = idf(docFreq=650, maxDocs=44218)
              0.03125 = fieldNorm(doc=1082)
        0.012899691 = product of:
          0.025799382 = sum of:
            0.025799382 = weight(_text_:22 in 1082) [ClassicSimilarity], result of:
              0.025799382 = score(doc=1082,freq=2.0), product of:
                0.16670525 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047605187 = queryNorm
                0.15476047 = fieldWeight in 1082, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1082)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    With the proliferation of huge amounts of (heterogeneous) data on the Web, the importance of information retrieval (IR) has grown considerably over the last few years. Big players in the computer industry, such as Google, Microsoft and Yahoo!, are the primary contributors of technology for fast access to Web-based information; and searching capabilities are now integrated into most information systems, ranging from business management software and customer relationship systems to social networks and mobile phone applications. Ceri and his co-authors aim at taking their readers from the foundations of modern information retrieval to the most advanced challenges of Web IR. To this end, their book is divided into three parts. The first part addresses the principles of IR and provides a systematic and compact description of basic information retrieval techniques (including binary, vector space and probabilistic models as well as natural language search processing) before focusing on its application to the Web. Part two addresses the foundational aspects of Web IR by discussing the general architecture of search engines (with a focus on the crawling and indexing processes), describing link analysis methods (specifically Page Rank and HITS), addressing recommendation and diversification, and finally presenting advertising in search (the main source of revenues for search engines). The third and final part describes advanced aspects of Web search, each chapter providing a self-contained, up-to-date survey on current Web research directions. Topics in this part include meta-search and multi-domain search, semantic search, search in the context of multimedia data, and crowd search. The book is ideally suited to courses on information retrieval, as it covers all Web-independent foundational aspects. Its presentation is self-contained and does not require prior background knowledge. It can also be used in the context of classic courses on data management, allowing the instructor to cover both structured and unstructured data in various formats. Its classroom use is facilitated by a set of slides, which can be downloaded from www.search-computing.org.
    Date
    16.10.2013 19:22:44
  4. Readings in information retrieval (1997) 0.11
    0.109282956 = product of:
      0.1457106 = sum of:
        0.076333836 = weight(_text_:vector in 2080) [ClassicSimilarity], result of:
          0.076333836 = score(doc=2080,freq=2.0), product of:
            0.30654848 = queryWeight, product of:
              6.439392 = idf(docFreq=191, maxDocs=44218)
              0.047605187 = queryNorm
            0.24901065 = fieldWeight in 2080, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.439392 = idf(docFreq=191, maxDocs=44218)
              0.02734375 = fieldNorm(doc=2080)
        0.050130073 = weight(_text_:space in 2080) [ClassicSimilarity], result of:
          0.050130073 = score(doc=2080,freq=2.0), product of:
            0.24842183 = queryWeight, product of:
              5.2183776 = idf(docFreq=650, maxDocs=44218)
              0.047605187 = queryNorm
            0.20179415 = fieldWeight in 2080, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.2183776 = idf(docFreq=650, maxDocs=44218)
              0.02734375 = fieldNorm(doc=2080)
        0.019246692 = product of:
          0.038493384 = sum of:
            0.038493384 = weight(_text_:model in 2080) [ClassicSimilarity], result of:
              0.038493384 = score(doc=2080,freq=4.0), product of:
                0.1830527 = queryWeight, product of:
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.047605187 = queryNorm
                0.2102858 = fieldWeight in 2080, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=2080)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Content
    JOYCE, T. and R.M. NEEDHAM: The thesaurus approach to information retrieval; LUHN, H.P.: The automatic derivation of information retrieval encodements from machine-readable texts; DOYLE, L.B.: Indexing and abstracting by association. Part 1; MARON, M.E. and J.L. KUHNS: On relevance, probabilistic indexing and information retrieval; CLEVERDON, C.W.: The Cranfield tests on index language devices; SALTON, G. and M.E. LESK: Computer evaluation of indexing and text processing; HUTCHINS, W.J.: The concept of 'aboutness' in subject indexing; CLEVERDON, C.W. and J. MILLS: The testing of index language devices; FOSKETT, D.J.: Thesaurus; DANIELS, P.J. et al.: Using problem structures for driving human-computer dialogues; SARACEVIC, T.: Relevance: a review of and a framework for thinking on the notion in information science; SARACEVIC, T. et al.: A study of information seeking and retrieving: I. Background and methodology; COOPER, W.S.: On selecting a measure of retrieval effectiveness, revisited; TAGUE-SUTCLIFFE, J.: The pragmatics of information retrieval experimentation, revisited; KEEN, E.M.: Presenting results of experimental retrieval comparisons; LANCASTER, F.W.: MEDLARS: report on the evaluation of its operating efficiency; HARMAN, D.K.: The TREC conferences; COOPER, W.S.: Getting beyond Boole; RIJSBERGEN, C.J. van: A non-classical logic for information retrieval; SALTON, G. et al.: A vector space model for automatic indexing; ROBERTSON, S.E.: The probability ranking principle in IR; TURTLE, H. and W.B. CROFT: Inference networks for document retrieval; BELKIN, N.J. et al.: ASK for information retrieval: Part 1. Background and theory; PORTER, M.F.: An algorithm for suffix stripping; SALTON, G. and C. BUCKLEY: Term-weighting approaches in automatic text retrieval; SPARCK JONES, K.: Search term relevance weighting given little relevance information; CROFT, W.B. and D.J. HARPER: Using probabilistic models of document retrieval without relevance information; ROBERTSON, S.E. and S. WALKER: Some simple effective approximations to the 2-Poisson model for probabilistic weighted retrieval; SALTON, G. and C. BUCKLEY: Improving retrieval performance by relevance feedback; GRIFFITHS, A. et al.: Using interdocument similarity information in document retrieval systems; SALTON, G. and M.J. McGILL: The SMART and SIRE experimental retrieval systems; FOX, E.A. and R.K. FRANCE: Architecture of an expert system for composite analysis, representation, and retrieval; HARMAN, D.: User-friendly systems instead of user-friendly front ends; WALKER, S.: The Okapi online catalogue research projects; CALLAN, J. et al.: TREC and TIPSTER experiments with INQUERY; McCUNE, B. et al.: RUBRIC: a system for rule-based information retrieval; TENOPIR, C. and P. CAHN: TARGET and FREESTYLE: DIALOG and Mead join the relevance ranks; AGOSTI, M. et al.: A hypertext environment for interacting with large databases; HULL, D.A. and G. GREFENSTETTE: Querying across languages: a dictionary-based approach to multilingual information retrieval; SALTON, G. et al.: Automatic analysis, theme generation, and summarization of machine-readable texts; SPARCK JONES, K. et al.: Experiments in spoken document retrieval; ZHANG, H.J. et al.: Video parsing, retrieval and browsing: an integrated and content-based solution; BIEBRICHER, N. et al.: The automatic indexing system AIR/PHYS: from research to application; STRZALKOWSKI, T.: Robust text processing in automated information retrieval; HAYES, P.J. et al.: A news story categorization system; RAU, L.F.: Conceptual information extraction and retrieval from natural language input; MARSH, E.: A production rule system for message summarisation; JOHNSON, F.C. et al.: The application of linguistic processing to automatic abstract generation; SWANSON, D.R.: Historical note: information retrieval and the future of an illusion
  5. Computational information retrieval (2001) 0.11
    0.10839763 = product of:
      0.21679527 = sum of:
        0.130858 = weight(_text_:vector in 4167) [ClassicSimilarity], result of:
          0.130858 = score(doc=4167,freq=2.0), product of:
            0.30654848 = queryWeight, product of:
              6.439392 = idf(docFreq=191, maxDocs=44218)
              0.047605187 = queryNorm
            0.4268754 = fieldWeight in 4167, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.439392 = idf(docFreq=191, maxDocs=44218)
              0.046875 = fieldNorm(doc=4167)
        0.08593727 = weight(_text_:space in 4167) [ClassicSimilarity], result of:
          0.08593727 = score(doc=4167,freq=2.0), product of:
            0.24842183 = queryWeight, product of:
              5.2183776 = idf(docFreq=650, maxDocs=44218)
              0.047605187 = queryNorm
            0.34593284 = fieldWeight in 4167, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.2183776 = idf(docFreq=650, maxDocs=44218)
              0.046875 = fieldNorm(doc=4167)
      0.5 = coord(2/4)
    
    Abstract
    This volume contains selected papers that focus on the use of linear algebra, computational statistics, and computer science in the development of algorithms and software systems for text retrieval. Experts in information modeling and retrieval share their perspectives on the design of scalable but precise text retrieval systems, revealing many of the challenges and obstacles that mathematical and statistical models must overcome to be viable for automated text processing. This very useful proceedings is an excellent companion for courses in information retrieval, applied linear algebra, and applied statistics. Computational Information Retrieval provides background material on vector space models for text retrieval that applied mathematicians, statisticians, and computer scientists may not be familiar with. For graduate students in these areas, several research questions in information modeling are exposed. In addition, several case studies concerning the efficacy of the popular Latent Semantic Analysis (or Indexing) approach are provided.
  6. Survey of text mining : clustering, classification, and retrieval (2004) 0.09
    0.09033136 = product of:
      0.18066272 = sum of:
        0.10904834 = weight(_text_:vector in 804) [ClassicSimilarity], result of:
          0.10904834 = score(doc=804,freq=2.0), product of:
            0.30654848 = queryWeight, product of:
              6.439392 = idf(docFreq=191, maxDocs=44218)
              0.047605187 = queryNorm
            0.3557295 = fieldWeight in 804, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.439392 = idf(docFreq=191, maxDocs=44218)
              0.0390625 = fieldNorm(doc=804)
        0.07161439 = weight(_text_:space in 804) [ClassicSimilarity], result of:
          0.07161439 = score(doc=804,freq=2.0), product of:
            0.24842183 = queryWeight, product of:
              5.2183776 = idf(docFreq=650, maxDocs=44218)
              0.047605187 = queryNorm
            0.28827736 = fieldWeight in 804, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.2183776 = idf(docFreq=650, maxDocs=44218)
              0.0390625 = fieldNorm(doc=804)
      0.5 = coord(2/4)
    
    Abstract
    Extracting content from text continues to be an important research problem for information processing and management. Approaches to capture the semantics of text-based document collections may be based on Bayesian models, probability theory, vector space models, statistical models, or even graph theory. As the volume of digitized textual media continues to grow, so does the need for designing robust, scalable indexing and search strategies (software) to meet a variety of user needs. Knowledge extraction or creation from text requires systematic yet reliable processing that can be codified and adapted for changing needs and environments. This book will draw upon experts in both academia and industry to recommend practical approaches to the purification, indexing, and mining of textual information. It will address document identification, clustering and categorizing documents, cleaning text, and visualizing semantic models of text.
  7. Cross-language information retrieval (1998) 0.09
    0.089183114 = product of:
      0.11891082 = sum of:
        0.05452417 = weight(_text_:vector in 6299) [ClassicSimilarity], result of:
          0.05452417 = score(doc=6299,freq=2.0), product of:
            0.30654848 = queryWeight, product of:
              6.439392 = idf(docFreq=191, maxDocs=44218)
              0.047605187 = queryNorm
            0.17786475 = fieldWeight in 6299, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.439392 = idf(docFreq=191, maxDocs=44218)
              0.01953125 = fieldNorm(doc=6299)
        0.050639022 = weight(_text_:space in 6299) [ClassicSimilarity], result of:
          0.050639022 = score(doc=6299,freq=4.0), product of:
            0.24842183 = queryWeight, product of:
              5.2183776 = idf(docFreq=650, maxDocs=44218)
              0.047605187 = queryNorm
            0.20384288 = fieldWeight in 6299, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.2183776 = idf(docFreq=650, maxDocs=44218)
              0.01953125 = fieldNorm(doc=6299)
        0.013747636 = product of:
          0.027495272 = sum of:
            0.027495272 = weight(_text_:model in 6299) [ClassicSimilarity], result of:
              0.027495272 = score(doc=6299,freq=4.0), product of:
                0.1830527 = queryWeight, product of:
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.047605187 = queryNorm
                0.15020414 = fieldWeight in 6299, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=6299)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Content
    Includes the contributions: GREFENSTETTE, G.: The Problem of Cross-Language Information Retrieval; DAVIS, M.W.: On the Effective Use of Large Parallel Corpora in Cross-Language Text Retrieval; BALLESTEROS, L. and W.B. CROFT: Statistical Methods for Cross-Language Information Retrieval; Distributed Cross-Lingual Information Retrieval; Automatic Cross-Language Information Retrieval Using Latent Semantic Indexing; EVANS, D.A. et al.: Mapping Vocabularies Using Latent Semantics; PICCHI, E. and C. PETERS: Cross-Language Information Retrieval: A System for Comparable Corpus Querying; YAMABANA, K. et al.: A Language Conversion Front-End for Cross-Language Information Retrieval; GACHOT, D.A. et al.: The Systran NLP Browser: An Application of Machine Translation Technology in Cross-Language Information Retrieval; HULL, D.: A Weighted Boolean Model for Cross-Language Text Retrieval; SHERIDAN, P. et al.: Building a Large Multilingual Test Collection from Comparable News Documents; OARD, D.W. and B.J. DORR: Evaluating Cross-Language Text Filtering Effectiveness
    Footnote
    Christian Fluhr et al. (DIST/SMTI, France) outline the EMIR (European Multilingual Information Retrieval) and ESPRIT projects. They found that using SYSTRAN to machine translate queries and to access material from various multilingual databases produced less relevant results than a method referred to as 'multilingual reformulation' (the mechanics of which are only hinted at). An interesting technique is Latent Semantic Indexing (LSI), described by Michael Littman et al. (Brown University) and, most clearly, by David Evans et al. (Carnegie Mellon University). LSI involves creating matrices of documents and the terms they contain and 'fitting' related documents into a reduced matrix space. This effectively allows queries to be mapped onto a common semantic representation of the documents. Eugenio Picchi and Carol Peters (Pisa) report on a procedure to create links between translation equivalents in an Italian-English parallel corpus. The links are used to construct parallel linguistic contexts in real-time for any term or combination of terms that is being searched for in either language. Their interest is primarily lexicographic but they plan to apply the same procedure to comparable corpora, i.e. to texts which are not translations of each other but which share the same domain. Kiyoshi Yamabana et al. (NEC, Japan) address the issue of how to disambiguate between alternative translations of query terms. Their DMAX (double maximise) method looks at co-occurrence frequencies between both source language words and target language words in order to arrive at the most probable translation. The statistical data for the decision are derived not from the translation texts but independently from monolingual corpora in each language. An interactive user interface allows the user to influence the selection of terms during the matching process. Denis Gachot et al. (SYSTRAN) describe the SYSTRAN NLP browser, a prototype tool which collects parsing information derived from a text or corpus previously translated with SYSTRAN. The user enters queries into the browser in either a structured or free form and receives grammatical and lexical information about the source text and/or its translation.
    The retrieved output from a query including the phrase 'big rockets' may be, for instance, a sentence containing 'giant rocket' which is semantically ranked above 'military rocket'. David Hull (Xerox Research Centre, Grenoble) describes an implementation of a weighted Boolean model for Spanish-English CLIR. Users construct Boolean-type queries, weighting each term in the query, which is then translated by an on-line dictionary before being applied to the database. Comparisons with the performance of unweighted free-form queries ('vector space' models) proved encouraging. Two contributions consider the evaluation of CLIR systems. In order to by-pass the time-consuming and expensive process of assembling a standard collection of documents and of user queries against which the performance of a CLIR system is manually assessed, Páraic Sheridan et al. (ETH Zurich) propose a method based on retrieving 'seed documents'. This involves identifying a unique document in a database (the 'seed document') and, for a number of queries, measuring how fast it is retrieved. The authors have also assembled a large database of multilingual news documents for testing purposes. By storing the (fairly short) documents in a structured form tagged with descriptor codes (e.g. for topic, country and area), the test suite is easily expanded while remaining consistent for the purposes of testing. Douglas Oard and Bonnie Dorr (University of Maryland) describe an evaluation methodology which appears to apply LSI techniques in order to filter and rank incoming documents designed for testing CLIR systems. The volume provides the reader with an excellent overview of several projects in CLIR. It is well supported with references and is intended as a secondary text for researchers and practitioners. It highlights the need for a good, general tutorial introduction to the field.
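
As the footnote describes, LSI builds a term-document matrix, truncates its singular value decomposition, and maps queries onto the resulting reduced 'semantic' space. The sketch below illustrates the idea only; the toy matrix, the rank k, and the query are invented here and are not drawn from any of the projects reviewed above.

```python
import numpy as np

# Toy term-document matrix, rank k, and query invented for illustration.
terms = ["rocket", "missile", "launch", "library", "catalog"]
#              d1  d2  d3  d4
A = np.array([[2., 0., 0., 0.],   # rocket
              [1., 2., 0., 0.],   # missile
              [1., 1., 0., 0.],   # launch
              [0., 0., 2., 1.],   # library
              [0., 0., 1., 2.]])  # catalog

k = 2
U, s, Vt = np.linalg.svd(A, full_matrices=False)
Uk, sk, Vtk = U[:, :k], s[:k], Vt[:k, :]

doc_vecs = (np.diag(sk) @ Vtk).T                 # documents in the reduced "concept" space
query = np.array([1., 0., 0., 0., 0.])           # a query containing only "rocket"
q_vec = query @ Uk @ np.linalg.inv(np.diag(sk))  # fold the query into the same space

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print([round(cos(q_vec, d), 3) for d in doc_vecs])
# d1 and d2 both score close to 1.0 -- d2 matches even though it never
# mentions "rocket" -- while d3 and d4 score close to 0.0.
```
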
  8. Crawford, S.Y.; Hurd, J.M.; Weller, A.C.: From print to electronic : the transformation of scientific communication (1997) 0.06
    0.06373954 = product of:
      0.12747908 = sum of:
        0.100260146 = weight(_text_:space in 2368) [ClassicSimilarity], result of:
          0.100260146 = score(doc=2368,freq=2.0), product of:
            0.24842183 = queryWeight, product of:
              5.2183776 = idf(docFreq=650, maxDocs=44218)
              0.047605187 = queryNorm
            0.4035883 = fieldWeight in 2368, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.2183776 = idf(docFreq=650, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2368)
        0.027218932 = product of:
          0.054437865 = sum of:
            0.054437865 = weight(_text_:model in 2368) [ClassicSimilarity], result of:
              0.054437865 = score(doc=2368,freq=2.0), product of:
                0.1830527 = queryWeight, product of:
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.047605187 = queryNorm
                0.29738903 = fieldWeight in 2368, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2368)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    How have technology and socioeconomics impacted the scientific communications system? Using the baseline model developed by William Garvey and Belver Griffith, the authors examined three fast-moving research areas: the human genome project, space sciences, and high-energy physics. In the age of digital libraries, a network-based information infrastructure, and 'Bigger Science', what are the implications for informal communication, publishing, peer review, vast datasets shared by international groups of investigators, and other elements of the system? Based on findings in the three specialities, outcome models are projected on electronic versions of paper-based communication, research results refereed or unrefereed, electronic invisible colleges, and organizational changes for the information professions.
  9. Ranganathan, S.R.: Classification and communication (2006) 0.06
    0.061417304 = product of:
      0.12283461 = sum of:
        0.100260146 = weight(_text_:space in 1469) [ClassicSimilarity], result of:
          0.100260146 = score(doc=1469,freq=2.0), product of:
            0.24842183 = queryWeight, product of:
              5.2183776 = idf(docFreq=650, maxDocs=44218)
              0.047605187 = queryNorm
            0.4035883 = fieldWeight in 1469, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.2183776 = idf(docFreq=650, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1469)
        0.022574458 = product of:
          0.045148917 = sum of:
            0.045148917 = weight(_text_:22 in 1469) [ClassicSimilarity], result of:
              0.045148917 = score(doc=1469,freq=2.0), product of:
                0.16670525 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047605187 = queryNorm
                0.2708308 = fieldWeight in 1469, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1469)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Content
    Contents: Part 1 - Classification and Its Evolution: 11. First sense - Primitive use 12. Second sense - Common use 13. Third sense - Library classification 14. Field of knowledge 15. Enumerative classification 16. Analytico-synthetic classification 17. Uses of analytico-synthetic classification 18. Depth-classification - Confession of a faith. Part 2 - Communication: 21. Co-operative living 22. Communication and language 23. Commercial contact 24. Political understanding 25. Literary exchange 26. Spiritual communion 27. Cultural concord 28. Intellectual team-work. Part 3 - Classification and Its Future: 31. Domains in communication 32. Domain of classification 33. Time- and Space-Facets 34. Preliminary schedules 35. Energy-Facet 36. Matter-Facet 37. Personality-Facet 38. Research and Organisation
  10. Langville, A.N.; Meyer, C.D.: Google's PageRank and beyond : the science of search engine rankings (2006) 0.05
    0.054513875 = product of:
      0.10902775 = sum of:
        0.092530586 = weight(_text_:vector in 6) [ClassicSimilarity], result of:
          0.092530586 = score(doc=6,freq=4.0), product of:
            0.30654848 = queryWeight, product of:
              6.439392 = idf(docFreq=191, maxDocs=44218)
              0.047605187 = queryNorm
            0.3018465 = fieldWeight in 6, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.439392 = idf(docFreq=191, maxDocs=44218)
              0.0234375 = fieldNorm(doc=6)
        0.016497165 = product of:
          0.03299433 = sum of:
            0.03299433 = weight(_text_:model in 6) [ClassicSimilarity], result of:
              0.03299433 = score(doc=6,freq=4.0), product of:
                0.1830527 = queryWeight, product of:
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.047605187 = queryNorm
                0.18024497 = fieldWeight in 6, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=6)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Content
    Contents: Chapter 1. Introduction to Web Search Engines: 1.1 A Short History of Information Retrieval - 1.2 An Overview of Traditional Information Retrieval - 1.3 Web Information Retrieval Chapter 2. Crawling, Indexing, and Query Processing: 2.1 Crawling - 2.2 The Content Index - 2.3 Query Processing Chapter 3. Ranking Webpages by Popularity: 3.1 The Scene in 1998 - 3.2 Two Theses - 3.3 Query-Independence Chapter 4. The Mathematics of Google's PageRank: 4.1 The Original Summation Formula for PageRank - 4.2 Matrix Representation of the Summation Equations - 4.3 Problems with the Iterative Process - 4.4 A Little Markov Chain Theory - 4.5 Early Adjustments to the Basic Model - 4.6 Computation of the PageRank Vector - 4.7 Theorem and Proof for Spectrum of the Google Matrix Chapter 5. Parameters in the PageRank Model: 5.1 The alpha Factor - 5.2 The Hyperlink Matrix H - 5.3 The Teleportation Matrix E Chapter 6. The Sensitivity of PageRank: 6.1 Sensitivity with respect to alpha - 6.2 Sensitivity with respect to H - 6.3 Sensitivity with respect to vT - 6.4 Other Analyses of Sensitivity - 6.5 Sensitivity Theorems and Proofs Chapter 7. The PageRank Problem as a Linear System: 7.1 Properties of (I - alphaS) - 7.2 Properties of (I - alphaH) - 7.3 Proof of the PageRank Sparse Linear System Chapter 8. Issues in Large-Scale Implementation of PageRank: 8.1 Storage Issues - 8.2 Convergence Criterion - 8.3 Accuracy - 8.4 Dangling Nodes - 8.5 Back Button Modeling
    Chapter 9. Accelerating the Computation of PageRank: 9.1 An Adaptive Power Method - 9.2 Extrapolation - 9.3 Aggregation - 9.4 Other Numerical Methods Chapter 10. Updating the PageRank Vector: 10.1 The Two Updating Problems and their History - 10.2 Restarting the Power Method - 10.3 Approximate Updating Using Approximate Aggregation - 10.4 Exact Aggregation - 10.5 Exact vs. Approximate Aggregation - 10.6 Updating with Iterative Aggregation - 10.7 Determining the Partition - 10.8 Conclusions Chapter 11. The HITS Method for Ranking Webpages: 11.1 The HITS Algorithm - 11.2 HITS Implementation - 11.3 HITS Convergence - 11.4 HITS Example - 11.5 Strengths and Weaknesses of HITS - 11.6 HITS's Relationship to Bibliometrics - 11.7 Query-Independent HITS - 11.8 Accelerating HITS - 11.9 HITS Sensitivity Chapter 12. Other Link Methods for Ranking Webpages: 12.1 SALSA - 12.2 Hybrid Ranking Methods - 12.3 Rankings based on Traffic Flow Chapter 13. The Future of Web Information Retrieval: 13.1 Spam - 13.2 Personalization - 13.3 Clustering - 13.4 Intelligent Agents - 13.5 Trends and Time-Sensitive Search - 13.6 Privacy and Censorship - 13.7 Library Classification Schemes - 13.8 Data Fusion Chapter 14. Resources for Web Information Retrieval: 14.1 Resources for Getting Started - 14.2 Resources for Serious Study Chapter 15. The Mathematics Guide: 15.1 Linear Algebra - 15.2 Perron-Frobenius Theory - 15.3 Markov Chains - 15.4 Perron Complementation - 15.5 Stochastic Complementation - 15.6 Censoring - 15.7 Aggregation - 15.8 Disaggregation
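
The chapter list above walks through the PageRank computation: the hyperlink matrix H, the damping factor, the dangling-node and teleportation adjustments, and the power method. Below is a minimal power-iteration sketch on an invented four-node link graph; the notation loosely follows the chapter titles, and the graph, alpha, and tolerance are assumptions made purely for illustration.

```python
import numpy as np

# Tiny link graph invented for illustration; node 3 has no out-links (dangling).
links = {0: [1, 2], 1: [2], 2: [0], 3: []}
n = len(links)

H = np.zeros((n, n))                      # hyperlink matrix (row-stochastic except dangling rows)
for i, outs in links.items():
    for j in outs:
        H[i, j] = 1.0 / len(outs)

alpha = 0.85                              # damping factor
dangling = np.array([1.0 if not links[i] else 0.0 for i in range(n)])
pi = np.full(n, 1.0 / n)                  # start from the uniform distribution

for _ in range(100):
    prev = pi
    # dangling mass is spread uniformly; teleportation adds (1 - alpha)/n everywhere
    pi = alpha * (pi @ H + (pi @ dangling) / n) + (1.0 - alpha) / n
    if np.abs(pi - prev).sum() < 1e-12:
        break

print(pi, pi.sum())                       # the PageRank vector; entries sum to 1
```
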
  11. Bruce, H.: ¬The user's view of the Internet (2002) 0.05
    0.051849224 = product of:
      0.10369845 = sum of:
        0.06793937 = weight(_text_:space in 4344) [ClassicSimilarity], result of:
          0.06793937 = score(doc=4344,freq=20.0), product of:
            0.24842183 = queryWeight, product of:
              5.2183776 = idf(docFreq=650, maxDocs=44218)
              0.047605187 = queryNorm
            0.2734839 = fieldWeight in 4344, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              5.2183776 = idf(docFreq=650, maxDocs=44218)
              0.01171875 = fieldNorm(doc=4344)
        0.035759076 = sum of:
          0.02608431 = weight(_text_:model in 4344) [ClassicSimilarity], result of:
            0.02608431 = score(doc=4344,freq=10.0), product of:
              0.1830527 = queryWeight, product of:
                3.845226 = idf(docFreq=2569, maxDocs=44218)
                0.047605187 = queryNorm
              0.14249617 = fieldWeight in 4344, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                3.845226 = idf(docFreq=2569, maxDocs=44218)
                0.01171875 = fieldNorm(doc=4344)
          0.009674768 = weight(_text_:22 in 4344) [ClassicSimilarity], result of:
            0.009674768 = score(doc=4344,freq=2.0), product of:
              0.16670525 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047605187 = queryNorm
              0.058035173 = fieldWeight in 4344, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.01171875 = fieldNorm(doc=4344)
      0.5 = coord(2/4)
    
    Footnote
    Chapter 2 (Technology and People) focuses on several theories of technological acceptance and diffusion. Unfortunately, Bruce's presentation is somewhat confusing as he moves from one theory to the next, never quite connecting them into a logical sequence or coherent whole. Two theories are of particular interest to Bruce: the Theory of Diffusion of Innovations and the Theory of Planned Behavior. The Theory of Diffusion of Innovations is an "information-centric view of technology acceptance" in which technology adopters are placed in the information flows of society from which they learn about innovations and "drive innovation adoption decisions" (p. 20). The Theory of Planned Behavior maintains that the "performance of a behavior is a joint function of intentions and perceived behavioral control" (i.e., how much control a person thinks they have) (pp. 22-23). Bruce combines these two theories to form the basis for the Technology Acceptance Model. This model posits that "an individual's acceptance of information technology is based on beliefs, attitudes, intentions, and behaviors" (p. 24). A recurring theme echoes through all these theories and models: "individual perceptions of the innovation or technology are critical" in terms of both its characteristics and its use (pp. 24-25). From these, in turn, Bruce derives a predictive theory of the role personal perceptions play in technology adoption: Personal Innovativeness of Information Technology Adoption (PIITA). Personal innovativeness is defined as "the willingness of an individual to try out any new information technology" (p. 26). In general, the PIITA theory predicts that information technology will be adopted by individuals that have a greater exposure to mass media, rely less on the evaluation of information technology by others, exhibit a greater ability to cope with uncertainty and take risks, and require a less positive perception of an information technology prior to its adoption. Chapter 3 (A Focus on Usings) introduces the User-Centered Paradigm (UCP). The UCP is characteristic of the shift of emphasis from technology to users as the driving force behind technology and research agendas for Internet development [for a dissenting view, see Andrew Dillon's (2003) challenge to the utility of user-centeredness for design guidance]. It entails the "broad acceptance of the user-oriented perspective across a range of disciplines and professional fields," such as business, education, cognitive engineering, and information science (p. 34).
    The UCP's effect on business practices is focused mainly in the management and marketing areas. Marketing experienced a shift from "product-oriented operations" with its focus on "selling the products' features" and customer contact only at the point of sale toward more service-centered business practice ("customer demand orientation") and the development of one-to-one customer relationships (pp. 35-36). For management, the adoption of the UCP caused a shift from "mechanistic, bureaucratic, top-down organizational structures" to "flatter, inclusive, and participative" ones (p. 37). In education, practice shifted from the teacher-centered model where the "teacher is responsible for and makes all the decisions related to the learning environment" to a learner-centered model where the student is "responsible for his or her own learning" and the teacher focuses on "matching learning events to the individual skills, aptitudes, and interests of the individual learner" (pp. 38-39). Cognitive engineering saw the rise of "user-centered design" and human factors that were concerned with applying "scientific knowledge of humans to the design of man-machine interface systems" (p. 44). The UCP had a great effect on Information Science in the "design of information systems" (p. 47). Before the UCP was explicitly proposed by Brenda Dervin and M. Nilan in 1986, systems design was dominated by the "physical or system-oriented paradigm" (p. 48). The physical paradigm held a positivistic and materialistic view of technology and (passive) human interaction as exemplified by the 1953 Cranfield tests of information retrieval mechanisms. Instead, the UCP focuses on "users rather than systems" by making the perceptions of individual information users the "centerpiece consideration for information service and system design" (pp. 47-48). Bruce briefly touches on the various schools of thought within the user-oriented paradigm, such as the cognitive/self studies approach with its emphasis on an individual's knowledge structures or model of the world [e.g., Belkin (1990)], the cognitive/context studies approach that focuses on "context in explaining variations in information behavior" [e.g., Savolainen (1995) and Dervin's (1999) sensemaking], and the social constructionism/discourse analytic theory with its focus on language, not mental/knowledge constructs, as the primary shaper of the world as a system of intersubjective meanings [e.g., Talja 1996] (pp. 53-54). Drawing from the rich tradition of user-oriented research, Bruce attempts to gain a metatheoretical understanding of the Internet as a phenomenon by combining Dervin's (1996) "micromoments of human usings" with the French philosopher Bruno Latour's (1999) "conception of Circulating reference" to form what I term the Metatheory of Circulating Usings (pp. ix, 56, 60). According to Bruce, Latour's concept is designed to bridge "the gap between mind and object" by engaging in a "succession of finely grained transformations that construct and transfer truth about the object" through a chain of "microtranslations" from "matter to form," thereby connecting mind and object (p. 56). The connection works as long as the chain remains unbroken.
The nature of this chain of "information producing translations" is such that as one moves away from the object, one experiences a "reduction" of the object's "locality, particularity, materiality, multiplicity and continuity," while simultaneously gaining the "amplification" of its "compatibility, standardization, text, calculation, circulation, and relative universality" (p. 57).
    Bruce begins Chapter 5 (The Users' View of the Internet) by pointing out that the Internet not only exists as a physical entity of hardware, software, and networked connectivity, but also as a mental representation or knowledge structure constructed by users based on their usings. These knowledge structures or constructs "allow people to interpret and make sense of things" by functioning as a link between the new unknown thing and known thing(s) (p. 158). The knowledge structures or using constructs are continually evolving as people use the Internet over time, and represent the user's view of the Internet. To capture the users' view of the Internet from the research literature, Bruce uses his Metatheory of Circulating Usings. He recapitulates the theory, casting it more closely to the study of Internet use than previously. Here the reduction component provides a more detailed "understanding of the individual users involved in the micromoment of Internet using" while simultaneously the amplification component increases our understanding of the "generalized construct of the Internet" (p. 158). From this point on, Bruce presents a relatively detailed users' view of the Internet. He starts by examining Internet usings, which is composed of three parts: using space, using literacies, and Internet space. According to Bruce, using space is a using horizon likened to a "sphere of influence," comfortable and intimate, in which an individual interacts with the Internet successfully (p. 164). It is a "composite of individual (professional nonwork) constructs of Internet utility" (p. 165). Using literacies are the groups of skills or tools that an individual must acquire for successful interaction with the Internet. These literacies serve to link the using space with the Internet space. They are usually self-taught and form individual standards of successful or satisfactory usings that can be (and often are) at odds with the standards of the information profession. Internet space is, according to Bruce, a user construct that perceives the Internet as a physical, tangible place separate from using space. Bruce concludes that the user's view of the Internet explains six "principles" (p. 173). "Internet using is proof of concept" and occurs in contexts; using space is created through using frequency, individuals use literacies to explore and utilize Internet space, Internet space "does not require proof of concept, and is often influenced by the perceptions and usings of others," and "the user's view of the Internet is upbeat and optimistic" (pp. 173-175). He ends with a section describing who the Internet stakeholders are. Bruce defines them as Internet hardware/software developers, professional users practicing their profession in both familiar and transformational ways, and individuals using the Internet "for the tasks and pleasures of everyday life" (p. 176).
  12. Boczkowski, P.; Mitchelstein, E.: ¬The digital environment : How we live, learn, work, and play now (2021) 0.05
    0.04696106 = product of:
      0.09392212 = sum of:
        0.081022434 = weight(_text_:space in 1003) [ClassicSimilarity], result of:
          0.081022434 = score(doc=1003,freq=4.0), product of:
            0.24842183 = queryWeight, product of:
              5.2183776 = idf(docFreq=650, maxDocs=44218)
              0.047605187 = queryNorm
            0.3261486 = fieldWeight in 1003, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.2183776 = idf(docFreq=650, maxDocs=44218)
              0.03125 = fieldNorm(doc=1003)
        0.012899691 = product of:
          0.025799382 = sum of:
            0.025799382 = weight(_text_:22 in 1003) [ClassicSimilarity], result of:
              0.025799382 = score(doc=1003,freq=2.0), product of:
                0.16670525 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047605187 = queryNorm
                0.15476047 = fieldWeight in 1003, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1003)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Increasingly we live through our personal screens; we work, play, socialize, and learn digitally. The shift to remote everything during the pandemic was another step in a decades-long march toward the digitization of everyday life made possible by innovations in media, information, and communication technology. In The Digital Environment, Pablo Boczkowski and Eugenia Mitchelstein offer a new way to understand the role of the digital in our daily lives, calling on us to turn our attention from our discrete devices and apps to the array of artifacts and practices that make up the digital environment that envelops every aspect of our social experience. Boczkowski and Mitchelstein explore a series of issues raised by the digital takeover of everyday life, drawing on interviews with a variety of experts. They show how existing inequities of gender, race, ethnicity, education, and class are baked into the design and deployment of technology, and describe emancipatory practices that counter this--including the use of Twitter as a platform for activism through such hashtags as #BlackLivesMatter and #MeToo. They discuss the digitization of parenting, schooling, and dating--noting, among other things, that today we can both begin and end relationships online. They describe how digital media shape our consumption of sports, entertainment, and news, and consider the dynamics of political campaigns, disinformation, and social activism. Finally, they report on developments in three areas that will be key to our digital future: data science, virtual reality, and space exploration.
    Content
    1. Three Environments, One Life -- Part I: Foundations -- 2. Mediatization -- 3. Algorithms -- 4. Race and Ethnicity -- 5. Gender -- Part II: Institutions -- 6. Parenting -- 7. Schooling -- 8. Working -- 9. Dating -- Part III: Leisure -- 10. Sports -- 11. Televised Entertainment -- 12. News -- Part IV: Politics -- 13. Misinformation and Disinformation -- 14. Electoral Campaigns -- 15. Activism -- Part V: Innovations -- 16. Data Science -- 17. Virtual Reality -- 18. Space Exploration -- 19. Bricks and Cracks in the Digital Environment
    Date
    22. 6.2023 18:25:18
  13. Berry, M.W.; Browne, M.: Understanding search engines : mathematical modeling and text retrieval (1999) 0.05
    0.046265293 = product of:
      0.18506117 = sum of:
        0.18506117 = weight(_text_:vector in 5777) [ClassicSimilarity], result of:
          0.18506117 = score(doc=5777,freq=4.0), product of:
            0.30654848 = queryWeight, product of:
              6.439392 = idf(docFreq=191, maxDocs=44218)
              0.047605187 = queryNorm
            0.603693 = fieldWeight in 5777, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.439392 = idf(docFreq=191, maxDocs=44218)
              0.046875 = fieldNorm(doc=5777)
      0.25 = coord(1/4)
    
    LCSH
    Vector spaces
    Subject
    Vector spaces
  14. Virgilio, R. De; Cappellari, P.; Maccioni, A.; Torlone, R.: Path-oriented keyword search query over RDF (2012) 0.05
    0.045528244 = product of:
      0.09105649 = sum of:
        0.07161439 = weight(_text_:space in 429) [ClassicSimilarity], result of:
          0.07161439 = score(doc=429,freq=2.0), product of:
            0.24842183 = queryWeight, product of:
              5.2183776 = idf(docFreq=650, maxDocs=44218)
              0.047605187 = queryNorm
            0.28827736 = fieldWeight in 429, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.2183776 = idf(docFreq=650, maxDocs=44218)
              0.0390625 = fieldNorm(doc=429)
        0.019442094 = product of:
          0.03888419 = sum of:
            0.03888419 = weight(_text_:model in 429) [ClassicSimilarity], result of:
              0.03888419 = score(doc=429,freq=2.0), product of:
                0.1830527 = queryWeight, product of:
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.047605187 = queryNorm
                0.21242073 = fieldWeight in 429, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=429)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    We are witnessing a smooth evolution of the Web from a worldwide information space of linked documents to a global knowledge base, where resources are identified by means of uniform resource identifiers (URIs, essentially string identifiers) and are semantically described and correlated through resource description framework (RDF, a metadata data model) statements. With the size and availability of data constantly increasing (currently around 7 billion RDF triples and 150 million RDF links), a fundamental problem lies in the difficulty users face in finding and retrieving the information they are interested in. In general, to access semantic data, users need to know the organization of data and the syntax of a specific query language (e.g., SPARQL or variants thereof). Clearly, this represents an obstacle to information access for nonexpert users. For this reason, keyword search-based systems are increasingly capturing the attention of researchers. Recently, many approaches to keyword-based search over structured and semistructured data have been proposed. These approaches usually implement IR strategies on top of traditional database management systems with the goal of freeing the users from having to know data organization and query languages.
  15. Challenges and opportunities for knowledge organization in the digital age : proceedings of the Fifteenth International ISKO Conference, 9-11 July 2018, Porto, Portugal / organized by: International Society for Knowledge Organization (ISKO), ISKO Spain and Portugal Chapter, University of Porto - Faculty of Arts and Humanities, Research Centre in Communication, Information and Digital Culture (CIC.digital) - Porto (2018) 0.04
    0.043869503 = product of:
      0.087739006 = sum of:
        0.07161439 = weight(_text_:space in 4696) [ClassicSimilarity], result of:
          0.07161439 = score(doc=4696,freq=2.0), product of:
            0.24842183 = queryWeight, product of:
              5.2183776 = idf(docFreq=650, maxDocs=44218)
              0.047605187 = queryNorm
            0.28827736 = fieldWeight in 4696, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.2183776 = idf(docFreq=650, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4696)
        0.016124614 = product of:
          0.032249227 = sum of:
            0.032249227 = weight(_text_:22 in 4696) [ClassicSimilarity], result of:
              0.032249227 = score(doc=4696,freq=2.0), product of:
                0.16670525 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047605187 = queryNorm
                0.19345059 = fieldWeight in 4696, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4696)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The 15th International ISKO Conference has been held in Porto (Portugal) under the topic Challenges and opportunities for KO in the digital age. ISKO has been organizing biennial international conferences since 1990, in order to promote a space for debate among Knowledge Organization (KO) scholars and practitioners all over the world. The topics under discussion in the 15th International ISKO Conference are intended to cover a wide range of issues that, in a very incisive way, constitute challenges, obstacles and questions in the field of KO, but also highlight ways and open innovative perspectives for this area in a world undergoing constant change, due to the digital revolution that unavoidably moulds our society. Accordingly, the three aggregating themes, chosen to fit the proposals for papers and posters to be submitted, are as follows: 1 - Foundations and methods for KO; 2 - Interoperability towards information access; 3 - Societal challenges in KO. In addition to these themes, the inaugural session includes a keynote speech by Prof. David Bawden of City University London, entitled Supporting truth and promoting understanding: knowledge organization and the curation of the infosphere.
    Date
    17. 1.2019 17:22:18
  16. Rijsbergen, K. van: ¬The geometry of information retrieval (2004) 0.04
    0.043619335 = product of:
      0.17447734 = sum of:
        0.17447734 = weight(_text_:vector in 5459) [ClassicSimilarity], result of:
          0.17447734 = score(doc=5459,freq=2.0), product of:
            0.30654848 = queryWeight, product of:
              6.439392 = idf(docFreq=191, maxDocs=44218)
              0.047605187 = queryNorm
            0.5691672 = fieldWeight in 5459, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.439392 = idf(docFreq=191, maxDocs=44218)
              0.0625 = fieldNorm(doc=5459)
      0.25 = coord(1/4)
    
    Content
    Inhalt: 1. Introduction; 2. On sets and kinds for IR; 3. Vector and Hilbert spaces; 4. Linear transformations, operators and matrices; 5. Conditional logic in IR; 6. The geometry of IR.
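    The relevance breakdowns shown for these records can be reproduced by hand. Assuming the standard Lucene ClassicSimilarity formulas (which the [ClassicSimilarity] tags in the breakdowns suggest), the per-term weight is tf x idf^2 x queryNorm x fieldNorm, and the final score multiplies in the coordination factor. The short Python sketch below recomputes the score of record 16 from the figures printed above; the idf reconstruction 1 + ln(maxDocs/(docFreq+1)) is an assumption based on that standard formula, not something stated on this page.

      import math

      # Figures taken from the score breakdown of record 16 above.
      freq = 2.0                     # termFreq of "vector" in doc 5459
      max_docs, doc_freq = 44218, 191
      query_norm = 0.047605187       # reported query normalization constant
      field_norm = 0.0625            # length norm stored for this field
      coord = 1 / 4                  # 1 of 4 query terms matched

      tf = math.sqrt(freq)                           # 1.4142135
      idf = 1 + math.log(max_docs / (doc_freq + 1))  # ~6.439392
      query_weight = idf * query_norm                # ~0.30654848
      field_weight = tf * idf * field_norm           # ~0.5691672
      score = coord * query_weight * field_weight    # ~0.043619335
      print(score)

    The same arithmetic applies to the other breakdowns on this page; only the term statistics (tf, idf), the field norm and the coord fraction change from record to record.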
  17. Bolter, J.D.: Writing space : the computer, hypertext, and the history of writing (1991) 0.04
    0.042968635 = product of:
      0.17187454 = sum of:
        0.17187454 = weight(_text_:space in 8744) [ClassicSimilarity], result of:
          0.17187454 = score(doc=8744,freq=2.0), product of:
            0.24842183 = queryWeight, product of:
              5.2183776 = idf(docFreq=650, maxDocs=44218)
              0.047605187 = queryNorm
            0.6918657 = fieldWeight in 8744, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.2183776 = idf(docFreq=650, maxDocs=44218)
              0.09375 = fieldNorm(doc=8744)
      0.25 = coord(1/4)
    
  18. Bazillion, R.J.; Braun, C.L.: Academic libraries as high-tech gateways : a guide to design & space decisions (2001) 0.04
    0.042968635 = product of:
      0.17187454 = sum of:
        0.17187454 = weight(_text_:space in 294) [ClassicSimilarity], result of:
          0.17187454 = score(doc=294,freq=2.0), product of:
            0.24842183 = queryWeight, product of:
              5.2183776 = idf(docFreq=650, maxDocs=44218)
              0.047605187 = queryNorm
            0.6918657 = fieldWeight in 294, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.2183776 = idf(docFreq=650, maxDocs=44218)
              0.09375 = fieldNorm(doc=294)
      0.25 = coord(1/4)
    
  19. Knowledge: creation, organization and use : Proceedings of the 62nd Annual Meeting of the American Society for Information Science, Washington, DC, 31.10.-4.11.1999. Ed.: Larry Woods (1999) 0.04
    0.042803254 = product of:
      0.08560651 = sum of:
        0.035807196 = weight(_text_:space in 6721) [ClassicSimilarity], result of:
          0.035807196 = score(doc=6721,freq=2.0), product of:
            0.24842183 = queryWeight, product of:
              5.2183776 = idf(docFreq=650, maxDocs=44218)
              0.047605187 = queryNorm
            0.14413868 = fieldWeight in 6721, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.2183776 = idf(docFreq=650, maxDocs=44218)
              0.01953125 = fieldNorm(doc=6721)
        0.049799312 = sum of:
          0.0336747 = weight(_text_:model in 6721) [ClassicSimilarity], result of:
            0.0336747 = score(doc=6721,freq=6.0), product of:
              0.1830527 = queryWeight, product of:
                3.845226 = idf(docFreq=2569, maxDocs=44218)
                0.047605187 = queryNorm
              0.18396176 = fieldWeight in 6721, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.845226 = idf(docFreq=2569, maxDocs=44218)
                0.01953125 = fieldNorm(doc=6721)
          0.016124614 = weight(_text_:22 in 6721) [ClassicSimilarity], result of:
            0.016124614 = score(doc=6721,freq=2.0), product of:
              0.16670525 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047605187 = queryNorm
              0.09672529 = fieldWeight in 6721, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.01953125 = fieldNorm(doc=6721)
      0.5 = coord(2/4)
    
    Content
    Includes, among others, the following contributions: AUSTIN, D.: A proposal for an International Standard Object Number works. BATEMAN, J.: Modelling the importance of end-user relevance criteria. BILAL, D.: Web search engines for children: a comparative study and performance evaluation of Yahooligans!, AskJeeves for Kids, and Super Snooper. BOROS, E., P.B. KANTOR u. D.J. NEU: Pheromonic representation of user quests by digital structures. BRADSHAW, S., K. HAMMOND: Constructing indices from citations in collections of research papers. BUDZIK, J., K. HAMMOND: Q&A: a system for the capture, organization and reuse of expertise. BUDZIK, J., K. HAMMOND: Watson: anticipating and contextualizing information needs. CHOO, C.W., B. DETLOR u. D. TURNBULL: Information seeking on the Web: an integrated model of browsing and searching. CORTEZ, E.M.: Planning and implementing a high performance knowledge base. DING, W., D. SOERGEL u. G. MARCHIONINI: Performance of visual, verbal, and combined video surrogates. DU TOIT, A.: Developing a framework for managing knowledge in enterprises. FALCONER, J.: The business pattern: a new tool for organizational knowledge capture and reuse. GOODRUM, A., A. SPINK: Visual information seeking: a study of image queries on the world wide web. HEIDORN, P.B.: The identification of index terms in natural language object descriptions. HILL, L.L., Q. ZHENG: Indirect geospatial referencing through place names in the digital library: Alexandria digital library experience with developing and implementing gazetteers. JURISICA, I., J. MYLOPOULOS u. E. YU: Using ontologies for knowledge management: an information systems perspective. KANTOR, B., E. BOROS u. B. MELAMED u.a.: The information quest: a dynamic model of user's information needs. KANTOR, P., M.H. KIM u. U. Ibraev u.a.: Estimating the number of relevant documents in enormous collections. KIM, Y., B. NORGARD U. A. CHEN u.a.: Using ordinary language in access metadata of diverse types of information resources: trade classifications and numeric data. KOLLURI, V., D.P. METZLER: Knowledge guided rule learning. LARSON, R.R., C. CARSON: Information access for a digital library: Cheshire II and the Berkeley environment digital library. LEAZER, G.H., J. FURNER: Topological indices of textual identity networks. LIN, X.: Designing a visual interface for online searching. MA, Y., V.B. DIODATO: Icons as visual form of knowledge representation on the World Wide Web: a semiotic analysis.
    MACCALL, S.L., A.D. CLEVELAND U. I.E. GIBSON: Outline and preliminary evaluation of the classical digital library model. MACCALL, S.L., A.D. CLEVELAND: A relevance-based quantitative measure for Internet information retrieval evaluation. MAI, J.-E.: A postmodern theory of knowledge organization. PATRICK, T.B., M.C. SIEVERT U. J. RIES u.a.: Clustering terms in health care terminologies. PATRICK, T.B., M.C. SIEVERT U. M. POPESCU: Text indexing of images based on graphical image content. POLE, T.: Contextual classification in the Metadata Object Manager (M.O.M.). PRISS, U., E. JACOB: Utilizing faceted structures for information systems design. RORVIG, M., M.M. SMITH U. A. UEMURA: The N-gram hypothesis applied to matched sets of visualized Japanese-English technical documents. SCHAMBER, L., J. BATEMAN: Relevance criteria uses and importance: progress in development of a measurement scale. SMIRAGLIA, R.P.: Derivative bibliographic relationships among theological works. SU, L.T., H.L. CHEN: Evaluation of Web search engines by undergraduate students. TSE, T., S. VEGH U. G. MARCHIONINI u.a.: An exploratory study of video browsing user interface designs and research methodologies: effectiveness in information seeking tasks. WANG, P.: An empirical study of knowledge structures of research topics; SCULL, C. u.a.: Envisioning the Web: user expectations about the cyber-experience; WEISS, S.C.: The seamless, Web-based library: a meta site for the 21st century; DUGDALE, C.: Cooperation, coordination and cultural change for effective information management in the hybrid academic library. PRETTYMAN, M. u.a.: Electronic publication of health information in an object oriented environment. PRITCHARD, E.E.: Retrospective conversion of journal titles to online formats: which disciplines make good choices? SHARRETTS, C.W. u.a.: Electronic theses and dissertations at the University of Virginia. HAWK, W.B. u. P. WANG: Users' interaction with the World Wide Web: Problems & problem-solving. HARRIS, C. u.a. Temporal visualization for legal case histories. MARSHALL, R.: Rhetoric and policy: how is it being used in pornography and the Internet?
    WARWICK, S. u. H.I. XIE: Copyright management information in electronic forms: user compliance and modes of delivery. HOCHHEISER, H. u. B. SHNEIDERMAN: Understanding patterns of user visits to Web sites: interactive Starfield visualizations of WWW log data. GIANNINI, T.: Rethinking the reference interview: from interpersonal communication to online information process. KANTOR, P.B. u. T. SARACEVIC: Quantitative study of the value of research libraries: a foundation for the evaluation of digital libraries. MIKULECKY, P. u. J. MIKULECKA: Active tools for better knowledge dissemination. BERKEMEYER, J.: Electronic publications at national libraries: now and in the future. ZHANG, Z. u.a.: DAPHNE: a tool for distributed Web authoring and publishing. BISHOP, A.P. u.a. Information exchange networks in low-income neighborhoods: implications for community networking. ERCEGOVAC, Z.: LEArning portfolio for accessing engineering information for engineers. RENEKER, M. u.a.: Information environment of a military university campus: an exploratory study. GREENE, S. u. R. LUTZ: Data stewardship: the care and handling of named entities. NEUMANN, L.: Physical environment as a resource in information work settings. VISHIK, C. u.a.: Enterprise information space: user's view, developer's view, and market approach. SHIM, W. u. P.B. KANTOR: Evaluation of digital libraries: a DEA approach. TENOPIR, C. u. D. GREEN: Patterns of use and usage factors for online databases in academic and public libraries. TROLLEY, J.H. u. J. O'NEILL: New wine and old vessels: the evaluation and integration of Web based information in well-established resources. KANTOR, P.B. u. R. NORDLIE: Models of the behavior of people searching the Internet: a Petri net approach. TOMS, E.G. u.a.: Does genre define the shape of information? The role of form and function in user interaction with digital documents. ROSENBAUM, H.: Towards a theory of the digital information environment. WHITMIRE, E.: Undergraduates' information seeking behavior: the role of epistemological development theories and models. BREITENSTEIN, M.: From revolution to orthodoxy: an evolutionary history of the International Encyclopedia of Unified Science. YANCEY, T. u.a.: Lexicography without limits: a Web-based solution.
    Date
    22. 6.2005 9:44:50
  20. Olsen, K.A.: ¬The Internet, the Web, and eBusiness : formalizing applications for the real world (2005) 0.04
    0.03649245 = product of:
      0.0729849 = sum of:
        0.028645756 = weight(_text_:space in 149) [ClassicSimilarity], result of:
          0.028645756 = score(doc=149,freq=2.0), product of:
            0.24842183 = queryWeight, product of:
              5.2183776 = idf(docFreq=650, maxDocs=44218)
              0.047605187 = queryNorm
            0.115310945 = fieldWeight in 149, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.2183776 = idf(docFreq=650, maxDocs=44218)
              0.015625 = fieldNorm(doc=149)
        0.04433914 = sum of:
          0.021996219 = weight(_text_:model in 149) [ClassicSimilarity], result of:
            0.021996219 = score(doc=149,freq=4.0), product of:
              0.1830527 = queryWeight, product of:
                3.845226 = idf(docFreq=2569, maxDocs=44218)
                0.047605187 = queryNorm
              0.120163314 = fieldWeight in 149, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.845226 = idf(docFreq=2569, maxDocs=44218)
                0.015625 = fieldNorm(doc=149)
          0.02234292 = weight(_text_:22 in 149) [ClassicSimilarity], result of:
            0.02234292 = score(doc=149,freq=6.0), product of:
              0.16670525 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047605187 = queryNorm
              0.1340265 = fieldWeight in 149, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.015625 = fieldNorm(doc=149)
      0.5 = coord(2/4)
    
    Classification
    004.678 22
    DDC
    004.678 22
    Footnote
    Rez. in: JASIST 57(2006) no.14, S.1979-1980 (J.G. Williams): "The Introduction and Part I of this book presents the world of computing with a historical and philosophical overview of computers, computer applications, networks, the World Wide Web, and eBusiness based on the notion that the real world places constraints on the application of these technologies and without a formalized approach, the benefits of these technologies cannot be realized. The concepts of real world constraints and the need for formalization are used as the cornerstones for a building-block approach for helping the reader understand computing, networking, the World Wide Web, and the applications that use these technologies as well as all the possibilities that these technologies hold for the future. The author's building block approach to understanding computing, networking and application building makes the book useful for science, business, and engineering students taking an introductory computing course and for social science students who want to understand more about the social impact of computers, the Internet, and Web technology. It is useful as well for managers and designers of Web and ebusiness applications, and for the general public who are interested in understanding how these technologies may impact their lives, their jobs, and the social context in which they live and work. The book does assume some experience and terminology in using PCs and the Internet but is not intended for computer science students, although they could benefit from the philosophical basis and the diverse viewpoints presented. The author uses numerous analogies from domains outside the area of computing to illustrate concepts and points of view that make the content understandable as well as interesting to individuals without any in-depth knowledge of computing, networking, software engineering, system design, ebusiness, and Web design. These analogies include interesting real-world events ranging from the beginning of railroads, to Henry Ford's mass produced automobile, to the European Space Agency's loss of the 7 billion dollar Adriane rocket, to travel agency booking, to medical systems, to banking, to expanding democracy. The book gives the pros and cons of the possibilities offered by the Internet and the Web by presenting numerous examples and an analysis of the pros and cons of these technologies for the examples provided. The author shows, in an interesting manner, how the new economy based on the Internet and the Web affects society and business life on a worldwide basis now and how it will affect the future, and how society can take advantage of the opportunities that the Internet and the Web offer.
    Chapter 12 on "Web Presence" is a useful discussion of what it means to have a Web site that is indexed by a spider from a major Web search engine. Chapter 13 on "Mobile Computing" is very well done and gives the reader a solid basis of what is involved with mobile computing without overwhelming them with technical details. Chapter 14 discusses the difference between pull technologies and push technologies using the Web that is understandable to almost anyone who has ever used the Web. Chapters 15, 16, and 17 are for the technically stout at heart; they cover "Dynamic Web Pages," " Embedded Scripts," and "Peer-to-Peer Computing." These three chapters will tend to dampen the spirits of anyone who does not come from a technical background. Chapter 18 on "Symbolic Services-Information Providers" and chapter 19 on "OnLine Symbolic Services-Case Studies" are ideal for class discussion and students assignments as is chapter 20, "Online Retail Shopping-Physical Items." Chapter 21 presents a number of case studies on the "Technical Constraints" discussed in chapter 3 and chapter 22 presents case studies on the "Cultural Constraints" discussed in chapter 4. These case studies are not only presented in an interesting manner they focus on situations that most Web users have encountered but never really given much thought to. Chapter 24 "A Better Model?" discusses a combined "formalized/unformalized" model that might make Web applications such as banking and booking travel work better than the current models. This chapter will cause readers to think about the role of formalization and the unformalized processes that are involved in any application. Chapters 24, 25, 26, and 27 which discuss the role of "Data Exchange," "Formalized Data Exchange," "Electronic Data Interchange-EDI," and "XML" in business-to-business applications on the Web may stress the limits of the nontechnically oriented reader even though it is presented in a very understandable manner. Chapters 28, 29, 30, and 31 discuss Web services, the automated value chain, electronic market places, and outsourcing, which are of high interest to business students, businessmen, and designers of Web applications and can be skimmed by others who want to understand ebusiness but are not interested in the details. In Part 5, the chapters 32, 33, and 34 on "Interfacing with the Web of the Future," "A Disruptive Technology," "Virtual Businesses," and "Semantic Web," were, for me, who teaches courses in IT and develops ebusiness applications the most interesting chapters in the book because they provided some useful insights about what is likely to happen in the future. The summary in part 6 of the book is quite well done and I wish I had read it before I started reading the other parts of the book.

Languages

  • e 276
  • d 193
  • m 5
  • de 1
  • es 1
  • i 1
  • pl 1

Types

  • s 107
  • i 15
  • el 5
  • b 3
  • d 1
  • n 1
  • u 1