Search (1398 results, page 1 of 70)

  • Filter: language_ss:"e"
  • Filter: year_i:[2000 TO 2010}
  1. Mas, S.; Marleau, Y.: Proposition of a faceted classification model to support corporate information organization and digital records management (2009) 0.15
    0.15028802 = product of:
      0.25048003 = sum of:
        0.060036086 = product of:
          0.18010825 = sum of:
            0.18010825 = weight(_text_:3a in 2918) [ClassicSimilarity], result of:
              0.18010825 = score(doc=2918,freq=2.0), product of:
                0.32046703 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.037799787 = queryNorm
                0.56201804 = fieldWeight in 2918, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2918)
          0.33333334 = coord(1/3)
        0.18010825 = weight(_text_:2f in 2918) [ClassicSimilarity], result of:
          0.18010825 = score(doc=2918,freq=2.0), product of:
            0.32046703 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.037799787 = queryNorm
            0.56201804 = fieldWeight in 2918, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=2918)
        0.0103356745 = product of:
          0.031007024 = sum of:
            0.031007024 = weight(_text_:29 in 2918) [ClassicSimilarity], result of:
              0.031007024 = score(doc=2918,freq=2.0), product of:
                0.13296783 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.037799787 = queryNorm
                0.23319192 = fieldWeight in 2918, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2918)
          0.33333334 = coord(1/3)
      0.6 = coord(3/5)
    
    Date
    29. 8.2009 21:15:48
    Footnote
    Cf.: http://ieeexplore.ieee.org/Xplore/login.jsp?reload=true&url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel5%2F4755313%2F4755314%2F04755480.pdf%3Farnumber%3D4755480&authDecision=-203.
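    The per-term weights in an explain tree like the one above follow Lucene's ClassicSimilarity. As a minimal sketch (assuming the documented defaults tf = sqrt(freq) and idf = 1 + ln(maxDocs/(docFreq+1)); the variable names are mine), this Python reproduces the 0.18010825 weight of term "3a" in document 2918:

      import math

      # Constants copied from the explain tree for weight(_text_:3a in 2918) above
      max_docs   = 44218
      doc_freq   = 24
      freq       = 2.0
      field_norm = 0.046875      # fieldNorm(doc=2918), Lucene's encoded length norm
      query_norm = 0.037799787   # queryNorm, shared by all terms of the query

      tf  = math.sqrt(freq)                            # 1.4142135 = tf(freq=2.0)
      idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 8.478011  = idf(docFreq=24, maxDocs=44218)

      query_weight = idf * query_norm                  # 0.32046703 = queryWeight
      field_weight = tf * idf * field_norm             # 0.56201804 = fieldWeight in 2918
      print(query_weight * field_weight)               # 0.18010825 = score(doc=2918, freq=2.0)

    The final 0.15028802 then comes from summing the three term contributions and multiplying by coord(3/5) = 0.6, exactly as the tree shows.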
  2. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.15
    Score 0.15023223 = 0.6 (coord 3/5) × [3a 0.0600 + 2f 0.1801 + 22 0.0102]
    
    Content
    Cf.: http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CEAQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.91.4940%26rep%3Drep1%26type%3Dpdf&ei=dOXrUMeIDYHDtQahsIGACg&usg=AFQjCNHFWVh6gNPvnOrOS9R3rkrXCNVD-A&sig2=5I2F5evRfMnsttSgFF9g7Q&bvm=bv.1357316858,d.Yms.
    Date
    8. 1.2013 10:22:32
  3. Vetere, G.; Lenzerini, M.: Models for semantic interoperability in service-oriented architectures (2005) 0.11
    Score 0.112067364 = 0.4 (coord 2/5) × [3a 0.0700 + 2f 0.2101]
    
    Content
    Cf.: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=5386707&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D5386707.
  4. Mudge, S.; Hoek, D.J.: Describing jazz, blues, and popular 78 rpm sound recordings : suggestions and guidelines (2000) 0.08
    Score 0.07935123 = 0.4 (coord 2/5) × [discs 0.1863 + 29 0.0121]
    
    Abstract
    Since 78 rpm sound recordings of jazz, blues, and popular music are today a specialized medium, they receive limited attention in cataloging rules and guides. The cataloging of 78 rpm discs at Indiana University's Archives of Traditional Music is based on established standards; nevertheless, certain local decisions are necessary when general rules are not clear. The Archives' decisions related to the description of their 78 rpm collections are explained and presented with examples in MARC format, and issues of access related to the choice of main entry are also covered.
    Source
    Cataloging and classification quarterly. 29(2000) no.3, S.21-47
  5. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.06
    Score 0.06403849 = 0.4 (coord 2/5) × [3a 0.0400 + 2f 0.1201]
    
    Content
    Cf.: http%3A%2F%2Fdigbib.ubka.uni-karlsruhe.de%2Fvolltexte%2Fdocuments%2F1627&ei=tAtYUYrBNoHKtQb3l4GYBw&usg=AFQjCNHeaxKkKU3-u54LWxMNYGXaaDLCGw&sig2=8WykXWQoDKjDSdGtAakH2Q&bvm=bv.44442042,d.Yms.
  6. San Segundo, R.: ¬A new conception of representation of knowledge (2004) 0.04
    Score 0.041020717 = 0.4 (coord 2/5) × [compact 0.0957 + 22 0.0068]
    
    Abstract
    The new term Representation of knowledge, applied to the framework of electronic segments of information, with comprehension of new material support for information, and a review and total conceptualisation of the terminology which is being applied, entails a review of all traditional documentary practices. Therefore, a definition of the concept of Representation of knowledge is indispensable. The term representation has been used in Western cultural and intellectual tradition to refer to the diverse ways that a subject comprehends an object. Representation is a process which requires the structure of natural language and human memory whereby it is interwoven with the subject and with consciousness. However, at the present time, the term Representation of knowledge is applied to the processing of electronic information, combined with the aim of emulating the human mind in such a way that one has endeavoured to transfer, with great difficulty, the complex structurality of the conceptual representation of human knowledge to new digital information technologies. Thus, nowadays, representation of knowledge has taken on diverse meanings and it has focused, for the moment, on certain structures and conceptual hierarchies which carry and transfer information, and has initially been based on the current representation of knowledge using artificial intelligence. The traditional languages of documentation, also referred to as languages of representation, offer a structured representation of conceptual fields, symbols and terms of natural and notational language, and they are the pillars for the necessary correspondence between the object or text and its representation. These correspondences, connections and symbolisations will be established within the electronic framework by means of different models and of the "goal" domain, which will give rise to organisations, structures, maps, networks and levels, as new electronic documents are not compact units but segments of information. Thus, the new representation of knowledge refers to data, images, figures and symbolised, treated, processed and structured ideas which replace or refer to documents within the framework of technical processing and the retrieval of electronic information.
    Date
    2. 1.2005 18:22:25
  7. Soricut, R.; Marcu, D.: Abstractive headline generation using WIDL-expressions (2007) 0.03
    Score 0.033503164 = 0.2 (coord 1/5) × [compact 0.1675]
    
    Abstract
    We present a new paradigm for the automatic creation of document headlines that is based on direct transformation of relevant textual information into well-formed textual output. Starting from an input document, we automatically create compact representations of weighted finite sets of strings, called WIDL-expressions, which encode the most important topics in the document. A generic natural language generation engine performs the headline generation task, driven by both statistical knowledge encapsulated in WIDL-expressions (representing topic biases induced by the input document) and statistical knowledge encapsulated in language models (representing biases induced by the target language). Our evaluation shows similar performance in quality with a state-of-the-art, extractive approach to headline generation, and significant improvements in quality over previously proposed solutions to abstractive headline generation.
  8. Roosa, M.: Sound and audio archives (2009) 0.03
    Score 0.031940535 = 0.2 (coord 1/5) × [discs 0.1597]
    
    Abstract
    This entry provides an overview of sound archives, including reasons why sound archives exist, their history and organization; types of sound recordings collected, methods of description, and access to and preservation of recorded sound materials. Recorded sound formats (e.g., cylinders, discs, long playing (LP) records, etc.) are covered in the context of how they have been (and are currently being) collected, described, preserved, and made available. Select projects and programs undertaken by regional, special, and national sound archives are covered. Professional associations that focus on sound archiving are described as are funding avenues for sound archives. Description is also included of work being carried out in the United States to modify copyright law to better enable sound archives to preserve their holdings for future generations of users.
  9. Egghe, L.; Rousseau, R.: ¬A measure for the cohesion of weighted networks (2003) 0.02
    Score 0.02393083 = 0.2 (coord 1/5) × [compact 0.1197]
    
    Abstract
    Measurement of the degree of interconnectedness in graph-like networks of hyperlinks or citations can indicate the existence of research fields and assist in comparative evaluation of research efforts. In this issue we begin with Egghe and Rousseau, who review compactness measures and investigate the compactness of a network as a weighted graph with dissimilarity values characterizing the arcs between nodes. They make use of a generalization of the Botafogo, Rivlin, Shneiderman (BRS) compaction measure, which treats the distance between unreachable nodes not as infinity but rather as the number of nodes in the network. The dissimilarity values are determined by summing the reciprocals of the weights of the arcs in the shortest chain between two nodes, where no weight is smaller than one. The BRS measure is then the maximum value for the sum of the dissimilarity measures less the actual sum, divided by the difference between the maximum and minimum. The Wiener index, the sum of all elements in the dissimilarity matrix divided by two, is then computed for Small's particle physics co-citation data, as are the BRS measure, the dissimilarity values, and the shortest paths. The compactness measure for the weighted network is smaller than for the unweighted one. When the bibliographic coupling network is utilized, it is shown to be less compact than the co-citation network, which indicates that the new measure produces results that conform to an obvious case.
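    For concreteness, a rough sketch of the compactness computation as described above: arc dissimilarity is the reciprocal of the arc weight, pairwise dissimilarity is the shortest-chain sum, and unreachable pairs count as n. The normalisation (max = (n² - n)·n, min = n² - n) follows the original BRS measure; Egghe and Rousseau's weighted generalisation may normalise differently, and the graph encoding here is illustrative:

      import itertools

      def brs_compactness(nodes, arcs):
          """Compactness of a weighted digraph, per the description above (sketch)."""
          n = len(nodes)
          inf = float("inf")
          d = {(i, j): 0.0 if i == j else inf for i in nodes for j in nodes}
          for (i, j), w in arcs.items():
              d[i, j] = min(d[i, j], 1.0 / w)   # arc dissimilarity; weights >= 1
          for k, i, j in itertools.product(nodes, repeat=3):  # Floyd-Warshall shortest chains
              if d[i, k] + d[k, j] < d[i, j]:
                  d[i, j] = d[i, k] + d[k, j]
          # unreachable pairs contribute n rather than infinity
          total = sum(n if d[i, j] == inf else d[i, j]
                      for i in nodes for j in nodes if i != j)
          max_total = (n * n - n) * n   # every pair unreachable
          min_total = n * n - n         # every pair at dissimilarity 1 (original BRS minimum)
          return (max_total - total) / (max_total - min_total)

      print(brs_compactness({"a", "b", "c"},
                            {("a", "b"): 2.0, ("b", "c"): 1.0, ("a", "c"): 4.0}))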
  10. Broughton, V.: Essential Library of Congress Subject Headings (2009) 0.02
    Score 0.02393083 = 0.2 (coord 1/5) × [compact 0.1197]
    
    Abstract
    LCSH are increasingly seen as 'the' English language controlled vocabulary, despite their lack of a theoretical foundation, and their evident US bias. In mapping exercises between national subject heading lists, and in exercises in digital resource organization and management, LCSH are often chosen because of the lack of any other widely accepted English language standard for subject cataloguing. It is therefore important that the basic nature of LCSH, their advantages, and their limitations, are well understood both by LIS practitioners and those in the wider information community. Information professionals who attended library school before 1995 - and many more recent library school graduates - are unlikely to have had a formal introduction to LCSH. Paraprofessionals who undertake cataloguing are similarly unlikely to have enjoyed an induction to the broad principles of LCSH. There is currently no compact guide to LCSH written from a UK viewpoint, and this eminently practical text fills that gap. It features topics including: background and history of LCSH; subject heading lists; structure and display in LCSH; form of entry; application of LCSH; document analysis; main headings; topical, geographical and free-floating sub-divisions; building compound headings; name headings; headings for literature, art, music, history and law; and, LCSH in the online environment. There is a strong emphasis throughout on worked examples and practical exercises in the application of the scheme, and a full glossary of terms is supplied. No prior knowledge or experience of subject cataloguing is assumed. This is an indispensable guide to LCSH for practitioners and students alike from a well-known and popular author.
  11. Na, S.-H.; Kang, I.-S.; Lee, J.-H.: Parsimonious translation models for information retrieval (2007) 0.02
    Score 0.02393083 = 0.2 (coord 1/5) × [compact 0.1197]
    
    Abstract
    In the KL divergence framework, the extended language modeling approach has a critical problem of estimating a query model, which is the probabilistic model that encodes the user's information need. For query expansion in initial retrieval, the translation model was proposed to incorporate term co-occurrence statistics. However, the translation model was difficult to apply because the term co-occurrence statistics must be constructed offline. Especially in a large collection, constructing such a large matrix of term co-occurrence statistics prohibitively increases time and space complexity. In addition, reliable retrieval performance cannot be guaranteed because the translation model may comprise noisy non-topical terms in documents. To resolve these problems, this paper investigates an effective method to construct co-occurrence statistics and eliminate noisy terms by employing a parsimonious translation model. The parsimonious translation model is a compact version of a translation model that can reduce the number of terms containing non-zero probabilities by eliminating non-topical terms in documents. Through experimentation on seven different test collections, we show that the query model estimated from the parsimonious translation model significantly outperforms not only the baseline language modeling, but also the non-parsimonious models.
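    As a toy illustration of the pruning idea (not the authors' estimation procedure, which fits the parsimonious model inside the KL-divergence framework), one can drop low-probability translation entries and renormalise; the threshold and the example distribution below are invented:

      def parsimonize(p_translate, threshold=0.01):
          """Drop translation probabilities p(t|s) below threshold, then renormalise (sketch)."""
          kept = {t: p for t, p in p_translate.items() if p >= threshold}
          z = sum(kept.values())
          return {t: p / z for t, p in kept.items()}

      # Toy p(t | s="retrieval"); mass concentrates on topical terms after pruning
      print(parsimonize({"search": 0.55, "ir": 0.30, "lookup": 0.12,
                         "the": 0.027, "banana": 0.003}))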
  12. Hoeber, O.; Yang, X.D.: HotMap : supporting visual exploration of Web search results (2009) 0.02
    Score 0.02393083 = 0.2 (coord 1/5) × [compact 0.1197]
    
    Abstract
    Although information retrieval techniques used by Web search engines have improved substantially over the years, the results of Web searches have continued to be represented in simple list-based formats. Although the list-based representation makes it easy to evaluate a single document for relevance, it does not support users in the broader tasks of manipulating or exploring the search results as they attempt to find a collection of relevant documents. HotMap is a meta-search system that provides a compact visual representation of Web search results at two levels of detail, and it supports interactive exploration via nested sorting of Web search results based on query term frequencies. An evaluation of the search results for a set of vague queries has shown that the re-sorted search results can provide a higher proportion of relevant documents among the top search results. User studies show an increase in speed and effectiveness and a reduction in missed documents when comparing HotMap to the list-based representation used by Google. Subjective measures were positive, and users showed a preference for the HotMap interface. These results provide evidence for the utility of next-generation Web search results interfaces that promote interactive search results exploration.
  13. Weinberg, B.H.: Book indexes in France : medieval specimens and modern practices (2000) 0.02
    Score 0.019206481 = 0.2 (coord 1/5) × 0.6667 (coord 2/3) × [29 0.0723 + 22 0.0717]
    
    Date
    20. 4.2002 19:29:54
    Source
    Indexer. 22(2000) no.1, S.2-13
  14. Mauer, P.: Embedded indexing : pros and cons for the indexer (2000) 0.02
    Score 0.019206481 = 0.2 (coord 1/5) × 0.6667 (coord 2/3) × [29 0.0723 + 22 0.0717]
    
    Date
    21. 4.2002 9:29:38
    Source
    Indexer. 22(2000) no.1, S.27-28
  15. Dominich, S.; Kiezer, T.: ¬A measure theoretic approach to information retrieval (2007) 0.02
    Score 0.019144665 = 0.2 (coord 1/5) × [compact 0.0957]
    
    Abstract
    The vector space model of information retrieval is one of the classical and widely applied retrieval models. Paradoxically, it has been characterized by a discrepancy between its formal framework and implementable form. The underlying concepts of the vector space model are mathematical terms: linear space, vector, and inner product. However, in the vector space model, the mathematical meaning of these concepts is not preserved. They are used as mere computational constructs or metaphors. Thus, the vector space model actually does not follow formally from the mathematical concepts on which it has been claimed to rest. This problem has been recognized for more than two decades, but no proper solution has emerged so far. The present article proposes a solution to this problem. First, the concept of retrieval is defined based on mathematical measure theory. Then, retrieval is particularized using fuzzy set theory. As a result, the retrieval function is conceived as the cardinality of the intersection of two fuzzy sets. This view makes it possible to build a connection to linear spaces. It is shown that the classical and the generalized vector space models, as well as the latent semantic indexing model, gain a correct formal background with which they are consistent. At the same time it becomes clear that the inner product is not a necessary ingredient of the vector space model, and hence of Information Retrieval (IR). The Principle of Object Invariance is introduced to handle this situation. Moreover, this view makes it possible to consistently formulate new retrieval methods: in linear space with general basis, entropy-based, and probability-based. It is also shown that Information Retrieval may be viewed as integral calculus, and thus gains a very compact and elegant mathematical formulation. Also, Information Retrieval may thus be conceived as an application of mathematical measure theory.
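    Read literally, the fuzzy-set formulation described here can be written as a worked equation; taking min as the intersection t-norm is an assumption on my part, since the abstract does not fix one:

      \varrho(q, d) = \bigl|\, Q \cap D \,\bigr| = \sum_{t \in T} \min\bigl(\mu_Q(t),\, \mu_D(t)\bigr)

    where Q and D are the fuzzy sets of terms representing the query and the document, with membership functions \mu_Q and \mu_D over the vocabulary T.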
  16. Dunlavy, D.M.; O'Leary, D.P.; Conroy, J.M.; Schlesinger, J.D.: QCS: A system for querying, clustering and summarizing documents (2007) 0.02
    Score 0.019144665 = 0.2 (coord 1/5) × [compact 0.0957]
    
    Abstract
    Information retrieval systems consist of many complicated components. Research and development of such systems is often hampered by the difficulty in evaluating how each particular component would behave across multiple systems. We present a novel integrated information retrieval system, the Query, Cluster, Summarize (QCS) system, which is portable, modular, and permits experimentation with different instantiations of each of the constituent text analysis components. Most importantly, the combination of the three types of methods in the QCS design improves retrievals by providing users more focused information organized by topic. We demonstrate the improved performance by a series of experiments using standard test sets from the Document Understanding Conferences (DUC) as measured by the best known automatic metric for summarization system evaluation, ROUGE. Although the DUC data and evaluations were originally designed to test multidocument summarization, we developed a framework to extend it to the task of evaluation for each of the three components: query, clustering, and summarization. Under this framework, we then demonstrate that the QCS system (end-to-end) achieves performance as good as or better than the best summarization engines. Given a query, QCS retrieves relevant documents, separates the retrieved documents into topic clusters, and creates a single summary for each cluster. In the current implementation, Latent Semantic Indexing is used for retrieval, generalized spherical k-means is used for the document clustering, and a method coupling sentence "trimming" and a hidden Markov model, followed by a pivoted QR decomposition, is used to create a single extract summary for each cluster. The user interface is designed to provide access to detailed information in a compact and useful format. Our system demonstrates the feasibility of assembling an effective IR system from existing software libraries, the usefulness of the modularity of the design, and the value of this particular combination of modules.
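    A very loose end-to-end sketch of the Query-Cluster-Summarize idea using off-the-shelf scikit-learn parts: TruncatedSVD stands in for Latent Semantic Indexing, k-means on L2-normalised vectors approximates spherical k-means, and picking each cluster's centroid-nearest document stands in for the HMM-plus-pivoted-QR extract summariser. None of this is the authors' implementation, and the example corpus is invented:

      import numpy as np
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.decomposition import TruncatedSVD
      from sklearn.preprocessing import normalize
      from sklearn.cluster import KMeans

      def qcs(query, docs, k=2, dims=2, top=4):
          vec = TfidfVectorizer().fit(docs + [query])
          lsi = TruncatedSVD(n_components=dims).fit(vec.transform(docs))
          X = normalize(lsi.transform(vec.transform(docs)))     # docs on the unit sphere
          q = normalize(lsi.transform(vec.transform([query])))
          hits = np.argsort(-(X @ q.T).ravel())[:top]           # Query: cosine ranking
          km = KMeans(n_clusters=k, n_init=10).fit(X[hits])     # Cluster
          for c in range(k):                                    # Summarize: exemplar pick
              members = hits[km.labels_ == c]
              best = members[np.argmax(X[members] @ km.cluster_centers_[c])]
              print(f"cluster {c}: exemplar -> {docs[best]}")

      qcs("cluster documents with k-means",
          ["latent semantic indexing maps terms to concepts",
           "k-means clustering groups similar documents",
           "spherical k-means operates on unit-length vectors",
           "hidden markov models label sentence sequences",
           "pivoted qr factorization helps select summary sentences"], k=2)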
  17. Baker, T.; Dekkers, M.: Identifying metadata elements with URIs : The CORES resolution (2003) 0.02
    Score 0.019144665 = 0.2 (coord 1/5) × [compact 0.0957]
    
    Abstract
    On 18 November 2002, at a meeting organised by the CORES Project (Information Society Technologies Programme, European Union), several organisations regarded as maintenance authorities for metadata elements achieved consensus on a resolution to assign Uniform Resource Identifiers (URIs) to metadata elements as a useful first step towards the development of mapping infrastructures and interoperability services. The signatories of the CORES Resolution agreed to promote this consensus in their communities and beyond and to implement an action plan in the following six months. Six months having passed, the maintainers of GILS, ONIX, MARC 21, CERIF, DOI, IEEE/LOM, and Dublin Core report on their implementations of the resolution and highlight issues of relevance to establishing good-practice conventions for declaring, identifying, and maintaining metadata elements more generally. In June 2003, the resolution was also endorsed by the maintainers of UNIMARC. The "Resolution on Metadata Element Identifiers", or CORES Resolution, is an agreement among the maintenance organisations for several major metadata standards - GILS, ONIX, MARC 21, UNIMARC, CERIF, DOI®, IEEE/LOM, and Dublin Core - to identify their metadata elements using Uniform Resource Identifiers (URIs). The Uniform Resource Identifier, defined in the IETF RFC 2396 as "a compact string of characters for identifying an abstract or physical resource", has been promoted for use as a universal form of identification by the World Wide Web Consortium. The CORES Resolution, formulated at a meeting organised by the European project CORES in November 2002, included a commitment to publicise the consensus statement to a wider audience of metadata standards initiatives and to implement key points of the agreement within the following six months - specifically, to define URI assignment mechanisms, assign URIs to elements, and formulate policies for the persistence of those URIs. This article marks the passage of six months by reporting on progress made in implementing this common action plan. After presenting the text of the CORES Resolution and its three "clarifications", the article summarises the position of each signatory organisation towards assigning URIs to its metadata elements, noting any practical or strategic problems that may have emerged. These progress reports were based on input from Thomas Baker, José Borbinha, Eliot Christian, Erik Duval, Keith Jeffery, Rebecca Guenther, and Norman Paskin. The article closes with a few general observations about these first steps towards the clarification of shared conventions for the identification of metadata elements and perhaps, one can hope, towards the ultimate goal of improving interoperability among a diversity of metadata communities.
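    For concreteness, element identification of the kind the resolution calls for looks like this; the Dublin Core namespace and element URIs are the published ones, while the lookup table itself is only an illustrative structure:

      # Published Dublin Core element URIs; each signatory standard defines its own namespace
      DC = "http://purl.org/dc/elements/1.1/"
      element_uri = {name: DC + name for name in ("title", "creator", "date", "subject")}

      print(element_uri["title"])   # http://purl.org/dc/elements/1.1/title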
  18. Hippel, E. von: Democratizing innovation (2005) 0.02
    Score 0.019144665 = 0.2 (coord 1/5) × [compact 0.0957]
    
    Footnote
    Rev. in: JASIST 57(2006) no.12, S.1711-1712 (J. Cullen): "Compact, concise texts usually prove more difficult to review than large, detailed tomes. How does the reviewer summarize, analyze, and critique when the author has successfully achieved this on behalf of the reader? Writers such as Eric von Hippel communicate their authority in accessible terms and with such an amazing economy of language that it is difficult not to be impressed with their arguments. The appeal of concepts such as lead-user innovation comes at a time when the issue of innovation is front and center of enterprise policy in many developed economies. In my country, Ireland, we have settled into a period of largely positive growth forecasts following a brief downturn in the wake of our "Celtic Tiger" boom years. As the mist of the post-boom negatives (high inflation, transport congestion, etc.) clears, like other Western nations we increasingly observe the outsourcing of traditional manufacturing industries to lower-cost economies. We are left with little doubt that sustaining our prosperity requires developing a culture of business creativity and innovation. The emergence of potent social and business forces underlines how customization of established products and technologies is something that has long appealed to creative experimenters. Von Hippel has been something of an advocate for the role of user as innovator for some time, a role which challenges the paradigm of manufacturers innovating and users consuming. From the outset, he acknowledges the fears that open source and customer-led innovation may raise among firms who invest heavily in research and development, but provides a compelling argument for its benefits. . . . Finally, Von Hippel "puts his money where his mouth is," so to speak, and makes an electronic version of Democratizing Innovation available for download under a Creative Commons license."
  19. Ruskai, M.B.: Response to Graham : the quantum view (2001) 0.02
    Score 0.016462699 = 0.2 (coord 1/5) × 0.6667 (coord 2/3) × [29 0.0620 + 22 0.0615]
    
    Footnote
    Reply to: Graham, L.R.: Do mathematical equations display social attributes? In: Mathematical intelligencer 22(2000) no.3, S.31-36
    Source
    Mathematical intelligencer. 23(2001) no.1, S.23-29
  20. Lim, E.: Southeast Asian subject gateways : an examination of their classification practices (2000) 0.02
    Score 0.016462699 = 0.2 (coord 1/5) × 0.6667 (coord 2/3) × [29 0.0620 + 22 0.0615]
    
    Date
    22. 6.2002 19:42:47
    Source
    International cataloguing and bibliographic control. 29(2000) no.3, S.45-48

Types

  • a 1223
  • m 105
  • el 85
  • s 54
  • b 24
  • x 3
  • i 2
  • r 2
  • n 1
  • p 1
