Search (7364 results, page 1 of 369)

  • Filter: year_i:[2000 TO 2010}
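The relevance figure printed after each entry below (0.38, 0.31, ...) is produced by Lucene's ClassicSimilarity, a TF-IDF scheme. A minimal sketch of one term's contribution (ignoring coord and boosts), using idf/tf/fieldNorm constants of the kind a Lucene explain tree reports; the concrete values below are taken from the first entry's score breakdown:

```python
import math

# Lucene ClassicSimilarity building blocks (TF-IDF variant).
# Illustrative constants from one explain tree: a term with docFreq=24
# in an index of maxDocs=44218 documents, occurring freq=2 times in a
# field with fieldNorm=0.046875, under queryNorm=0.041294612.
def idf(doc_freq, max_docs):
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def tf(freq):
    return math.sqrt(freq)

doc_freq, max_docs = 24, 44218
freq, field_norm, query_norm = 2.0, 0.046875, 0.041294612

field_weight = tf(freq) * idf(doc_freq, max_docs) * field_norm
query_weight = idf(doc_freq, max_docs) * query_norm
score = query_weight * field_weight  # one term's contribution to the total

print(round(idf(doc_freq, max_docs), 4))  # ≈ 8.478
print(round(field_weight, 4))             # ≈ 0.562
print(round(score, 4))                    # ≈ 0.1968
```

The per-term scores are then summed and scaled by a coordination factor (e.g. `0.75 = coord(6/8)`: six of eight query terms matched) to give the figure shown in the listing.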
  1. Mas, S.; Marleau, Y.: Proposition of a faceted classification model to support corporate information organization and digital records management (2009) 0.38
    Abstract
    The employees of an organization often use a personal hierarchical classification scheme to organize digital documents that are stored on their own workstations. As this may make it hard for other employees to retrieve these documents, there is a risk that the organization will lose track of needed documentation. Furthermore, the inherent boundaries of such a hierarchical structure require making arbitrary decisions about which specific criteria the classification will be based on (for instance, the administrative activity or the document type, although a document can have several attributes and require classification in several classes). A faceted classification model to support corporate information organization is proposed. Partially based on Ranganathan's facets theory, this model aims not only to standardize the organization of digital documents, but also to simplify the management of a document throughout its life cycle for both individuals and organizations, while ensuring compliance with regulatory and policy requirements.
    Footnote
    Vgl.: http://ieeexplore.ieee.org/Xplore/login.jsp?reload=true&url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel5%2F4755313%2F4755314%2F04755480.pdf%3Farnumber%3D4755480&authDecision=-203.
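The contrast the abstract draws, between one privileged hierarchy path and several orthogonal facets, can be sketched as follows (the facet names and sample records are hypothetical illustrations, not the paper's model):

```python
# In a strict hierarchy a document gets exactly one path, chosen by one
# privileged criterion (here: the administrative activity):
hierarchy_path = "Administration/Procurement/2009/contract-0423.pdf"

# Under a faceted model the same document carries several independent
# attributes; none is privileged. (Facet names are invented examples.)
corpus = [
    {"id": "contract-0423", "activity": "Procurement",
     "doc_type": "Contract", "year": 2009, "status": "Active"},
    {"id": "memo-0117", "activity": "HR",
     "doc_type": "Memo", "year": 2009, "status": "Closed"},
]

def select(docs, **facets):
    """Retrieve by any combination of facet values, in any order."""
    return [d for d in docs
            if all(d.get(k) == v for k, v in facets.items())]

print([d["id"] for d in select(corpus, year=2009, doc_type="Contract")])
# → ['contract-0423']
```

Because every facet is queryable, no single filing criterion has to be fixed in advance, which is the arbitrariness the abstract objects to in purely hierarchical schemes.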
  2. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.38
    Abstract
    Document representations for text classification are typically based on the classical Bag-Of-Words paradigm. This approach comes with deficiencies that motivate the integration of features on a higher semantic level than single words. In this paper we propose an enhancement of the classical document representation through concepts extracted from background knowledge. Boosting is used for actual classification. Experimental evaluations on two well-known text corpora support our approach through consistent improvement of the results.
    Content
    Vgl.: http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CEAQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.91.4940%26rep%3Drep1%26type%3Dpdf&ei=dOXrUMeIDYHDtQahsIGACg&usg=AFQjCNHFWVh6gNPvnOrOS9R3rkrXCNVD-A&sig2=5I2F5evRfMnsttSgFF9g7Q&bvm=bv.1357316858,d.Yms.
    Date
    8. 1.2013 10:22:32
    Source
    Proceedings of the 4th IEEE International Conference on Data Mining (ICDM 2004), 1-4 November 2004, Brighton, UK
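The approach the abstract outlines, boosting weak learners over term and concept features, can be sketched as a toy AdaBoost over single-term decision stumps. The corpus, the `CONCEPT:` pseudo-terms standing in for background-knowledge concepts, and the stump learner are invented illustrations, not the authors' actual setup:

```python
import math

# Toy corpus: label +1 = sports, -1 = politics. Concept features are
# appended as extra pseudo-terms alongside the plain bag-of-words terms.
docs = [
    ({"goal", "match", "team", "CONCEPT:sport"}, +1),
    ({"ball", "match", "win", "CONCEPT:sport"}, +1),
    ({"election", "vote", "party", "CONCEPT:politics"}, -1),
    ({"budget", "vote", "tax", "CONCEPT:politics"}, -1),
]
vocab = sorted(set().union(*(d for d, _ in docs)))

def stump_predict(word, polarity, doc):
    # Weak learner: presence of a single term votes +1 or -1.
    return polarity if word in doc else -polarity

def boost(docs, rounds=5):
    n = len(docs)
    w = [1.0 / n] * n            # example weights
    ensemble = []                # (alpha, word, polarity) triples
    for _ in range(rounds):
        best = None
        for word in vocab:       # pick the stump with least weighted error
            for pol in (+1, -1):
                err = sum(wi for wi, (d, y) in zip(w, docs)
                          if stump_predict(word, pol, d) != y)
                if best is None or err < best[0]:
                    best = (err, word, pol)
        err, word, pol = best
        err = min(max(err, 1e-9), 1 - 1e-9)   # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, word, pol))
        # Re-weight: increase the weight of misclassified examples.
        w = [wi * math.exp(-alpha * y * stump_predict(word, pol, d))
             for wi, (d, y) in zip(w, docs)]
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def classify(ensemble, doc):
    score = sum(a * stump_predict(word, pol, doc)
                for a, word, pol in ensemble)
    return +1 if score >= 0 else -1

model = boost(docs)
print(classify(model, {"match", "goal", "CONCEPT:sport"}))  # → 1
```

On this tiny corpus a single concept stump already separates the classes, which mirrors the paper's point: concept-level features can carry more signal than individual words.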
  3. Buxton, A.; Hopkinson, A.: ¬The CDS/ISIS for Windows handbook (2001) 0.31
    COMPASS
    Information retrieval / Use of / On-line computers
    LCSH
    ISIS (Information retrieval system) / Handbooks, manuals, etc.
    Information storage and retrieval systems / Handbooks, manuals, etc.
    Subject
    ISIS (Information retrieval system) / Handbooks, manuals, etc.
    Information storage and retrieval systems / Handbooks, manuals, etc.
    Information retrieval / Use of / On-line computers
  4. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.29
    Abstract
    By the explosion of possibilities for a ubiquitous content production, the information overload problem reaches the level of complexity which cannot be managed by traditional modelling approaches anymore. Due to their pure syntactical nature traditional information retrieval approaches did not succeed in treating content itself (i.e. its meaning, and not its representation). This leads to a very low usefulness of the results of a retrieval process for a user's task at hand. In the last ten years ontologies have been emerged from an interesting conceptualisation paradigm to a very promising (semantic) modelling technology, especially in the context of the Semantic Web. From the information retrieval point of view, ontologies enable a machine-understandable form of content description, such that the retrieval process can be driven by the meaning of the content. However, the very ambiguous nature of the retrieval process in which a user, due to the unfamiliarity with the underlying repository and/or query syntax, just approximates his information need in a query, implies a necessity to include the user in the retrieval process more actively in order to close the gap between the meaning of the content and the meaning of a user's query (i.e. his information need). This thesis lays foundation for such an ontology-based interactive retrieval process, in which the retrieval system interacts with a user in order to conceptually interpret the meaning of his query, whereas the underlying domain ontology drives the conceptualisation process. In that way the retrieval process evolves from a query evaluation process into a highly interactive cooperation between a user and the retrieval system, in which the system tries to anticipate the user's information need and to deliver the relevant content proactively. 
    Moreover, the notion of content relevance for a user's query evolves from a content dependent artefact to the multidimensional context-dependent structure, strongly influenced by the user's preferences. This cooperation process is realized as the so-called Librarian Agent Query Refinement Process. In order to clarify the impact of an ontology on the retrieval process (regarding its complexity and quality), a set of methods and tools for different levels of content and query formalisation is developed, ranging from pure ontology-based inferencing to keyword-based querying in which semantics automatically emerges from the results. Our evaluation studies have shown that the possibilities to conceptualize a user's information need in the right manner and to interpret the retrieval results accordingly are key issues for realizing much more meaningful information retrieval systems.
    Content
    Vgl.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
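A core mechanism in the thesis's approach, using a domain ontology to reinterpret and refine a user's query, can be sketched as simple ontology-driven query expansion. The toy ontology, the `expand` function, and the one-hop strategy are illustrative assumptions, not the Librarian Agent refinement process described above:

```python
# Toy domain ontology: term -> related concepts (synonyms/narrower terms).
ontology = {
    "vehicle": ["car", "truck", "bicycle"],
    "car": ["sedan", "coupe"],
}

def expand(query_terms, ontology, depth=1):
    """Grow the query with ontology neighbours, up to `depth` hops."""
    expanded = set(query_terms)
    frontier = set(query_terms)
    for _ in range(depth):
        frontier = {n for t in frontier for n in ontology.get(t, [])}
        expanded |= frontier
    return expanded

docs = {
    "d1": {"sedan", "review"},
    "d2": {"train", "timetable"},
}

query = expand({"car"}, ontology)   # {"car", "sedan", "coupe"}
hits = [d for d, terms in docs.items() if terms & query]
print(hits)  # → ['d1']
```

The literal query "car" matches nothing here; only the conceptual interpretation via the ontology retrieves the sedan review, which is the gap between query wording and information need the abstract points at.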
  5. Vetere, G.; Lenzerini, M.: Models for semantic interoperability in service-oriented architectures (2005) 0.28
    Abstract
    Although service-oriented architectures go a long way toward providing interoperability in distributed, heterogeneous environments, managing semantic differences in such environments remains a challenge. We give an overview of the issue of semantic interoperability (integration), provide a semantic characterization of services, and discuss the role of ontologies. Then we analyze four basic models of semantic interoperability that differ in respect to their mapping between service descriptions and ontologies and in respect to where the evaluation of the integration logic is performed. We also provide some guidelines for selecting one of the possible interoperability models.
    Content
    Vgl.: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=5386707&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D5386707.
  6. Schrodt, R.: Tiefen und Untiefen im wissenschaftlichen Sprachgebrauch (2008) 0.23
    Content
    Vgl. auch: https://studylibde.com/doc/13053640/richard-schrodt. Vgl. auch: http://www.univie.ac.at/Germanistik/schrodt/vorlesung/wissenschaftssprache.doc.
  7. Donsbach, W.: Wahrheit in den Medien : über den Sinn eines methodischen Objektivitätsbegriffes (2001) 0.14
    Source
    Politische Meinung. 381(2001) Nr.1, S.65-74 [https://www.dgfe.de/fileadmin/OrdnerRedakteure/Sektionen/Sek02_AEW/KWF/Publikationen_Reihe_1989-2003/Band_17/Bd_17_1994_355-406_A.pdf]
  8. Greiff, W.R.: ¬The use of exploratory data analysis in information retrieval research (2000) 0.11
    Abstract
    We report on a line of work in which techniques of Exploratory Data Analysis (EDA) have been used as a vehicle for better understanding of the issues confronting the researcher in information retrieval (IR). EDA is used for visualizing and studying data for the purpose of uncovering statistical regularities that might not be apparent otherwise. The analysis is carried out in terms of the formal notion of Weight of Evidence (WOE). As a result of this analysis, a novel theory in support of the use of inverse document frequency (idf) for document ranking is presented, and experimental evidence is given in favor of a modification of the classical idf formula motivated by the analysis. This approach is then extended to other sources of evidence commonly used for ranking in information retrieval systems.
    Series
    The Kluwer international series on information retrieval; 7
    Source
    Advances in information retrieval: Recent research from the Center for Intelligent Information Retrieval. Ed.: W.B. Croft
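Weight of Evidence, the formal notion the abstract's analysis is couched in, is in its generic form a log-odds measure, WOE(H : E) = log(P(E|H) / P(E|not H)). A minimal sketch with invented counts follows; neither the data nor the modified idf formula from the paper is reproduced here:

```python
import math

def woe(p_e_given_h, p_e_given_not_h):
    """Generic Weight of Evidence: log-odds that evidence E favours H."""
    return math.log(p_e_given_h / p_e_given_not_h)

# Invented counts: how often a term occurs in relevant vs. non-relevant
# documents of a collection (purely illustrative numbers).
relevant, non_relevant = 100, 10000
term_in_relevant, term_in_non_relevant = 40, 400

w = woe(term_in_relevant / relevant,
        term_in_non_relevant / non_relevant)
print(round(w, 3))  # ≈ 2.303: the term is strong evidence of relevance

# Classical idf over the same collection, for comparison:
idf = math.log((relevant + non_relevant)
               / (term_in_relevant + term_in_non_relevant))
print(round(idf, 3))
```

A term concentrated in the relevant set gets a large positive WOE even when its plain document frequency (and hence idf) is unremarkable, which is the kind of regularity EDA over retrieval data can surface.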
  9. Smith, A.G.: Search features of digital libraries (2000) 0.11
    
    Abstract
    Traditional on-line search services such as Dialog, DataStar and Lexis provide a wide range of search features (Boolean and proximity operators, truncation, etc.). This paper discusses the use of these features for effective searching, and argues that they are still required, regardless of advances in search engine technology. The literature on on-line searching is reviewed, identifying features that searchers find desirable for effective searching. A selective survey of current digital libraries available on the Web was undertaken to identify which search features are present. The survey indicates that current digital libraries do not implement a wide range of search features: under half of the examples included controlled vocabulary, under half had proximity searching, only one enabled browsing of term indexes, and none enabled searchers to refine an initial search. Suggestions are made for enhancing the search effectiveness of digital libraries, for instance by providing a full range of search operators, enabling browsing of search terms, enhancing records with controlled vocabulary, and enabling the refinement of initial searches.
    Content
    Contains a compilation of the tools and aids of information retrieval.
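    The core search features surveyed in this entry (Boolean operators, right truncation) can be sketched over a toy inverted index. All names below are illustrative, not taken from any digital-library API:

```python
from collections import defaultdict

def build_index(docs):
    """Inverted index: term -> set of ids of documents containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search_and(index, *terms):
    """Boolean AND: documents containing every query term."""
    return set.intersection(*(index[t] for t in terms))

def search_truncated(index, prefix):
    """Right truncation ('librar*'): union over all indexed terms with the prefix."""
    hits = set()
    for term, ids in index.items():
        if term.startswith(prefix):
            hits |= ids
    return hits
```

    Proximity operators would additionally require storing term positions per document, which is why the paper treats them as a distinct, less commonly implemented feature.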
  10. MacFarlane, A.; Robertson, S.E.; McCann, J.A.: Parallel computing for passage retrieval (2004) 0.11
    
    Abstract
    In this paper, methods both for speeding up passage processing and for examining more passages using parallel computers are explored. The number of passages processed is varied in order to examine the effect on retrieval effectiveness and efficiency. The particular algorithm applied has previously been used to good effect in Okapi experiments at TREC. This algorithm and the mechanism for applying parallel computing to speed up processing are described.
    Date
    20. 1.2007 18:30:22
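    The passage-level approach described in the abstract can be sketched generically: split a document into overlapping windows, score the windows in parallel, and let the best window stand for the document. This is not the Okapi algorithm from the TREC experiments; a thread pool merely stands in for the parallel hardware, and the window sizes are arbitrary:

```python
from concurrent.futures import ThreadPoolExecutor

def passages(tokens, size=50, overlap=25):
    """Fixed-size overlapping windows over a document's token stream."""
    step = size - overlap
    return [tokens[i:i + size] for i in range(0, max(1, len(tokens)), step)]

def passage_score(passage, query_terms):
    """Crude passage score: total query-term occurrences in the window."""
    return sum(passage.count(t) for t in query_terms)

def best_passage_score(tokens, query_terms, workers=4):
    """Score all passages in parallel; the document scores as its best passage."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        scores = pool.map(lambda p: passage_score(p, query_terms), passages(tokens))
        return max(scores)
```

    Varying `size` and `overlap` changes how many passages must be scored per document, which is exactly the effectiveness/efficiency trade-off the paper examines.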
  11. Whitney, C.; Schiff, L.: ¬The Melvyl Recommender Project : developing library recommendation services (2006) 0.09
    
    Abstract
    Popular commercial on-line services such as Google, eBay, Amazon, and Netflix have evolved quickly over the last decade to help people find what they want, developing information retrieval strategies such as usefully ranked results, spelling correction, and recommender systems. Online library catalogs (OPACs), in contrast, have changed little and are notoriously difficult for patrons to use (University of California Libraries, 2005). Over the past year (June 2005 to the present), the Melvyl Recommender Project (California Digital Library, 2005) has been exploring the methods and feasibility of closing the gap between the features that library patrons want and have come to expect from information retrieval systems and what libraries are currently equipped to deliver. The project team conducted exploratory work in five topic areas: relevance ranking, auto-correction, use of a text-based discovery system, user interface strategies, and recommending. This article focuses specifically on the recommending portion of the project and on potential extensions to that work.
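    A minimal item-to-item co-occurrence recommender in the spirit of this project can be sketched as follows. The assumed input (per-patron checkout histories) and the function name are illustrative; this is not the Melvyl implementation:

```python
from collections import Counter

def recommend(histories, target, top_n=3):
    """Recommend the items most often borrowed together with the target item.

    histories: iterable of per-patron item collections (e.g. checkout lists).
    """
    together = Counter()
    for items in histories:
        if target in items:
            for item in items:
                if item != target:
                    together[item] += 1
    return [item for item, _ in together.most_common(top_n)]
```

    Commercial services layer much more on top (normalization, ratings, popularity damping), but co-occurrence counting is the usual starting point.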
  12. Herrero-Solana, V.; Moya Anegón, F. de: Graphical Table of Contents (GTOC) for library collections : the application of UDC codes for the subject maps (2003) 0.09
    
    Abstract
    The representation of information content by graphical maps is an extended ongoing research topic. In this paper we introduce the application of UDC codes for the development of subject maps. We use the following graphic representation methodologies: 1) multidimensional scaling (MDS), 2) cluster analysis, 3) neural networks (Self-Organizing Map, SOM). Finally, we draw conclusions about the viability of each kind of map. 1. Introduction Advanced techniques for Information Retrieval (IR) currently make up one of the most active areas of research in the field of library and information science. New models representing document content are replacing the classic systems in which the search terms supplied by the user were compared against the indexing terms existing in the inverted files of a database. One of the topics most often studied in recent years is bibliographic browsing, a good complement to querying strategies. Since the 80s, many authors have treated this topic. For example, Ellis establishes that browsing is based on three different types of tasks: identification, familiarization and differentiation (Ellis, 1989). On the other hand, Cove indicates three different browsing types: search browsing, general purpose browsing and serendipity browsing (Cove, 1988). Marcia Bates presents six different types (Bates, 1989), although the classification of Bawden is the one that really interests us: 1) similarity comparison, 2) structure driven, 3) global vision (Bawden, 1993). Global vision browsing implies the use of graphic representations, which we will call map displays, that allow the user to get a global idea of the nature and structure of the information in the database. In the 90s, several authors worked on this research line, developing different types of maps.
One of the most active was Xia Lin, who introduced the concept of the Graphical Table of Contents (GTOC), comparing the maps to true tables of contents based on graphic representations (Lin 1996). Lin applied the SOM algorithm to his own personal bibliography, analyzed in terms of the words in the title and abstract fields, and represented it in a two-dimensional map (Lin 1997). Later on, Lin applied this type of map to create website GTOCs through a Java application.
    Date
    12. 9.2004 14:31:22
    Source
    Challenges in knowledge representation and organization for the 21st century: Integration of knowledge across boundaries. Proceedings of the 7th ISKO International Conference Granada, Spain, July 10-13, 2002. Ed.: M. López-Huertas
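    Of the three mapping techniques named in the abstract, the SOM update rule is the easiest to sketch. The grid size, learning rate, and epoch count below are arbitrary illustrative choices, and a 1-D grid stands in for the 2-D maps used for subject displays:

```python
import math
import random

def train_som(vectors, grid_size=4, epochs=100, lr0=0.5, seed=0):
    """Train a tiny 1-D self-organizing map over feature vectors."""
    rng = random.Random(seed)
    dim = len(vectors[0])
    # grid units start as random weight vectors in [0, 1]^dim
    units = [[rng.uniform(0.0, 1.0) for _ in range(dim)] for _ in range(grid_size)]
    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)  # learning rate decays toward 0
        radius = max(1.0, (grid_size / 2.0) * (1.0 - epoch / epochs))
        for v in vectors:
            # best-matching unit: the grid cell closest to the input vector
            bmu = min(range(grid_size),
                      key=lambda i: sum((units[i][d] - v[d]) ** 2 for d in range(dim)))
            for i in range(grid_size):
                # the winner's neighbours move too, with Gaussian falloff
                influence = math.exp(-((i - bmu) ** 2) / (2.0 * radius ** 2))
                for d in range(dim):
                    units[i][d] += lr * influence * (v[d] - units[i][d])
    return units

def map_position(units, v):
    """Grid cell a vector falls into on the trained map."""
    dim = len(v)
    return min(range(len(units)),
               key=lambda i: sum((units[i][d] - v[d]) ** 2 for d in range(dim)))
```

    In the paper's setting the input vectors would be document (or class) feature vectors derived from UDC codes, and nearby grid cells end up holding related subjects.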
  13. Beccaria, M.; Scott, D.: Fac-Back-OPAC : an open source interface to your library system (2007) 0.09
    
    Abstract
    Fac-Back-OPAC is a faceted backup OPAC. This advanced catalog offers features that compare favorably with the traditional catalogs of today's library systems. Fac-Back-OPAC represents the convergence of two prominent trends in library tools: the decoupling of discovery tools from the traditional integrated library system, and the use of readily available open source components to rapidly produce leading-edge technology for meeting patron and library needs. Built on code originally developed by Casey Durfee in February 2007, Fac-Back-OPAC is available at no cost under an open source license to any library that wants to offer an advanced search interface or a backup catalog for its patrons.
    Date
    17. 8.2008 11:22:47
    Source
    Computers in libraries. 27(2007) no.9, S.6-
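    Faceted navigation of the kind Fac-Back-OPAC provides boils down to two operations over the current result set: counting records under each facet value, and filtering on a chosen value. A generic sketch (the record shape and field names are made up for illustration):

```python
from collections import Counter

def facet_counts(records, field):
    """Count how many records fall under each value of one facet field."""
    counts = Counter()
    for record in records:
        for value in record.get(field, []):
            counts[value] += 1
    return counts

def narrow(records, field, value):
    """Drill down: keep only records carrying the chosen facet value."""
    return [r for r in records if value in r.get(field, [])]
```

    Repeated calls to `narrow` followed by `facet_counts` on the narrowed set reproduce the familiar drill-down interaction of a faceted catalog.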
  14. Ferris, A.M.: If you buy it, will they use it? : a case study on the use of Classification web (2006) 0.09
    
    Abstract
    This paper presents a study conducted at the University of Colorado at Boulder (CU-Boulder) to assess the extent to which its catalogers were using Classification Web (Class Web), the subscription-based, online cataloging documentation resource provided by the Library of Congress. In addition, this paper will explore assumptions made by management regarding CU-Boulder catalogers' use of the product, possible reasons for the lower-than-expected use, and recommendations for promoting a more efficient and cost-effective use of Class Web at other institutions similar to CU-Boulder.
    Date
    10. 9.2000 17:38:22
    Theme
    Klassifikationssysteme im Online-Retrieval
  15. Chylkowska, E.: Implementation of information exchange : online dictionaries (2005) 0.09
    
    Abstract
    We live in a society in which using the Internet is a part of everyday life. People use the Internet at schools and universities, and at work in companies small and large. The Web offers a huge amount of information from every possible field of knowledge, and one of the problems one can face when searching the Web is that this information may be written in many different languages that one does not understand. That is why web site designers came up with the idea of creating on-line dictionaries to make surfing the Web easier. The most popular are bilingual dictionaries (in Poland the best known are LING.pl, LEKSYKA.pl, and Dict.pl), but one can also find multilingual ones (Logos.com, Lexicool.com). Nowadays, as the use of the Internet in education becomes more and more popular, on-line dictionaries are the best supplement to good-quality work. The purpose of this paper is to present, compare and recommend the best (from the author's point of view) multilingual dictionaries that can be found on the Internet and that can serve educational purposes well.
    Date
    22. 7.2009 11:05:56
    Source
    Librarianship in the information age: Proceedings of the 13th BOBCATSSS Symposium, 31 January - 2 February 2005 in Budapest, Hungary. Eds.: Marte Langeland u.a
  16. Johnson, E.H.: Objects for distributed heterogeneous information retrieval (2000) 0.09
    
    Abstract
    The success of the World Wide Web shows that we can access, search, and retrieve information from globally distributed databases. If a database, such as a library catalog, has some sort of Web-based front end, we can type its URL into a Web browser and use its HTML-based forms to search for items in that database. Depending on how well the query conforms to the database content, how the search engine interprets the query, and how the server formats the results into HTML, we might actually find something usable. While the first two issues depend on ourselves and the server, on the Web the latter falls to the mercy of HTML, which we all know as a great destroyer of information because it codes for display but not for content description. When looking at an HTML-formatted display, we must depend on our own interpretation to recognize such entities as author names, titles, and subject identifiers. The Web browser can do nothing but display the information. If we want some other view of the result, such as sorting the records by date (provided it offers such an option to begin with), the server must do it. This makes poor use of the computing power we have at the desktop (or even laptop), which, unless it involves retrieving more records, could easily do the result set manipulation that we currently send back to the server. Despite having personal computers with immense computational power, as far as information retrieval goes, we still essentially use them as dumb terminals.
    Date
    22. 9.1997 19:16:05
    Imprint
    Urbana-Champaign, IL : Illinois University at Urbana-Champaign, Graduate School of Library and Information Science
    Source
    Saving the time of the library user through subject access innovation: Papers in honor of Pauline Atherton Cochrane. Ed.: W.J. Wheeler
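Johnson's complaint about desktops used as dumb terminals can be made concrete with a small sketch: once a result set has arrived at the client, an operation such as re-sorting by date needs no further server round trip. The record fields and titles below are hypothetical, invented purely for illustration.

```python
from datetime import date

# Hypothetical result set already retrieved from a catalog server;
# the field names and titles are illustrative, not from any real system.
records = [
    {"title": "Objects for distributed IR", "date": date(2000, 1, 15)},
    {"title": "Dictionary services", "date": date(2004, 8, 14)},
    {"title": "Discourse comprehension", "date": date(2000, 11, 1)},
]

# Re-sort the result set locally, newest first, instead of sending
# the whole request back to the server.
by_date = sorted(records, key=lambda r: r["date"], reverse=True)
print([r["title"] for r in by_date])
```

Any further client-side view (filtering by field, regrouping) works the same way on the records already in hand.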
  17. Talja, S.: ¬The social and discursive construction of computing skills (2005) 0.08
    
    Abstract
    In this article a social constructionist approach to information technology (IT) literacy is introduced. This approach contributes to the literature on IT literacy by introducing the concept of IT self as a description of the momentary, context-dependent, and multilayered nature of interpretations of IT competencies. In the research literature, IT literacy is often defined as sets of basic skills to be learned, and competencies to be demonstrated. In line with this approach, research on IT competencies conventionally develops models for explaining user acceptance, and for measuring computer-related attitudes and skills. The assumption is that computer-related attitudes and self-efficacy impact IT adoption and success in computer use. Computer self-efficacy measures are, however, often based on self-assessments that measure interpretations of skills rather than performance in practice. An analysis of empirical interview data in which academic researchers discuss their relationships with computers and IT competence shows how a self-assessment such as "computer anxiety" presented in one discussion context can in another discussion context be consigned to the past in favor of a different and more positive version. Here it is argued that descriptions of IT competencies and computer-related attitudes are dialogic social constructs and closely tied to more general implicit understandings of the nature of technical artifacts and technical knowledge. These implicit theories and assumptions are rarely taken under scrutiny in discussions of IT literacy, yet they have profound implications for the aims and methods of teaching computer skills.
    Source
    Journal of the American Society for Information Science and Technology. 56(2005) no.1, S.13-22
  18. Doszkocs, T.E.; Zamora, A.: Dictionary services and spelling aids for Web searching (2004) 0.08
    
    Abstract
    The Specialized Information Services Division (SIS) of the National Library of Medicine (NLM) provides Web access to more than a dozen scientific databases on toxicology and the environment on TOXNET. Search queries on TOXNET often include misspelled or variant English words, medical and scientific jargon, and chemical names. Following the example of search engines like Google and ClinicalTrials.gov, we set out to develop a spelling "suggestion" system for increased recall and precision in TOXNET searching. This paper describes the development of dictionary technology that can be used in a variety of applications such as orthographic verification, writing aid, natural language processing, and information storage and retrieval. The design of the technology allows building complex applications using the components developed in the earlier phases of the work in a modular fashion, without extensive rewriting of computer code. Since many of the potential applications envisioned for this work have online or web-based interfaces, the dictionaries and other computer components must have fast response, and must be adaptable to open-ended database vocabularies, including chemical nomenclature. The dictionary vocabulary for this work was derived from SIS and other databases and specialized resources, such as NLM's Unified Medical Language System (UMLS). The resulting technology, A-Z Dictionary (AZdict), has three major constituents: 1) the vocabulary list, 2) the word attributes that define part of speech and morphological relationships between words in the list, and 3) a set of programs that implements the retrieval of words and their attributes, and determines similarity between words (ChemSpell). These three components can be used in various applications such as spelling verification, spelling aid, part-of-speech tagging, paraphrasing, and many other natural language processing functions.
    Date
    14. 8.2004 17:22:56
    Source
    Online. 28(2004) no.3, S.22-29
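The core idea behind a spelling "suggestion" system of the kind described above can be sketched with a plain Levenshtein edit-distance ranker over a vocabulary. This is a minimal illustration under simplifying assumptions, not the actual ChemSpell algorithm; the vocabulary and distance threshold below are invented for the example.

```python
def edit_distance(a: str, b: str) -> int:
    # Classic dynamic-programming Levenshtein distance
    # (insertions, deletions, substitutions all cost 1).
    m, n = len(a), len(b)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            cur[j] = min(prev[j] + 1,        # deletion
                         cur[j - 1] + 1,     # insertion
                         prev[j - 1] + cost) # substitution
        prev = cur
    return prev[n]

def suggest(query: str, vocabulary: list[str], max_dist: int = 2) -> list[str]:
    # Rank vocabulary terms by edit distance to the (possibly
    # misspelled) query and keep only the closest ones.
    scored = [(edit_distance(query, w), w) for w in vocabulary]
    return [w for d, w in sorted(scored) if d <= max_dist]

# Toy vocabulary standing in for a database-derived word list.
vocab = ["toxicology", "benzene", "toluene", "arsenic"]
print(suggest("toxicolgy", vocab))  # → ['toxicology']
```

A production system would add phonetic matching, frequency weighting, and chemical-nomenclature handling on top of this basic similarity measure.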
  19. Beheshti, J.; Bowler, L.; Large, A.; Nesset, V.: Towards an alternative information retrieval system for children (2005) 0.08
    
    Abstract
    A recent survey of more than 1700 households indicates that the main reason many parents purchase computers and connect their children to the Internet at home is for education (Safe and Smart). In addition, the survey shows that children also use the Internet for educational activities that go beyond required school work. In fact, the fastest-growing group of Internet users is children between the ages of eight and twelve (Vise, 2003), who are increasingly using the Web to access educational as well as entertainment materials. Children, however, rely on conventional information retrieval (IR) systems and search engines intended for general adult use, such as MSN or Google, and to a much lesser extent on Web portals such as Yahooligans! and LycosZone specifically intended for young users (Large et al., 2004; Large, Beheshti, and Rahman, 2002a). But research has shown that children's information needs (Walter, 1994), research approaches (Kuhlthau, 1991), and cognitive abilities and higher-order thinking skills (Neuman, 1995; Siegler, 1998; Vandergrift, 1989) differ from those of adults. The results of earlier studies on children's use of online catalogues designed for adults indicate that young users often face difficulties locating specific information related to their information needs (Hirsh, 1997). A growing body of research points to the problems children typically encounter when seeking information on the Web. Kafai and Bates (1997) conducted one of the first studies with young children on their use of Web sites, and concluded that they were able to navigate through the links and scroll. Only the older children, however, could use search engines effectively. Hirsh (1999) investigated the searching behavior of ten fifth graders and concluded that they encountered difficulties in formulating effective search queries and did not use advanced features.
Schacter, Chung, and Dorr (1998) conducted a study on Internet searching by fifth and sixth graders and concluded that they did not plan their searches, used ill-defined queries, and preferred browsing. Large, Beheshti, and Moukdad (1999), investigating the information-seeking behavior of 53 sixth graders, similarly found that children preferred browsing to searching. Bowler, Large, and Rejskind (2001), focusing on a few case studies of grade-six students, concluded that search engines designed for adults are unsuitable for children. Wallace et al. (2000), studying sixth graders, discovered that experience in using search engines does not improve children's search strategies and that, in general, information seeking is an unfamiliar activity for children.
    Series
    The information retrieval series, vol. 19
    Source
    New directions in cognitive information retrieval. Eds.: A. Spink, C. Cole
  20. Cole, C.; Mandelblatt, B.: Using Kintsch's discourse comprehension theory to model the user's coding of an informative message from an enabling information retrieval system (2000) 0.08
    
    Abstract
    With new interactive technology, information science can use its traditional information focus to increase user satisfaction by designing information retrieval systems (IRSs) that inform the user about her task, and help the user get the task done, while the user is online interacting with the system. By doing so, the system enables the user to perform the task for which the information is being sought. In previous articles, we modeled the information flow and coding operations of a user who has just received an informative IRS message, dividing the user's processing of the IRS message into three subsystem levels. In this article, we use Kintsch's proposition-based construction-integration theory of discourse comprehension to further detail the user coding operations that occur in each of the three subsystems. Our enabling devices are designed to facilitate a specific coding operation in a specific subsystem. In this article, we describe an IRS device made up of two separate parts that enable the user's (1) decoding and (2) encoding of an IRS message in the Comprehension subsystem.
    Source
    Journal of the American Society for Information Science. 51(2000) no.11, S.1033-1046
