Search (10293 results, page 1 of 515)

  • year_i:[2000 TO 2010}
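  Note: the mixed brackets in the filter above are intentional Lucene range syntax - '[' includes the lower bound, '}' excludes the upper one - so the filter matches publication years 2000 through 2009. As a minimal sketch (assuming a generic Solr-style endpoint; the parameter names are illustrative, not this site's actual API), such a filtered query could be assembled like this:

      import urllib.parse

      def range_filter(field, lo, hi, hi_inclusive=False):
          # Lucene range syntax: '[' / ']' are inclusive bounds, '{' / '}' exclusive.
          close = "]" if hi_inclusive else "}"
          return field + ":[" + str(lo) + " TO " + str(hi) + close

      params = {
          "q": "*:*",                                # match all documents ...
          "fq": range_filter("year_i", 2000, 2010),  # ... restricted to 2000-2009
          "rows": 20,                                # 20 hits per page, as on this page
      }
      print(urllib.parse.urlencode(params))
      # q=%2A%3A%2A&fq=year_i%3A%5B2000+TO+2010%7D&rows=20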
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.24
    
    Abstract
    Document representations for text classification are typically based on the classical Bag-Of-Words paradigm. This approach comes with deficiencies that motivate the integration of features on a higher semantic level than single words. In this paper we propose an enhancement of the classical document representation through concepts extracted from background knowledge. Boosting is used for actual classification. Experimental evaluations on two well-known text corpora support our approach through consistent improvement of the results.
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8.1.2013 10:22:32
    Type
    a
  2. Schrodt, R.: Tiefen und Untiefen im wissenschaftlichen Sprachgebrauch (2008) 0.22
    
    Content
    Cf. also: https://studylibde.com/doc/13053640/richard-schrodt. Cf. also: http://www.univie.ac.at/Germanistik/schrodt/vorlesung/wissenschaftssprache.doc.
    Type
    a
  3. Vetere, G.; Lenzerini, M.: Models for semantic interoperability in service-oriented architectures (2005) 0.20
    
    Abstract
    Although service-oriented architectures go a long way toward providing interoperability in distributed, heterogeneous environments, managing semantic differences in such environments remains a challenge. We give an overview of the issue of semantic interoperability (integration), provide a semantic characterization of services, and discuss the role of ontologies. Then we analyze four basic models of semantic interoperability that differ with respect to their mapping between service descriptions and ontologies and with respect to where the evaluation of the integration logic is performed. We also provide some guidelines for selecting one of the possible interoperability models.
    Content
    Cf.: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5386707.
    Type
    a
  4. Mas, S.; Marleau, Y.: Proposition of a faceted classification model to support corporate information organization and digital records management (2009) 0.17
    
    Abstract
    The employees of an organization often use a personal hierarchical classification scheme to organize digital documents that are stored on their own workstations. As this may make it hard for other employees to retrieve these documents, there is a risk that the organization will lose track of needed documentation. Furthermore, the inherent boundaries of such a hierarchical structure require making arbitrary decisions about which specific criteria the classification will be based on (for instance, the administrative activity or the document type, although a document can have several attributes and require classification in several classes). A faceted classification model to support corporate information organization is proposed. Partially based on Ranganathan's facets theory, this model aims not only to standardize the organization of digital documents, but also to simplify the management of a document throughout its life cycle for both individuals and organizations, while ensuring compliance with regulatory and policy requirements.
    Footnote
    Cf.: http://ieeexplore.ieee.org/iel5/4755313/4755314/04755480.pdf?arnumber=4755480.
    Type
    a
  5. Donsbach, W.: Wahrheit in den Medien : über den Sinn eines methodischen Objektivitätsbegriffes (2001) 0.14
    
    Source
    Politische Meinung. 381(2001) Nr.1, S.65-74 [https://www.dgfe.de/fileadmin/OrdnerRedakteure/Sektionen/Sek02_AEW/KWF/Publikationen_Reihe_1989-2003/Band_17/Bd_17_1994_355-406_A.pdf]
    Type
    a
  6. Shafer, K.E.: ARMs, OCLC Internet Services, and PURLs (2001) 0.12
    
    Source
    Journal of library administration. 34(2001) nos.3/4, S.385-391
    Type
    a
  7. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.12
    
    Abstract
    With the explosion of possibilities for ubiquitous content production, the information overload problem has reached a level of complexity that can no longer be managed by traditional modelling approaches. Due to their purely syntactical nature, traditional information retrieval approaches have not succeeded in treating content itself (i.e. its meaning, not its representation). This leads to retrieval results of very low usefulness for a user's task at hand. In the last ten years, ontologies have evolved from an interesting conceptualisation paradigm into a very promising (semantic) modelling technology, especially in the context of the Semantic Web. From the information retrieval point of view, ontologies enable a machine-understandable form of content description, such that the retrieval process can be driven by the meaning of the content. However, the retrieval process is inherently ambiguous: a user, unfamiliar with the underlying repository and/or query syntax, only approximates his information need in a query. It is therefore necessary to involve the user more actively in the retrieval process in order to close the gap between the meaning of the content and the meaning of the user's query (i.e. his information need). This thesis lays the foundation for such an ontology-based interactive retrieval process, in which the retrieval system interacts with a user in order to interpret the meaning of his query conceptually, while the underlying domain ontology drives the conceptualisation process. In this way the retrieval process evolves from mere query evaluation into a highly interactive cooperation between the user and the retrieval system, in which the system tries to anticipate the user's information need and to deliver the relevant content proactively. Moreover, the notion of content relevance for a user's query evolves from a content-dependent artefact into a multidimensional, context-dependent structure that is strongly influenced by the user's preferences. This cooperation process is realized as the so-called Librarian Agent Query Refinement Process. To clarify the impact of an ontology on the retrieval process (regarding its complexity and quality), a set of methods and tools for different levels of content and query formalisation is developed, ranging from pure ontology-based inferencing to keyword-based querying in which semantics automatically emerges from the results. Our evaluation studies have shown that the ability to conceptualise a user's information need in the right manner, and to interpret the retrieval results accordingly, is a key issue in realizing much more meaningful information retrieval systems.
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  8. Schüler, P.: Wertes Wissen : Knowledge Management vermeidet Datenfriedhöfe (2001) 0.10
    
    Abstract
    Anyone who wants to gain a quick foothold in a topic without special prior knowledge depends on intelligent research aids. Artificial intelligence gurus have long known ways to comb the world of data for content that work better than keyword search engines - but in practice little of this has materialized. Current content-retrieval software aims to turn this apparent utopia into reality.
    Date
    8.11.2001 19:58:22
    Type
    a
  9. Prasad, K.N.: Digital divide in India - narrowing the gap : an appraisal with special reference to Karnataka (2006) 0.08
    
    Abstract
    This paper presents a report of the projects and programmes for narrowing the digital divide in India, with special reference to the State of Karnataka. The various endeavours of governments and non-government organizations in the country in creating awareness in rural areas, especially through information kiosks, are highlighted. Such activities and programmes in the State of Karnataka are briefly described.
    Pages
    S.391-420
    Source
    Knowledge organization, information systems and other essays: Professor A. Neelameghan Festschrift. Ed. by K.S. Raghavan and K.N. Prasad
    Type
    a
  10. Hotho, A.; Jäschke, R.; Benz, D.; Grahl, M.; Krause, B.; Schmitz, C.; Stumme, G.: Social Bookmarking am Beispiel BibSonomy (2009) 0.08
    
    Pages
    S.365-391
    Source
    Social Semantic Web: Web 2.0, was nun? Ed. by A. Blumauer and T. Pellegrini
    Type
    a
  11. Bosschieter, P.: Translate the index or index the translation? (2007) 0.08
    
    Source
    Information - Wissenschaft und Praxis. 58(2007) H.8, S.391-393
    Type
    a
  12. Maaten, L. van den: Learning a parametric embedding by preserving local structure (2009) 0.07
    
    Abstract
    The paper presents a new unsupervised dimensionality reduction technique, called parametric t-SNE, that learns a parametric mapping between the high-dimensional data space and the low-dimensional latent space. Parametric t-SNE learns the parametric mapping in such a way that the local structure of the data is preserved as well as possible in the latent space. We evaluate the performance of parametric t-SNE in experiments on three datasets, in which we compare it to the performance of two other unsupervised parametric dimensionality reduction techniques. The results of experiments illustrate the strong performance of parametric t-SNE, in particular, in learning settings in which the dimensionality of the latent space is relatively low.
    Source
    Proceedings of the Twelfth International Conference on Artificial Intelligence & Statistics (AI-STATS), JMLR W&CP 5, 2009. S.384-391
    Type
    a
  13. Garfield, E.; Paris, S.W.; Stock, W.G.: HistCite(TM) : a software tool for informetric analysis of citation linkage (2006) 0.07
    
    Abstract
    HistCite(TM) is a software tool for analyzing and visualizing direct citation linkages between scientific papers. Its inputs are bibliographic records (with cited references) from "Web of Knowledge" or other sources. Its outputs are various tables and graphs with informetric indicators about the knowledge domain under study. As an example we analyze informetrically the literature about Alexius Meinong, an Austrian philosopher and psychologist. The article briefly discusses the informetric functionality of "Web of Knowledge" and shows broadly the possibilities that HistCite offers to its users (e.g. scientists, scientometricians and science journalists).
    Source
    Information - Wissenschaft und Praxis. 57(2006) H.8, S.391-400
    Type
    a
  14. Pirkola, A.; Puolamäki, D.; Järvelin, K.: Applying query structuring in cross-language retrieval (2003) 0.06
    
    Abstract
    We will explore various ways to apply query structuring in cross-language information retrieval. In the first test, English queries were translated into Finnish using an electronic dictionary, and were run in a Finnish newspaper database of 55,000 articles. Queries were structured by combining the Finnish translation equivalents of the same English query key using the syn-operator of the InQuery retrieval system. Structured queries performed markedly better than unstructured queries. Second, the effects of compound-based structuring using a proximity operator for the translation equivalents of query-language compound components were tested. The method was not useful in syn-based queries and resulted in a decrease in retrieval effectiveness. Proper names are often non-identical spelling variants in different languages. This allows n-gram based translation of names not included in a dictionary. In the third test, a query structuring method where the Boolean and-operator was used to assign more weight to keys translated through n-gram matching gave good results.
    Source
    Information processing and management. 39(2003) no.3, S.391-402
    Type
    a
  15. Alemayehu, N.: Analysis of performance variation using query expansion (2003) 0.06
    
    Abstract
    Information retrieval performance evaluation is commonly made based on the classical recall and precision based figures or graphs. However, important information indicating causes for variation may remain hidden under the average recall and precision figures. Identifying significant causes for variation can help researchers and developers to focus on opportunities for improvement that underlie the averages. This article presents a case study showing the potential of a statistical repeated measures analysis of variance for testing the significance of factors in retrieval performance variation. The TREC-9 Query Track performance data is used as a case study and the factors studied are retrieval method, topic, and their interaction. The results show that retrieval method, topic, and their interaction are all significant. A topic-level analysis is also made to see the nature of variation in the performance of retrieval methods across topics. The observed retrieval performances of expansion runs are truly significant improvements for most of the topics. Analyses of the effect of query expansion on document ranking confirm that expansion affects ranking positively.
    Source
    Journal of the American Society for Information Science and technology. 54(2003) no.5, S.379-391
    Type
    a
  16. Garshol, L.M.: Metadata? Thesauri? Taxonomies? Topic Maps! : making sense of it all (2005) 0.06
    
    Abstract
    The task of an information architect is to create web sites where users can actually find the information they are looking for. As the ocean of information rises and leaves what we seek ever more deeply buried in what we don't seek, this discipline becomes ever more relevant. Information architecture involves many different aspects of web site creation and organization, but its principal tools are information organization techniques developed in other disciplines. Most of these techniques come from library science, such as thesauri, taxonomies, and faceted classification. Topic maps are a relative newcomer to this area and bring with them the promise of better-organized web sites, compared to what is possible with existing techniques. However, it is not generally understood how topic maps relate to the traditional techniques, and what advantages and disadvantages they have, compared to these techniques. The aim of this paper is to help build a better understanding of these issues.
    Source
    Journal of information science. 30(2005) no.4, S.378-391
    Type
    a
  17. Howarth, L.C.: Designing a "Human Understandable" metalevel ontology for enhancing resource discovery in knowledge bases (2000) 0.05
    
    Abstract
    With the explosion of digitized resources accessible via networked information systems, and the corresponding proliferation of general purpose and domain-specific schemes, metadata have assumed a special prominence. While recent work emanating from the World Wide Web Consortium (W3C) has focused on the Resource Description Framework (RDF) to support the interoperability of metadata standards - thus converting metatags from diverse domains from merely "machine-readable" to "machine-understandable" - the next iteration, to "human-understandable," remains a challenge. This apparent gap provides a framework for three-phase research (Howarth, 1999) to develop a tool which will provide a "human-understandable" front-end search assist to any XML-compliant metadata scheme. Findings from phase one, the analyses and mapping of seven metadata schemes, identify the particular challenges of designing a common "namespace", populated with element tags which are appropriately descriptive, yet readily understood by a lay searcher, when there is little congruence within, and a high degree of variability across, the metadata schemes under study. Implications for the subsequent design and testing of both the proposed "metalevel ontology" (phase two), and the prototype search assist tool (phase three) are examined
    Pages
    S.391-397
    Type
    a
  18. Silva, A.J.C.; Gonçalves, M.A.; Laender, A.H.F.; Modesto, M.A.B.; Cristo, M.; Ziviani, N.: Finding what is missing from a digital library : a case study in the computer science field (2009) 0.05
    
    Abstract
    This article proposes a process to retrieve the URL of a document for which metadata records exist in a digital library catalog but a pointer to the full text of the document is not available. The process uses results from queries submitted to Web search engines for finding the URL of the corresponding full text or any related material. We present a comprehensive study of this process in different situations by investigating different query strategies applied to three general purpose search engines (Google, Yahoo!, MSN) and two specialized ones (Scholar and CiteSeer), considering five user scenarios. Specifically, we have conducted experiments with metadata records taken from the Brazilian Digital Library of Computing (BDBComp) and The DBLP Computer Science Bibliography (DBLP). We found that Scholar was the most effective search engine for this task in all considered scenarios and that simple strategies for combining and re-ranking results from Scholar and Google significantly improve the retrieval quality. Moreover, we study the influence of the number of query results on the effectiveness of finding missing information as well as the coverage of the proposed scenarios.
    Source
    Information processing and management. 45(2009) no.3, S.380-391
    Type
    a
  19. Ribeiro-Neto, B.; Laender, A.H.F.; Lima, L.R.S. de: An experimental study in automatically categorizing medical documents (2001) 0.05
    
    Abstract
    In this article, we evaluate the retrieval performance of an algorithm that automatically categorizes medical documents. The categorization, which consists in assigning an International Code of Disease (ICD) to the medical document under examination, is based on well-known information retrieval techniques. The algorithm, which we proposed, operates in a fully automatic mode and requires no supervision or training data. Using a database of 20,569 documents, we verify that the algorithm attains levels of average precision in the 70-80% range for category coding and in the 60-70% range for subcategory coding. We also carefully analyze the case of those documents whose categorization is not in accordance with the one provided by the human specialists. The vast majority of them represent cases that can only be fully categorized with the assistance of a human subject (because, for instance, they require specific knowledge of a given pathology). For a slim fraction of all documents (0.77% for category coding and 1.4% for subcategory coding), the algorithm makes assignments that are clearly incorrect. However, this fraction corresponds to only one-fourth of the mistakes made by the human specialists
    Source
    Journal of the American Society for Information Science and technology. 52(2001) no.5, S.391-401
    Type
    a
  20. Chua, A.Y.K.; Kaynak, S.; Foo, S.S.B.: An analysis of the delayed response to Hurricane Katrina through the lens of knowledge management (2007) 0.05
    
    Abstract
    In contrast to many recent large-scale catastrophic events, such as the Turkish earthquake in 1999, the 9/11 attack in New York in 2001, the Bali Bombing in 2002, and the Asian Tsunami in 2004, the initial rescue effort towards Hurricane Katrina in the U.S. in 2005 had been sluggish. Even as Congress has promised to convene a formal inquiry into the response to Katrina, this article offers another perspective by analyzing the delayed response through the lens of knowledge management (KM). A KM framework situated in the context of disaster management is developed to study three distinct but overlapping KM processes, namely, knowledge creation, knowledge transfer, and knowledge reuse. Drawing from a total of more than 400 documents - including local, national, and foreign news articles, newswires, congressional reports, and television interview transcripts, as well as Internet resources such as wikipedia and blogs - 14 major delay causes in Katrina are presented. The extent to which the delay causes were a result of the lapses in KM processes within and across the government agencies are discussed.
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.3, S.391-403
    Type
    a

Types

  • a 9260
  • m 647
  • el 500
  • s 209
  • x 52
  • b 40
  • i 28
  • r 28
  • n 16
  • p 10
  • l 1