Search (1774 results, page 1 of 89)

  • Filter: year_i:[2000 TO 2010}
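  Note: the filter uses Lucene/Solr range syntax, in which a square bracket includes its bound and a curly brace excludes it, so year_i:[2000 TO 2010} matches the years 2000 through 2009. A minimal sketch of the equivalent predicate (hypothetical helper, for illustration only):

      def in_year_range(year: int) -> bool:
          # '[2000' includes 2000; '2010}' excludes 2010
          return 2000 <= year < 2010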
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.24
    0.2433096 = sum of:
      0.05978776 = product of:
        0.23915105 = sum of:
          0.23915105 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.23915105 = score(doc=562,freq=2.0), product of:
              0.425522 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.050191253 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.25 = coord(1/4)
      0.18352184 = sum of:
        0.1427205 = weight(_text_:q in 562) [ClassicSimilarity], result of:
          0.1427205 = score(doc=562,freq=2.0), product of:
            0.32872224 = queryWeight, product of:
              6.5493927 = idf(docFreq=171, maxDocs=44218)
              0.050191253 = queryNorm
            0.43416747 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.5493927 = idf(docFreq=171, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
        0.04080133 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
          0.04080133 = score(doc=562,freq=2.0), product of:
            0.17576122 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.050191253 = queryNorm
            0.23214069 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
    
    Content
     Cf.: http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CEAQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.91.4940%26rep%3Drep1%26type%3Dpdf&ei=dOXrUMeIDYHDtQahsIGACg&usg=AFQjCNHFWVh6gNPvnOrOS9R3rkrXCNVD-A&sig2=5I2F5evRfMnsttSgFF9g7Q&bvm=bv.1357316858,d.Yms.
    Date
    8. 1.2013 10:22:32
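     The score breakdowns in these results follow Lucene's ClassicSimilarity (TF-IDF). Below is a minimal sketch that reproduces the first result's score, assuming the classic formulas tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)); queryNorm is copied from the explain output rather than recomputed, since it depends on every term of the original query. The coord(1/4) factor down-weights the first clause because only one of its four optional subclauses matched.

         import math

         MAX_DOCS = 44218
         QUERY_NORM = 0.050191253  # taken from the explain output above

         def idf(doc_freq: int) -> float:
             # ClassicSimilarity inverse document frequency
             return 1.0 + math.log(MAX_DOCS / (doc_freq + 1))

         def leaf(freq: float, doc_freq: int, field_norm: float) -> float:
             # weight(term in doc) = queryWeight * fieldWeight
             query_weight = idf(doc_freq) * QUERY_NORM
             field_weight = math.sqrt(freq) * idf(doc_freq) * field_norm
             return query_weight * field_weight

         # Doc 562: the "3a" clause is scaled by coord(1/4); "q" and "22" add up as-is.
         score = (leaf(2.0, 24, 0.046875) * 0.25
                  + leaf(2.0, 171, 0.046875)
                  + leaf(2.0, 3622, 0.046875))
         print(round(score, 7))  # ~0.2433096, matching the first result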
  2. Wu, Y.-f.B.; Li, Q.; Bot, R.S.; Chen, X.: Finding nuggets in documents : a machine learning approach (2006) 0.17
    0.16734098 = sum of:
      0.014406114 = product of:
        0.057624456 = sum of:
          0.057624456 = weight(_text_:authors in 5290) [ClassicSimilarity], result of:
            0.057624456 = score(doc=5290,freq=2.0), product of:
              0.22881259 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.050191253 = queryNorm
              0.25184128 = fieldWeight in 5290, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5290)
        0.25 = coord(1/4)
      0.15293486 = sum of:
        0.11893375 = weight(_text_:q in 5290) [ClassicSimilarity], result of:
          0.11893375 = score(doc=5290,freq=2.0), product of:
            0.32872224 = queryWeight, product of:
              6.5493927 = idf(docFreq=171, maxDocs=44218)
              0.050191253 = queryNorm
            0.3618062 = fieldWeight in 5290, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.5493927 = idf(docFreq=171, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5290)
        0.034001112 = weight(_text_:22 in 5290) [ClassicSimilarity], result of:
          0.034001112 = score(doc=5290,freq=2.0), product of:
            0.17576122 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.050191253 = queryNorm
            0.19345059 = fieldWeight in 5290, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5290)
    
    Abstract
     Document keyphrases provide a concise summary of a document's content, offering semantic metadata that summarizes the document. They can be used in many applications related to knowledge management and text mining, such as automatic text summarization, development of search engines, document clustering, document classification, thesaurus construction, and browsing interfaces. Because only a small portion of documents have keyphrases assigned by authors, and it is time-consuming and costly to manually assign keyphrases to documents, it is necessary to develop an algorithm to automatically generate keyphrases for documents. This paper describes a Keyphrase Identification Program (KIP), which extracts document keyphrases by using prior positive samples of human identified phrases to assign weights to the candidate keyphrases. The logic of our algorithm is: The more keywords a candidate keyphrase contains and the more significant these keywords are, the more likely this candidate phrase is a keyphrase. KIP's learning function can enrich the glossary database by automatically adding new identified keyphrases to the database. KIP's personalization feature will let the user build a glossary database specifically suitable for the area of his/her interest. The evaluation results show that KIP's performance is better than that of the systems we compared it with and that the learning function is effective.
    Date
    22. 7.2006 17:25:48
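     A minimal illustration of the KIP heuristic described in the abstract above: a candidate phrase scores higher the more glossary keywords it contains and the more significant those keywords are. The glossary and its weights here are hypothetical.

         GLOSSARY = {"text": 1.0, "mining": 2.5, "summarization": 3.0}

         def phrase_score(phrase: str) -> float:
             # sum the significance weights of the keywords the candidate contains
             return sum(GLOSSARY.get(word, 0.0) for word in phrase.lower().split())

         print(phrase_score("text mining"))                   # 3.5
         print(phrase_score("automatic text summarization"))  # 4.0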
  3. Kim, S.; Oh, S.: Users' relevance criteria for evaluating answers in a social Q&A site (2009) 0.14
    0.13593431 = product of:
      0.27186862 = sum of:
        0.27186862 = sum of:
          0.2378675 = weight(_text_:q in 2756) [ClassicSimilarity], result of:
            0.2378675 = score(doc=2756,freq=8.0), product of:
              0.32872224 = queryWeight, product of:
                6.5493927 = idf(docFreq=171, maxDocs=44218)
                0.050191253 = queryNorm
              0.7236124 = fieldWeight in 2756, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                6.5493927 = idf(docFreq=171, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2756)
          0.034001112 = weight(_text_:22 in 2756) [ClassicSimilarity], result of:
            0.034001112 = score(doc=2756,freq=2.0), product of:
              0.17576122 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050191253 = queryNorm
              0.19345059 = fieldWeight in 2756, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2756)
      0.5 = coord(1/2)
    
    Abstract
    This study examines the criteria questioners use to select the best answers in a social Q&A site (Yahoo! Answers) within the theoretical framework of relevance research. A social Q&A site is a novel environment where people voluntarily ask and answer questions. In Yahoo! Answers, the questioner selects the answer that best satisfies his or her question and leaves comments on it. Under the assumption that the comments reflect the reasons why questioners select particular answers as the best, this study analyzed 2,140 comments collected from Yahoo! Answers during December 2007. The content analysis identified 23 individual relevance criteria in six classes: Content, Cognitive, Utility, Information Sources, Extrinsic, and Socioemotional. A major finding is that the selection criteria used in a social Q&A site have considerable overlap with many relevance criteria uncovered in previous relevance studies, but that the scope of socio-emotional criteria has been expanded to include the social aspect of this environment. Another significant finding is that the relative importance of individual criteria varies according to topic categories. Socioemotional criteria are popular in discussion-oriented categories, content-oriented criteria in topic-oriented categories, and utility criteria in self-help categories. This study generalizes previous relevance studies to a new environment by going beyond an academic setting.
    Date
    22. 3.2009 18:57:23
  4. Ackermann, E.: Piaget's constructivism, Papert's constructionism : what's the difference? (2001) 0.13
    0.12768295 = product of:
      0.2553659 = sum of:
        0.2553659 = product of:
          0.5107318 = sum of:
            0.19929254 = weight(_text_:3a in 692) [ClassicSimilarity], result of:
              0.19929254 = score(doc=692,freq=2.0), product of:
                0.425522 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.050191253 = queryNorm
                0.46834838 = fieldWeight in 692, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=692)
            0.3114393 = weight(_text_:2c in 692) [ClassicSimilarity], result of:
              0.3114393 = score(doc=692,freq=2.0), product of:
                0.5319407 = queryWeight, product of:
                  10.598275 = idf(docFreq=2, maxDocs=44218)
                  0.050191253 = queryNorm
                0.5854775 = fieldWeight in 692, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  10.598275 = idf(docFreq=2, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=692)
          0.5 = coord(2/4)
      0.5 = coord(1/2)
    
    Content
     Cf.: https://www.semanticscholar.org/paper/Piaget-%E2%80%99-s-Constructivism-%2C-Papert-%E2%80%99-s-%3A-What-%E2%80%99-s-Ackermann/89cbcc1e740a4591443ff4765a6ae8df0fdf5554. Below it, further pointers to related contributions. Also in: Learning Group Publication 5(2001) no.3, p.438.
  5. Chen, Z.; Fu, B.: On the complexity of Rocchio's similarity-based relevance feedback algorithm (2007) 0.10
    0.10447219 = sum of:
      0.020373322 = product of:
        0.08149329 = sum of:
          0.08149329 = weight(_text_:authors in 578) [ClassicSimilarity], result of:
            0.08149329 = score(doc=578,freq=4.0), product of:
              0.22881259 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.050191253 = queryNorm
              0.35615736 = fieldWeight in 578, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0390625 = fieldNorm(doc=578)
        0.25 = coord(1/4)
      0.08409887 = product of:
        0.16819774 = sum of:
          0.16819774 = weight(_text_:q in 578) [ClassicSimilarity], result of:
            0.16819774 = score(doc=578,freq=4.0), product of:
              0.32872224 = queryWeight, product of:
                6.5493927 = idf(docFreq=171, maxDocs=44218)
                0.050191253 = queryNorm
              0.5116713 = fieldWeight in 578, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                6.5493927 = idf(docFreq=171, maxDocs=44218)
                0.0390625 = fieldNorm(doc=578)
        0.5 = coord(1/2)
    
    Abstract
     Rocchio's similarity-based relevance feedback algorithm, one of the most important query reformation methods in information retrieval, is essentially an adaptive learning algorithm from examples in searching for documents represented by a linear classifier. Despite its popularity in various applications, there is little rigorous analysis of its learning complexity in the literature. In this article, the authors prove for the first time that the learning complexity of Rocchio's algorithm is O(d + d^2 (log d + log n)) over the discretized vector space {0, ..., n-1}^d when the inner product similarity measure is used. The upper bound on the learning complexity for searching for documents represented by a monotone linear classifier (q, 0) over {0, ..., n-1}^d can be improved to, at most, 1 + 2k(n-1)(log d + log(n-1)), where k is the number of nonzero components in q. Several lower bounds on the learning complexity are also obtained for Rocchio's algorithm. For example, the authors prove that Rocchio's algorithm has a lower bound Omega((d choose 2) log n) on its learning complexity over the Boolean vector space {0,1}^d.
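     For context, the algorithm whose complexity the article analyzes is the classic Rocchio update, which moves the query vector toward relevant documents and away from non-relevant ones. A minimal sketch, with the customary (here hypothetical) alpha/beta/gamma weights:

         import numpy as np

         def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
             # q' = alpha*q + beta*centroid(relevant) - gamma*centroid(nonrelevant)
             new_q = alpha * np.asarray(query, dtype=float)
             if len(relevant):
                 new_q += beta * np.mean(relevant, axis=0)
             if len(nonrelevant):
                 new_q -= gamma * np.mean(nonrelevant, axis=0)
             return new_q

         q = np.array([1.0, 0.0, 0.5])
         rel = np.array([[1.0, 1.0, 0.0], [0.8, 0.6, 0.2]])
         nonrel = np.array([[0.0, 0.2, 1.0]])
         print(rocchio(q, rel, nonrel))  # pulled toward rel, pushed away from nonrel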
  6. Kretschmer, H.; Kretschmer, T.: Well-ordered collaboration structures of co-author pairs in journals (2006) 0.10
    0.09850498 = sum of:
      0.014406114 = product of:
        0.057624456 = sum of:
          0.057624456 = weight(_text_:authors in 25) [ClassicSimilarity], result of:
            0.057624456 = score(doc=25,freq=2.0), product of:
              0.22881259 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.050191253 = queryNorm
              0.25184128 = fieldWeight in 25, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0390625 = fieldNorm(doc=25)
        0.25 = coord(1/4)
      0.08409887 = product of:
        0.16819774 = sum of:
          0.16819774 = weight(_text_:q in 25) [ClassicSimilarity], result of:
            0.16819774 = score(doc=25,freq=4.0), product of:
              0.32872224 = queryWeight, product of:
                6.5493927 = idf(docFreq=171, maxDocs=44218)
                0.050191253 = queryNorm
              0.5116713 = fieldWeight in 25, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                6.5493927 = idf(docFreq=171, maxDocs=44218)
                0.0390625 = fieldNorm(doc=25)
        0.5 = coord(1/2)
    
    Abstract
     In single-authored bibliographies only the distribution of single scientists can be found, but in multi-authored bibliographies the distributions of single scientists, pairs, triples, etc. can be presented. Whereas regarding Lotka's law the single scientists' P distribution (both in single-authored and in multi-authored bibliographies) is of interest, in the future the pairs' P,Q distribution, triples' P,Q,R distribution, etc. should be considered. Starting with the pair distribution, the following question arises in the present paper: Is there also any regularity or well-ordered structure for the distribution of co-author pairs in journals, in analogy to Lotka's law for the distribution of single authors? Usually, in information science "laws" or "regularities" (for example Lotka's law) are mathematical descriptions of observed data in the form of functions; however, explanations of these phenomena are mostly missing. By contrast, in this paper the derivation of a formula for describing the distribution of the number of co-author pairs is presented, based on well-known regularities in social psychology or sociology in conjunction with Gestalt theory as an explanation for well-ordered collaboration structures and production of scientific literature, as well as on derivations from Lotka's law. The assumed regularities for the distribution of co-author pairs in journals could be shown in the co-authorship data (1980-1998) of the journals Science, Nature, Proc Nat Acad Sci USA, and Phys Rev B Condensed Matter.
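     For reference, Lotka's law, which the paper extends from single authors to co-author pairs, states in its classic form that the number of authors with n publications falls off as 1/n^2. A small illustration with made-up totals:

         def lotka_counts(authors_with_one_pub: int, max_n: int = 5) -> dict:
             # classic Lotka's law: count(n) ~ count(1) / n**2
             return {n: authors_with_one_pub / n**2 for n in range(1, max_n + 1)}

         print(lotka_counts(1000))  # {1: 1000.0, 2: 250.0, 3: ~111.1, 4: 62.5, 5: 40.0}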
  7. Pera, M.S.; Lund, W.; Ng, Y.-K.: ¬A sophisticated library search strategy using folksonomies and similarity matching (2009) 0.10
    0.09850498 = sum of:
      0.014406114 = product of:
        0.057624456 = sum of:
          0.057624456 = weight(_text_:authors in 2939) [ClassicSimilarity], result of:
            0.057624456 = score(doc=2939,freq=2.0), product of:
              0.22881259 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.050191253 = queryNorm
              0.25184128 = fieldWeight in 2939, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2939)
        0.25 = coord(1/4)
      0.08409887 = product of:
        0.16819774 = sum of:
          0.16819774 = weight(_text_:q in 2939) [ClassicSimilarity], result of:
            0.16819774 = score(doc=2939,freq=4.0), product of:
              0.32872224 = queryWeight, product of:
                6.5493927 = idf(docFreq=171, maxDocs=44218)
                0.050191253 = queryNorm
              0.5116713 = fieldWeight in 2939, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                6.5493927 = idf(docFreq=171, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2939)
        0.5 = coord(1/2)
    
    Abstract
     Libraries, private and public, offer valuable resources to library patrons. As of today, the only way to locate information archived exclusively in libraries is through their catalogs. Library patrons, however, often find it difficult to formulate a proper query, which requires using specific keywords assigned to different fields of desired library catalog records, to obtain relevant results. These improperly formulated queries often yield irrelevant results or no results at all. This negative experience in dealing with existing library systems turns library patrons away from directly querying library catalogs; instead, they rely on Web search engines to perform their searches first, and upon obtaining the initial information (e.g., titles, subject headings, or authors) on the desired library materials, they query library catalogs. This search strategy is evidence of the failure of today's library systems. To solve this problem, we propose an enhanced library system, which allows partial, similarity matching of (a) tags defined by ordinary users at a folksonomy site that describe the content of books and (b) unrestricted keywords specified by an ordinary library patron in a query to search for relevant library catalog records. The proposed library system allows patrons to post a query Q using commonly used words and ranks the retrieved results according to their degree of resemblance to Q, while maintaining query processing time comparable with that achieved by current library search engines.
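     One simple way to realize the partial, similarity-based matching of unrestricted query keywords against folksonomy tags described above is word-set overlap. This Jaccard sketch is illustrative only, not the authors' actual measure:

         def jaccard(query: str, tags: set) -> float:
             # overlap between free-text query words and a record's tag set
             q_words = set(query.lower().split())
             union = q_words | tags
             return len(q_words & tags) / len(union) if union else 0.0

         tags = {"machine", "learning", "classification"}
         print(jaccard("machine learning books", tags))  # 0.5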
  8. Gödert, W.; Hubrich, J.; Boteram, F.: Thematische Recherche und Interoperabilität : Wege zur Optimierung des Zugriffs auf heterogen erschlossene Dokumente (2009) 0.09
    0.09486038 = sum of:
      0.07785983 = product of:
        0.3114393 = sum of:
          0.3114393 = weight(_text_:2c in 193) [ClassicSimilarity], result of:
            0.3114393 = score(doc=193,freq=2.0), product of:
              0.5319407 = queryWeight, product of:
                10.598275 = idf(docFreq=2, maxDocs=44218)
                0.050191253 = queryNorm
              0.5854775 = fieldWeight in 193, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                10.598275 = idf(docFreq=2, maxDocs=44218)
                0.0390625 = fieldNorm(doc=193)
        0.25 = coord(1/4)
      0.017000556 = product of:
        0.034001112 = sum of:
          0.034001112 = weight(_text_:22 in 193) [ClassicSimilarity], result of:
            0.034001112 = score(doc=193,freq=2.0), product of:
              0.17576122 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050191253 = queryNorm
              0.19345059 = fieldWeight in 193, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=193)
        0.5 = coord(1/2)
    
    Source
    https://opus4.kobv.de/opus4-bib-info/frontdoor/index/index/searchtype/authorsearch/author/%22Hubrich%2C+Jessica%22/docId/703/start/0/rows/20
  9. Gao, Q.: Visual knowledge representation for three-dimensional computing vision (2000) 0.08
    0.08325363 = product of:
      0.16650726 = sum of:
        0.16650726 = product of:
          0.33301452 = sum of:
            0.33301452 = weight(_text_:q in 4673) [ClassicSimilarity], result of:
              0.33301452 = score(doc=4673,freq=2.0), product of:
                0.32872224 = queryWeight, product of:
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.050191253 = queryNorm
                1.0130575 = fieldWeight in 4673, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4673)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  10. Heidorn, P.B.; Wei, Q.: Automatic metadata extraction from museum specimen labels (2008) 0.08
    0.07646743 = product of:
      0.15293486 = sum of:
        0.15293486 = sum of:
          0.11893375 = weight(_text_:q in 2624) [ClassicSimilarity], result of:
            0.11893375 = score(doc=2624,freq=2.0), product of:
              0.32872224 = queryWeight, product of:
                6.5493927 = idf(docFreq=171, maxDocs=44218)
                0.050191253 = queryNorm
              0.3618062 = fieldWeight in 2624, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.5493927 = idf(docFreq=171, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2624)
          0.034001112 = weight(_text_:22 in 2624) [ClassicSimilarity], result of:
            0.034001112 = score(doc=2624,freq=2.0), product of:
              0.17576122 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050191253 = queryNorm
              0.19345059 = fieldWeight in 2624, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2624)
      0.5 = coord(1/2)
    
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  11. Neue Suchmaschine von Q-Sensei ermöglicht mehrdimensionales Navigieren (2009) 0.06
    0.06293383 = product of:
      0.12586766 = sum of:
        0.12586766 = product of:
          0.25173533 = sum of:
            0.25173533 = weight(_text_:q in 2825) [ClassicSimilarity], result of:
              0.25173533 = score(doc=2825,freq=14.0), product of:
                0.32872224 = queryWeight, product of:
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.050191253 = queryNorm
                0.7657995 = fieldWeight in 2825, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2825)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    "Mit dem Ziel, wissenschaftliche Informationen auf eine neue, effizientere Art und Weise zugänglich zu machen, startet die neue Suchmaschine von Q-Sensei, die im Vergleich zu anderen Suchdiensten ein tiefergehendes, komfortableres und präziseres Finden ermöglicht. Die neue Suchmaschine bietet ein multilineares Interface, welches es den Nutzern erlaubt, jederzeit ihre Suche zu steuern, eigene Parameter zu definieren und einen umfassenden Überblick im Zugriff auf Wissen zu behalten. Q-Sensei bietet aktuell Zugang zu sieben Millionen wissenschaftlichen Artikeln, die mit großer Genauigkeit effektiv durchsucht werden können. Erreicht wird das durch die Analyse der Suchergebnisse, wodurch passend zu jeder Suchanfrage automatisch relevante Suchvorschläge angezeigt werden. Diese können wiederum selbst durchsucht werden, was den Nutzern größere Freiheiten bei der Suche bietet als dies bei anderen Suchmaschinen der Fall ist. Die Q-Sensei Technologie verbindet verschiedene Kategorien von Suchvorschlägen, wie z.B. Autor, Stichworte, Sprache und Jahr der Veröffentlichung miteinander, wodurch ein mehrdimensionales Navigieren möglich wird. Durch die Möglichkeit, Suchvorschläge beliebig miteinander zu kombinieren, hinzuzufügen und zu entfernen, können Nutzer ihre Suche jederzeit bequem erweitern und anpassen und so auch Literatur finden, die ihnen ansonsten entgangen wäre.
    Sobald Nutzer die gewünschten Ergebnisse gefunden haben, können sie auf weitere Informationen zu jedem Treffer zugreifen. Dazu zählen Zitate, Webseiten von Herausgebern oder verwandte Wikipedia-Artikel. Außerdem werden weitere verwandte Themen oder Einträge aus der Q-Sensei-Datenbank angezeigt, die als Ausgangspunkt für eine neue Suche dienen können. Ferner haben alle Nutzer die Möglichkeit, Einträge mit eigenen Daten anzureichern oder zu ändern, sowie weitere relevante Informationen wie Webseiten von Autoren oder Zitate im Wiki-Stil einzutragen. Die Q-Sensei Corp. wurde im April 2007 durch den Zusammenschluss der in Deutschland ansässigen Lalisio GmbH und der US-amerikanischen Gesellschaft QUASM Corporation gegründet. Q-Sensei hat seinen vorübergehenden Sitz in Melbourne, FL und betreibt in Erfurt die Tochterfirma Lalisio."
  12. Raban, D.R.: Self-presentation and the value of information in Q&A websites (2009) 0.06
    0.0617998 = product of:
      0.1235996 = sum of:
        0.1235996 = product of:
          0.2471992 = sum of:
            0.2471992 = weight(_text_:q in 3295) [ClassicSimilarity], result of:
              0.2471992 = score(doc=3295,freq=6.0), product of:
                0.32872224 = queryWeight, product of:
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.050191253 = queryNorm
                0.7520002 = fieldWeight in 3295, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3295)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Prior research has shown that social interaction is important for continuation of question-and-answer (Q&A) activity online and that it also leads to monetary rewards. The present research focuses on the link between social interaction and the value of information. Expressions of self-presentation in the interaction between askers and answerers online are studied as antecedents for answer feedback which represents the value of the answer to the asker. This relationship is examined in a Q&A site, specifically, in Google Answers (GA). The results of content analysis performed on sets of questions and answers show that both explicit and implicit social cues are used by the site's participants; however, only implicit expressions of self-presentation are related to the provision of social and monetary feedback, ratings, and tips. This finding highlights the importance of implicit cues in textual communication and lends support to the notion of social capital where both monetary and social forms of feedback are the result of interaction online.
  13. Liu, A.; Zou, Q.; Chu, W.W.: Configurable indexing and ranking for XML information retrieval (2004) 0.06
    0.059466876 = product of:
      0.11893375 = sum of:
        0.11893375 = product of:
          0.2378675 = sum of:
            0.2378675 = weight(_text_:q in 4114) [ClassicSimilarity], result of:
              0.2378675 = score(doc=4114,freq=2.0), product of:
                0.32872224 = queryWeight, product of:
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.050191253 = queryNorm
                0.7236124 = fieldWeight in 4114, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4114)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  14. Shen, D.; Chen, Z.; Yang, Q.; Zeng, H.J.; Zhang, B.; Lu, Y.; Ma, W.Y.: Web page classification through summarization (2004) 0.06
    0.059466876 = product of:
      0.11893375 = sum of:
        0.11893375 = product of:
          0.2378675 = sum of:
            0.2378675 = weight(_text_:q in 4132) [ClassicSimilarity], result of:
              0.2378675 = score(doc=4132,freq=2.0), product of:
                0.32872224 = queryWeight, product of:
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.050191253 = queryNorm
                0.7236124 = fieldWeight in 4132, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4132)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  15. Hickey, T.B.; Toves, J.; O'Neill, E.T.: NACO normalization : a detailed examination of the authority file comparison rules (2006) 0.06
    0.058733746 = sum of:
      0.03493297 = product of:
        0.13973188 = sum of:
          0.13973188 = weight(_text_:authors in 5760) [ClassicSimilarity], result of:
            0.13973188 = score(doc=5760,freq=6.0), product of:
              0.22881259 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.050191253 = queryNorm
              0.61068267 = fieldWeight in 5760, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5760)
        0.25 = coord(1/4)
      0.023800777 = product of:
        0.047601555 = sum of:
          0.047601555 = weight(_text_:22 in 5760) [ClassicSimilarity], result of:
            0.047601555 = score(doc=5760,freq=2.0), product of:
              0.17576122 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050191253 = queryNorm
              0.2708308 = fieldWeight in 5760, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5760)
        0.5 = coord(1/2)
    
    Abstract
    Normalization rules are essential for interoperability between bibliographic systems. In the process of working with Name Authority Cooperative Program (NACO) authority files to match records with Functional Requirements for Bibliographic Records (FRBR) and developing the Faceted Application of Subject Terminology (FAST) subject heading schema, the authors found inconsistencies in independently created NACO normalization implementations. Investigating these, the authors found ambiguities in the NACO standard that need resolution, and came to conclusions on how the procedure could be simplified with little impact on matching headings. To encourage others to test their software for compliance with the current rules, the authors have established a Web site that has test files and interactive services showing their current implementation.
    Date
    10. 9.2000 17:38:22
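     A simplified sketch of the kind of heading normalization at issue, not the exact NACO rule set the authors examine: fold case, strip diacritics, and collapse punctuation before comparing headings.

         import unicodedata

         def normalize_heading(heading: str) -> str:
             # decompose, drop combining marks, replace punctuation, fold case/whitespace
             decomposed = unicodedata.normalize("NFD", heading)
             no_marks = "".join(c for c in decomposed if not unicodedata.combining(c))
             cleaned = "".join(c if c.isalnum() or c.isspace() else " " for c in no_marks)
             return " ".join(cleaned.upper().split())

         print(normalize_heading("Gödert, W."))  # GODERT W
         print(normalize_heading("GODERT   W"))  # GODERT W -- the two headings now match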
  16. Elovici, Y.; Shapira, Y.B.; Kantor, P.B.: ¬A decision theoretic approach to combining information filters : an analytical and empirical evaluation (2006) 0.05
    0.052323427 = sum of:
      0.02852265 = product of:
        0.1140906 = sum of:
          0.1140906 = weight(_text_:authors in 5267) [ClassicSimilarity], result of:
            0.1140906 = score(doc=5267,freq=4.0), product of:
              0.22881259 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.050191253 = queryNorm
              0.49862027 = fieldWeight in 5267, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5267)
        0.25 = coord(1/4)
      0.023800777 = product of:
        0.047601555 = sum of:
          0.047601555 = weight(_text_:22 in 5267) [ClassicSimilarity], result of:
            0.047601555 = score(doc=5267,freq=2.0), product of:
              0.17576122 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050191253 = queryNorm
              0.2708308 = fieldWeight in 5267, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5267)
        0.5 = coord(1/2)
    
    Abstract
     The outputs of several information filtering (IF) systems can be combined to improve filtering performance. In this article the authors propose and explore a framework based on the so-called information structure (IS) model, which is frequently used in Information Economics, for combining the output of multiple IF systems according to each user's preferences (profile). The combination seeks to maximize the expected payoff to that user. The authors show analytically that the proposed framework increases users' expected payoff from the combined filtering output for any user preferences. An experiment using the TREC-6 test collection confirms the theoretical findings.
    Date
    22. 7.2006 15:05:39
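     A toy version of the decision-theoretic combination idea: for each combined filter signal, deliver the document only when the expected payoff of delivering beats suppressing it. All probabilities and payoffs below are hypothetical.

         PAYOFF = {"hit": 1.0, "false_alarm": -0.5, "miss": -1.0, "correct_reject": 0.0}

         def deliver(p_relevant: float) -> bool:
             # expected payoff of delivering vs. suppressing the document
             e_deliver = p_relevant * PAYOFF["hit"] + (1 - p_relevant) * PAYOFF["false_alarm"]
             e_suppress = p_relevant * PAYOFF["miss"] + (1 - p_relevant) * PAYOFF["correct_reject"]
             return e_deliver >= e_suppress

         # hypothetical P(relevant | combined filter output):
         for signal, p in {"both accept": 0.8, "one accepts": 0.4, "none": 0.05}.items():
             print(signal, deliver(p))  # True, True, False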
  17. LeBlanc, J.; Kurth, M.: ¬An operational model for library metadata maintenance (2008) 0.05
    0.046138234 = sum of:
      0.017287336 = product of:
        0.069149345 = sum of:
          0.069149345 = weight(_text_:authors in 101) [ClassicSimilarity], result of:
            0.069149345 = score(doc=101,freq=2.0), product of:
              0.22881259 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.050191253 = queryNorm
              0.30220953 = fieldWeight in 101, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.046875 = fieldNorm(doc=101)
        0.25 = coord(1/4)
      0.0288509 = product of:
        0.0577018 = sum of:
          0.0577018 = weight(_text_:22 in 101) [ClassicSimilarity], result of:
            0.0577018 = score(doc=101,freq=4.0), product of:
              0.17576122 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050191253 = queryNorm
              0.32829654 = fieldWeight in 101, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=101)
        0.5 = coord(1/2)
    
    Abstract
    Libraries pay considerable attention to the creation, preservation, and transformation of descriptive metadata in both MARC and non-MARC formats. Little evidence suggests that they devote as much time, energy, and financial resources to the ongoing maintenance of non-MARC metadata, especially with regard to updating and editing existing descriptive content, as they do to maintenance of such information in the MARC-based online public access catalog. In this paper, the authors introduce a model, derived loosely from J. A. Zachman's framework for information systems architecture, with which libraries can identify and inventory components of catalog or metadata maintenance and plan interdepartmental, even interinstitutional, workflows. The model draws on the notion that the expertise and skills that have long been the hallmark for the maintenance of libraries' catalog data can and should be parlayed towards metadata maintenance in a broader set of information delivery systems.
    Date
    10. 9.2000 17:38:22
    19. 6.2010 19:22:28
  18. Resnick, M.L.; Vaughan, M.W.: Best practices and future visions for search user interfaces (2006) 0.04
    0.04484865 = sum of:
      0.024447985 = product of:
        0.09779194 = sum of:
          0.09779194 = weight(_text_:authors in 5293) [ClassicSimilarity], result of:
            0.09779194 = score(doc=5293,freq=4.0), product of:
              0.22881259 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.050191253 = queryNorm
              0.42738882 = fieldWeight in 5293, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.046875 = fieldNorm(doc=5293)
        0.25 = coord(1/4)
      0.020400666 = product of:
        0.04080133 = sum of:
          0.04080133 = weight(_text_:22 in 5293) [ClassicSimilarity], result of:
            0.04080133 = score(doc=5293,freq=2.0), product of:
              0.17576122 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050191253 = queryNorm
              0.23214069 = fieldWeight in 5293, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=5293)
        0.5 = coord(1/2)
    
    Abstract
     The authors describe a set of best practices that were developed to assist in the design of search user interfaces. Search user interfaces represent a challenging design domain because novices who have no desire to learn the mechanics of search engine architecture or algorithms often use them. This mismatch can lead to frustration and task failure when it is not addressed by the user interface. The best practices are organized into five domains: the corpus, search algorithms, user and task context, the search interface, and mobility. In each section the authors present an introduction to the design challenges related to the domain and a set of best practices for creating a user interface that facilitates effective use by a broad population of users and tasks.
    Date
    22. 7.2006 17:38:51
  19. Camacho-Miñano, M.-del-Mar; Núñez-Nickel, M.: ¬The multilayered nature of reference selection (2009) 0.04
    0.04484865 = sum of:
      0.024447985 = product of:
        0.09779194 = sum of:
          0.09779194 = weight(_text_:authors in 2751) [ClassicSimilarity], result of:
            0.09779194 = score(doc=2751,freq=4.0), product of:
              0.22881259 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.050191253 = queryNorm
              0.42738882 = fieldWeight in 2751, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.046875 = fieldNorm(doc=2751)
        0.25 = coord(1/4)
      0.020400666 = product of:
        0.04080133 = sum of:
          0.04080133 = weight(_text_:22 in 2751) [ClassicSimilarity], result of:
            0.04080133 = score(doc=2751,freq=2.0), product of:
              0.17576122 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050191253 = queryNorm
              0.23214069 = fieldWeight in 2751, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2751)
        0.5 = coord(1/2)
    
    Abstract
    Why authors choose some references in preference to others is a question that is still not wholly answered despite its being of interest to scientists. The relevance of references is twofold: They are a mechanism for tracing the evolution of science, and because they enhance the image of the cited authors, citations are a widely known and used indicator of scientific endeavor. Following an extensive review of the literature, we selected all papers that seek to answer the central question and demonstrate that the existing theories are not sufficient: Neither citation nor indicator theory provides a complete and convincing answer. Some perspectives in this arena remain, which are isolated from the core literature. The purpose of this article is to offer a fresh perspective on a 30-year-old problem by extending the context of the discussion. We suggest reviving the discussion about citation theories with a new perspective, that of the readers, by layers or phases, in the final choice of references, allowing for a new classification in which any paper, to date, could be included.
    Date
    22. 3.2009 19:05:07
  20. Kavcic-Colic, A.: Archiving the Web : some legal aspects (2003) 0.04
    0.043969333 = sum of:
      0.020168558 = product of:
        0.08067423 = sum of:
          0.08067423 = weight(_text_:authors in 4754) [ClassicSimilarity], result of:
            0.08067423 = score(doc=4754,freq=2.0), product of:
              0.22881259 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.050191253 = queryNorm
              0.35257778 = fieldWeight in 4754, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0546875 = fieldNorm(doc=4754)
        0.25 = coord(1/4)
      0.023800777 = product of:
        0.047601555 = sum of:
          0.047601555 = weight(_text_:22 in 4754) [ClassicSimilarity], result of:
            0.047601555 = score(doc=4754,freq=2.0), product of:
              0.17576122 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050191253 = queryNorm
              0.2708308 = fieldWeight in 4754, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=4754)
        0.5 = coord(1/2)
    
    Abstract
    Technological developments have changed the concepts of publication, reproduction and distribution. However, legislation, and in particular the Legal Deposit Law has not adjusted to these changes - it is very restrictive in the sense of protecting the rights of authors of electronic publications. National libraries and national archival institutions, being aware of their important role in preserving the written and spoken cultural heritage, try to find different legal ways to live up to these responsibilities. This paper presents some legal aspects of archiving Web pages, examines the harvesting of Web pages, provision of public access to pages, and their long-term preservation.
    Date
    10.12.2005 11:22:13

Types

  • a 1487
  • m 206
  • el 83
  • s 78
  • b 26
  • x 14
  • i 8
  • r 4
  • n 2
