Search (2111 results, page 1 of 106)

  • Active filter: year_i:[2000 TO 2010}
  1. Ackermann, E.: Piaget's constructivism, Papert's constructionism : what's the difference? (2001) 0.13
    0.1317809 = product of:
      0.2635618 = sum of:
        0.2635618 = product of:
          0.5271236 = sum of:
            0.20568876 = weight(_text_:3a in 692) [ClassicSimilarity], result of:
              0.20568876 = score(doc=692,freq=2.0), product of:
                0.43917897 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.05180212 = queryNorm
                0.46834838 = fieldWeight in 692, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=692)
            0.32143483 = weight(_text_:2c in 692) [ClassicSimilarity], result of:
              0.32143483 = score(doc=692,freq=2.0), product of:
                0.54901314 = queryWeight, product of:
                  10.598275 = idf(docFreq=2, maxDocs=44218)
                  0.05180212 = queryNorm
                0.5854775 = fieldWeight in 692, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  10.598275 = idf(docFreq=2, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=692)
          0.5 = coord(2/4)
      0.5 = coord(1/2)
    
    Content
    Cf.: https://www.semanticscholar.org/paper/Piaget-%E2%80%99-s-Constructivism-%2C-Papert-%E2%80%99-s-%3A-What-%E2%80%99-s-Ackermann/89cbcc1e740a4591443ff4765a6ae8df0fdf5554, where further pointers to related papers can be found. Also published in: Learning Group Publication 5(2001) no.3, p.438.
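    A note on the indented score breakdowns under each hit: they are Lucene "explain" trees for the classic TF-IDF similarity, in which each clause score is queryWeight * fieldWeight, with queryWeight = idf * queryNorm and fieldWeight = tf * idf * fieldNorm. A minimal Python sketch (the helper names are mine; the formulas are Lucene ClassicSimilarity's) reproduces the first clause of hit 1 from the numbers shown above:

      import math

      # Lucene ClassicSimilarity pieces, matching the explain tree of hit 1.
      def idf(doc_freq, max_docs):
          return 1.0 + math.log(max_docs / (doc_freq + 1))

      def tf(freq):
          return math.sqrt(freq)

      # Numbers copied from the first clause of hit 1 (term "3a", doc 692):
      query_norm, field_norm, freq = 0.05180212, 0.0390625, 2.0
      term_idf = idf(24, 44218)                        # 8.478011
      query_weight = term_idf * query_norm             # 0.43917897
      field_weight = tf(freq) * term_idf * field_norm  # 0.46834838
      print(query_weight * field_weight)               # 0.20568876, as in the tree

    The coord(2/4) and coord(1/2) factors that follow in the tree scale the summed clause scores by the fraction of query clauses that matched.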
  2. Kurth, M.; Ruddy, D.; Rupp, N.: Repurposing MARC metadata : using digital project experience to develop a metadata management design (2004) 0.12
    0.123792894 = sum of:
      0.017842166 = product of:
        0.071368665 = sum of:
          0.071368665 = weight(_text_:authors in 4748) [ClassicSimilarity], result of:
            0.071368665 = score(doc=4748,freq=2.0), product of:
              0.23615624 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.05180212 = queryNorm
              0.30220953 = fieldWeight in 4748, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.046875 = fieldNorm(doc=4748)
        0.25 = coord(1/4)
      0.10595073 = sum of:
        0.06383989 = weight(_text_:n in 4748) [ClassicSimilarity], result of:
          0.06383989 = score(doc=4748,freq=2.0), product of:
            0.22335295 = queryWeight, product of:
              4.3116565 = idf(docFreq=1611, maxDocs=44218)
              0.05180212 = queryNorm
            0.28582513 = fieldWeight in 4748, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3116565 = idf(docFreq=1611, maxDocs=44218)
              0.046875 = fieldNorm(doc=4748)
        0.042110834 = weight(_text_:22 in 4748) [ClassicSimilarity], result of:
          0.042110834 = score(doc=4748,freq=2.0), product of:
            0.1814022 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.05180212 = queryNorm
            0.23214069 = fieldWeight in 4748, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.046875 = fieldNorm(doc=4748)
    
    Abstract
    Metadata and information technology staff in libraries that are building digital collections typically extract and manipulate MARC metadata sets to provide access to digital content via non-MARC schemes. Metadata processing in these libraries involves defining the relationships between metadata schemes, moving metadata between schemes, and coordinating the intellectual activity and physical resources required to create and manipulate metadata. Actively managing the non-MARC metadata resources used to build digital collections is something most of these libraries have only begun to do. This article proposes strategies for managing MARC metadata repurposing efforts as the first step in a coordinated approach to library metadata management. Guided by lessons learned from Cornell University library mapping and transformation activities, the authors apply the literature of data resource management to library metadata management and propose a model for managing MARC metadata repurposing processes through the implementation of a metadata management design.
    Source
    Library hi tech. 22(2004) no.2, S.144-152
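    The workflow sketched in the abstract above - defining relationships between metadata schemes and moving metadata between them - is, at its core, a crosswalk. A minimal sketch under that reading, with a toy record structure and a made-up MARC-to-Dublin-Core mapping (illustrative only, not the authors' management design):

      # Hypothetical MARC (tag, subfield) -> Dublin Core element crosswalk.
      CROSSWALK = {
          ("245", "a"): "dc:title",
          ("100", "a"): "dc:creator",
          ("260", "c"): "dc:date",
          ("650", "a"): "dc:subject",
      }

      def repurpose(marc_record):
          """Move MARC values into a non-MARC scheme according to the crosswalk."""
          out = {}
          for (tag, code), values in marc_record.items():
              element = CROSSWALK.get((tag, code))
              if element:
                  out.setdefault(element, []).extend(values)
          return out

      record = {("245", "a"): ["Repurposing MARC metadata"], ("100", "a"): ["Kurth, M."]}
      print(repurpose(record))  # {'dc:title': [...], 'dc:creator': [...]}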
  3. Xie, I.; Cool, C.: Understanding help seeking within the context of searching digital libraries (2009) 0.10
    0.10316075 = sum of:
      0.014868473 = product of:
        0.05947389 = sum of:
          0.05947389 = weight(_text_:authors in 2737) [ClassicSimilarity], result of:
            0.05947389 = score(doc=2737,freq=2.0), product of:
              0.23615624 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.05180212 = queryNorm
              0.25184128 = fieldWeight in 2737, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2737)
        0.25 = coord(1/4)
      0.08829227 = sum of:
        0.053199906 = weight(_text_:n in 2737) [ClassicSimilarity], result of:
          0.053199906 = score(doc=2737,freq=2.0), product of:
            0.22335295 = queryWeight, product of:
              4.3116565 = idf(docFreq=1611, maxDocs=44218)
              0.05180212 = queryNorm
            0.23818761 = fieldWeight in 2737, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3116565 = idf(docFreq=1611, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2737)
        0.03509236 = weight(_text_:22 in 2737) [ClassicSimilarity], result of:
          0.03509236 = score(doc=2737,freq=2.0), product of:
            0.1814022 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.05180212 = queryNorm
            0.19345059 = fieldWeight in 2737, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2737)
    
    Abstract
    To date, there has been little empirical research investigating the specific types of help-seeking situations that arise when people interact with information in new searching environments such as digital libraries. This article reports the results of a project focusing on the identification of different types of help-seeking situations, along with types of factors that precipitate them among searchers of two different digital libraries. Participants (N = 120) representing the general public in Milwaukee and New York City were selected for this study. Based on the analysis of multiple sources of data, the authors identify 15 types of help-seeking situations among this sample of novice digital library users. These situations are related to the searching activities involved in getting started, identifying relevant digital collections, browsing for information, constructing search statements, refining searches, monitoring searches, and evaluating results. Multiple factors that determine the occurrences of each type of help-seeking situation also are identified. The article concludes with a model that represents user, system, task, and interaction outcome as codeterminates in the formation of help-seeking situations, and presents the theoretical and practical implications of the study results.
    Date
    22. 3.2009 12:49:20
  4. Gödert, W.; Hubrich, J.; Boteram, F.: Thematische Recherche und Interoperabilität : Wege zur Optimierung des Zugriffs auf heterogen erschlossene Dokumente (2009) 0.10
    0.09790489 = sum of:
      0.08035871 = product of:
        0.32143483 = sum of:
          0.32143483 = weight(_text_:2c in 193) [ClassicSimilarity], result of:
            0.32143483 = score(doc=193,freq=2.0), product of:
              0.54901314 = queryWeight, product of:
                10.598275 = idf(docFreq=2, maxDocs=44218)
                0.05180212 = queryNorm
              0.5854775 = fieldWeight in 193, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                10.598275 = idf(docFreq=2, maxDocs=44218)
                0.0390625 = fieldNorm(doc=193)
        0.25 = coord(1/4)
      0.01754618 = product of:
        0.03509236 = sum of:
          0.03509236 = weight(_text_:22 in 193) [ClassicSimilarity], result of:
            0.03509236 = score(doc=193,freq=2.0), product of:
              0.1814022 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05180212 = queryNorm
              0.19345059 = fieldWeight in 193, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=193)
        0.5 = coord(1/2)
    
    Source
    https://opus4.kobv.de/opus4-bib-info/frontdoor/index/index/searchtype/authorsearch/author/%22Hubrich%2C+Jessica%22/docId/703/start/0/rows/20
  5. Egghe, L.; Ravichandra Rao, I.K.: Duality revisited : construction of fractional frequency distributions based on two dual Lotka laws (2002) 0.09
    0.09474343 = sum of:
      0.030903539 = product of:
        0.123614155 = sum of:
          0.123614155 = weight(_text_:authors in 1006) [ClassicSimilarity], result of:
            0.123614155 = score(doc=1006,freq=6.0), product of:
              0.23615624 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.05180212 = queryNorm
              0.52344227 = fieldWeight in 1006, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.046875 = fieldNorm(doc=1006)
        0.25 = coord(1/4)
      0.06383989 = product of:
        0.12767978 = sum of:
          0.12767978 = weight(_text_:n in 1006) [ClassicSimilarity], result of:
            0.12767978 = score(doc=1006,freq=8.0), product of:
              0.22335295 = queryWeight, product of:
                4.3116565 = idf(docFreq=1611, maxDocs=44218)
                0.05180212 = queryNorm
              0.57165027 = fieldWeight in 1006, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                4.3116565 = idf(docFreq=1611, maxDocs=44218)
                0.046875 = fieldNorm(doc=1006)
        0.5 = coord(1/2)
    
    Abstract
    Fractional frequency distributions of, for example, authors with a certain (fractional) number of papers are very irregular and, therefore, not easy to model or to explain. This article makes a first attempt at this by assuming two simple Lotka laws (with exponent 2): one for the number of authors with n papers (total count here) and one for the number of papers with n authors, n ∈ N. Based on an earlier convolution model of Egghe, interpreted and reworked now for discrete scores, we are able to produce theoretical fractional frequency distributions with only one parameter, which are in very close agreement with the practical ones as found in a large dataset produced earlier by Rao. The article also shows that (irregular) fractional frequency distributions are a consequence of Lotka's law, and are not examples of breakdowns of this famous historical law.
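    The starting assumption of the paper can be written down directly: both dual Lotka laws are power laws with exponent 2, and fractional counting credits each of the n authors of a paper with 1/n of a paper. A small sketch of those assumptions (the normalization constant is illustrative, not from the paper):

      # Two dual Lotka laws with exponent 2, as assumed in the abstract:
      #   A(n) = C / n**2   number of authors with n papers
      #   P(n) = C / n**2   number of papers with n authors
      C = 600.0  # illustrative normalization constant

      def lotka(n, c=C, exponent=2.0):
          return c / n ** exponent

      # Fractional counting: each of the n authors of a paper receives credit 1/n,
      # which is what produces the irregular fractional frequency distributions.
      for n in (1, 2, 3, 4):
          print(n, lotka(n), round(1 / n, 3))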
  6. Chen, Z.; Fu, B.: On the complexity of Rocchio's similarity-based relevance feedback algorithm (2007) 0.09
    0.08618352 = sum of:
      0.021027196 = product of:
        0.084108785 = sum of:
          0.084108785 = weight(_text_:authors in 578) [ClassicSimilarity], result of:
            0.084108785 = score(doc=578,freq=4.0), product of:
              0.23615624 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.05180212 = queryNorm
              0.35615736 = fieldWeight in 578, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0390625 = fieldNorm(doc=578)
        0.25 = coord(1/4)
      0.06515632 = product of:
        0.13031264 = sum of:
          0.13031264 = weight(_text_:n in 578) [ClassicSimilarity], result of:
            0.13031264 = score(doc=578,freq=12.0), product of:
              0.22335295 = queryWeight, product of:
                4.3116565 = idf(docFreq=1611, maxDocs=44218)
                0.05180212 = queryNorm
              0.58343816 = fieldWeight in 578, product of:
                3.4641016 = tf(freq=12.0), with freq of:
                  12.0 = termFreq=12.0
                4.3116565 = idf(docFreq=1611, maxDocs=44218)
                0.0390625 = fieldNorm(doc=578)
        0.5 = coord(1/2)
    
    Abstract
    Rocchio's similarity-based relevance feedback algorithm, one of the most important query reformation methods in information retrieval, is essentially an adaptive learning algorithm from examples in searching for documents represented by a linear classifier. Despite its popularity in various applications, there is little rigorous analysis of its learning complexity in the literature. In this article, the authors prove for the first time that the learning complexity of Rocchio's algorithm is O(d + d**2(log d + log n)) over the discretized vector space {0, ..., n-1}**d when the inner product similarity measure is used. The upper bound on the learning complexity for searching for documents represented by a monotone linear classifier (q, theta) over {0, ..., n-1}**d can be improved to, at most, 1 + 2k(n-1)(log d + log(n-1)), where k is the number of nonzero components in q. Several lower bounds on the learning complexity are also obtained for Rocchio's algorithm. For example, the authors prove that Rocchio's algorithm has a lower bound Omega((d choose 2) log n) on its learning complexity over the Boolean vector space {0,1}**d.
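    For context, the algorithm being analyzed is the classic Rocchio update, which moves the query vector toward the centroid of relevant documents and away from the centroid of non-relevant ones. A sketch of that standard formula (the alpha/beta/gamma weights below are conventional defaults, not from this paper, which studies the algorithm's learning complexity rather than its implementation):

      import numpy as np

      def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
          """One round of Rocchio relevance feedback over row-vector document matrices."""
          return (alpha * query
                  + beta * relevant.mean(axis=0)
                  - gamma * nonrelevant.mean(axis=0))

      q = np.array([1.0, 0.0, 0.0])
      rel = np.array([[0.8, 0.6, 0.0], [0.9, 0.4, 0.1]])
      nonrel = np.array([[0.0, 0.1, 0.9]])
      print(rocchio(q, rel, nonrel))  # the query drifts toward the relevant centroid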
  7. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.08
    0.08276204 = sum of:
      0.061706625 = product of:
        0.2468265 = sum of:
          0.2468265 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.2468265 = score(doc=562,freq=2.0), product of:
              0.43917897 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.05180212 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.25 = coord(1/4)
      0.021055417 = product of:
        0.042110834 = sum of:
          0.042110834 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.042110834 = score(doc=562,freq=2.0), product of:
              0.1814022 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05180212 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
  8. Chau, M.; Lu, Y.; Fang, X.; Yang, C.C.: Characteristics of character usage in Chinese Web searching (2009) 0.08
    0.0827025 = product of:
      0.165405 = sum of:
        0.165405 = sum of:
          0.13031264 = weight(_text_:n in 2456) [ClassicSimilarity], result of:
            0.13031264 = score(doc=2456,freq=12.0), product of:
              0.22335295 = queryWeight, product of:
                4.3116565 = idf(docFreq=1611, maxDocs=44218)
                0.05180212 = queryNorm
              0.58343816 = fieldWeight in 2456, product of:
                3.4641016 = tf(freq=12.0), with freq of:
                  12.0 = termFreq=12.0
                4.3116565 = idf(docFreq=1611, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2456)
          0.03509236 = weight(_text_:22 in 2456) [ClassicSimilarity], result of:
            0.03509236 = score(doc=2456,freq=2.0), product of:
              0.1814022 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05180212 = queryNorm
              0.19345059 = fieldWeight in 2456, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2456)
      0.5 = coord(1/2)
    
    Abstract
    The use of non-English Web search engines has been prevalent. Given the popularity of Chinese Web searching and the unique characteristics of the Chinese language, it is imperative to conduct studies focusing on the analysis of Chinese Web search queries. In this paper, we report our research on the character usage of Chinese search logs from a Web search engine in Hong Kong. By examining the distribution of search query terms, we found that users tended to use more diversified terms and that the usage of characters in search queries was quite different from the character usage of general online information in Chinese. After studying the Zipf distribution of n-grams with different values of n, we found that the unigram curve is the most curved of all, that the bigram curve follows the Zipf distribution best, and that the curves of n-grams with larger n (n = 3-6) had similar structures with alpha-values in the range of 0.66-0.86. The distribution of combined n-grams was also studied. All the analyses were performed on the data both before and after the removal of function terms and incomplete terms, and similar findings were revealed. We believe the findings from this study provide some insights into further research in non-English Web searching and will assist in the design of more effective Chinese Web search engines.
    Date
    22.11.2008 17:57:22
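    The n-gram analysis described in the abstract above can be outlined in code: extract character n-grams from the queries, rank them by frequency, and estimate the Zipf exponent as the negative slope of the log-log rank-frequency curve. A sketch with toy queries (not the Hong Kong search log):

      from collections import Counter
      import math

      def char_ngrams(text, n):
          return (text[i:i + n] for i in range(len(text) - n + 1))

      def zipf_exponent(queries, n):
          """Least-squares slope of log(freq) vs. log(rank); -slope estimates alpha."""
          counts = sorted(Counter(g for q in queries
                                  for g in char_ngrams(q, n)).values(), reverse=True)
          xs = [math.log(r) for r in range(1, len(counts) + 1)]
          ys = [math.log(c) for c in counts]
          mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
          return -(sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                   / sum((x - mx) ** 2 for x in xs))

      queries = ["香港天氣", "天氣預報", "香港地圖", "地圖搜尋"]  # toy Chinese queries
      print(zipf_exponent(queries, n=2))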
  9. Sühl-Strohmenger, W.: "Now or never! Whatever, wherever. .. !?" : Determinanten zukunftsorientierter Informationspraxis in wissenschaftlichen Bibliotheken und die Bedeutung professioneller Informationsarchitekturen (2009) 0.07
    0.07088652 = product of:
      0.14177305 = sum of:
        0.14177305 = sum of:
          0.092144944 = weight(_text_:n in 3052) [ClassicSimilarity], result of:
            0.092144944 = score(doc=3052,freq=6.0), product of:
              0.22335295 = queryWeight, product of:
                4.3116565 = idf(docFreq=1611, maxDocs=44218)
                0.05180212 = queryNorm
              0.41255307 = fieldWeight in 3052, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                4.3116565 = idf(docFreq=1611, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3052)
          0.049628094 = weight(_text_:22 in 3052) [ClassicSimilarity], result of:
            0.049628094 = score(doc=3052,freq=4.0), product of:
              0.1814022 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05180212 = queryNorm
              0.27358043 = fieldWeight in 3052, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3052)
      0.5 = coord(1/2)
    
    Abstract
    The information practices of students and researchers in the context of academic libraries are changing profoundly under the conditions of the digital information world, yet they do not point only toward the active Web 2.0 user or the "Internet user of tomorrow", as it sometimes appears. Students, heavily taxed by the complex demands of new degree programmes, and researchers working under intense pressure to compete and succeed, need libraries, with their professional services and in their role as "navigators of the ocean of knowledge", more than ever. Among their main user groups, students and researchers, academic libraries continue to enjoy a high reputation, precisely because of their reliable core tasks, designed for continuity: acquiring (and licensing) the media and resources essential for study and research; indexing and making them available professionally; supporting scholarly publishing; offering needs-oriented (subject) information services; and maintaining information infrastructures tailored to concrete working and learning needs. This is confirmed by the findings of nearly all major recent user studies in the German-speaking countries, as well as by many years of experience with courses teaching information literacy. The talk discusses the importance of information architectures designed by libraries for professional scholarly information work, in the light of empirically demonstrable user needs and information behaviour.
    Date
    22. 8.2009 19:51:28
    23. 8.2009 11:22:11
  10. Rostaing, H.; Barts, N.; Léveillé, V.: Bibliometrics: representation instrument of the multidisciplinary positioning of a scientific area : Implementation for an Advisory Scientific Committee (2007) 0.07
    0.070633814 = product of:
      0.14126763 = sum of:
        0.14126763 = sum of:
          0.08511985 = weight(_text_:n in 1144) [ClassicSimilarity], result of:
            0.08511985 = score(doc=1144,freq=2.0), product of:
              0.22335295 = queryWeight, product of:
                4.3116565 = idf(docFreq=1611, maxDocs=44218)
                0.05180212 = queryNorm
              0.38110018 = fieldWeight in 1144, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.3116565 = idf(docFreq=1611, maxDocs=44218)
                0.0625 = fieldNorm(doc=1144)
          0.05614778 = weight(_text_:22 in 1144) [ClassicSimilarity], result of:
            0.05614778 = score(doc=1144,freq=2.0), product of:
              0.1814022 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05180212 = queryNorm
              0.30952093 = fieldWeight in 1144, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=1144)
      0.5 = coord(1/2)
    
    Date
    30.12.2007 11:22:39
  11. OWL Web Ontology Language Test Cases (2004) 0.07
    0.070633814 = product of:
      0.14126763 = sum of:
        0.14126763 = sum of:
          0.08511985 = weight(_text_:n in 4685) [ClassicSimilarity], result of:
            0.08511985 = score(doc=4685,freq=2.0), product of:
              0.22335295 = queryWeight, product of:
                4.3116565 = idf(docFreq=1611, maxDocs=44218)
                0.05180212 = queryNorm
              0.38110018 = fieldWeight in 4685, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.3116565 = idf(docFreq=1611, maxDocs=44218)
                0.0625 = fieldNorm(doc=4685)
          0.05614778 = weight(_text_:22 in 4685) [ClassicSimilarity], result of:
            0.05614778 = score(doc=4685,freq=2.0), product of:
              0.1814022 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05180212 = queryNorm
              0.30952093 = fieldWeight in 4685, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=4685)
      0.5 = coord(1/2)
    
    Date
    14. 8.2011 13:33:22
    Type
    n
  12. Egghe, L.: Empirical and combinatorial study of country occurrences in multi-authored papers (2006) 0.07
    0.06599411 = sum of:
      0.029136134 = product of:
        0.11654454 = sum of:
          0.11654454 = weight(_text_:authors in 81) [ClassicSimilarity], result of:
            0.11654454 = score(doc=81,freq=12.0), product of:
              0.23615624 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.05180212 = queryNorm
              0.49350607 = fieldWeight in 81, product of:
                3.4641016 = tf(freq=12.0), with freq of:
                  12.0 = termFreq=12.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.03125 = fieldNorm(doc=81)
        0.25 = coord(1/4)
      0.036857978 = product of:
        0.073715955 = sum of:
          0.073715955 = weight(_text_:n in 81) [ClassicSimilarity], result of:
            0.073715955 = score(doc=81,freq=6.0), product of:
              0.22335295 = queryWeight, product of:
                4.3116565 = idf(docFreq=1611, maxDocs=44218)
                0.05180212 = queryNorm
              0.33004245 = fieldWeight in 81, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                4.3116565 = idf(docFreq=1611, maxDocs=44218)
                0.03125 = fieldNorm(doc=81)
        0.5 = coord(1/2)
    
    Abstract
    Papers written by several authors can be classified according to the countries of the author affiliations. The empirical part of this paper consists of two datasets. One dataset consists of 1,035 papers retrieved via the search "pedagog*" in the years 2004 and 2005 (up to October) in Academic Search Elite, which is a case where phi(m) = the number of papers with m = 1, 2, 3, ... authors is decreasing, hence most of the papers have a low number of authors. Here we find that #j,m = the number of times a country occurs j times in an m-authored paper, j = 1, ..., m-1, is decreasing and that #m,m is much higher than all the other #j,m values. The other dataset consists of 3,271 papers retrieved via the search "enzyme" in the year 2005 (up to October) in the same database, which is a case of a non-decreasing phi(m): most papers have 3 or 4 authors and we even find many papers with a much higher number of authors. In this case we show again that #m,m is much higher than the other #j,m values but that #j,m is not decreasing anymore in j = 1, ..., m-1, although #1,m is (apart from #m,m) the largest number amongst the #j,m. The combinatorial part gives a proof of the fact that #j,m decreases for j = 1, ..., m-1, supposing that all cases are equally possible. This shows that the first dataset conforms more with this model than the second dataset. Explanations for these findings are given. From the data we also find the (we think: new) distribution of the number of papers with n = 1, 2, 3, ... countries (i.e. where there are n different countries involved amongst the m (>= n) authors of a paper): a fast decreasing function, e.g. as a power law with a very large Lotka exponent.
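    The #j,m statistics of the abstract are straightforward to compute from affiliation data: for each m-authored paper, count how often each country occurs among the authors, then tally how many countries occur exactly j times. A minimal sketch with made-up papers:

      from collections import Counter

      def country_occurrence_counts(papers):
          """#j,m: how often a country occurs j times among the m authors of a paper."""
          tally = Counter()
          for countries in papers:  # one country per author
              m = len(countries)
              for times in Counter(countries).values():
                  tally[(times, m)] += 1
          return tally

      papers = [["BE", "BE", "IN"], ["IN", "IN", "IN"], ["BE", "IN", "US"]]
      for (j, m), count in sorted(country_occurrence_counts(papers).items()):
          print(f"#{j},{m} = {count}")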
  13. Cui, H.; Heidorn, P.B.: The reusability of induced knowledge for the automatic semantic markup of taxonomic descriptions (2007) 0.06
    0.06337097 = sum of:
      0.025752952 = product of:
        0.10301181 = sum of:
          0.10301181 = weight(_text_:authors in 84) [ClassicSimilarity], result of:
            0.10301181 = score(doc=84,freq=6.0), product of:
              0.23615624 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.05180212 = queryNorm
              0.43620193 = fieldWeight in 84, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0390625 = fieldNorm(doc=84)
        0.25 = coord(1/4)
      0.03761802 = product of:
        0.07523604 = sum of:
          0.07523604 = weight(_text_:n in 84) [ClassicSimilarity], result of:
            0.07523604 = score(doc=84,freq=4.0), product of:
              0.22335295 = queryWeight, product of:
                4.3116565 = idf(docFreq=1611, maxDocs=44218)
                0.05180212 = queryNorm
              0.33684817 = fieldWeight in 84, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.3116565 = idf(docFreq=1611, maxDocs=44218)
                0.0390625 = fieldNorm(doc=84)
        0.5 = coord(1/2)
    
    Abstract
    To automatically convert legacy data of taxonomic descriptions into extensible markup language (XML) format, the authors designed a machine-learning-based approach. In this project, three corpora of taxonomic descriptions were selected to prove the hypothesis that domain knowledge and conventions automatically induced from some semistructured corpora (i.e., base corpora) are useful to improve the markup performance of other less-structured, quite different corpora (i.e., evaluation corpora). The "structuredness" of the three corpora was carefully measured. Based on the structuredness measures, two of the corpora were used as the base corpora and one as the evaluation corpus. Three series of experiments were carried out with the MARTT (markuper of taxonomic treatments) system the authors developed to evaluate the effectiveness of different methods of using the n-gram semantic class association rules, the element relative position probabilities, and a combination of the two types of knowledge mined from the automatically marked-up base corpora. The experimental results showed that the induced knowledge from the base corpora was more reliable than that learned from the training examples alone, and that the n-gram semantic class association rules were effective in improving the markup performance, especially on the elements with sparse training examples. The authors also identify a number of challenges for any automatic markup system using taxonomic descriptions.
  14. Egghe, L.: Relations between the continuous and the discrete Lotka power function (2005) 0.06
    0.06298378 = sum of:
      0.017842166 = product of:
        0.071368665 = sum of:
          0.071368665 = weight(_text_:authors in 3464) [ClassicSimilarity], result of:
            0.071368665 = score(doc=3464,freq=2.0), product of:
              0.23615624 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.05180212 = queryNorm
              0.30220953 = fieldWeight in 3464, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.046875 = fieldNorm(doc=3464)
        0.25 = coord(1/4)
      0.04514162 = product of:
        0.09028324 = sum of:
          0.09028324 = weight(_text_:n in 3464) [ClassicSimilarity], result of:
            0.09028324 = score(doc=3464,freq=4.0), product of:
              0.22335295 = queryWeight, product of:
                4.3116565 = idf(docFreq=1611, maxDocs=44218)
                0.05180212 = queryNorm
              0.40421778 = fieldWeight in 3464, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.3116565 = idf(docFreq=1611, maxDocs=44218)
                0.046875 = fieldNorm(doc=3464)
        0.5 = coord(1/2)
    
    Abstract
    The discrete Lotka power function describes the number of sources (e.g., authors) with n = 1, 2, 3, ... items (e.g., publications). As in econometrics, informetrics theory requires functions of a continuous variable j, replacing the discrete variable n. Now j represents item densities instead of numbers of items. The continuous Lotka power function describes the density of sources with item density j. The discrete Lotka function is the one obtained empirically from data; the continuous Lotka function is the one needed when one wants to apply Lotkaian informetrics, i.e., to determine properties that can be derived from the (continuous) model. It is, hence, important to know the relations between the two models. We show that the exponents of the discrete Lotka function (if not too high, i.e., within limits encountered in practice) and of the continuous Lotka function are approximately the same. This is important to know in applying theoretical results (from the continuous model), derived from practical data.
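    In symbols, assuming conventional notation (the letters are mine; the two functions are the ones named in the abstract):

      \[
        f(n) = \frac{C}{n^{\alpha}} \quad (n = 1, 2, 3, \dots), \qquad
        \varphi(j) = \frac{D}{j^{\beta}} \quad (j \ge 1), \qquad
        \alpha \approx \beta,
      \]

    where f(n) counts sources with exactly n items, \varphi(j) is the density of sources with item density j, and the approximate equality of the two exponents (within the limits met in practice) is the relation the paper establishes.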
  15. Polat, H.; Du, W.: Privacy-preserving top-N recommendation on distributed data (2008) 0.06
    0.06298378 = sum of:
      0.017842166 = product of:
        0.071368665 = sum of:
          0.071368665 = weight(_text_:authors in 1864) [ClassicSimilarity], result of:
            0.071368665 = score(doc=1864,freq=2.0), product of:
              0.23615624 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.05180212 = queryNorm
              0.30220953 = fieldWeight in 1864, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.046875 = fieldNorm(doc=1864)
        0.25 = coord(1/4)
      0.04514162 = product of:
        0.09028324 = sum of:
          0.09028324 = weight(_text_:n in 1864) [ClassicSimilarity], result of:
            0.09028324 = score(doc=1864,freq=4.0), product of:
              0.22335295 = queryWeight, product of:
                4.3116565 = idf(docFreq=1611, maxDocs=44218)
                0.05180212 = queryNorm
              0.40421778 = fieldWeight in 1864, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.3116565 = idf(docFreq=1611, maxDocs=44218)
                0.046875 = fieldNorm(doc=1864)
        0.5 = coord(1/2)
    
    Abstract
    Traditional collaborative filtering (CF) systems perform filtering tasks on existing databases; however, data collected for recommendation purposes may be split between different online vendors. To generate better predictions, offer richer recommendation services, enhance mutual advantages, and overcome problems caused by inadequate data and/or sparseness, e-companies want to integrate their data. For privacy, legal, and financial reasons, however, they do not want to disclose their data to each other. Providing privacy measures is vital to accomplishing distributed data-based top-N recommendation (TN) while preserving data holders' privacy. In this article, the authors present schemes for binary ratings-based TN on distributed data (horizontally or vertically), and provide accurate referrals without greatly exposing data owners' privacy. Our schemes make it possible for online vendors, even competing companies, to collaborate and conduct TN with privacy, using the joint data while introducing reasonable overhead costs.
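    The abstract does not spell out the schemes themselves, so as a neutral illustration of the general idea - sharing perturbed binary ratings so that no individual true rating is disclosed - here is a randomized-response sketch (a standard perturbation technique, not the authors' protocol):

      import random

      def randomized_response(ratings, p_keep=0.8):
          """Keep each binary rating with probability p_keep, flip it otherwise,
          so the receiving vendor never learns any single true rating."""
          return [r if random.random() < p_keep else 1 - r for r in ratings]

      def estimate_true_rate(shared, p_keep=0.8):
          """Unbiased estimate of the true 1-rate q from the perturbed bits:
          E[observed] = p_keep*q + (1 - p_keep)*(1 - q), solved for q."""
          observed = sum(shared) / len(shared)
          return (observed - (1 - p_keep)) / (2 * p_keep - 1)

      true_ratings = [1, 0, 1, 1, 0, 1, 0, 1] * 250  # one vendor's binary ratings
      print(estimate_true_rate(randomized_response(true_ratings)))  # near 0.625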
  16. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.06
    0.062417716 = sum of:
      0.04113775 = product of:
        0.164551 = sum of:
          0.164551 = weight(_text_:3a in 701) [ClassicSimilarity], result of:
            0.164551 = score(doc=701,freq=2.0), product of:
              0.43917897 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.05180212 = queryNorm
              0.3746787 = fieldWeight in 701, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.03125 = fieldNorm(doc=701)
        0.25 = coord(1/4)
      0.021279963 = product of:
        0.042559925 = sum of:
          0.042559925 = weight(_text_:n in 701) [ClassicSimilarity], result of:
            0.042559925 = score(doc=701,freq=2.0), product of:
              0.22335295 = queryWeight, product of:
                4.3116565 = idf(docFreq=1611, maxDocs=44218)
                0.05180212 = queryNorm
              0.19055009 = fieldWeight in 701, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.3116565 = idf(docFreq=1611, maxDocs=44218)
                0.03125 = fieldNorm(doc=701)
        0.5 = coord(1/2)
    
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  17. Price, A.: NOVAGate : a Nordic gateway to electronic resources in the forestry, veterinary and agricultural sciences (2000) 0.06
    0.06180459 = product of:
      0.12360918 = sum of:
        0.12360918 = sum of:
          0.07447987 = weight(_text_:n in 4874) [ClassicSimilarity], result of:
            0.07447987 = score(doc=4874,freq=2.0), product of:
              0.22335295 = queryWeight, product of:
                4.3116565 = idf(docFreq=1611, maxDocs=44218)
                0.05180212 = queryNorm
              0.33346266 = fieldWeight in 4874, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.3116565 = idf(docFreq=1611, maxDocs=44218)
                0.0546875 = fieldNorm(doc=4874)
          0.049129307 = weight(_text_:22 in 4874) [ClassicSimilarity], result of:
            0.049129307 = score(doc=4874,freq=2.0), product of:
              0.1814022 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05180212 = queryNorm
              0.2708308 = fieldWeight in 4874, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=4874)
      0.5 = coord(1/2)
    
    Date
    22. 6.2002 19:41:00
    Location
    N
  18. Neshat, N.; Horri, A.: A study of subject indexing consistency between the National Library of Iran and Humanities Libraries in the area of Iranian studies (2006) 0.06
    0.06180459 = product of:
      0.12360918 = sum of:
        0.12360918 = sum of:
          0.07447987 = weight(_text_:n in 230) [ClassicSimilarity], result of:
            0.07447987 = score(doc=230,freq=2.0), product of:
              0.22335295 = queryWeight, product of:
                4.3116565 = idf(docFreq=1611, maxDocs=44218)
                0.05180212 = queryNorm
              0.33346266 = fieldWeight in 230, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.3116565 = idf(docFreq=1611, maxDocs=44218)
                0.0546875 = fieldNorm(doc=230)
          0.049129307 = weight(_text_:22 in 230) [ClassicSimilarity], result of:
            0.049129307 = score(doc=230,freq=2.0), product of:
              0.1814022 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05180212 = queryNorm
              0.2708308 = fieldWeight in 230, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=230)
      0.5 = coord(1/2)
    
    Date
    4. 1.2007 10:22:26
  19. Nyseter, T.: Learning centres and knowledge management : based on common ideas? (2005) 0.06
    0.06180459 = product of:
      0.12360918 = sum of:
        0.12360918 = sum of:
          0.07447987 = weight(_text_:n in 3014) [ClassicSimilarity], result of:
            0.07447987 = score(doc=3014,freq=2.0), product of:
              0.22335295 = queryWeight, product of:
                4.3116565 = idf(docFreq=1611, maxDocs=44218)
                0.05180212 = queryNorm
              0.33346266 = fieldWeight in 3014, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.3116565 = idf(docFreq=1611, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3014)
          0.049129307 = weight(_text_:22 in 3014) [ClassicSimilarity], result of:
            0.049129307 = score(doc=3014,freq=2.0), product of:
              0.1814022 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05180212 = queryNorm
              0.2708308 = fieldWeight in 3014, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3014)
      0.5 = coord(1/2)
    
    Date
    22. 7.2009 11:55:29
    Location
    N
  20. Dannenberg, R.B.; Birmingham, W.P.; Pardo, B.; Hu, N.; Meek, C.; Tzanetakis, G.: A comparative evaluation of search techniques for query-by-humming using the MUSART testbed (2007) 0.06
    0.060940944 = sum of:
      0.014868473 = product of:
        0.05947389 = sum of:
          0.05947389 = weight(_text_:authors in 269) [ClassicSimilarity], result of:
            0.05947389 = score(doc=269,freq=2.0), product of:
              0.23615624 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.05180212 = queryNorm
              0.25184128 = fieldWeight in 269, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0390625 = fieldNorm(doc=269)
        0.25 = coord(1/4)
      0.046072472 = product of:
        0.092144944 = sum of:
          0.092144944 = weight(_text_:n in 269) [ClassicSimilarity], result of:
            0.092144944 = score(doc=269,freq=6.0), product of:
              0.22335295 = queryWeight, product of:
                4.3116565 = idf(docFreq=1611, maxDocs=44218)
                0.05180212 = queryNorm
              0.41255307 = fieldWeight in 269, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                4.3116565 = idf(docFreq=1611, maxDocs=44218)
                0.0390625 = fieldNorm(doc=269)
        0.5 = coord(1/2)
    
    Abstract
    Query-by-humming systems offer content-based searching for melodies and require no special musical training or knowledge. Many such systems have been built, but there has not been much useful evaluation and comparison in the literature due to the lack of shared databases and queries. The MUSART project testbed allows various search algorithms to be compared using a shared framework that automatically runs experiments and summarizes results. Using this testbed, the authors compared algorithms based on string alignment, melodic contour matching, a hidden Markov model, n-grams, and CubyHum. Retrieval performance is very sensitive to distance functions and the representation of pitch and rhythm, which raises questions about some previously published conclusions. Some algorithms are particularly sensitive to the quality of queries. Our queries, which are taken from human subjects in a realistic setting, are quite difficult, especially for n-gram models. Finally, simulations on query-by-humming performance as a function of database size indicate that retrieval performance falls only slowly as the database size increases.
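    Of the techniques compared, melodic contour matching with string alignment is the easiest to sketch: reduce a melody to a string of pitch directions (Parsons-code style) and align the hummed query against a database melody by edit distance. A toy illustration, not the MUSART implementation:

      def contour(pitches):
          """Reduce MIDI pitches to an Up/Down/Repeat direction string."""
          return "".join("U" if b > a else "D" if b < a else "R"
                         for a, b in zip(pitches, pitches[1:]))

      def edit_distance(s, t):
          """Levenshtein alignment cost between two contour strings."""
          prev = list(range(len(t) + 1))
          for i, cs in enumerate(s, 1):
              cur = [i]
              for j, ct in enumerate(t, 1):
                  cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                                 prev[j - 1] + (cs != ct)))
              prev = cur
          return prev[-1]

      hummed = contour([60, 62, 62, 59])    # sung query as MIDI note numbers
      target = contour([60, 62, 63, 59])    # a database melody
      print(edit_distance(hummed, target))  # low cost suggests a match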

Types

  • a 1737
  • m 247
  • el 127
  • s 96
  • b 27
  • n 25
  • x 20
  • i 8
  • r 5
  • p 1
