Search (194 results, page 1 of 10)

  • Filter: type_ss:"el"
  1. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.07
    0.07109721 = product of:
      0.14219442 = sum of:
        0.14219442 = product of:
          0.42658323 = sum of:
            0.42658323 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
              0.42658323 = score(doc=1826,freq=2.0), product of:
                0.4554123 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.05371688 = queryNorm
                0.93669677 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
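    These indented score breakdowns are Lucene "explain" output for the ClassicSimilarity (tf-idf) ranking model. As a reading aid, the arithmetic of this first entry works out as follows, assuming Lucene's standard ClassicSimilarity definitions (tf = sqrt(freq); idf = 1 + ln(numDocs/(docFreq+1)); coord discounts for unmatched query clauses):

    \begin{aligned}
    \mathrm{idf} &= 1 + \ln\frac{44218}{24 + 1} \approx 8.478\\
    \mathrm{queryWeight} &= \mathrm{idf}\cdot\mathrm{queryNorm} = 8.478 \cdot 0.05372 \approx 0.4554\\
    \mathrm{fieldWeight} &= \sqrt{2}\cdot\mathrm{idf}\cdot\mathrm{fieldNorm} = 1.4142 \cdot 8.478 \cdot 0.078125 \approx 0.9367\\
    \mathrm{score} &= \mathrm{queryWeight}\cdot\mathrm{fieldWeight}\cdot\mathrm{coord}(1/3)\cdot\mathrm{coord}(1/2) \approx 0.0711
    \end{aligned}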
    
    Source
    http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107
  2. Landauer, T.K.; Foltz, P.W.; Laham, D.: An introduction to Latent Semantic Analysis (1998) 0.06
    0.059949536 = product of:
      0.11989907 = sum of:
        0.11989907 = product of:
          0.23979814 = sum of:
            0.23979814 = weight(_text_:word in 1162) [ClassicSimilarity], result of:
              0.23979814 = score(doc=1162,freq=12.0), product of:
                0.28165168 = queryWeight, product of:
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.05371688 = queryNorm
                0.85139966 = fieldWeight in 1162, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1162)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Latent Semantic Analysis (LSA) is a theory and method for extracting and representing the contextual-usage meaning of words by statistical computations applied to a large corpus of text (Landauer and Dumais, 1997). The underlying idea is that the aggregate of all the word contexts in which a given word does and does not appear provides a set of mutual constraints that largely determines the similarity of meaning of words and sets of words to each other. The adequacy of LSA's reflection of human knowledge has been established in a variety of ways. For example, its scores overlap those of humans on standard vocabulary and subject-matter tests; it mimics human word sorting and category judgments; it simulates word-word and passage-word lexical priming data; and, as reported in the three following articles in this issue, it accurately estimates passage coherence, the learnability of passages by individual students, and the quality and quantity of knowledge contained in an essay.
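    To make the pipeline the abstract describes concrete, here is a minimal sketch (term-document counts, low-rank SVD, cosine similarity in the reduced space) using scikit-learn; the toy corpus and the number of latent dimensions are illustrative choices, not the paper's:

      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.decomposition import TruncatedSVD
      from sklearn.metrics.pairwise import cosine_similarity

      corpus = [
          "human machine interface for computer applications",
          "a survey of user opinion of computer system response time",
          "the generation of random binary trees",
          "the intersection graph of paths in trees",
      ]
      X = CountVectorizer().fit_transform(corpus)      # documents x terms
      svd = TruncatedSVD(n_components=2, random_state=0)
      docs = svd.fit_transform(X)                      # documents in the latent space
      terms = svd.components_.T                        # terms mapped into the same space
      print(cosine_similarity(docs))                   # passage-passage similarities

    In this reduced space, passages can be similar even when they share no surface words, which is the "mutual constraints" effect the abstract refers to.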
  3. Rauber, A.: Digital preservation in data-driven science : on the importance of process capture, preservation and validation (2012) 0.06
    0.058228545 = product of:
      0.11645709 = sum of:
        0.11645709 = product of:
          0.34937125 = sum of:
            0.34937125 = weight(_text_:object's in 469) [ClassicSimilarity], result of:
              0.34937125 = score(doc=469,freq=2.0), product of:
                0.53207254 = queryWeight, product of:
                  9.905128 = idf(docFreq=5, maxDocs=44218)
                  0.05371688 = queryNorm
                0.65662336 = fieldWeight in 469, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  9.905128 = idf(docFreq=5, maxDocs=44218)
                  0.046875 = fieldNorm(doc=469)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
    Current digital preservation is strongly biased towards data objects: digital files of document-style objects, or encapsulated and largely self-contained objects. To provide authenticity and provenance information, comprehensive metadata models are deployed to document information on an object's context. Yet we claim that simply documenting an object's context may not be sufficient to ensure proper provenance and to fulfill the stated preservation goals. Specifically in e-Science and business settings, capturing, documenting and preserving entire processes may be necessary to meet the preservation goals. We thus present an approach for capturing, documenting and preserving processes, and means to assess their authenticity upon re-execution. We will discuss options as well as limitations and open challenges to achieve sound preservation, specifically within scientific processes.
  4. Wordhoard (n.d.) 0.06
    0.05710669 = product of:
      0.11421338 = sum of:
        0.11421338 = product of:
          0.22842675 = sum of:
            0.22842675 = weight(_text_:word in 3922) [ClassicSimilarity], result of:
              0.22842675 = score(doc=3922,freq=8.0), product of:
                0.28165168 = queryWeight, product of:
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.05371688 = queryNorm
                0.81102574 = fieldWeight in 3922, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3922)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    WordHoard defines a multiword unit as a special type of collocate in which the component words comprise a meaningful phrase. For example, "Knight of the Round Table" is a meaningful multiword unit or phrase. WordHoard uses the notion of a pseudo-bigram to generalize the computation of bigram (two word) statistical measures to phrases (n-grams) longer than two words, and to allow comparisons of these measures for phrases with different word counts. WordHoard applies the localmaxs algorithm of Silva et al. to the pseudo-bigrams to identify potential compositional phrases that "stand out" in a text. WordHoard can also filter two and three word phrases using the word class filters suggested by Justeson and Katz.
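    As an illustration of the idea (not WordHoard's actual implementation), the sketch below computes a pseudo-bigram "glue" score for n-grams by averaging over all binary splits, in the spirit of Silva et al.'s SCP, and keeps n-grams whose glue is a local maximum relative to their observed sub- and super-phrases; probabilities are naively estimated against the token count:

      from collections import Counter

      def ngrams(tokens, n):
          return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

      def glue(gram, counts, total):
          # Pseudo-bigram glue: p(gram)^2 over the mean probability product
          # of every binary split, so n-grams of any length compare like bigrams.
          p = counts[gram] / total
          splits = [(gram[:i], gram[i:]) for i in range(1, len(gram))]
          avp = sum((counts[l] / total) * (counts[r] / total)
                    for l, r in splits) / len(splits)
          return p * p / avp if avp else 0.0

      def multiword_units(tokens, max_n=4):
          counts = Counter()
          for n in range(1, max_n + 2):            # keep super-phrase counts too
              counts.update(ngrams(tokens, n))
          total = len(tokens)
          units = []
          for n in range(2, max_n + 1):
              for gram in set(ngrams(tokens, n)):
                  g = glue(gram, counts, total)
                  subs = [gram[:-1], gram[1:]] if n > 2 else []
                  supers = [s for s in counts if len(s) == n + 1
                            and (s[:-1] == gram or s[1:] == gram)]
                  # localmaxs: glue must beat sub-phrases and match or beat super-phrases
                  if all(g > glue(s, counts, total) for s in subs) and \
                     all(g >= glue(s, counts, total) for s in supers):
                      units.append(gram)
          return units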
  5. WordHoard: finding multiword units (20??) 0.06
    0.05710669 = product of:
      0.11421338 = sum of:
        0.11421338 = product of:
          0.22842675 = sum of:
            0.22842675 = weight(_text_:word in 1123) [ClassicSimilarity], result of:
              0.22842675 = score(doc=1123,freq=8.0), product of:
                0.28165168 = queryWeight, product of:
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.05371688 = queryNorm
                0.81102574 = fieldWeight in 1123, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1123)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    WordHoard defines a multiword unit as a special type of collocate in which the component words comprise a meaningful phrase. For example, "Knight of the Round Table" is a meaningful multiword unit or phrase. WordHoard uses the notion of a pseudo-bigram to generalize the computation of bigram (two word) statistical measures to phrases (n-grams) longer than two words, and to allow comparisons of these measures for phrases with different word counts. WordHoard applies the localmaxs algorithm of Silva et al. to the pseudo-bigrams to identify potential compositional phrases that "stand out" in a text. WordHoard can also filter two and three word phrases using the word class filters suggested by Justeson and Katz.
  6. Popper, K.R.: Three worlds : the Tanner lecture on human values. Delivered at the University of Michigan, April 7, 1978 (1978) 0.06
    0.056877762 = product of:
      0.113755524 = sum of:
        0.113755524 = product of:
          0.34126657 = sum of:
            0.34126657 = weight(_text_:3a in 230) [ClassicSimilarity], result of:
              0.34126657 = score(doc=230,freq=2.0), product of:
                0.4554123 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.05371688 = queryNorm
                0.7493574 = fieldWeight in 230, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0625 = fieldNorm(doc=230)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    https://tannerlectures.utah.edu/_documents/a-to-z/p/popper80.pdf
  7. Eversberg, B.: Allegro-Fortbildung 2015 (2015) 0.05
    0.04894859 = product of:
      0.09789718 = sum of:
        0.09789718 = product of:
          0.19579436 = sum of:
            0.19579436 = weight(_text_:word in 1123) [ClassicSimilarity], result of:
              0.19579436 = score(doc=1123,freq=2.0), product of:
                0.28165168 = queryWeight, product of:
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.05371688 = queryNorm
                0.6951649 = fieldWeight in 1123, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1123)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    Also available as a Word file at: http://www.allegro-c.de/fb/fb15.docx.
  8. Borchers, D.: Eine kleine Geschichte der Textverarbeitung (2019) 0.05
    0.04894859 = product of:
      0.09789718 = sum of:
        0.09789718 = product of:
          0.19579436 = sum of:
            0.19579436 = weight(_text_:word in 5422) [ClassicSimilarity], result of:
              0.19579436 = score(doc=5422,freq=2.0), product of:
                0.28165168 = queryWeight, product of:
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.05371688 = queryNorm
                0.6951649 = fieldWeight in 5422, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5422)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Almost 70 years ago, the replacement of the typewriter by the first word processor began. The term was coined by a former fighter pilot.
  9. Baker, T.: A grammar of Dublin Core (2000) 0.05
    0.047188185 = product of:
      0.09437637 = sum of:
        0.09437637 = sum of:
          0.06526479 = weight(_text_:word in 1236) [ClassicSimilarity], result of:
            0.06526479 = score(doc=1236,freq=2.0), product of:
              0.28165168 = queryWeight, product of:
                5.2432623 = idf(docFreq=634, maxDocs=44218)
                0.05371688 = queryNorm
              0.23172164 = fieldWeight in 1236, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.2432623 = idf(docFreq=634, maxDocs=44218)
                0.03125 = fieldNorm(doc=1236)
          0.029111583 = weight(_text_:22 in 1236) [ClassicSimilarity], result of:
            0.029111583 = score(doc=1236,freq=2.0), product of:
              0.18810736 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05371688 = queryNorm
              0.15476047 = fieldWeight in 1236, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=1236)
      0.5 = coord(1/2)
    
    Abstract
    Dublin Core is often presented as a modern form of catalog card -- a set of elements (and now qualifiers) that describe resources in a complete package. Sometimes it is proposed as an exchange format for sharing records among multiple collections. The founding principle that "every element is optional and repeatable" reinforces the notion that a Dublin Core description is to be taken as a whole. This paper, in contrast, is based on a much different premise: Dublin Core is a language. More precisely, it is a small language for making a particular class of statements about resources. Like natural languages, it has a vocabulary of word-like terms, the two classes of which -- elements and qualifiers -- function within statements like nouns and adjectives; and it has a syntax for arranging elements and qualifiers into statements according to a simple pattern. Whenever tourists order a meal or ask directions in an unfamiliar language, considerate native speakers will spontaneously limit themselves to basic words and simple sentence patterns along the lines of "I am so-and-so" or "This is such-and-such". Linguists call this pidginization. In such situations, a small phrase book or translated menu can be most helpful. By analogy, today's Web has been called an Internet Commons where users and information providers from a wide range of scientific, commercial, and social domains present their information in a variety of incompatible data models and description languages. In this context, Dublin Core presents itself as a metadata pidgin for digital tourists who must find their way in this linguistically diverse landscape. Its vocabulary is small enough to learn quickly, and its basic pattern is easily grasped. It is well-suited to serve as an auxiliary language for digital libraries. This grammar starts by defining terms. It then follows a 200-year-old tradition of English grammar teaching by focusing on the structure of single statements. It concludes by looking at the growing dictionary of Dublin Core vocabulary terms -- its registry, and at how statements can be used to build the metadata equivalent of paragraphs and compositions -- the application profile.
    Date
    26.12.2011 14:01:22
  10. Kiela, D.; Clark, S.: Detecting compositionality of multi-word expressions using nearest neighbours in vector space models (2013) 0.05
    0.046149176 = product of:
      0.09229835 = sum of:
        0.09229835 = product of:
          0.1845967 = sum of:
            0.1845967 = weight(_text_:word in 1161) [ClassicSimilarity], result of:
              0.1845967 = score(doc=1161,freq=4.0), product of:
                0.28165168 = queryWeight, product of:
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.05371688 = queryNorm
                0.6554078 = fieldWeight in 1161, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1161)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    We present a novel unsupervised approach to detecting the compositionality of multi-word expressions. We compute the compositionality of a phrase through substituting the constituent words with their "neighbours" in a semantic vector space and averaging over the distance between the original phrase and the substituted neighbour phrases. Several methods of obtaining neighbours are presented. The results are compared to existing supervised results and achieve state-of-the-art performance on a verb-object dataset of human compositionality ratings.
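    A rough sketch of the substitution idea with numpy follows; the paper evaluates several ways of obtaining neighbours and phrase vectors, so the choices here (nearest cosine neighbours, a looked-up vector for the phrase itself, additive composition for the variants) are assumptions for illustration:

      import numpy as np

      def cos(u, v):
          return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

      def neighbours(word, emb, k=3):
          # k nearest single-word neighbours by cosine similarity
          others = [w for w in emb if w != word and " " not in w]
          return sorted(others, key=lambda w: cos(emb[word], emb[w]), reverse=True)[:k]

      def compositionality(phrase, emb, k=3):
          # Distance between the phrase's own distributional vector and vectors
          # composed (here: by addition) from neighbour-substituted variants;
          # a large mean distance suggests a non-compositional (idiomatic) phrase.
          orig = emb[" ".join(phrase)]     # assumes the phrase itself has a vector
          dists = []
          for i, w in enumerate(phrase):
              for nb in neighbours(w, emb, k):
                  variant = list(phrase)
                  variant[i] = nb
                  dists.append(1.0 - cos(orig, sum(emb[x] for x in variant)))
          return float(np.mean(dists))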
  11. Snajder, J.: Distributional semantics of multi-word expressions (2013) 0.04
    0.04079049 = product of:
      0.08158098 = sum of:
        0.08158098 = product of:
          0.16316196 = sum of:
            0.16316196 = weight(_text_:word in 2868) [ClassicSimilarity], result of:
              0.16316196 = score(doc=2868,freq=2.0), product of:
                0.28165168 = queryWeight, product of:
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.05371688 = queryNorm
                0.5793041 = fieldWeight in 2868, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2868)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  12. Shala, E.: Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014) 0.04
    0.035548605 = product of:
      0.07109721 = sum of:
        0.07109721 = product of:
          0.21329162 = sum of:
            0.21329162 = weight(_text_:3a in 4388) [ClassicSimilarity], result of:
              0.21329162 = score(doc=4388,freq=2.0), product of:
                0.4554123 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.05371688 = queryNorm
                0.46834838 = fieldWeight in 4388, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4388)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Footnote
    See: https://www.researchgate.net/publication/271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls.
  13. Dietz, K.: en.wikipedia.org > 6 Mio. Artikel (2020) 0.04
    0.035548605 = product of:
      0.07109721 = sum of:
        0.07109721 = product of:
          0.21329162 = sum of:
            0.21329162 = weight(_text_:3a in 5669) [ClassicSimilarity], result of:
              0.21329162 = score(doc=5669,freq=2.0), product of:
                0.4554123 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.05371688 = queryNorm
                0.46834838 = fieldWeight in 5669, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5669)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    "Die Englischsprachige Wikipedia verfügt jetzt über mehr als 6 Millionen Artikel. An zweiter Stelle kommt die deutschsprachige Wikipedia mit 2.3 Millionen Artikeln, an dritter Stelle steht die französischsprachige Wikipedia mit 2.1 Millionen Artikeln (via Researchbuzz: Firehose <https://rbfirehose.com/2020/01/24/techcrunch-wikipedia-now-has-more-than-6-million-articles-in-english/> und Techcrunch <https://techcrunch.com/2020/01/23/wikipedia-english-six-million-articles/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&guccounter=1&guce_referrer=aHR0cHM6Ly9yYmZpcmVob3NlLmNvbS8yMDIwLzAxLzI0L3RlY2hjcnVuY2gtd2lraXBlZGlhLW5vdy1oYXMtbW9yZS10aGFuLTYtbWlsbGlvbi1hcnRpY2xlcy1pbi1lbmdsaXNoLw&guce_referrer_sig=AQAAAK0zHfjdDZ_spFZBF_z-zDjtL5iWvuKDumFTzm4HvQzkUfE2pLXQzGS6FGB_y-VISdMEsUSvkNsg2U_NWQ4lwWSvOo3jvXo1I3GtgHpP8exukVxYAnn5mJspqX50VHIWFADHhs5AerkRn3hMRtf_R3F1qmEbo8EROZXp328HMC-o>). 250120 via digithek ch = #fineBlog s.a.: Angesichts der Veröffentlichung des 6-millionsten Artikels vergangene Woche in der englischsprachigen Wikipedia hat die Community-Zeitungsseite "Wikipedia Signpost" ein Moratorium bei der Veröffentlichung von Unternehmensartikeln gefordert. Das sei kein Vorwurf gegen die Wikimedia Foundation, aber die derzeitigen Maßnahmen, um die Enzyklopädie gegen missbräuchliches undeklariertes Paid Editing zu schützen, funktionierten ganz klar nicht. *"Da die ehrenamtlichen Autoren derzeit von Werbung in Gestalt von Wikipedia-Artikeln überwältigt werden, und da die WMF nicht in der Lage zu sein scheint, dem irgendetwas entgegenzusetzen, wäre der einzige gangbare Weg für die Autoren, fürs erste die Neuanlage von Artikeln über Unternehmen zu untersagen"*, schreibt der Benutzer Smallbones in seinem Editorial <https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2020-01-27/From_the_editor> zur heutigen Ausgabe."
  14. Seadle, M.: Spoken words, unspoken meanings : a DLI2 project ethnography (2000) 0.04
    0.035325605 = product of:
      0.07065121 = sum of:
        0.07065121 = product of:
          0.14130242 = sum of:
            0.14130242 = weight(_text_:word in 1235) [ClassicSimilarity], result of:
              0.14130242 = score(doc=1235,freq=6.0), product of:
                0.28165168 = queryWeight, product of:
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.05371688 = queryNorm
                0.5016921 = fieldWeight in 1235, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1235)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The National Gallery of the Spoken Word (NGSW) is a Digital Library Initiative-funded project based at Michigan State University with multiple internal and external partners. The NGSW is essentially a multicultural enterprise because of the variety of disciplines involved, each of which has a unique micro-culture and mutually-unintelligible specialized language. This article uses an ethnographic approach to describe three NGSW-based research projects: copyright, metadata, and digital preservation. Each of these projects shows some aspect of language-related infrastructure development. The NGSW's partners come from four different units on the Michigan State University campus: the College of Engineering, the College of Education, Matrix (a technology research center in the College of Arts and Letters), and the University Library. External partners include the University of Colorado (Boulder), Northwestern University, and the Chicago Historical Society. The original official letter-of-intent proposed five key points: 1. Founding a National Gallery of the Spoken Word analogous to the National Portrait Gallery for publicly available materials. 2. Enriching the Gallery with a repository for oral history and other scholarly interview materials. 3. Developing a practical, widely usable search engine for voice resources. 4. Developing speech digitization standards. 5. Testing the Gallery's utility in classroom settings.
    Object
    National Gallery of the Spoken Word
  15. Maaten, L. van den; Hinton, G.: Visualizing non-metric similarities in multiple maps (2012) 0.03
    0.03461188 = product of:
      0.06922376 = sum of:
        0.06922376 = product of:
          0.13844752 = sum of:
            0.13844752 = weight(_text_:word in 3884) [ClassicSimilarity], result of:
              0.13844752 = score(doc=3884,freq=4.0), product of:
                0.28165168 = queryWeight, product of:
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.05371688 = queryNorm
                0.49155584 = fieldWeight in 3884, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3884)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Techniques for multidimensional scaling visualize objects as points in a low-dimensional metric map. As a result, the visualizations are subject to the fundamental limitations of metric spaces. These limitations prevent multidimensional scaling from faithfully representing non-metric similarity data such as word associations or event co-occurrences. In particular, multidimensional scaling cannot faithfully represent intransitive pairwise similarities in a visualization, and it cannot faithfully visualize "central" objects. In this paper, we present an extension of a recently proposed multidimensional scaling technique called t-SNE. The extension aims to address the problems of traditional multidimensional scaling techniques when these techniques are used to visualize non-metric similarities. The new technique, called multiple maps t-SNE, alleviates these problems by constructing a collection of maps that reveal complementary structure in the similarity data. We apply multiple maps t-SNE to a large data set of word association data and to a data set of NIPS co-authorships, demonstrating its ability to successfully visualize non-metric similarities.
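    The multiple-maps extension described above is not part of scikit-learn; as a point of reference, a minimal single-map t-SNE run on stand-in data looks like this (the input matrix and parameters are placeholders, not the paper's data):

      import numpy as np
      from sklearn.manifold import TSNE

      rng = np.random.default_rng(0)
      X = rng.random((100, 50))                 # stand-in for word association vectors
      Y = TSNE(n_components=2, perplexity=15, metric="cosine",
               init="random", random_state=0).fit_transform(X)
      print(Y.shape)                            # (100, 2): one map position per word

    Multiple maps t-SNE would instead produce several such 2-D maps plus per-word importance weights, so that intransitive similarities can be honored across maps.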
  16. McGraw-Hill Multimedia Encyclopedia of Science & Technology (1996) 0.03
    0.032632396 = product of:
      0.06526479 = sum of:
        0.06526479 = product of:
          0.13052958 = sum of:
            0.13052958 = weight(_text_:word in 4715) [ClassicSimilarity], result of:
              0.13052958 = score(doc=4715,freq=2.0), product of:
                0.28165168 = queryWeight, product of:
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.05371688 = queryNorm
                0.46344328 = fieldWeight in 4715, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4715)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    7,300 articles - 122,600 clear, concise definitions - 550 high-resolution color graphics, interactive maps, charts and tables - 39 superb animation sequences - 45 minutes of clear audio examples - Keyword, Boolean and context-relevant searching
  17. Powell, J.; Fox, E.A.: Multilingual federated searching across heterogeneous collections (1998) 0.03
    0.032632396 = product of:
      0.06526479 = sum of:
        0.06526479 = product of:
          0.13052958 = sum of:
            0.13052958 = weight(_text_:word in 1250) [ClassicSimilarity], result of:
              0.13052958 = score(doc=1250,freq=2.0), product of:
                0.28165168 = queryWeight, product of:
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.05371688 = queryNorm
                0.46344328 = fieldWeight in 1250, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1250)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This article describes a scalable system for searching heterogeneous multilingual collections on the World Wide Web. It details a markup language for describing the characteristics of a search engine and its interface, and a protocol for requesting word translations between languages.
  18. Gödert, W.: Detecting multiword phrases in mathematical text corpora (2012) 0.03
    0.032632396 = product of:
      0.06526479 = sum of:
        0.06526479 = product of:
          0.13052958 = sum of:
            0.13052958 = weight(_text_:word in 466) [ClassicSimilarity], result of:
              0.13052958 = score(doc=466,freq=2.0), product of:
                0.28165168 = queryWeight, product of:
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.05371688 = queryNorm
                0.46344328 = fieldWeight in 466, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.0625 = fieldNorm(doc=466)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    We present an approach for detecting multiword phrases in mathematical text corpora. The method used is based on characteristic features of mathematical terminology. It makes use of a software tool named Lingo, which allows words to be identified by means of previously defined dictionaries for specific word classes such as adjectives, personal names or nouns. The detection of multiword groups is done algorithmically. Possible advantages of the method for indexing and information retrieval, and conclusions for applying dictionary-based methods of automatic indexing instead of stemming procedures, are discussed.
  19. Böhner, D.; Stöber, T.; Teichert, A.; Lemke, D.; Tietze, K.; Helfer, M.; Frauenrath, P.; Podschull, S.: Literaturverwaltungsprogramme im Vergleich (2016) 0.03
    0.032632396 = product of:
      0.06526479 = sum of:
        0.06526479 = product of:
          0.13052958 = sum of:
            0.13052958 = weight(_text_:word in 3000) [ClassicSimilarity], result of:
              0.13052958 = score(doc=3000,freq=2.0), product of:
                0.28165168 = queryWeight, product of:
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.05371688 = queryNorm
                0.46344328 = fieldWeight in 3000, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3000)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Originally compiled by colleagues at the UB Augsburg, this comparison has now been updated for the eighth time (as of June 2020) and gives an overview of the functions, usability and licence/price models of reference management programs. The following applications are covered: Bibsonomy, Citavi, EndNote, JabRef, Mendeley, Papers, reference management in MS Word, and Zotero. The current version of the comparison can be found at: https://mediatum.ub.tum.de/node?id=1127579. Cf. mail from Dorothea Lemke to Inetbib, 06.07.2020.
  20. Information als Rohstoff für Innovation : Programm der Bundesregierung 1996-2000 (1996) 0.03
    0.029111583 = product of:
      0.058223166 = sum of:
        0.058223166 = product of:
          0.11644633 = sum of:
            0.11644633 = weight(_text_:22 in 5449) [ClassicSimilarity], result of:
              0.11644633 = score(doc=5449,freq=2.0), product of:
                0.18810736 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05371688 = queryNorm
                0.61904186 = fieldWeight in 5449, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=5449)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 2.1997 19:26:34

Languages

  • e 98
  • d 89
  • el 2
  • a 1
  • nl 1

Types

  • a 91
  • i 11
  • m 5
  • s 3
  • x 3
  • b 2
  • r 2
  • n 1