Search (59 results, page 1 of 3)

  • Filter: theme_ss:"Computerlinguistik"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.10
    0.10123343 = sum of:
      0.08060541 = product of:
        0.24181622 = sum of:
          0.24181622 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.24181622 = score(doc=562,freq=2.0), product of:
              0.43026417 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.050750602 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.33333334 = coord(1/3)
      0.020628018 = product of:
        0.041256037 = sum of:
          0.041256037 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.041256037 = score(doc=562,freq=2.0), product of:
              0.17771997 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050750602 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
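    The indented blocks under each entry are Lucene "explain" traces for the ClassicSimilarity (TF-IDF) ranking: tf = sqrt(termFreq), idf = 1 + ln(maxDocs/(docFreq+1)), each clause contributes queryWeight (idf x queryNorm) times fieldWeight (tf x idf x fieldNorm), and coord(m/n) scales for partially matched queries. A minimal Python sketch, using only the constants printed in the trace for result 1 (doc 562), reproduces its 0.10123343 total:

      import math

      def idf(doc_freq: int, max_docs: int) -> float:
          # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
          return 1.0 + math.log(max_docs / (doc_freq + 1))

      def clause_score(freq: float, doc_freq: int, max_docs: int,
                       query_norm: float, field_norm: float) -> float:
          tf = math.sqrt(freq)                      # tf = sqrt(termFreq)
          i = idf(doc_freq, max_docs)
          query_weight = i * query_norm             # idf * queryNorm
          field_weight = tf * i * field_norm        # tf * idf * fieldNorm
          return query_weight * field_weight

      QUERY_NORM = 0.050750602                      # constant across this page

      s_3a = clause_score(2.0, 24, 44218, QUERY_NORM, 0.046875)    # _text_:3a
      s_22 = clause_score(2.0, 3622, 44218, QUERY_NORM, 0.046875)  # _text_:22
      total = s_3a * (1 / 3) + s_22 * (1 / 2)       # coord(1/3), coord(1/2)
      print(f"{s_3a:.6f} {s_22:.6f} {total:.6f}")
      # agrees with the trace (0.24181622, 0.041256037, 0.10123343)
      # up to Lucene's single-precision rounding

    The same arithmetic accounts for every trace on this page; only freq, docFreq, and fieldNorm vary between entries.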
  2. Hammwöhner, R.: TransRouter revisited : Decision support in the routing of translation projects (2000) 0.08
    0.083886914 = product of:
      0.16777383 = sum of:
        0.16777383 = sum of:
          0.11964179 = weight(_text_:assessment in 5483) [ClassicSimilarity], result of:
            0.11964179 = score(doc=5483,freq=2.0), product of:
              0.2801951 = queryWeight, product of:
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.050750602 = queryNorm
              0.4269946 = fieldWeight in 5483, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5483)
          0.048132043 = weight(_text_:22 in 5483) [ClassicSimilarity], result of:
            0.048132043 = score(doc=5483,freq=2.0), product of:
              0.17771997 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050750602 = queryNorm
              0.2708308 = fieldWeight in 5483, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5483)
      0.5 = coord(1/2)
    
    Abstract
    This paper gives an outline of the final results of the TransRouter project, within whose scope a decision support system for translation managers has been developed to support the selection of appropriate routes for translation projects. Emphasis is put on the decision model, which is based on a stepwise refined assessment of translation routes, and the workflow of using the system is considered as well (a speculative sketch of the stepwise model follows this entry).
    Date
    10.12.2000 18:22:35
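    A speculative sketch of one possible reading of the "stepwise refined assessment" above: score candidate routes on coarse criteria, prune, then re-rank the survivors on finer ones. The paper's actual model is not spelled out here; all route names, weights, and thresholds below are invented.

      # All route names, weights, and the pruning threshold are invented.
      COARSE = {"machine_translation": 0.4, "translation_memory": 0.7, "human_translation": 0.9}
      FINE = {"machine_translation": 0.8, "translation_memory": 0.6, "human_translation": 0.3}

      def refined_score(route: str) -> float:
          # Second pass folds finer criteria (e.g. cost, deadline fit) into the score.
          return 0.5 * COARSE[route] + 0.5 * FINE[route]

      shortlist = [r for r, s in COARSE.items() if s >= 0.5]  # step 1: coarse pruning
      print(max(shortlist, key=refined_score))                # step 2: refinement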
  3. Azpiazu, I.M.; Soledad Pera, M.: Is cross-lingual readability assessment possible? (2020) 0.04
    0.041865904 = product of:
      0.08373181 = sum of:
        0.08373181 = product of:
          0.16746362 = sum of:
            0.16746362 = weight(_text_:assessment in 5868) [ClassicSimilarity], result of:
              0.16746362 = score(doc=5868,freq=12.0), product of:
                0.2801951 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.050750602 = queryNorm
                0.59766793 = fieldWeight in 5868, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5868)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Most research efforts related to automatic readability assessment focus on the design of strategies that apply to a specific language. These state-of-the-art strategies are highly dependent on linguistic features that best suit the language for which they were intended, constraining their adaptability and making it difficult to determine whether they would remain effective if they were applied to estimate the level of difficulty of texts in other languages. In this article, we present the results of a study designed to determine the feasibility of a cross-lingual readability assessment strategy. To do so, we first analyzed the most common features used for readability assessment and determined their influence on the readability prediction process for 6 different languages: English, Spanish, Basque, Italian, French, and Catalan. In addition, we developed a cross-lingual readability assessment strategy that serves as a means to empirically explore the potential advantages of employing a single strategy (and set of features) for readability assessment in different languages, including interlanguage prediction agreement and prediction accuracy improvement for low-resource languages.
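    As a hedged illustration of the feature analysis described above, the sketch below computes a few language-agnostic surface features of the kind commonly used in readability work. The feature set is ours, chosen for illustration; it does not reproduce the paper's actual features or prediction model.

      import re

      def surface_features(text: str) -> dict:
          # Language-agnostic surface statistics (illustrative feature set).
          sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
          words = re.findall(r"\w+", text)
          return {
              "avg_sentence_len": len(words) / max(len(sentences), 1),
              "avg_word_len": sum(map(len, words)) / max(len(words), 1),
              "type_token_ratio": len({w.lower() for w in words}) / max(len(words), 1),
          }

      print(surface_features("Esta frase es corta. This one is short too."))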
  4. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.04
    0.040302705 = product of:
      0.08060541 = sum of:
        0.08060541 = product of:
          0.24181622 = sum of:
            0.24181622 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.24181622 = score(doc=862,freq=2.0), product of:
                0.43026417 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.050750602 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    https://arxiv.org/abs/2212.06721
  5. Derrington, S.: MT - myth, muddle or reality? (1994) 0.03
    0.034183368 = product of:
      0.068366736 = sum of:
        0.068366736 = product of:
          0.13673347 = sum of:
            0.13673347 = weight(_text_:assessment in 7047) [ClassicSimilarity], result of:
              0.13673347 = score(doc=7047,freq=2.0), product of:
                0.2801951 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.050750602 = queryNorm
                0.4879938 = fieldWeight in 7047, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.0625 = fieldNorm(doc=7047)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The trend away from the development of fully automatic machine translation (FAMT) is the result of failure to develop the foundation level of machine translation (MT) systems design theory. In order to create this level and establish reliably whether FAMT is achievable or not it is necessary to revise the currently accepted view of the interdisciplinary approach. Concludes with an assessment of the interdisciplinary approach as applied to date
  6. Zhai, X.: ChatGPT user experience : implications for education (2022) 0.03
    0.030214114 = product of:
      0.06042823 = sum of:
        0.06042823 = product of:
          0.12085646 = sum of:
            0.12085646 = weight(_text_:assessment in 849) [ClassicSimilarity], result of:
              0.12085646 = score(doc=849,freq=4.0), product of:
                0.2801951 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.050750602 = queryNorm
                0.43132967 = fieldWeight in 849, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=849)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    ChatGPT, a general-purpose conversation chatbot released on November 30, 2022, by OpenAI, is expected to impact every aspect of society. However, the potential impacts of this NLP tool on education remain unknown. Such impact can be enormous as the capacity of ChatGPT may drive changes to educational learning goals, learning activities, and assessment and evaluation practices. This study was conducted by piloting ChatGPT to write an academic paper, titled Artificial Intelligence for Education (see Appendix A). The piloting result suggests that ChatGPT is able to help researchers write a paper that is coherent, (partially) accurate, informative, and systematic. The writing is extremely efficient (2-3 hours) and involves very limited professional knowledge from the author. Drawing upon the user experience, I reflect on the potential impacts of ChatGPT, as well as similar AI tools, on education. The paper concludes by suggesting adjusting learning goals-students should be able to use AI tools to conduct subject-domain tasks and education should focus on improving students' creativity and critical thinking rather than general skills. To accomplish the learning goals, researchers should design AI-involved learning tasks to engage students in solving real-world problems. ChatGPT also raises concerns that students may outsource assessment tasks. This paper concludes that new formats of assessments are needed to focus on creativity and critical thinking that AI cannot substitute.
  7. Bowker, L.: Information retrieval in translation memory systems : assessment of current limitations and possibilities for future development (2002) 0.03
    0.029910447 = product of:
      0.059820894 = sum of:
        0.059820894 = product of:
          0.11964179 = sum of:
            0.11964179 = weight(_text_:assessment in 1854) [ClassicSimilarity], result of:
              0.11964179 = score(doc=1854,freq=2.0), product of:
                0.2801951 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.050750602 = queryNorm
                0.4269946 = fieldWeight in 1854, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1854)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  8. Warner, A.J.: Natural language processing (1987) 0.03
    0.027504025 = product of:
      0.05500805 = sum of:
        0.05500805 = product of:
          0.1100161 = sum of:
            0.1100161 = weight(_text_:22 in 337) [ClassicSimilarity], result of:
              0.1100161 = score(doc=337,freq=2.0), product of:
                0.17771997 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050750602 = queryNorm
                0.61904186 = fieldWeight in 337, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=337)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Annual review of information science and technology. 22(1987), S.79-108
  9. Wright, L.W.; Nardini, H.K.G.; Aronson, A.R.; Rindflesch, T.C.: Hierarchical concept indexing of full-text documents in the Unified Medical Language System Information sources Map (1999) 0.03
    0.025637524 = product of:
      0.05127505 = sum of:
        0.05127505 = product of:
          0.1025501 = sum of:
            0.1025501 = weight(_text_:assessment in 2111) [ClassicSimilarity], result of:
              0.1025501 = score(doc=2111,freq=2.0), product of:
                0.2801951 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.050750602 = queryNorm
                0.36599535 = fieldWeight in 2111, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2111)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Full-text documents are a vital and rapidly growing part of online biomedical information. A single large document can contain as much information as a small database, but normally lacks the tight structure and consistent indexing of a database. Retrieval systems will often miss highly relevant parts of a document if the document as a whole appears irrelevant. Access to full-text information is further complicated by the need to search separately many disparate information resources. This research explores how these problems can be addressed by the combined use of 2 techniques: 1) natural language processing for automatic concept-based indexing of full text, and 2) methods for exploiting the structure and hierarchy of full-text documents. We describe methods for applying these techniques to a large collection of full-text documents drawn from the Health Services / Technology Assessment Text (HSTAT) database at the NLM and examine how this hierarchical concept indexing can assist both document- and source-level retrieval in the context of NLM's Information Source Map project
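    A hypothetical sketch of the combination described above: concepts are posted at every level of a document's section hierarchy, so retrieval can surface the most specific matching part of a document rather than missing it when the whole document looks irrelevant. This is illustrative only, not the actual HSTAT/UMLS pipeline.

      from collections import defaultdict

      index = defaultdict(set)  # concept -> set of section paths (tuples)

      def add_section(path: tuple, concepts: list) -> None:
          # Post each concept at every level of the hierarchy, so a match can
          # be reported at document, chapter, or section granularity.
          for c in concepts:
              for depth in range(1, len(path) + 1):
                  index[c].add(path[:depth])

      add_section(("doc1", "ch2", "sec2.1"), ["hypertension", "beta blockers"])
      add_section(("doc1", "ch3"), ["hypertension"])

      # Most specific matching parts first.
      hits = sorted(index["hypertension"], key=lambda p: (-len(p), p))
      print(hits)
      # [('doc1', 'ch2', 'sec2.1'), ('doc1', 'ch2'), ('doc1', 'ch3'), ('doc1',)]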
  10. Radford, A.; Narasimhan, K.; Salimans, T.; Sutskever, I.: Improving language understanding by Generative Pre-Training (2018) 0.03
    0.025637524 = product of:
      0.05127505 = sum of:
        0.05127505 = product of:
          0.1025501 = sum of:
            0.1025501 = weight(_text_:assessment in 870) [ClassicSimilarity], result of:
              0.1025501 = score(doc=870,freq=2.0), product of:
                0.2801951 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.050750602 = queryNorm
                0.36599535 = fieldWeight in 870, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.046875 = fieldNorm(doc=870)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Natural language understanding comprises a wide range of diverse tasks such as textual entailment, question answering, semantic similarity assessment, and document classification. Although large unlabeled text corpora are abundant, labeled data for learning these specific tasks is scarce, making it challenging for discriminatively trained models to perform adequately. We demonstrate that large gains on these tasks can be realized by generative pre-training of a language model on a diverse corpus of unlabeled text, followed by discriminative fine-tuning on each specific task. In contrast to previous approaches, we make use of task-aware input transformations during fine-tuning to achieve effective transfer while requiring minimal changes to the model architecture. We demonstrate the effectiveness of our approach on a wide range of benchmarks for natural language understanding. Our general task-agnostic model outperforms discriminatively trained models that use architectures specifically crafted for each task, significantly improving upon the state of the art in 9 out of the 12 tasks studied. For instance, we achieve absolute improvements of 8.9% on commonsense reasoning (Stories Cloze Test), 5.7% on question answering (RACE), and 1.5% on textual entailment (MultiNLI).
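    The "task-aware input transformations" mentioned above can be sketched as follows: structured inputs are serialized into a single token sequence, delimited by special tokens, so the same pre-trained model can be fine-tuned on each task with minimal architectural changes. The token spellings below are illustrative stand-ins, not the paper's exact vocabulary.

      START, DELIM, EXTRACT = "<s>", "<$>", "<e>"  # illustrative token spellings

      def entailment_input(premise: str, hypothesis: str) -> str:
          # Serialize a premise/hypothesis pair into one sequence for fine-tuning.
          return f"{START} {premise} {DELIM} {hypothesis} {EXTRACT}"

      def multiple_choice_inputs(context: str, answers: list) -> list:
          # One sequence per candidate answer; a linear head scores each one.
          return [f"{START} {context} {DELIM} {a} {EXTRACT}" for a in answers]

      print(entailment_input("A man inspects a uniform.", "The man is sleeping."))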
  11. McMahon, J.G.; Smith, F.J.: Improved statistical language model performance with automatic generated word hierarchies (1996) 0.02
    0.024066022 = product of:
      0.048132043 = sum of:
        0.048132043 = product of:
          0.09626409 = sum of:
            0.09626409 = weight(_text_:22 in 3164) [ClassicSimilarity], result of:
              0.09626409 = score(doc=3164,freq=2.0), product of:
                0.17771997 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050750602 = queryNorm
                0.5416616 = fieldWeight in 3164, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3164)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Computational linguistics. 22(1996) no.2, S.217-248
  12. Ruge, G.: ¬A spreading activation network for automatic generation of thesaurus relationships (1991) 0.02
    0.024066022 = product of:
      0.048132043 = sum of:
        0.048132043 = product of:
          0.09626409 = sum of:
            0.09626409 = weight(_text_:22 in 4506) [ClassicSimilarity], result of:
              0.09626409 = score(doc=4506,freq=2.0), product of:
                0.17771997 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050750602 = queryNorm
                0.5416616 = fieldWeight in 4506, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4506)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    8.10.2000 11:52:22
  13. Somers, H.: Example-based machine translation : Review article (1999) 0.02
    0.024066022 = product of:
      0.048132043 = sum of:
        0.048132043 = product of:
          0.09626409 = sum of:
            0.09626409 = weight(_text_:22 in 6672) [ClassicSimilarity], result of:
              0.09626409 = score(doc=6672,freq=2.0), product of:
                0.17771997 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050750602 = queryNorm
                0.5416616 = fieldWeight in 6672, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6672)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    31. 7.1996 9:22:19
  14. New tools for human translators (1997) 0.02
    0.024066022 = product of:
      0.048132043 = sum of:
        0.048132043 = product of:
          0.09626409 = sum of:
            0.09626409 = weight(_text_:22 in 1179) [ClassicSimilarity], result of:
              0.09626409 = score(doc=1179,freq=2.0), product of:
                0.17771997 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050750602 = queryNorm
                0.5416616 = fieldWeight in 1179, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1179)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    31. 7.1996 9:22:19
  15. Baayen, R.H.; Lieber, H.: Word frequency distributions and lexical semantics (1997) 0.02
    0.024066022 = product of:
      0.048132043 = sum of:
        0.048132043 = product of:
          0.09626409 = sum of:
            0.09626409 = weight(_text_:22 in 3117) [ClassicSimilarity], result of:
              0.09626409 = score(doc=3117,freq=2.0), product of:
                0.17771997 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050750602 = queryNorm
                0.5416616 = fieldWeight in 3117, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3117)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    28. 2.1999 10:48:22
  16. ¬Der Student aus dem Computer (2023) 0.02
    0.024066022 = product of:
      0.048132043 = sum of:
        0.048132043 = product of:
          0.09626409 = sum of:
            0.09626409 = weight(_text_:22 in 1079) [ClassicSimilarity], result of:
              0.09626409 = score(doc=1079,freq=2.0), product of:
                0.17771997 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050750602 = queryNorm
                0.5416616 = fieldWeight in 1079, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1079)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    27. 1.2023 16:22:55
  17. Muneer, I.; Sharjeel, M.; Iqbal, M.; Adeel Nawab, R.M.; Rayson, P.: CLEU - A Cross-language english-urdu corpus and benchmark for text reuse experiments (2019) 0.02
    0.021364605 = product of:
      0.04272921 = sum of:
        0.04272921 = product of:
          0.08545842 = sum of:
            0.08545842 = weight(_text_:assessment in 5299) [ClassicSimilarity], result of:
              0.08545842 = score(doc=5299,freq=2.0), product of:
                0.2801951 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.050750602 = queryNorm
                0.30499613 = fieldWeight in 5299, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5299)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Text reuse is becoming a serious issue in many fields and research shows that it is much harder to detect when it occurs across languages. The recent rise in multi-lingual content on the Web has increased cross-language text reuse to an unprecedented scale. Although researchers have proposed methods to detect it, one major drawback is the unavailability of large-scale gold standard evaluation resources built on real cases. To overcome this problem, we propose a cross-language sentence/passage level text reuse corpus for the English-Urdu language pair. The Cross-Language English-Urdu Corpus (CLEU) has source text in English whereas the derived text is in Urdu. It contains in total 3,235 sentence/passage pairs manually tagged into three categories, namely near copy, paraphrased copy, and independently written. Further, as a second contribution, we evaluate the Translation plus Mono-lingual Analysis method using three sets of experiments on the proposed dataset to highlight its usefulness. Evaluation results (f1=0.732 binary, f1=0.552 ternary classification) indicate that it is harder to detect cross-language real cases of text reuse, especially when the language pairs have unrelated scripts. The corpus is a useful benchmark resource for the future development and assessment of cross-language text reuse detection systems for the English-Urdu language pair.
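    A rough sketch of the Translation plus Mono-lingual Analysis idea evaluated above: translate the derived text into the source language, then score similarity monolingually. The translate argument below is a stand-in for any Urdu-to-English MT system, and Jaccard overlap over word n-grams is our placeholder for the mono-lingual analysis step, not the paper's exact measure.

      def word_ngrams(text: str, n: int = 1) -> set:
          toks = text.lower().split()
          return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

      def tma_similarity(source_en: str, derived_ur: str, translate, n: int = 1) -> float:
          derived_en = translate(derived_ur)        # cross-language -> mono-lingual
          a, b = word_ngrams(source_en, n), word_ngrams(derived_en, n)
          return len(a & b) / max(len(a | b), 1)    # Jaccard overlap as a stand-in

      # Toy usage; an identity function stands in for real Urdu->English MT.
      print(tma_similarity("the cat sat on the mat", "the cat sat on a mat", lambda t: t))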
  18. Byrne, C.C.; McCracken, S.A.: ¬An adaptive thesaurus employing semantic distance, relational inheritance and nominal compound interpretation for linguistic support of information retrieval (1999) 0.02
    0.020628018 = product of:
      0.041256037 = sum of:
        0.041256037 = product of:
          0.08251207 = sum of:
            0.08251207 = weight(_text_:22 in 4483) [ClassicSimilarity], result of:
              0.08251207 = score(doc=4483,freq=2.0), product of:
                0.17771997 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050750602 = queryNorm
                0.46428138 = fieldWeight in 4483, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4483)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    15. 3.2000 10:22:37
  19. Boleda, G.; Evert, S.: Multiword expressions : a pain in the neck of lexical semantics (2009) 0.02
    0.020628018 = product of:
      0.041256037 = sum of:
        0.041256037 = product of:
          0.08251207 = sum of:
            0.08251207 = weight(_text_:22 in 4888) [ClassicSimilarity], result of:
              0.08251207 = score(doc=4888,freq=2.0), product of:
                0.17771997 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050750602 = queryNorm
                0.46428138 = fieldWeight in 4888, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4888)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    1. 3.2013 14:56:22
  20. Monnerjahn, P.: Vorsprung ohne Technik : Übersetzen: Computer und Qualität (2000) 0.02
    0.020628018 = product of:
      0.041256037 = sum of:
        0.041256037 = product of:
          0.08251207 = sum of:
            0.08251207 = weight(_text_:22 in 5429) [ClassicSimilarity], result of:
              0.08251207 = score(doc=5429,freq=2.0), product of:
                0.17771997 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050750602 = queryNorm
                0.46428138 = fieldWeight in 5429, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5429)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    c't. 2000, H.22, S.230-231

Languages

  • e 43
  • d 16

Types

  • a 46
  • el 7
  • m 5
  • p 3
  • s 3
  • x 2
  • d 1