Search (48 results, page 1 of 3)

  • × theme_ss:"Computerlinguistik"
  • × language_ss:"e"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.17
    0.17410563 = product of:
      0.29017603 = sum of:
        0.06818184 = product of:
          0.20454551 = sum of:
            0.20454551 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
              0.20454551 = score(doc=562,freq=2.0), product of:
                0.36394832 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.042928502 = queryNorm
                0.56201804 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.33333334 = coord(1/3)
        0.20454551 = weight(_text_:2f in 562) [ClassicSimilarity], result of:
          0.20454551 = score(doc=562,freq=2.0), product of:
            0.36394832 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.042928502 = queryNorm
            0.56201804 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
        0.017448656 = product of:
          0.034897313 = sum of:
            0.034897313 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
              0.034897313 = score(doc=562,freq=2.0), product of:
                0.1503283 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042928502 = queryNorm
                0.23214069 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
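The score breakdown shown with result 1 is Lucene's ClassicSimilarity explain output. As a minimal sketch of how a single term weight in it is computed (assuming the standard tf-idf definitions tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)); the function and parameter names here are illustrative, not part of the catalog software):

```python
import math

# Sketch of one ClassicSimilarity term weight, with inputs mirroring the
# explain output for result 1 (term "3a" in doc 562). Tiny rounding
# differences from the reported figures are expected.
def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    tf = math.sqrt(freq)                                # tf(freq=2.0) = 1.4142135
    idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))   # idf(docFreq=24) = 8.478011
    query_weight = idf * query_norm                     # queryWeight
    field_weight = tf * idf * field_norm                # fieldWeight
    return query_weight * field_weight                  # weight(_text_:3a in 562)

print(term_score(2.0, 24, 44218, 0.042928502, 0.046875))  # ~0.2045
```

The final document score is then the coordination-weighted sum of such term weights, as the sum/coord lines of the explanation show.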
  2. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.11
    Source
    https://arxiv.org/abs/2212.06721
  3. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.09
    Content
    A thesis presented to the University of Guelph in partial fulfilment of requirements for the degree of Master of Science in Computer Science. Cf.: http://www.inf.ufrgs.br/~ceramisch/download_files/publications/2009/p01.pdf.
    Date
    10. 1.2013 19:22:47
  4. Melucci, M.; Orio, N.: Design, implementation, and evaluation of a methodology for automatic stemmer generation (2007) 0.02
    Abstract
    The authors describe a statistical approach based on hidden Markov models (HMMs), for generating stemmers automatically. The proposed approach requires little effort to insert new languages in the system even if minimal linguistic knowledge is available. This is a key advantage especially for digital libraries, which are often developed for a specific institution or government because the program can manage a great amount of documents written in local languages. The evaluation described in the article shows that the stemmers implemented by means of HMMs are as effective as those based on linguistic rules.
  5. Peis, E.; Herrera-Viedma, E.; Herrera, J.C.: On the evaluation of XML documents using Fuzzy linguistic techniques (2003) 0.02
    Abstract
    Recommender systems evaluate and filter the great amount of information available on the Web to assist people in their search processes. A fuzzy evaluation method of XML documents based on computing with words is presented. Given an XML document type (e.g. scientific article), we consider that its elements are not equally informative. This is indicated by the use of a DTD and by defining linguistic importance attributes for the more meaningful elements of the DTD designed. Then, the evaluation method generates linguistic recommendations from linguistic evaluation judgements provided by different recommenders on meaningful elements of the DTD.
  6. Yannakoudakis, E.J.; Daraki, J.J.: Lexical clustering and retrieval of bibliographic records (1994) 0.02
    Abstract
    Presents a new system that enables users to retrieve catalogue entries on the basis of their lexical similarities and to cluster records in a dynamic fashion. Describes the information retrieval system developed by the Department of Informatics, Athens University of Economics and Business, Greece. The system also offers the means for cyclic retrieval of records from each cluster while allowing the user to define the field to be used in each case. The approach is based on logical keys which are derived from pertinent bibliographic fields and are used for all clustering and information retrieval functions.
  7. Rajasurya, S.; Muralidharan, T.; Devi, S.; Swamynathan, S.: Semantic information retrieval using ontology in university domain (2012) 0.02
    Abstract
    Today's conventional search engines hardly provide the essential content relevant to the user's search query, because the context and semantics of the user's request are not analyzed to the full extent. Hence the need for a semantic web search arises. SWS is an upcoming area of web search which combines Natural Language Processing and Artificial Intelligence. The objective of the work done here is to design, develop and implement a semantic search engine, SIEU (Semantic Information Extraction in University Domain), confined to the university domain. SIEU uses ontology as a knowledge base for the information retrieval process. It is not just a mere keyword search; it is one layer above what Google or any other search engine retrieves by analyzing just the keywords. Here the query is analyzed both syntactically and semantically. The developed system retrieves the web results more relevant to the user query through keyword expansion. The results obtained here will be accurate enough to satisfy the request made by the user. The level of accuracy will be enhanced since the query is analyzed semantically. The system will be of great use to the developers and researchers who work on the web. The Google results are re-ranked and optimized for providing the relevant links. For ranking, an algorithm has been applied which fetches more apt results for the user query.
  8. Rozinajová, V.; Macko, P.: Using natural language to search linked data (2017) 0.02
    Abstract
    There are many endeavors aiming to offer users more effective ways of getting relevant information from the web. One of them is represented by the concept of Linked Data, which provides interconnected data sources. But querying these types of data is difficult not only for conventional web users but also for experts in this field. Therefore, a more comfortable way of querying would be of great value. One direction could be to allow the user to use natural language. To make this task easier we have proposed a method for translating a natural language query to a SPARQL query. It is based on sentence structure, utilizing dependencies between the words in user queries. The dependencies are used to map the query to the semantic web structure, which is in the next step translated to a SPARQL query. According to our first experiments we are able to answer a significant group of user queries.
  9. Ali, C.B.; Haddad, H.; Slimani, Y.: Multi-word terms selection for information retrieval (2022) 0.02
    Abstract
    Purpose: A number of approaches and algorithms have been proposed over the years as a basis for automatic indexing. Many of these approaches suffer from precision inefficiency at low recall. The choice of indexing units has a great impact on search system effectiveness. The authors dive beyond simple terms indexing to propose a framework for multi-word terms (MWT) filtering and indexing.
    Design/methodology/approach: In this paper, the authors rely on ranking MWT to filter them, keeping the most effective ones for the indexing process. The proposed model is based on filtering MWT according to their ability to capture the document topic and distinguish between different documents from the same collection. The authors rely on the hypothesis that the best MWT are those that achieve the greatest association degree. The experiments are carried out with English and French languages data sets.
    Findings: The results indicate that this approach achieved precision enhancements at low recall, and it performed better than more advanced models based on terms dependencies.
    Originality/value: Using and testing different association measures to select MWT that best describe the documents to enhance the precision in the first retrieved documents.
  10. Jaaranen, K.; Lehtola, A.; Tenni, J.; Bounsaythip, C.: Webtran tools for in-company language support (2000) 0.01
    Source
    Sprachtechnologie für eine dynamische Wirtschaft im Medienzeitalter - Language technologies for dynamic business in the age of the media - L'ingénierie linguistique au service de la dynamisation économique à l'ère du multimédia: Tagungsakten der XXVI. Jahrestagung der Internationalen Vereinigung Sprache und Wirtschaft e.V., 23.-25.11.2000, Fachhochschule Köln. Hrsg.: K.-D. Schmitz
  11. Galitsky, B.: Can many agents answer questions better than one? (2005) 0.01
    Abstract
    The paper addresses the issue of how online natural language question answering, based on deep semantic analysis, may compete with currently popular keyword search, open domain information retrieval systems, covering a horizontal domain. We suggest the multiagent question answering approach, where each domain is represented by an agent which tries to answer questions taking into account its specific knowledge. The meta-agent controls the cooperation between question answering agents and chooses the most relevant answer(s). We argue that multiagent question answering is optimal in terms of access to business and financial knowledge, flexibility in query phrasing, and efficiency and usability of advice. The knowledge and advice encoded in the system are initially prepared by domain experts. We analyze the commercial application of multiagent question answering and the robustness of the meta-agent. The paper suggests that a multiagent architecture is optimal when a real world question answering domain combines a number of vertical ones to form a horizontal domain.
  12. Vechtomova, O.: ¬A method for automatic extraction of multiword units representing business aspects from user reviews (2014) 0.01
  13. Moohebat, M.; Raj, R.G.; Kareem, S.B.A.; Thorleuchter, D.: Identifying ISI-indexed articles by their lexical usage : a text analysis approach (2015) 0.01
    Abstract
    This research creates an architecture for investigating the existence of probable lexical divergences between articles, categorized as Institute for Scientific Information (ISI) and non-ISI, and consequently, if such a difference is discovered, to propose the best available classification method. Based on a collection of ISI- and non-ISI-indexed articles in the areas of business and computer science, three classification models are trained. A sensitivity analysis is applied to demonstrate the impact of words in different syntactical forms on the classification decision. The results demonstrate that the lexical domains of ISI and non-ISI articles are distinguishable by machine learning techniques. Our findings indicate that the support vector machine identifies ISI-indexed articles in both disciplines with higher precision than do the Naïve Bayesian and K-Nearest Neighbors techniques.
  14. Perovsek, M.; Kranjca, J.; Erjaveca, T.; Cestnika, B.; Lavraca, N.: TextFlows : a visual programming platform for text mining and natural language processing (2016) 0.01
    Abstract
    Text mining and natural language processing are fast growing areas of research, with numerous applications in business, science and creative industries. This paper presents TextFlows, a web-based text mining and natural language processing platform supporting workflow construction, sharing and execution. The platform enables visual construction of text mining workflows through a web browser, and the execution of the constructed workflows on a processing cloud. This makes TextFlows an adaptable infrastructure for the construction and sharing of text processing workflows, which can be reused in various applications. The paper presents the implemented text mining and language processing modules, and describes some precomposed workflows. Their features are demonstrated on three use cases: comparison of document classifiers and of different part-of-speech taggers on a text categorization problem, and outlier detection in document corpora.
  15. Wright, S.E.: Leveraging terminology resources across application boundaries : accessing resources in future integrated environments (2000) 0.01
    Source
    Sprachtechnologie für eine dynamische Wirtschaft im Medienzeitalter - Language technologies for dynamic business in the age of the media - L'ingénierie linguistique au service de la dynamisation économique à l'ère du multimédia: Tagungsakten der XXVI. Jahrestagung der Internationalen Vereinigung Sprache und Wirtschaft e.V., 23.-25.11.2000, Fachhochschule Köln. Hrsg.: K.-D. Schmitz
  16. Warner, A.J.: Natural language processing (1987) 0.01
    Source
    Annual review of information science and technology. 22(1987), S.79-108
  17. McMahon, J.G.; Smith, F.J.: Improved statistical language model performance with automatic generated word hierarchies (1996) 0.01
    Source
    Computational linguistics. 22(1996) no.2, S.217-248
  18. Ruge, G.: ¬A spreading activation network for automatic generation of thesaurus relationships (1991) 0.01
    Date
    8.10.2000 11:52:22
  19. Somers, H.: Example-based machine translation : Review article (1999) 0.01
    Date
    31. 7.1996 9:22:19
  20. New tools for human translators (1997) 0.01
    Date
    31. 7.1996 9:22:19

Types

  • a 40
  • el 4
  • m 2
  • p 2
  • s 2
  • x 1