Search (202 results, page 10 of 11)

  • Active filter: theme_ss:"Computerlinguistik"
  1. Gill, A.J.; Hinrichs-Krapels, S.; Blanke, T.; Grant, J.; Hedges, M.; Tanner, S.: Insight workflow : systematically combining human and computational methods to explore textual data (2017) 0.00
    Score 0.0033317097: ClassicSimilarity weight(_text_:29 in doc 3682) = 0.029985385 [tf 1.4142135 (freq 2.0) × idf 3.5176873 (docFreq 3565, maxDocs 44218) × fieldNorm 0.0390625 = fieldWeight 0.19432661; queryWeight 0.15430406 = idf × queryNorm 0.0438652], coord 1/3 applied twice
    
    Date
    16.11.2017 14:00:29
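    The condensed score lines in this listing follow Lucene's ClassicSimilarity explanation format. As an illustration only, the following Python sketch recomputes this first entry's score from the factors shown above; the variable names are illustrative, and the tf/idf formulas are the standard ClassicSimilarity definitions rather than code from this catalog.

    import math

    # factors taken from the score explanation of entry 1 (term "29", doc 3682)
    freq = 2.0                  # term frequency in the field
    doc_freq, max_docs = 3565, 44218
    query_norm = 0.0438652
    field_norm = 0.0390625
    coord = 1.0 / 3.0           # 1 of 3 query clauses matched; applied twice here

    tf = math.sqrt(freq)                               # 1.4142135
    idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))  # 3.5176873
    query_weight = idf * query_norm                    # 0.15430406
    field_weight = tf * idf * field_norm               # 0.19432661 (fieldWeight)
    score = query_weight * field_weight * coord * coord
    print(f"{score:.10f}")                             # ~0.0033317097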
  2. Pepper, S.; Arnaud, P.J.L.: Absolutely PHAB : toward a general model of associative relations (2020) 0.00
    Score 0.0033317097: ClassicSimilarity weight(_text_:29 in doc 103) = 0.029985385 [tf 1.4142135 (freq 2.0) × idf 3.5176873 (docFreq 3565, maxDocs 44218) × fieldNorm 0.0390625 = fieldWeight 0.19432661; queryWeight 0.15430406 = idf × queryNorm 0.0438652], coord 1/3 applied twice
    
    Abstract
    There have been many attempts at classifying the semantic modification relations (R) of N + N compounds, but this work has not led to the acceptance of a definitive scheme, so that devising a reusable classification is a worthwhile aim. The scope of this undertaking is extended to other binominal lexemes, i.e. units that contain two thing-morphemes without explicitly stating R, like prepositional units, N + relational adjective units, etc. The 25-relation taxonomy of Bourque (2014) was tested against over 15,000 binominal lexemes from 106 languages and extended to a 29-relation scheme ("Bourque2") through the introduction of two new reversible relations. Bourque2 is then mapped onto Hatcher's (1960) four-relation scheme (extended by the addition of a fifth relation, similarity, as "Hatcher2"). This results in a two-tier system usable at different degrees of granularity. On account of its semantic proximity to compounding, metonymy is then taken into account, following Janda's (2011) suggestion that it plays a role in word formation; Peirsman and Geeraerts' (2006) inventory of 23 metonymic patterns is mapped onto Bourque2, confirming the identity of metonymic and binominal modification relations. Finally, Blank's (2003) and Koch's (2001) work on lexical semantics justifies the addition to the scheme of a third, superordinate level which comprises the three Aristotelian principles of similarity, contiguity and contrast.
  3. Pinker, S.: Wörter und Regeln : Die Natur der Sprache (2000) 0.00
    Score 0.0033017385: ClassicSimilarity weight(_text_:22 in doc 734) = 0.029715646 [tf 1.4142135 (freq 2.0) × idf 3.5018296 (docFreq 3622, maxDocs 44218) × fieldNorm 0.0390625 = fieldWeight 0.19345059; queryWeight 0.15360846 = idf × queryNorm 0.0438652], coord 1/3 applied twice
    
    Date
    19. 7.2002 14:22:31
  4. Computational linguistics for the new millennium : divergence or synergy? Proceedings of the International Symposium held at the Ruprecht-Karls Universität Heidelberg, 21-22 July 2000. Festschrift in honour of Peter Hellwig on the occasion of his 60th birthday (2002) 0.00
    Score 0.0033017385: ClassicSimilarity weight(_text_:22 in doc 4900) = 0.029715646 [tf 1.4142135 (freq 2.0) × idf 3.5018296 (docFreq 3622, maxDocs 44218) × fieldNorm 0.0390625 = fieldWeight 0.19345059; queryWeight 0.15360846 = idf × queryNorm 0.0438652], coord 1/3 applied twice
    
  5. Semantik, Lexikographie und Computeranwendungen : Workshop ... (Bonn) : 1995.01.27-28 (1996) 0.00
    Score 0.0033017385: ClassicSimilarity weight(_text_:22 in doc 190) = 0.029715646 [tf 1.4142135 (freq 2.0) × idf 3.5018296 (docFreq 3622, maxDocs 44218) × fieldNorm 0.0390625 = fieldWeight 0.19345059; queryWeight 0.15360846 = idf × queryNorm 0.0438652], coord 1/3 applied twice
    
    Date
    14. 4.2007 10:04:22
  6. Fóris, A.: Network theory and terminology (2013) 0.00
    Score 0.0033017385: ClassicSimilarity weight(_text_:22 in doc 1365) = 0.029715646 [tf 1.4142135 (freq 2.0) × idf 3.5018296 (docFreq 3622, maxDocs 44218) × fieldNorm 0.0390625 = fieldWeight 0.19345059; queryWeight 0.15360846 = idf × queryNorm 0.0438652], coord 1/3 applied twice
    
    Date
    2. 9.2014 21:22:48
  7. Luo, L.; Ju, J.; Li, Y.-F.; Haffari, G.; Xiong, B.; Pan, S.: ChatRule: mining logical rules with large language models for knowledge graph reasoning (2023) 0.00
    Score 0.0033017385: ClassicSimilarity weight(_text_:22 in doc 1171) = 0.029715646 [tf 1.4142135 (freq 2.0) × idf 3.5018296 (docFreq 3622, maxDocs 44218) × fieldNorm 0.0390625 = fieldWeight 0.19345059; queryWeight 0.15360846 = idf × queryNorm 0.0438652], coord 1/3 applied twice
    
    Date
    23.11.2023 19:07:22
  8. Jones, I.; Cunliffe, D.; Tudhope, D.: Natural language processing and knowledge organization systems as an aid to retrieval (2004) 0.00
    Score 0.0032982244: ClassicSimilarity weight(_text_:29 in doc 2677) = 0.029684016 [tf 2.0 (freq 4.0) × idf 3.5176873 (docFreq 3565, maxDocs 44218) × fieldNorm 0.02734375 = fieldWeight 0.19237353; queryWeight 0.15430406 = idf × queryNorm 0.0438652], coord 1/3 applied twice
    
    Date
    29. 8.2004 19:29:56
  9. RWI/PH: Auf der Suche nach dem entscheidenden Wort : die Häufung bestimmter Wörter innerhalb eines Textes macht diese zu Schlüsselwörtern (2012) 0.00
    Score 0.0029113963: ClassicSimilarity weight(_text_:k in doc 331) = 0.026202565 [tf 2.0 (freq 4.0) × idf 3.569778 (docFreq 3384, maxDocs 44218) × fieldNorm 0.0234375 = fieldWeight 0.16733333; queryWeight 0.15658903 = idf × queryNorm 0.0438652], coord 1/3 applied twice
    
    Content
    "Die Dresdner Wissenschaftler haben die semantischen Eigenschaften von Texten mathematisch untersucht, indem sie zehn verschiedene englische Texte in unterschiedlichen Formen kodierten. Dazu zählt unter anderem die englische Ausgabe von Leo Tolstois "Krieg und Frieden". Beispielsweise übersetzten die Forscher Buchstaben innerhalb eines Textes in eine Binär-Sequenz. Dazu ersetzten sie alle Vokale durch eine Eins und alle Konsonanten durch eine Null. Mit Hilfe weiterer mathematischer Funktionen beleuchteten die Wissenschaftler dabei verschiedene Ebenen des Textes, also sowohl einzelne Vokale, Buchstaben als auch ganze Wörter, die in verschiedenen Formen kodiert wurden. Innerhalb des ganzen Textes lassen sich so wiederkehrende Muster finden. Diesen Zusammenhang innerhalb des Textes bezeichnet man als Langzeitkorrelation. Diese gibt an, ob zwei Buchstaben an beliebig weit voneinander entfernten Textstellen miteinander in Verbindung stehen - beispielsweise gibt es wenn wir an einer Stelle einen Buchstaben "K" finden, eine messbare höhere Wahrscheinlichkeit den Buchstaben "K" einige Seiten später nochmal zu finden. "Es ist zu erwarten, dass wenn es in einem Buch an einer Stelle um Krieg geht, die Wahrscheinlichkeit hoch ist das Wort Krieg auch einige Seiten später zu finden. Überraschend ist es, dass wir die hohe Wahrscheinlichkeit auch auf der Buchstabenebene finden", so Altmann.
  10. Kajanan, S.; Bao, Y.; Datta, A.; VanderMeer, D.; Dutta, K.: Efficient automatic search query formulation using phrase-level analysis (2014) 0.00
    Score 0.0027448907: ClassicSimilarity weight(_text_:k in doc 1264) = 0.024704017 [tf 1.4142135 (freq 2.0) × idf 3.569778 (docFreq 3384, maxDocs 44218) × fieldNorm 0.03125 = fieldWeight 0.15776339; queryWeight 0.15658903 = idf × queryNorm 0.0438652], coord 1/3 applied twice
    
  11. Rösener, C.: ¬Die Stecknadel im Heuhaufen : Natürlichsprachlicher Zugang zu Volltextdatenbanken (2005) 0.00
    Score 0.0026653677: ClassicSimilarity weight(_text_:29 in doc 548) = 0.023988307 [tf 1.4142135 (freq 2.0) × idf 3.5176873 (docFreq 3565, maxDocs 44218) × fieldNorm 0.03125 = fieldWeight 0.15546128; queryWeight 0.15430406 = idf × queryNorm 0.0438652], coord 1/3 applied twice
    
    Date
    29. 3.2009 11:11:45
  12. Donath, A.: Nutzungsverbote für ChatGPT (2023) 0.00
    Score 0.0026653677: ClassicSimilarity weight(_text_:29 in doc 877) = 0.023988307 [tf 1.4142135 (freq 2.0) × idf 3.5176873 (docFreq 3565, maxDocs 44218) × fieldNorm 0.03125 = fieldWeight 0.15546128; queryWeight 0.15430406 = idf × queryNorm 0.0438652], coord 1/3 applied twice
    
    Content
    Billion-dollar valuation for ChatGPT: According to a report in the Wall Street Journal, OpenAI, which operates the chatbot ChatGPT, is in talks about a sale of shares. The WSJ reported that the possible share sale would raise OpenAI's valuation to 29 billion US dollars. Concerns in Brandenburg as well: The Brandenburg SPD member of parliament Erik Stohn used ChatGPT to submit a minor interpellation (Kleine Anfrage) to the Brandenburg state parliament, asking how the state government ensures that students are assessed and graded fairly when texts are machine-generated. He also asked what measures had been taken to ensure that machine-generated texts could not be used fraudulently by students in the assessment of coursework.
  13. Rötzer, F.: KI-Programm besser als Menschen im Verständnis natürlicher Sprache (2018) 0.00
    Score 0.0026413908: ClassicSimilarity weight(_text_:22 in doc 4217) = 0.023772515 [tf 1.4142135 (freq 2.0) × idf 3.5018296 (docFreq 3622, maxDocs 44218) × fieldNorm 0.03125 = fieldWeight 0.15476047; queryWeight 0.15360846 = idf × queryNorm 0.0438652], coord 1/3 applied twice
    
    Date
    22. 1.2018 11:32:44
  14. SIGIR'92 : Proceedings of the 15th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (1992) 0.00
    Score 0.0024017796: ClassicSimilarity weight(_text_:k in doc 6671) = 0.021616016 [tf 1.4142135 (freq 2.0) × idf 3.569778 (docFreq 3384, maxDocs 44218) × fieldNorm 0.02734375 = fieldWeight 0.13804297; queryWeight 0.15658903 = idf × queryNorm 0.0438652], coord 1/3 applied twice
    
    Content
    HARMAN, D.: Relevance feedback revisited; AALBERSBERG, I.J.: Incremental relevance feedback; TAGUE-SUTCLIFFE, J.: Measuring the informativeness of a retrieval process; LEWIS, D.D.: An evaluation of phrasal and clustered representations on a text categorization task; BLOSSEVILLE, M.J., G. HÉBRAIL, M.G. MONTEIL u. N. PÉNOT: Automatic document classification: natural language processing, statistical analysis, and expert system techniques used together; MASAND, B., G. LINOFF u. D. WALTZ: Classifying news stories using memory based reasoning; KEEN, E.M.: Term position ranking: some new test results; CROUCH, C.J. u. B. YANG: Experiments in automatic statistical thesaurus construction; GREFENSTETTE, G.: Use of syntactic context to produce term association lists for text retrieval; ANICK, P.G. u. R.A. FLYNN: Versioning of full-text information retrieval system; BURKOWSKI, F.J.: Retrieval activities in a database consisting of heterogeneous collections; DEERWESTER, S.C., K. WACLENA u. M. LaMAR: A textual object management system; NIE, J.-Y.: Towards a probabilistic modal logic for semantic-based information retrieval; WANG, A.W., S.K.M. WONG u. Y.Y. YAO: An analysis of vector space models based on computational geometry; BARTELL, B.T., G.W. COTTRELL u. R.K. BELEW: Latent semantic indexing is an optimal special case of multidimensional scaling; GLAVITSCH, U. u. P. SCHÄUBLE: A system for retrieving speech documents; MARGULIS, E.L.: N-Poisson document modelling; HESS, M.: An incrementally extensible document retrieval system based on linguistics and logical principles; COOPER, W.S., F.C. GEY u. D.P. DABNEY: Probabilistic retrieval based on staged logistic regression; FUHR, N.: Integration of probabilistic fact and text retrieval; CROFT, B., L.A. SMITH u. H. TURTLE: A loosely-coupled integration of a text retrieval system and an object-oriented database system; DUMAIS, S.T. u. J. NIELSEN: Automating the assignment of submitted manuscripts to reviewers; GOST, M.A. u. M. MASOTTI: Design of an OPAC database to permit different subject searching accesses; ROBERTSON, A.M. u. P. WILLETT: Searching for historical word forms in a database of 17th century English text using spelling correction methods; FOX, E.A., Q.F. CHEN u. L.S. HEATH: A faster algorithm for constructing minimal perfect hash functions; MOFFAT, A. u. J. ZOBEL: Parameterised compression for sparse bitmaps; GRANDI, F., P. TIBERIO u. P. ZEZULA: Frame-sliced partitioned parallel signature files; ALLEN, B.: Cognitive differences in end user searching of a CD-ROM index; SONNENWALD, D.H.: Developing a theory to guide the process of designing information retrieval systems; CUTTING, D.R., J.O. PEDERSEN, D. KARGER u. J.W. TUKEY: Scatter/Gather: a cluster-based approach to browsing large document collections; CHALMERS, M. u. P. CHITSON: Bead: Explorations in information visualization; WILLIAMSON, C. u. B. SHNEIDERMAN: The dynamic HomeFinder: evaluating dynamic queries in a real-estate information exploring system
  15. Working with conceptual structures : contributions to ICCS 2000. 8th International Conference on Conceptual Structures: Logical, Linguistic, and Computational Issues. Darmstadt, August 14-18, 2000 (2000) 0.00
    Score 0.0024017796: ClassicSimilarity weight(_text_:k in doc 5089) = 0.021616016 [tf 1.4142135 (freq 2.0) × idf 3.569778 (docFreq 3384, maxDocs 44218) × fieldNorm 0.02734375 = fieldWeight 0.13804297; queryWeight 0.15658903 = idf × queryNorm 0.0438652], coord 1/3 applied twice
    
    Content
    Concepts & Language: Knowledge organization by procedures of natural language processing. A case study using the method GABEK (J. Zelger, J. Gadner) - Computer aided narrative analysis using conceptual graphs (H. Schärfe, P. Øhrstrøm) - Pragmatic representation of argumentative text: a challenge for the conceptual graph approach (H. Irandoust, B. Moulin) - Conceptual graphs as a knowledge representation core in a complex language learning environment (G. Angelova, A. Nenkova, S. Boycheva, T. Nikolov) - Conceptual Modeling and Ontologies: Relationships and actions in conceptual categories (Ch. Landauer, K.L. Bellman) - Concept approximations for formal concept analysis (J. Saquer, J.S. Deogun) - Faceted information representation (U. Priß) - Simple concept graphs with universal quantifiers (J. Tappe) - A framework for comparing methods for using or reusing multiple ontologies in an application (J. van Zyl, D. Corbett) - Designing task/method knowledge-based systems with conceptual graphs (M. Leclère, F. Trichet, Ch. Choquet) - A logical ontology (J. Farkas, J. Sarbo) - Algorithms and Tools: Fast concept analysis (Ch. Lindig) - A framework for conceptual graph unification (D. Corbett) - Visual CP representation of knowledge (H.D. Pfeiffer, R.T. Hartley) - Maximal isojoin for representing software textual specifications and detecting semantic anomalies (Th. Charnois) - Troika: using grids, lattices and graphs in knowledge acquisition (H.S. Delugach, B.E. Lampkin) - Open world theorem prover for conceptual graphs (J.E. Heaton, P. Kocura) - NetCare: a practical conceptual graphs software tool (S. Polovina, D. Strang) - CGWorld - a web based workbench for conceptual graphs management and applications (P. Dobrev, K. Toutanova) - Position papers: The edition project: Peirce's existential graphs (R. Müller) - Mining association rules using formal concept analysis (N. Pasquier) - Contextual logic summary (R. Wille) - Information channels and conceptual scaling (K.E. Wolff) - Spatial concepts - a rule exploration (S. Rudolph) - The TEXT-TO-ONTO learning environment (A. Mädche, St. Staab) - Controlling the semantics of metadata on audio-visual documents using ontologies (Th. Dechilly, B. Bachimont) - Building the ontological foundations of a terminology from natural language to conceptual graphs with Ribosome, a knowledge extraction system (Ch. Jacquelinet, A. Burgun) - CharGer: some lessons learned and new directions (H.S. Delugach) - Knowledge management using conceptual graphs (W.K. Pun)
  16. Needham, R.M.; Sparck Jones, K.: Keywords and clumps (1985) 0.00
    Score 0.0024017796: ClassicSimilarity weight(_text_:k in doc 3645) = 0.021616016 [tf 1.4142135 (freq 2.0) × idf 3.569778 (docFreq 3384, maxDocs 44218) × fieldNorm 0.02734375 = fieldWeight 0.13804297; queryWeight 0.15658903 = idf × queryNorm 0.0438652], coord 1/3 applied twice
    
  17. Schürmann, H.: Software scannt Radio- und Fernsehsendungen : Recherche in Nachrichtenarchiven erleichtert (2001) 0.00
    Score 0.0023112171: ClassicSimilarity weight(_text_:22 in doc 5759) = 0.020800952 [tf 1.4142135 (freq 2.0) × idf 3.5018296 (docFreq 3622, maxDocs 44218) × fieldNorm 0.02734375 = fieldWeight 0.1354154; queryWeight 0.15360846 = idf × queryNorm 0.0438652], coord 1/3 applied twice
    
    Source
    Handelsblatt. Nr.79 vom 24.4.2001, S.22
  18. Yang, C.C.; Luk, J.: Automatic generation of English/Chinese thesaurus based on a parallel corpus in laws (2003) 0.00
    Score 0.0023112171: ClassicSimilarity weight(_text_:22 in doc 1616) = 0.020800952 [tf 1.4142135 (freq 2.0) × idf 3.5018296 (docFreq 3622, maxDocs 44218) × fieldNorm 0.02734375 = fieldWeight 0.1354154; queryWeight 0.15360846 = idf × queryNorm 0.0438652], coord 1/3 applied twice
    
    Abstract
    The information available in languages other than English on the World Wide Web is increasing significantly. According to a report from Computer Economics in 1999, 54% of Internet users are English speakers ("English Will Dominate Web for Only Three More Years," Computer Economics, July 9, 1999, http://www.computereconomics.com/new4/pr/pr990610.html). However, it is predicted that there will be only a 60% increase in Internet users among English speakers versus a 150% growth among non-English speakers over the next five years. By 2005, 57% of Internet users will be non-English speakers. A report by CNN.com in 2000 showed that the number of Internet users in China had increased from 8.9 million to 16.9 million between January and June 2000 ("Report: China Internet users double to 17 million," CNN.com, July, 2000, http://cnn.org/2000/TECH/computing/07/27/china.internet.reut/index.html). According to Nielsen/NetRatings, there was a dramatic leap from 22.5 million to 56.6 million Internet users from 2001 to 2002. China had become the second largest global at-home Internet population in 2002 (the US's Internet population was 166 million) (Robyn Greenspan, "China Pulls Ahead of Japan," Internet.com, April 22, 2002, http://cyberatias.internet.com/big-picture/geographics/article/0,,5911_1013841,00.html). All of this evidence reveals the importance of cross-lingual research to satisfy needs in the near future. Digital library research has focused on structural and semantic interoperability in the past. Searching and retrieving objects across variations in protocols, formats and disciplines have been widely explored (Schatz, B., & Chen, H. (1999). Digital libraries: technological advances and social impacts. IEEE Computer, Special Issue on Digital Libraries, February, 32(2), 45-50.; Chen, H., Yen, J., & Yang, C.C. (1999). International activities: development of Asian digital libraries. IEEE Computer, Special Issue on Digital Libraries, 32(2), 48-49.). However, research on crossing language boundaries, especially between European languages and Oriental languages, is still at an initial stage. In this proposal, we put our focus on cross-lingual semantic interoperability by developing automatic generation of a cross-lingual thesaurus based on an English/Chinese parallel corpus. When searchers encounter retrieval problems, professional librarians usually consult the thesaurus to identify other relevant vocabulary. For the problem of searching across language boundaries, a cross-lingual thesaurus, generated by co-occurrence analysis and a Hopfield network, can be used to generate additional semantically relevant terms that cannot be obtained from a dictionary. In particular, the automatically generated cross-lingual thesaurus is able to capture unknown words that do not exist in a dictionary, such as names of persons, organizations, and events. Due to Hong Kong's unique historical background, both English and Chinese are used as official languages in all legal documents. Therefore, English/Chinese cross-lingual information retrieval is critical for applications in courts and the government. In this paper, we develop an automatic thesaurus with the Hopfield network based on a parallel corpus collected from the Web site of the Department of Justice of the Hong Kong Special Administrative Region (HKSAR) Government. Experiments are conducted to measure the precision and recall of the automatically generated English/Chinese thesaurus. The results show that such a thesaurus is a promising tool for retrieving relevant terms, especially in a language that is not the same as that of the input term. The direct translation of the input term can also be retrieved in most cases.
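    To make the co-occurrence step concrete, the following Python sketch scores Chinese candidate terms for an English input term from sentence-aligned pairs; the tiny corpus and the scoring are illustrative placeholders, and the paper's subsequent Hopfield-network refinement is omitted here.

    from collections import Counter
    from itertools import product

    # sentence-aligned (English tokens, Chinese tokens) pairs - placeholder data
    aligned = [
        (["court", "judgment"], ["法院", "判决"]),
        (["court", "hearing"], ["法院", "聆讯"]),
        (["government", "justice"], ["政府", "司法"]),
    ]

    pair_counts, en_counts = Counter(), Counter()
    for en_tokens, zh_tokens in aligned:
        en_counts.update(set(en_tokens))
        pair_counts.update(product(set(en_tokens), set(zh_tokens)))

    def associations(term, top=3):
        # co-occurrence count of each candidate, normalized by the input term's frequency
        scored = {zh: pair_counts[(en, zh)] / en_counts[term]
                  for (en, zh) in pair_counts if en == term}
        return sorted(scored.items(), key=lambda kv: -kv[1])[:top]

    print(associations("court"))   # the aligned term 法院 ranks first for "court"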
  19. Melzer, C.: ¬Der Maschine anpassen : PC-Spracherkennung - Programme sind mittlerweile alltagsreif (2005) 0.00
    Score 0.0023112171: ClassicSimilarity weight(_text_:22 in doc 4044) = 0.020800952 [tf 1.4142135 (freq 2.0) × idf 3.5018296 (docFreq 3622, maxDocs 44218) × fieldNorm 0.02734375 = fieldWeight 0.1354154; queryWeight 0.15360846 = idf × queryNorm 0.0438652], coord 1/3 applied twice
    
    Date
    3. 5.1997 8:44:22
  20. Deventer, J.P. van; Kruger, C.J.; Johnson, R.D.: Delineating knowledge management through lexical analysis : a retrospective (2015) 0.00
    Score 0.0023112171: ClassicSimilarity weight(_text_:22 in doc 3807) = 0.020800952 [tf 1.4142135 (freq 2.0) × idf 3.5018296 (docFreq 3622, maxDocs 44218) × fieldNorm 0.02734375 = fieldWeight 0.1354154; queryWeight 0.15360846 = idf × queryNorm 0.0438652], coord 1/3 applied twice
    
    Date
    20. 1.2015 18:30:22

Languages

  • e 138
  • d 56
  • m 2
  • ru 2
  • chi 1
  • f 1

Types

  • a 163
  • m 24
  • el 17
  • s 12
  • x 4
  • p 2
  • d 1
