Search (23 results, page 1 of 2)

  • Filter: language_ss:"d"
  • Filter: theme_ss:"Computerlinguistik"
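
The filters above are Lucene/Solr-style field queries ("_ss" marks multivalued string fields). For illustration only, a minimal sketch of how such a filtered search might be issued against a Solr core; the endpoint and core name are assumptions, and only the two fq values come from the filter list:

    import requests

    # Hypothetical Solr endpoint and core name; only the fq values are taken
    # from the filter list shown above.
    params = {
        "q": "*:*",
        "fq": ['language_ss:"d"', 'theme_ss:"Computerlinguistik"'],
        "rows": 20,
        "wt": "json",
    }
    resp = requests.get("http://localhost:8983/solr/literature/select", params=params)
    print(resp.json()["response"]["numFound"])  # this page reports 23 such records
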
  1. Melzer, C.: Der Maschine anpassen : PC-Spracherkennung - Programme sind mittlerweile alltagsreif (2005) 0.02
    Matched terms: henry (0.1011), 22 (0.0071); coord(2/14); total 0.0155
    
    Content
    A cheaper option is IBM's "Via Voice Standard". The software costs about 50 euros but has considerable weaknesses in its ability to learn; even so, it still fares better than "Voice Office Premium 10", which costs a good three times as much and was the only one of the six programs tested to receive no more than a "satisfactory" rating. "You don't read as much about speech recognition any more because it simply works," believes Dorothee Wiegand of the Hanover-based computer magazine "c't". The technology, such as ScanSoft's "Dragon Naturally Speaking", is mature: "Speech recognition is above all statistics, the evaluation of endless word possibilities. The real problem used to be the hardware," says Wiegand. Now that even basic home computers are fast and powerful, developers have far more options. Yet even older computers can handle the systems; they just take a little longer. "Every byte makes speech recognition somewhat faster, but it is no less accurate without it," confirms Kristina Henry of linguatec in Munich. For that company's products, too, practising and speaking clearly matter more than any hardware. Even voices from dictation machines are recognized reliably, Henry assures: "We want to go a step further and make dictation on the move possible." A user could then dial a number, dictate a text in the car, say, and find it "typed up" at home. In principle, speech recognition software is now also a fit for the private computer. What is clear, though, is that even the most carefully spoken text has to be post-edited. Patience is also required of the user: just as the system learns, the user has to adapt pronunciation and pace to the system. The results are then remarkable, and recognition slips such as "Sexterminvereinbarung" instead of "zwecks Terminvereinbarung" (roughly, "sex-date arrangement" instead of "to arrange an appointment") are a thing of the past.
    Date
    3. 5.1997 8:44:22
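
    The relevance figures shown with each entry are Lucene ClassicSimilarity scores; the "Matched terms" lines above condense the expanded score explanations. As a sketch of the arithmetic: per term, tf(freq) = sqrt(freq), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, and coord(n/m) scales by the fraction of the ranking query's m clauses that matched. The following reproduces entry 1's score from the factors reported for it:

      import math

      def term_score(freq: float, idf: float, query_norm: float, field_norm: float) -> float:
          """ClassicSimilarity per-term score: queryWeight * fieldWeight."""
          query_weight = idf * query_norm
          field_weight = math.sqrt(freq) * idf * field_norm
          return query_weight * field_weight

      query_norm, field_norm = 0.03002521, 0.02734375  # figures reported for doc 4044
      henry = term_score(freq=4.0, idf=7.84674, query_norm=query_norm, field_norm=field_norm)
      t22 = 0.5 * term_score(freq=2.0, idf=3.5018296, query_norm=query_norm,
                             field_norm=field_norm)    # coord(1/2) on the inner clause
      print((henry + t22) * 2 / 14)                    # coord(2/14) -> 0.015459906...
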
  2. Ruge, G.: Sprache und Computer : Wortbedeutung und Termassoziation. Methoden zur automatischen semantischen Klassifikation (1995) 0.02
    Matched terms: classification (0.0269, matched twice), 22 (0.0163); coord(3/14); total 0.0150
    
    Content
    Contains the following chapters: (1) Motivation; (2) Language-philosophical foundations; (3) Structural comparison of extensions; (4) Earlier approaches towards term association; (5) Experiments; (6) Spreading-activation networks or memory models; (7) Perspective. Appendices: heads and modifiers of 'car'; glossary; index. English title: Language and computer. Word semantics and term association. Methods towards an automatic semantic classification.
    Footnote
    Reviewed in: Knowledge organization 22(1995) no.3/4, pp.182-184 (M.T. Rolland)
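
    Chapter (4) and the "heads and modifiers" appendix point to Ruge's approach of associating terms through shared syntactic contexts. A toy sketch of that idea follows; the data and the choice of cosine similarity are illustrative, not Ruge's implementation:

      from collections import defaultdict

      # Toy (head, modifier) pairs; in Ruge's setting such pairs come from a
      # syntactic analysis of a large corpus.
      pairs = [("car", "fast"), ("car", "red"), ("automobile", "fast"),
               ("automobile", "red"), ("banana", "ripe")]

      vectors: dict[str, dict[str, int]] = defaultdict(lambda: defaultdict(int))
      for head, mod in pairs:
          vectors[head][mod] += 1

      def cosine(a: dict, b: dict) -> float:
          dot = sum(v * b.get(k, 0) for k, v in a.items())
          norm = lambda d: sum(v * v for v in d.values()) ** 0.5
          return dot / (norm(a) * norm(b))

      # Terms that share many modifiers come out as associated:
      print(cosine(vectors["car"], vectors["automobile"]))   # 1.0
      print(cosine(vectors["car"], vectors["banana"]))       # 0.0
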
  3. Dietze, J.; Völkel, H.: Verifikation einer Methode der lexikalischen Semantik : zur computergestützten Bestimmung der semantischen Konsistenz und des semantischen Abstands (1992) 0.01
    Matched terms: classification (0.0269, matched twice); coord(2/14); total 0.0077
    
    Abstract
    Uses the semantic field 'linguistic communication', comprising 735 verbs, to verify two numerically based methods that work with the semic cooccurrence interval derived from the semic micro-structure of a lexeme. The weak point of this procedure is the one-stage classification of the semantic features (semes) of the field.
  4. Stock, W.G.: Textwortmethode : Norbert Henrichs zum 65. (3) (2000) 0.01
    Matched terms: classification (0.0269, matched twice); coord(2/14); total 0.0077
    
    Abstract
    Few documentation methods are associated with the name of their inventor. Exceptions are Melvil Dewey (DDC), S.R. Ranganathan (Colon Classification) - and Norbert Henrichs. His Textwortmethode (text-word method) makes it possible to index and retrieve literature from fields that lack a generally accepted technical terminology, i.e. many of the social sciences and humanities, philosophy first and foremost. Henrichs designed the text-word method in the late 1960s for use in electronic philosophy documentation. He is thus not only one of the pioneers of applying electronic data processing in information practice, but also the pioneer of documenting technical languages whose terminology is not fixed.
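
    A heavily simplified sketch of the basic idea as described here: index terms are drawn from the text itself rather than from a controlled vocabulary. The stopword list and length threshold are invented for illustration; this is not Henrichs' actual system:

      import re

      # Stopword list and minimum term length are illustrative assumptions.
      STOPWORDS = {"der", "die", "das", "und", "ist", "eine", "von", "im", "mit"}

      def text_word_index(document: str) -> set[str]:
          """Index terms taken directly from the text, no controlled vocabulary."""
          words = re.findall(r"[a-zäöüß]+", document.lower())
          return {w for w in words if w not in STOPWORDS and len(w) > 3}

      doc = "Die Textwortmethode ermöglicht die Indexierung philosophischer Literatur."
      print(sorted(text_word_index(doc)))
      # ['ermöglicht', 'indexierung', 'literatur', 'philosophischer', 'textwortmethode']
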
  5. Karlova-Bourbonus, N.: Automatic detection of contradictions in texts (2018) 0.01
    Matched terms: subject (0.0127), texts (0.0259); coord(2/14); total 0.0055
    
    Abstract
    Implicit contradictions will be only partially the subject of the present study, which aims primarily at identifying the realization mechanism and cues (Chapter 5) and at finding the parts of contradictions by applying state-of-the-art algorithms for natural language processing, without conducting deep meaning processing. Further in focus are the explicit and implicit contradictions that can be detected by means of explicit linguistic, structural, and lexical cues, and by conducting some additional processing operations (e.g., computing a sum in order to detect contradictions arising from numerical divergences). One should note that additional complexity in finding contradictions can arise when parts of a contradiction occur on different levels of realization. Thus, a contradiction can be observed on the word and phrase level, such as in "a married bachelor" (for variations of contradictions on the lexical level, see Ganeev 2004); on the sentence level, between parts of a sentence or between two or more sentences; or on the text level, between portions of a text or between whole texts, such as a contradiction between the Bible and the Quran. Only contradictions arising at the level of single sentences occurring in one or more texts, as well as between parts of a sentence, will be considered for the purpose of this study. Though the focus of interest will be on single sentences, the study will make use of text particularities such as coreference resolution, without establishing the referents in the real world. Finally, another aspect to be considered is that the parts of a contradiction do not necessarily appear at the same time. They can be separated by years or centuries, with or without a time expression, making their recognition by humans and their detection by machines challenging. According to Aristotle's ontological version of the LNC (Section 3.1.1), however, the same time reference is required for two statements to be judged a contradiction. Taking this into account, we set the borders for the study by limiting the analyzed textual data thematically (only nine world events) and temporally (three days after the reported event happened) (Section 5.1). No sophisticated time processing will thus be conducted.
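
    As an illustration of the shallow cues mentioned above (negation and numerical divergence), a minimal sketch; the cue inventory and the assumption that the two sentences already describe the same event are mine, not the author's:

      import re

      def has_negation(sentence: str) -> bool:
          """Very small illustrative negation-cue check."""
          return re.search(r"\b(not|no|never)\b|n't\b", sentence.lower()) is not None

      def numbers(sentence: str) -> set[float]:
          """Numeric tokens reported in a sentence."""
          return {float(tok) for tok in re.findall(r"\d+(?:\.\d+)?", sentence)}

      def shallow_contradiction(s1: str, s2: str) -> bool:
          """Flag a pair of same-event sentences on negation or numeric divergence."""
          if has_negation(s1) != has_negation(s2):
              return True
          n1, n2 = numbers(s1), numbers(s2)
          return bool(n1) and bool(n2) and n1 != n2

      print(shallow_contradiction("The quake killed 32 people.",
                                  "The quake killed 41 people."))   # True
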
  6. Altmann, E.G.; Cristadoro, G.; Esposti, M.D.: On the origin of long-range correlations in texts (2012) 0.00
    Matched terms: texts (0.0423); coord(1/14); total 0.0030
    
    Abstract
    The complexity of human interactions with social and natural phenomena is mirrored in the way we describe our experiences through natural language. In order to retain and convey such high-dimensional information, the statistical properties of our linguistic output have to be highly correlated in time. An example is the robust, yet still largely unexplained, observation of correlations on arbitrarily long scales in literary texts. In this paper we explain how long-range correlations flow from highly structured linguistic levels down to the building blocks of a text (words, letters, etc.). By combining calculations and data analysis we show that correlations take the form of a bursty sequence of events once we approach the semantically relevant topics of the text. The mechanisms we identify are fairly general and can equally be applied to other hierarchical settings.
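
    A minimal sketch of how such long-range correlations can be probed: turn a text into the binary occurrence series of one word and examine how its autocorrelation decays over large lags. The file name and the choice of word are placeholders:

      import numpy as np

      def occurrence_series(tokens: list[str], word: str) -> np.ndarray:
          """1.0 where `word` occurs in the token stream, 0.0 elsewhere."""
          return np.asarray([1.0 if t == word else 0.0 for t in tokens])

      def autocorrelation(x: np.ndarray, max_lag: int) -> np.ndarray:
          """Normalized autocorrelation C(lag) for lag = 1..max_lag."""
          x = x - x.mean()
          var = float(np.dot(x, x)) / len(x)
          assert var > 0, "word does not occur in the text"
          return np.asarray([np.dot(x[:-lag], x[lag:]) / ((len(x) - lag) * var)
                             for lag in range(1, max_lag + 1)])

      tokens = open("book.txt", encoding="utf-8").read().lower().split()
      c = autocorrelation(occurrence_series(tokens, "whale"), max_lag=1000)
      # Long-range correlation shows up as a slow, power-law-like decay of c
      # over large lags, instead of the rapid drop produced by shuffled text.
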
  7. Sprachtechnologie, mobile Kommunikation und linguistische Ressourcen : Beiträge zur GLDV Tagung 2005 in Bonn (2005) 0.00
    Matched terms: classification (0.0101, matched twice); coord(2/14); total 0.0029
    
    Content
    Darja Mönke: Ein Parser für natürlichsprachlich formulierte mathematische Beweise - Martin Müller: Ontologien für mathematische Beweistexte - Moritz Neugebauer: The status of functional phonological classification in statistical speech recognition - Uwe Quasthoff: Kookkurrenzanalyse und korpusbasierte Sachgruppenlexikographie - Reinhard Rapp: On the Relationship between Word Frequency and Word Familiarity - Ulrich Schade/Miloslaw Frey/Sebastian Becker: Computerlinguistische Anwendungen zur Verbesserung der Kommunikation zwischen militärischen Einheiten und deren Führungsinformationssystemen - David Schlangen/Thomas Hanneforth/Manfred Stede: Weaving the Semantic Web: Extracting and Representing the Content of Pathology Reports - Thomas Schmidt: Modellbildung und Modellierungsparadigmen in der computergestützten Korpuslinguistik - Sabine Schröder/Martina Ziefle: Semantic transparency of cellular phone menus - Thorsten Trippel/Thierry Declerck/Ulrich Held: Standardisierung von Sprachressourcen: Der aktuelle Stand - Charlotte Wollermann: Evaluation der audiovisuellen Kongruenz bei der multimodalen Sprachsynsthese - Claudia Kunze/Lothar Lemnitzer: Anwendungen des GermaNet II: Einleitung - Claudia Kunze/Lothar Lemnitzer: Die Zukunft der Wortnetze oder die Wortnetze der Zukunft - ein Roadmap-Beitrag -
  8. Der Student aus dem Computer (2023) 0.00
    Matched terms: 22 (0.0285); coord(1/14); total 0.0020
    
    Date
    27. 1.2023 16:22:55
  9. Monnerjahn, P.: Vorsprung ohne Technik : Übersetzen: Computer und Qualität (2000) 0.00
    Matched terms: 22 (0.0244); coord(1/14); total 0.0017
    
    Source
    c't. 2000, H.22, S.230-231
  10. Franke-Maier, M.: Computerlinguistik und Bibliotheken : Editorial (2016) 0.00
    Matched terms: subject (0.0212); coord(1/14); total 0.0015
    
    Abstract
    Fifty years ago, in February 1966, Floyd M. Cammack pointed to the connection between "Linguistics and Libraries". His starting point was the entry for "Linguistics" in the 1957 Library of Congress Subject Headings (LCSH), which carried the reference "See Language and Languages; Philology; Philology, Comparative". Eight years later, additions such as "language data processing", "automatic indexing", "machine translation" and "psycholinguistics" appeared under the heading "Language and Languages". For Cammack this reveals a net of complex interrelations that should be subsumed under the term "Linguistics". This system, he argued, has an important influence on everyone concerned with collecting, organizing, storing and retrieving information (Cammack 1966:73). Here, in a figurative sense, is an issue devoted to computational-linguistic methods in libraries. Ultimately it is about bringing objectivity back into the discussion, about the status of subject indexing and the recalibration of its appreciation in times of mega-indexes and big data. The current contradiction between the desire for relevant result sets in search interfaces and the actual experience of relevance ranking has to be resolved, including, explicitly, the question of how often the latter has disappointed us and what must be done to bring the relationship between recall and precision back into an appropriate balance. Our users will thank us.
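
    Since the editorial turns on the balance of recall and precision, a worked definition; the figures below are a toy example, not taken from the issue:

      def precision_recall(retrieved: set, relevant: set) -> tuple[float, float]:
          """precision = hits/retrieved, recall = hits/relevant."""
          hits = len(retrieved & relevant)
          return hits / len(retrieved), hits / len(relevant)

      # 20 documents retrieved, 23 relevant in the collection, 15 in the overlap:
      p, r = precision_recall(set(range(20)), set(range(5, 28)))
      print(f"precision={p:.2f} recall={r:.2f}")   # precision=0.75 recall=0.65
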
  11. Kuhlmann, U.; Monnerjahn, P.: Sprache auf Knopfdruck : Sieben automatische Übersetzungsprogramme im Test (2000) 0.00
    Matched terms: 22 (0.0203); coord(1/14); total 0.0015
    
    Source
    c't. 2000, H.22, S.220-229
  12. Lezius, W.; Rapp, R.; Wettler, M.: A morphology-system and part-of-speech tagger for German (1996) 0.00
    Matched terms: 22 (0.0203); coord(1/14); total 0.0015
    
    Date
    22. 3.2015 9:37:18
  13. Lezius, W.: Morphy - Morphologie und Tagging für das Deutsche (2013) 0.00
    Matched terms: 22 (0.0163); coord(1/14); total 0.0012
    
    Date
    22. 3.2015 9:30:24
  14. Bager, J.: Die Text-KI ChatGPT schreibt Fachtexte, Prosa, Gedichte und Programmcode (2023) 0.00
    Matched terms: 22 (0.0163); coord(1/14); total 0.0012
    
    Date
    29.12.2022 18:22:55
  15. Rieger, F.: Lügende Computer (2023) 0.00
    Matched terms: 22 (0.0163); coord(1/14); total 0.0012
    
    Date
    16. 3.2023 19:22:55
  16. RWI/PH: Auf der Suche nach dem entscheidenden Wort : die Häufung bestimmter Wörter innerhalb eines Textes macht diese zu Schlüsselwörtern (2012) 0.00
    Matched terms: texts (0.0150); coord(1/14); total 0.0011
    
    Footnote
    Press release for the article: Eduardo G. Altmann, Giampaolo Cristadoro and Mirko Degli Esposti: On the origin of long-range correlations in texts. In: Proceedings of the National Academy of Sciences, 2 July 2012. DOI: 10.1073/pnas.1117723109.
  17. Schneider, R.: Web 3.0 ante portas? : Integration von Social Web und Semantic Web (2008) 0.00
    Matched terms: 22 (0.0142); coord(1/14); total 0.0010
    
    Date
    22. 1.2011 10:38:28
  18. Lorenz, S.: Konzeption und prototypische Realisierung einer begriffsbasierten Texterschließung (2006) 0.00
    Matched terms: 22 (0.0122); coord(1/14); total 0.0009
    
    Date
    22. 3.2015 9:17:30
  19. Sienel, J.; Weiss, M.; Laube, M.: Sprachtechnologien für die Informationsgesellschaft des 21. Jahrhunderts (2000) 0.00
    Matched terms: 22 (0.0102); coord(1/14); total 0.0007
    
    Date
    26.12.2000 13:22:17
  20. Pinker, S.: Wörter und Regeln : Die Natur der Sprache (2000) 0.00
    Matched terms: 22 (0.0102); coord(1/14); total 0.0007
    
    Date
    19. 7.2002 14:22:31
