Search (1533 results, page 1 of 77)

  • language_ss:"d"
  1. Ruge, G.: Sprache und Computer : Wortbedeutung und Termassoziation. Methoden zur automatischen semantischen Klassifikation (1995) 0.08
    0.07659837 = product of:
      0.15319674 = sum of:
        0.12775593 = weight(_text_:term in 1534) [ClassicSimilarity], result of:
          0.12775593 = score(doc=1534,freq=4.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.58325374 = fieldWeight in 1534, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.0625 = fieldNorm(doc=1534)
        0.025440816 = product of:
          0.05088163 = sum of:
            0.05088163 = weight(_text_:22 in 1534) [ClassicSimilarity], result of:
              0.05088163 = score(doc=1534,freq=2.0), product of:
                0.16438834 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04694356 = queryNorm
                0.30952093 = fieldWeight in 1534, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1534)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
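The score breakdown above (and the analogous ones below) follows Lucene's ClassicSimilarity TF-IDF formula: tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)), queryWeight = idf · queryNorm, fieldWeight = tf · idf · fieldNorm, with each clause weight = queryWeight · fieldWeight, scaled by the coord factors. A minimal sketch reproducing result 1's score from the constants shown in the breakdown (queryNorm, docFreq, fieldNorm and the coord factors are taken directly from it):

```python
import math

MAX_DOCS = 44218          # maxDocs from the breakdown
QUERY_NORM = 0.04694356   # queryNorm from the breakdown

def idf(doc_freq):
    # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(MAX_DOCS / (doc_freq + 1))

def clause_weight(freq, doc_freq, field_norm):
    # tf = sqrt(freq); queryWeight = idf * queryNorm;
    # fieldWeight = tf * idf * fieldNorm; weight = queryWeight * fieldWeight
    i = idf(doc_freq)
    return (i * QUERY_NORM) * (math.sqrt(freq) * i * field_norm)

# Result 1 (doc 1534): clause 'term' (freq=4, docFreq=1130, fieldNorm=0.0625)
w_term = clause_weight(4.0, 1130, 0.0625)        # ≈ 0.12775593
# Clause '22' (freq=2, docFreq=3622), scaled by its coord(1/2)
w_22 = clause_weight(2.0, 3622, 0.0625) * 0.5    # ≈ 0.02544082
# Total: sum of the clauses, scaled by coord(2/4) = 0.5
score = (w_term + w_22) * 0.5
print(score)  # ≈ 0.07659837, the value shown above
```

The same recipe, with the per-entry constants substituted, reproduces every other score in this listing.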
    
    Content
    Contains the following chapters: (1) Motivation; (2) Language philosophical foundations; (3) Structural comparison of extensions; (4) Earlier approaches towards term association; (5) Experiments; (6) Spreading-activation networks or memory models; (7) Perspective. Appendices: Heads and modifiers of 'car'. Glossary. Index. Language and computer. Word semantics and term association. Methods towards an automatic semantic classification
    Footnote
    Rez. in: Knowledge organization 22(1995) no.3/4, S.182-184 (M.T. Rolland)
  2. Schaar, P.: "Ubiquitous Computing" - : lückenhafter Datenschutz? (2006) 0.05
    0.05291442 = product of:
      0.10582884 = sum of:
        0.08992833 = weight(_text_:frequency in 5078) [ClassicSimilarity], result of:
          0.08992833 = score(doc=5078,freq=2.0), product of:
            0.27643865 = queryWeight, product of:
              5.888745 = idf(docFreq=332, maxDocs=44218)
              0.04694356 = queryNorm
            0.32531026 = fieldWeight in 5078, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.888745 = idf(docFreq=332, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5078)
        0.015900511 = product of:
          0.031801023 = sum of:
            0.031801023 = weight(_text_:22 in 5078) [ClassicSimilarity], result of:
              0.031801023 = score(doc=5078,freq=2.0), product of:
                0.16438834 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04694356 = queryNorm
                0.19345059 = fieldWeight in 5078, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5078)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The miniaturisation of information and communication technology serves to improve our living and working conditions. The deployment of technical systems must be transparent and must respect the right of those affected to informational self-determination. Information and communication technology is evolving constantly: the computing power and networking density of systems are rising, transmission bandwidth and storage capacity are growing, and components keep getting smaller. Miniaturisation in particular has inspired visions of new fields of application for IT systems. The concepts associated with the buzzwords "Pervasive Computing", "Ubiquitous Computing" and "Ambient Intelligence" lead to miniaturised IT systems that permeate our everyday world without still being recognised as "computers". Trends driving this development include: more powerful, smaller processors and memory chips; tighter integration of networks (UMTS, WiMax, GSM, WLAN, Bluetooth) with new services, for instance for the spontaneous networking of IT systems; and new sensors as well as long-lived, very small batteries. Research in nanotechnology is also expected to yield new products, for example in medicine, whose unique identifiers embedded in the nano-particles could later allow the "carriers" to be traced or identified. Even today, new IT systems make the use of information technology largely invisible, e.g. when microprocessors are integrated into everyday objects. With Radio Frequency Identification (RFID), the vision of this ubiquitous data processing is drawing closer. This brings new risks for the personality rights of citizens. What is needed, therefore, are privacy-protection concepts that take effect at the system-design stage rather than being "grafted on" afterwards. Retrofitted data protection is not only less effective but also more expensive than built-in (system) data protection.
    Source
    Wechselwirkung. 28(2006) Nr.136, S.22-25
  3. Steinmetz, R.: Elektronisches Lexikon auf Personalcomputer (1989) 0.05
    0.04516854 = product of:
      0.18067417 = sum of:
        0.18067417 = weight(_text_:term in 1488) [ClassicSimilarity], result of:
          0.18067417 = score(doc=1488,freq=2.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.8248453 = fieldWeight in 1488, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.125 = fieldNorm(doc=1488)
      0.25 = coord(1/4)
    
    Object
    Term-PC
  4. Ma, N.; Zheng, H.T.; Xiao, X.: ¬An ontology-based latent semantic indexing approach using long short-term memory networks (2017) 0.04
    0.044085447 = product of:
      0.08817089 = sum of:
        0.008323434 = product of:
          0.033293735 = sum of:
            0.033293735 = weight(_text_:based in 3810) [ClassicSimilarity], result of:
              0.033293735 = score(doc=3810,freq=4.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.23539014 = fieldWeight in 3810, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3810)
          0.25 = coord(1/4)
        0.07984746 = weight(_text_:term in 3810) [ClassicSimilarity], result of:
          0.07984746 = score(doc=3810,freq=4.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.3645336 = fieldWeight in 3810, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3810)
      0.5 = coord(2/4)
    
    Abstract
    Nowadays, online data shows an astonishing increase and the issue of semantic indexing remains an open question. Ontologies and knowledge bases have been widely used to optimize performance. However, researchers are placing increased emphasis on internal relations of ontologies but neglect latent semantic relations between ontologies and documents. They generally annotate instances mentioned in documents, which are related to concepts in ontologies. In this paper, we propose an Ontology-based Latent Semantic Indexing approach utilizing Long Short-Term Memory networks (LSTM-OLSI). We utilize an importance-aware topic model to extract document-level semantic features and leverage ontologies to extract word-level contextual features. Then we encode the above two levels of features and match their embedding vectors utilizing LSTM networks. Finally, the experimental results reveal that LSTM-OLSI outperforms existing techniques and demonstrates deep comprehension of instances and articles.
  5. Gerzymisch-Arbogast, H.: Termini im Kontext : Verfahren zur Erschließung und Übersetzung der textspezifischen Bedeutung von fachlichenAusdrücken (1996) 0.04
    0.039117105 = product of:
      0.15646842 = sum of:
        0.15646842 = weight(_text_:term in 14) [ClassicSimilarity], result of:
          0.15646842 = score(doc=14,freq=6.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.71433705 = fieldWeight in 14, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.0625 = fieldNorm(doc=14)
      0.25 = coord(1/4)
    
    Content
    Contains the chapters: On the status of the term as a systematic unit; the context-specific term model: theory and exemplifying application; theoretical differentiations and application problems; the ideally used term and possible contaminations in the context; naming contaminations; conceptual contaminations; one-dimensional and multidimensional contaminations in context; on the translation of terms in context
  6. Sparber, S.: What's the frequency, Kenneth? : eine (queer)feministische Kritik an Sexismen und Rassismen im Schlagwortkatalog (2016) 0.04
    0.035971332 = product of:
      0.14388533 = sum of:
        0.14388533 = weight(_text_:frequency in 3142) [ClassicSimilarity], result of:
          0.14388533 = score(doc=3142,freq=2.0), product of:
            0.27643865 = queryWeight, product of:
              5.888745 = idf(docFreq=332, maxDocs=44218)
              0.04694356 = queryNorm
            0.5204964 = fieldWeight in 3142, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.888745 = idf(docFreq=332, maxDocs=44218)
              0.0625 = fieldNorm(doc=3142)
      0.25 = coord(1/4)
    
  7. Jörs, B.: ¬Ein kleines Fach zwischen "Daten" und "Wissen" II : Anmerkungen zum (virtuellen) "16th International Symposium of Information Science" (ISI 2021, Regensburg) (2021) 0.04
    0.035897866 = product of:
      0.14359146 = sum of:
        0.14359146 = sum of:
          0.11179045 = weight(_text_:assessment in 330) [ClassicSimilarity], result of:
            0.11179045 = score(doc=330,freq=4.0), product of:
              0.25917634 = queryWeight, product of:
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.04694356 = queryNorm
              0.43132967 = fieldWeight in 330, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.0390625 = fieldNorm(doc=330)
          0.031801023 = weight(_text_:22 in 330) [ClassicSimilarity], result of:
            0.031801023 = score(doc=330,freq=2.0), product of:
              0.16438834 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04694356 = queryNorm
              0.19345059 = fieldWeight in 330, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=330)
      0.25 = coord(1/4)
    
    Abstract
    Nothing left but information ethics, information literacy and information assessment? Yet it is precisely this sealing itself off from other disciplines that deepens the isolation of the "small field" of information science within the scientific community. This leaves it, as its last "independent" peripheral research areas, only those that Wolf Rauch, as keynote speaker, already named in his introductory, historical-genetic talk on the state of information science at ISI 2021: "If university information science (at least in Europe) hardly stands a chance of returning to the forefront of development in the area of systems and applications, there remain fields in which its contribution will be urgently needed in the coming phase of development: information ethics, information literacy, information assessment" (Wolf Rauch: Was aus der Informationswissenschaft geworden ist; in: Thomas Schmidt; Christian Wolff (Eds.): Information between Data and Knowledge. Schriften zur Informationswissenschaft 74, Regensburg, 2021, pp. 20-22; see also the reception of Rauch's contribution by Johannes Elia Panskus, Was aus der Informationswissenschaft geworden ist. Sie ist in der Realität angekommen, in: Open Password, 17 March 2021). Is that all? Sobering.
  8. Sandner, M.: Entwicklung der SWD-Arbeit in Österreich (2008) 0.03
    0.034293205 = product of:
      0.06858641 = sum of:
        0.0047084456 = product of:
          0.018833783 = sum of:
            0.018833783 = weight(_text_:based in 2188) [ClassicSimilarity], result of:
              0.018833783 = score(doc=2188,freq=2.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.13315678 = fieldWeight in 2188, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2188)
          0.25 = coord(1/4)
        0.06387796 = weight(_text_:term in 2188) [ClassicSimilarity], result of:
          0.06387796 = score(doc=2188,freq=4.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.29162687 = fieldWeight in 2188, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.03125 = fieldNorm(doc=2188)
      0.5 = coord(2/4)
    
    Abstract
    This article focuses on the use of the German-language subject headings authority file SWD (Schlagwortnormdatei) in Austria and outlines how Austrian academic libraries' employment of the SWD developed in active cooperation with their SWD partners. The Austrian subject indexing practice turned to the SWD terminology based on the newly published German subject indexing rules RSWK (Regeln für den Schlagwortkatalog) in the late 1980s. An electronic workflow was developed. Soon it became necessary to provide a data pool for new terms originally created by Austrian member libraries and to connect these data with the SWD source data (ÖSWD, 1991). Internal cooperation structures developed when local SWD editorial departments began to exist. As of 1994 a central editor was nominated to serve as a direct link between active Austrian SWD users and SWD partners and the German National Library (DNB). Unfortunately the first active SWD period was followed by a long-term vacancy due to the first central editor's early retirement. Nearly all functional and information structures stopped functioning while local data increased on a daily basis... In 2004 a new central ÖSWD editor was nominated, whose first task it was to rebuild structures, to motivate local editors as well as terminology experts in Austria, to create a communication network for exchanging information and to cooperate efficiently with the DNB and Austria's SWD partners. The great number of old data and term duplicates and the special role of personal names as subject authority data in the Austrian library system meant that newly created and older or reused terms had to be marked in a special way to allow for better segmentation and revision. Now, in 2008, the future of Austrian SWD use looks bright. Problems will continue to be overcome as the forthcoming new online editing process for authority files provides new challenges.
  9. Nicoletti, M.: Automatische Indexierung (2001) 0.03
    0.033876404 = product of:
      0.13550562 = sum of:
        0.13550562 = weight(_text_:term in 4326) [ClassicSimilarity], result of:
          0.13550562 = score(doc=4326,freq=2.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.618634 = fieldWeight in 4326, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.09375 = fieldNorm(doc=4326)
      0.25 = coord(1/4)
    
    Content
    Contents: 1. Task - 2. Identification of multi-word groups - 2.1 Definition - 3. Marking of multi-word groups - 4. Base forms - 5. Term and document frequency / term weighting - 6. The threshold value as a control instrument - 7. Inverted index. Cf.: http://www.grin.com/de/e-book/104966/automatische-indexierung.
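The indexing pipeline outlined in this table of contents (term and document frequency, term weighting, a threshold value as control instrument, inverted index) can be sketched roughly as follows; the toy corpus, the tf-idf weighting variant and the threshold value are illustrative assumptions, not taken from Nicoletti's text:

```python
import math
from collections import Counter, defaultdict

# Toy corpus (illustrative)
docs = {
    1: "term weighting uses term frequency",
    2: "an inverted index maps terms to documents",
    3: "frequency thresholds filter rare terms",
}

# Term frequency per document, document frequency per term
tf_counts = {doc_id: Counter(text.split()) for doc_id, text in docs.items()}
df = Counter()
for counts in tf_counts.values():
    df.update(counts.keys())

n_docs = len(docs)
threshold = 0.3  # terms whose weight falls below this cut-off are not indexed

# Build the inverted index, keeping only term postings whose
# tf-idf weight passes the threshold
index = defaultdict(list)
for doc_id, counts in tf_counts.items():
    for term, freq in counts.items():
        weight = freq * math.log(n_docs / df[term])
        if weight >= threshold:
            index[term].append((doc_id, round(weight, 3)))

print(index["term"])  # postings list for 'term': [(1, 2.197)]
```

The threshold acts exactly as a control instrument in the sense of chapter 6: raising it shrinks the index to the most discriminating terms.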
  10. Fachsystematik Bremen nebst Schlüssel 1970 ff. (1970 ff) 0.03
    0.031249886 = product of:
      0.062499773 = sum of:
        0.04659926 = product of:
          0.18639705 = sum of:
            0.18639705 = weight(_text_:3a in 3577) [ClassicSimilarity], result of:
              0.18639705 = score(doc=3577,freq=2.0), product of:
                0.39798802 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04694356 = queryNorm
                0.46834838 = fieldWeight in 3577, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3577)
          0.25 = coord(1/4)
        0.015900511 = product of:
          0.031801023 = sum of:
            0.031801023 = weight(_text_:22 in 3577) [ClassicSimilarity], result of:
              0.031801023 = score(doc=3577,freq=2.0), product of:
                0.16438834 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04694356 = queryNorm
                0.19345059 = fieldWeight in 3577, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3577)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Content
    1. Agrarwissenschaften 1981. - 3. Allgemeine Geographie 2.1972. - 3a. Allgemeine Naturwissenschaften 1.1973. - 4. Allgemeine Sprachwissenschaft, Allgemeine Literaturwissenschaft 2.1971. - 6. Allgemeines 5.1983. - 7. Anglistik 3.1976. - 8. Astronomie, Geodäsie 4.1977. - 12. bio Biologie, bcp Biochemie-Biophysik, bot Botanik, zoo Zoologie 1981. - 13. Bremensien 3.1983. - 13a. Buch- und Bibliothekswesen 3.1975. - 14. Chemie 4.1977. - 14a. Elektrotechnik 1974. - 15. Ethnologie 2.1976. - 16,1. Geowissenschaften. Sachteil 3.1977. - 16,2. Geowissenschaften. Regionaler Teil 3.1977. - 17. Germanistik 6.1984. - 17a,1. Geschichte. Teilsystematik hil. - 17a,2. Geschichte. Teilsystematik his Neuere Geschichte. - 17a,3. Geschichte. Teilsystematik hit Neueste Geschichte. - 18. Humanbiologie 2.1983. - 19. Ingenieurwissenschaften 1974. - 20. siehe 14a. - 21. Klassische Philologie 3.1977. - 22. Klinische Medizin 1975. - 23. Kunstgeschichte 2.1971. - 24. Kybernetik 2.1975. - 25. Mathematik 3.1974. - 26. Medizin 1976. - 26a. Militärwissenschaft 1985. - 27. Musikwissenschaft 1978. - 27a. Noten 2.1974. - 28. Ozeanographie 3.1977. - 29. Pädagogik 8.1985. - 30. Philosophie 3.1974. - 31. Physik 3.1974. - 33. Politik, Politische Wissenschaft, Sozialwissenschaft. Soziologie. Länderschlüssel. Register 1981. - 34. Psychologie 2.1972. - 35. Publizistik und Kommunikationswissenschaft 1985. - 36. Rechtswissenschaften 1986. - 37. Regionale Geographie 3.1975. - 37a. Religionswissenschaft 1970. - 38. Romanistik 3.1976. - 39. Skandinavistik 4.1985. - 40. Slavistik 1977. - 40a. Sonstige Sprachen und Literaturen 1973. - 43. Sport 4.1983. - 44. Theaterwissenschaft 1985. - 45. Theologie 2.1976. - 45a. Ur- und Frühgeschichte, Archäologie 1970. - 47. Volkskunde 1976. - 47a. Wirtschaftswissenschaften 1971 // Schlüssel: 1. Länderschlüssel 1971. - 2. Formenschlüssel (Kurzform) 1974. - 3. Personenschlüssel Literatur 5. Fassung 1968
  11. Sprachtechnologie, mobile Kommunikation und linguistische Ressourcen : Beiträge zur GLDV Tagung 2005 in Bonn (2005) 0.03
    0.02947553 = product of:
      0.05895106 = sum of:
        0.0049940604 = product of:
          0.019976242 = sum of:
            0.019976242 = weight(_text_:based in 3578) [ClassicSimilarity], result of:
              0.019976242 = score(doc=3578,freq=4.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.14123408 = fieldWeight in 3578, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=3578)
          0.25 = coord(1/4)
        0.053957 = weight(_text_:frequency in 3578) [ClassicSimilarity], result of:
          0.053957 = score(doc=3578,freq=2.0), product of:
            0.27643865 = queryWeight, product of:
              5.888745 = idf(docFreq=332, maxDocs=44218)
              0.04694356 = queryNorm
            0.19518617 = fieldWeight in 3578, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.888745 = idf(docFreq=332, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3578)
      0.5 = coord(2/4)
    
    Content
    CONTENTS: Chris Biemann/Rainer Osswald: Automatische Erweiterung eines semantikbasierten Lexikons durch Bootstrapping auf großen Korpora - Ernesto William De Luca/Andreas Nürnberger: Supporting Mobile Web Search by Ontology-based Categorization - Rüdiger Gleim: HyGraph - Ein Framework zur Extraktion, Repräsentation und Analyse webbasierter Hypertextstrukturen - Felicitas Haas/Bernhard Schröder: Freges Grundgesetze der Arithmetik: Dokumentbaum und Formelwald - Ulrich Held/Andre Blessing/Bettina Säuberlich/Jürgen Sienel/Horst Rößler/Dieter Kopp: A personalized multimodal news service - Jürgen Hermes/Christoph Benden: Fusion von Annotation und Präprozessierung als Vorschlag zur Behebung des Rohtextproblems - Sonja Hüwel/Britta Wrede/Gerhard Sagerer: Semantisches Parsing mit Frames für robuste multimodale Mensch-Maschine-Kommunikation - Brigitte Krenn/Stefan Evert: Separating the wheat from the chaff - Corpus-driven evaluation of statistical association measures for collocation extraction - Jörn Kreutel: An application-centered Perspective on Multimodal Dialogue Systems - Jonas Kuhn: An Architecture for Parallel Corpus-based Grammar Learning - Thomas Mandl/Rene Schneider/Pia Schnetzler/Christa Womser-Hacker: Evaluierung von Systemen für die Eigennamenerkennung im crosslingualen Information Retrieval - Alexander Mehler/Matthias Dehmer/Rüdiger Gleim: Zur automatischen Klassifikation von Webgenres - Charlotte Merz/Martin Volk: Requirements for a Parallel Treebank Search Tool - Sally Y.K. Mok: Multilingual Text Retrieval on the Web: The Case of a Cantonese-Dagaare-English Trilingual e-Lexicon -
    Darja Mönke: Ein Parser für natürlichsprachlich formulierte mathematische Beweise - Martin Müller: Ontologien für mathematische Beweistexte - Moritz Neugebauer: The status of functional phonological classification in statistical speech recognition - Uwe Quasthoff: Kookkurrenzanalyse und korpusbasierte Sachgruppenlexikographie - Reinhard Rapp: On the Relationship between Word Frequency and Word Familiarity - Ulrich Schade/Miloslaw Frey/Sebastian Becker: Computerlinguistische Anwendungen zur Verbesserung der Kommunikation zwischen militärischen Einheiten und deren Führungsinformationssystemen - David Schlangen/Thomas Hanneforth/Manfred Stede: Weaving the Semantic Web: Extracting and Representing the Content of Pathology Reports - Thomas Schmidt: Modellbildung und Modellierungsparadigmen in der computergestützten Korpuslinguistik - Sabine Schröder/Martina Ziefle: Semantic transparency of cellular phone menus - Thorsten Trippel/Thierry Declerck/Ulrich Held: Standardisierung von Sprachressourcen: Der aktuelle Stand - Charlotte Wollermann: Evaluation der audiovisuellen Kongruenz bei der multimodalen Sprachsynthese - Claudia Kunze/Lothar Lemnitzer: Anwendungen des GermaNet II: Einleitung - Claudia Kunze/Lothar Lemnitzer: Die Zukunft der Wortnetze oder die Wortnetze der Zukunft - ein Roadmap-Beitrag -
    Karel Pala: The Balkanet Experience - Peter M. Kruse/Andre Nauloks/Dietmar Rösner/Manuela Kunze: Clever Search: A WordNet Based Wrapper for Internet Search Engines - Rosmary Stegmann/Wolfgang Woerndl: Using GermaNet to Generate Individual Customer Profiles - Ingo Glöckner/Sven Hartrumpf/Rainer Osswald: From GermaNet Glosses to Formal Meaning Postulates - Aljoscha Burchardt/Katrin Erk/Anette Frank: A WordNet Detour to FrameNet - Daniel Naber: OpenThesaurus: ein offenes deutsches Wortnetz - Anke Holler/Wolfgang Grund/Heinrich Petith: Maschinelle Generierung assoziativer Termnetze für die Dokumentensuche - Stefan Bordag/Hans Friedrich Witschel/Thomas Wittig: Evaluation of Lexical Acquisition Algorithms - Iryna Gurevych/Hendrik Niederlich: Computing Semantic Relatedness of GermaNet Concepts - Roland Hausser: Turn-taking als kognitive Grundmechanik der Datenbanksemantik - Rodolfo Delmonte: Parsing Overlaps - Melanie Twiggs: Behandlung des Passivs im Rahmen der Datenbanksemantik - Sandra Hohmann: Intention und Interaktion - Anmerkungen zur Relevanz der Benutzerabsicht - Doris Helfenbein: Verwendung von Pronomina im Sprecher- und Hörmodus - Bayan Abu Shawar/Eric Atwell: Modelling turn-taking in a corpus-trained chatbot - Barbara März: Die Koordination in der Datenbanksemantik - Jens Edlund/Mattias Heldner/Joakim Gustafsson: Utterance segmentation and turn-taking in spoken dialogue systems - Ekaterina Buyko: Numerische Repräsentation von Textkorpora für Wissensextraktion - Bernhard Fisseni: ProofML - eine Annotationssprache für natürlichsprachliche mathematische Beweise - Iryna Schenk: Auflösung der Pronomen mit Nicht-NP-Antezedenten in spontansprachlichen Dialogen - Stephan Schwiebert: Entwurf eines agentengestützten Systems zur Paradigmenbildung - Ingmar Steiner: On the analysis of speech rhythm through acoustic parameters - Hans Friedrich Witschel: Text, Wörter, Morpheme - Möglichkeiten einer automatischen Terminologie-Extraktion.
  12. Verwer, K.: Freiheit und Verantwortung bei Hans Jonas (2011) 0.03
    0.027959555 = product of:
      0.11183822 = sum of:
        0.11183822 = product of:
          0.4473529 = sum of:
            0.4473529 = weight(_text_:3a in 973) [ClassicSimilarity], result of:
              0.4473529 = score(doc=973,freq=2.0), product of:
                0.39798802 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04694356 = queryNorm
                1.1240361 = fieldWeight in 973, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.09375 = fieldNorm(doc=973)
          0.25 = coord(1/4)
      0.25 = coord(1/4)
    
    Content
    Cf.: http://creativechoice.org/doc/HansJonas.pdf.
  13. Stock, W.G.: Wissenschaftliche Informationen - metawissenschaftlich betrachtet : eine Theorie der wissenschaftlichen Information (1980) 0.03
    0.027946608 = product of:
      0.11178643 = sum of:
        0.11178643 = weight(_text_:term in 182) [ClassicSimilarity], result of:
          0.11178643 = score(doc=182,freq=4.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.510347 = fieldWeight in 182, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.0546875 = fieldNorm(doc=182)
      0.25 = coord(1/4)
    
    Abstract
    The subject of this study is the meta-scientific examination of information in the sciences. ... The fundamental term "information" is defined so generally that all definition variants known to date (which are often discipline-specific) can be derived from it. "Information" is thereby regarded as the whole of "signal" (its material aspect) and "Informen" (its ideal aspect).
  14. Dirks, H.: Lernen im Internet oder mit Gedrucktem? : Eine Untersuchung zeigt: Fernunterrichts-Teilnehmer wollen beides! (2002) 0.03
    0.026143279 = product of:
      0.052286558 = sum of:
        0.014125337 = product of:
          0.056501348 = sum of:
            0.056501348 = weight(_text_:based in 1512) [ClassicSimilarity], result of:
              0.056501348 = score(doc=1512,freq=2.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.39947033 = fieldWeight in 1512, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1512)
          0.25 = coord(1/4)
        0.038161222 = product of:
          0.076322444 = sum of:
            0.076322444 = weight(_text_:22 in 1512) [ClassicSimilarity], result of:
              0.076322444 = score(doc=1512,freq=2.0), product of:
                0.16438834 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04694356 = queryNorm
                0.46428138 = fieldWeight in 1512, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1512)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    11. 8.2002 15:05:22
    Theme
    Computer Based Training
  15. Witschel, H.F.: Global and local resources for peer-to-peer text retrieval (2008) 0.02
    Abstract
    This thesis is organised as follows: Chapter 2 gives a general introduction to the field of information retrieval, covering its most important aspects. Further, the tasks of distributed and peer-to-peer information retrieval (P2PIR) are introduced, motivating their application and characterising the special challenges that they involve, including a review of existing architectures and search protocols in P2PIR. Finally, chapter 2 presents approaches to evaluating the effectiveness of both traditional and peer-to-peer IR systems. Chapter 3 contains a detailed account of state-of-the-art information retrieval models and algorithms. This encompasses models for matching queries against document representations, term weighting algorithms, and approaches to feedback and associative retrieval, as well as distributed retrieval. It thus defines important terminology for the following chapters. The notion of "multi-level association graphs" (MLAGs) is introduced in chapter 4. An MLAG is a simple, graph-based framework that makes it possible to model most of the theoretical and practical approaches to IR presented in chapter 3. Moreover, it provides an easy-to-grasp way of defining and including new entities in IR modelling, such as paragraphs or peers, dividing them conceptually while at the same time connecting them to each other in a meaningful way. This allows for a unified view on many IR tasks, including that of distributed and peer-to-peer search. Starting from related work and a formal definition of the framework, the possibilities of modelling that it provides are discussed in detail, followed by an experimental section that shows how new insights gained from modelling inside the framework can lead to novel combinations of principles and eventually to improved retrieval effectiveness.
    Chapter 5 empirically tackles the first of the two research questions formulated above, namely the question of global collection statistics. More precisely, it studies possibilities of radically simplified results merging. The simplification comes from the attempt - without having knowledge of the complete collection - to equip all peers with the same global statistics, making document scores comparable across peers. What is examined is the question of how such global statistics can be obtained and to what extent their use will lead to a drop in retrieval effectiveness. In chapter 6, the second research question is tackled, namely that of making forwarding decisions for queries based on profiles of other peers. After a review of related work in that area, the chapter first defines the approaches that will be compared against each other. Then, a novel evaluation framework is introduced, including a new measure for comparing the results of a distributed search engine against those of a centralised one. Finally, the actual evaluation is performed using the new framework.
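The results-merging idea summarised above - equipping all peers with the same global collection statistics so that document scores become directly comparable - can be illustrated with a minimal, hypothetical TF-IDF sketch. The function names, documents, and IDF values below are invented for illustration and are not taken from the thesis:

```python
import math

def score(doc_terms, query, global_idf):
    """TF-IDF score of one document against a query, using SHARED global IDF
    values so that scores computed on different peers are comparable."""
    s = 0.0
    for term in query:
        tf = doc_terms.count(term)
        if tf:
            s += (1 + math.log(tf)) * global_idf.get(term, 0.0)
    return s

def merge(peer_results):
    """Merge (doc_id, score) lists from several peers into one ranking;
    because all peers used the same statistics, a plain sort suffices."""
    merged = [hit for results in peer_results for hit in results]
    return sorted(merged, key=lambda hit: hit[1], reverse=True)

# Shared statistics distributed to all peers (illustrative values).
global_idf = {"retrieval": 2.0, "peer": 1.0}

# Each peer scores only its own local documents.
peer_a = [("a1", score(["peer", "retrieval"], ["retrieval"], global_idf))]
peer_b = [("b1", score(["retrieval", "retrieval"], ["retrieval"], global_idf))]
ranking = merge([peer_a, peer_b])
```

Without the shared IDF table, each peer would estimate IDF from its own local collection, and the merged scores would no longer be on a common scale - which is exactly the problem the simplification addresses.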
  16. Neet, H.: Assoziationsrelationen in Dokumentationslexika für die verbale Sacherschließung (1984) 0.02
    Abstract
    Thesauri and documentation lexica can be understood as variants of onomasiological dictionaries; their particular interest for linguistics lies in the fact that they specify equivalence, hierarchical, and associative relations. Rule sets and contributions dealing with the indication of "related" terms in library and documentation practice are reviewed. Examples of typical "see also" and "related term" cross-references are compiled from three German-language subject indexes. The associative relations are divided into paradigmatic and syntagmatic relations; groupings by conceptual fields and associative fields are also possible. Studies of associative relations in the subject area of the book trade ("Buchwesen") confirm the assumption that the majority of cross-references concern the joint occurrence of certain concepts in typical contexts of extra-linguistic reality.
  17. Panyr, J.: Vektorraum-Modell und Clusteranalyse in Information-Retrieval-Systemen (1987) 0.02
    Abstract
    Starting from theoretical approaches to indexing, the classical vector space model for automatic indexing (together with the term discrimination model) is explained. Clustering in information retrieval systems is treated as a natural logical consequence of this model and is discussed in all its variants (i.e. as document, term, or combined document-and-term classification). Search strategies in pre-classified document collections (cluster search) are then described in detail. Finally, the sensible application of cluster analysis in information retrieval systems is briefly discussed.
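The classical vector space model summarised above can be illustrated with a short, hypothetical sketch (not Panyr's own formulation): documents and queries become term-frequency vectors, and relevance is the cosine of the angle between them. The example documents and query are invented:

```python
import math
from collections import Counter

def vectorize(text):
    """Turn a text into a term-frequency vector (a bag of words)."""
    return Counter(text.lower().split())

def cosine(v1, v2):
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(v1[t] * v2[t] for t in v1)
    n1 = math.sqrt(sum(c * c for c in v1.values()))
    n2 = math.sqrt(sum(c * c for c in v2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

docs = {
    "d1": "cluster analysis groups similar documents",
    "d2": "term weighting in automatic indexing",
}
query = vectorize("document cluster")

# Rank documents by decreasing cosine similarity to the query.
ranked = sorted(docs, key=lambda d: cosine(vectorize(docs[d]), query),
                reverse=True)
```

Document clustering in this model then amounts to grouping document vectors that lie close together in the same space, so that a cluster search need only compare the query against cluster centroids rather than every document.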
  18. Joint INIS/ETDE Thesaurus (Rev. 2) April 2007 (2007) 0.02
    Content
    ""A thesaurus is a terminological control device used in translating from the natural language of documents, indexers or users into a more constrained `system language' (document language, information language)". It is also "a controlled and dynamic vocabulary of semantically and generically related terms which covers a specific domain of knowledge". The Joint INIS/EDTE Thesaurus fits this definition adopted by UNESCO.' The domain of knowledge covered by the Joint INIS/ETDE Thesaurus includes physics (in particular, plasma physics, atomic and molecular physics, and especially nuclear and high-energy physics), chemistry, materials science, earth sciences, radiation biology, radioisotope effects and kinetics, applied life sciences, radiology and nuclear medicine, isotope and radiation source technology, radiation protection, radiation applications, engineering, instrumentation, fossil fuels, synthetic fuels, renewable energy sources, advanced energy systems, fission and fusion reactor technology, safeguards and inspection, waste management, environmental aspects of the production and consumption of energy from nuclear and non-nuclear sources, energy efficiency and energy conservation, economics and sociology of energy production and use, energy policy, and nuclear law. The terms in the Joint Thesaurus are listed alphabetically. For each alphabetical entry, a "word block", containing the terms associated with this particular entry, is displayed. In the word block, terms that have a hierarchical relationship to the entry are identified by the symbols BT for Broader Term, and NT for Narrower Term; a term with an affinitive relationship is identified by RT, for Related Term; terms with a preferential relationship are identified by USE or SEE, and OF for Used For, and SF for Seen For. In case of multiple USE relationships for a forbidden term, all listed descriptors should be used to index or search a given concept. 
In case of multiple SEE relationships, one or more of the listed descriptors should be considered for indexing or searching this concept."
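The word-block relations described above (BT, NT, RT, USE) can be illustrated with a tiny, hypothetical data structure; the example terms and the `descriptors` helper are invented for illustration and are not taken from the Joint Thesaurus itself:

```python
# Minimal word-block sketch: each entry may carry hierarchical (BT/NT),
# affinitive (RT), and preferential (USE) relations. A forbidden term
# has a USE list pointing at the descriptor(s) to be used instead.
thesaurus = {
    "reactors": {
        "BT": ["energy systems"],
        "NT": ["fission reactors"],
        "RT": ["reactor safety"],
        "USE": [],  # empty: this term is itself a descriptor
    },
    "atomic piles": {"USE": ["reactors"]},  # forbidden term
}

def descriptors(term):
    """Resolve a term to the descriptor(s) to index or search with:
    follow USE if present, otherwise the term is its own descriptor."""
    entry = thesaurus.get(term)
    if entry is None:
        return []
    return entry.get("USE") or [term]
```

A multiple-USE entry would simply list several descriptors, all of which - per the rule quoted above - should be used when indexing or searching that concept.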
  19. Publishers go head-to-head over search tools : Elsevier's Scopus (2004) 0.02
    Content
    "Will there ever be a science equivalent of Google? Two of the world's biggest science publishing and information firms seem to think that there will. They are about to compete head-to-head to create the most popular tool for searching the scientific literature. Elsevier, the Amsterdam-based publisher of more than 1,800 journals, has announced that this autumn it will launch Scopus, an online search engine covering abstracts and references from 14,000 scientific journals. Scopus will arrive as a direct competitor for the established Web of Science, owned by Thomson ISI of Philadelphia, the scientific information specialist. "Scopus will definitely be a threat to ISI," says one science publishing expert, who asked not to be named. "But ISI will not just let this happen. There will be some kind of arms race in terms of adding new features." Many researchers are already wedded to subject-specific databases of scientific information, such as PubMed, for biomedical research. But Web of Science is currently the only service to cover the full spectrum of scientific disciplines and publications. It can also generate the citation statistics that are sometimes used to measure the quality ofjournals and individual papers. ISI, which is widely used by libraries worldwide, may be hard to displace. It covers fewer than 9,000 journals, but it has been available in its present form since 1997 and includes a 60-year archive of papers. Thomson ISI says it will extend this to 105 years by the end of 2005. The company also owns the only extensive database an patent abstracts.
    Elsevier cannot hope to match this coverage in the short term. The company has been able to draw on its experience of running biomedical and pharmaceutical databases, and developers began compiling a multidisciplinary index two years ago. Even so, when it launches, Scopus will index only five years of references for some journals, rising to ten years during 2005. Data on abstracts will go back further, in some cases to the mid-1960s. Because Scopus has been built from scratch, Elsevier has been able to work with librarians to develop an alternative to the Web of Science interface, which has been criticized by some users. "Users are very happy with Scopus," says Steven Gheyselinck, a librarian at the University of Lausanne in Switzerland who has been testing it. Although Scopus and Web of Science are the only products aiming to cover all of science, other search engines are also under development. The Google of science could end up being Google itself: the company has collaborated with nine publishers, including Nature Publishing Group, to create an engine called CrossRef Search. This service, a pilot of which appeared last month, allows users to search digital versions of all papers held by the publishers involved and returns links to articles on their websites. Unlike Web of Science and Scopus, which scan through the titles and abstracts of articles, CrossRef Search also searches the full text of papers. Many of the other 300 or so members of CrossRef - a publishers' collaboration established to allow easier linking between citations - are likely to join the service if the pilot is successful."
  20. Rutz, R.: Positionen und Pläne der DFG zum Thema Virtuelle Fachbibliothek (1998) 0.02
    Abstract
    Part 2 of the 1998 Memorandum concerning the further development of literature supply on a national level. Indicates trends in information and communication technology that will affect the whole field of national information provision, especially the enhanced responsibility of libraries involved in the special collection programme of the Deutsche Forschungsgemeinschaft (DFG) (German Research Association). Discusses the following imperatives: the integration of digital and printed materials; procedures for making them available and accessible; and the responsibility for long-term archiving. Outlines proposals for new services, for the enhancement of existing services, and for their coordination. Provides an overview of DFG initiatives for the support and financing of appropriate projects.
