Search (54 results, page 1 of 3)

  • theme_ss:"Automatisches Indexieren"
  1. RIAO 91 : Computer aided information retrieval. Conference, Barcelona, 2.-4.5.1991 (1991) 0.05
    0.048199534 = product of:
      0.24099767 = sum of:
        0.24099767 = weight(_text_:91 in 4651) [ClassicSimilarity], result of:
          0.24099767 = score(doc=4651,freq=2.0), product of:
            0.19572705 = queryWeight, product of:
              5.5722036 = idf(docFreq=456, maxDocs=44218)
              0.03512561 = queryNorm
            1.2312946 = fieldWeight in 4651, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5722036 = idf(docFreq=456, maxDocs=44218)
              0.15625 = fieldNorm(doc=4651)
      0.2 = coord(1/5)
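    The explain tree above is standard Lucene ClassicSimilarity (TF-IDF) output. As a rough sketch, the score of hit no. 1 can be reproduced from the values shown, assuming Lucene's documented formulas tf(f) = sqrt(f) and idf = 1 + ln(maxDocs / (docFreq + 1)):

      import math

      # ClassicSimilarity building blocks (Lucene's documented TF-IDF formulas).
      def tf(freq):
          return math.sqrt(freq)

      def idf(doc_freq, max_docs):
          return 1.0 + math.log(max_docs / (doc_freq + 1))

      # Values copied from the explain tree for hit no. 1 (doc 4651, term "91").
      query_norm = 0.03512561                          # queryNorm
      field_norm = 0.15625                             # fieldNorm stored for doc 4651

      term_idf = idf(456, 44218)                       # 5.5722036
      query_weight = term_idf * query_norm             # 0.19572705 = queryWeight
      field_weight = tf(2.0) * term_idf * field_norm   # 1.2312946  = fieldWeight
      clause = query_weight * field_weight             # 0.24099767

      # Only 1 of the 5 query clauses matched, hence coord(1/5) = 0.2.
      print(clause * (1 / 5))                          # 0.048199534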
    
  2. Fuhr, N.; Niewelt, B.: Ein Retrievaltest mit automatisch indexierten Dokumenten (1984) 0.03
    0.033942997 = product of:
      0.08485749 = sum of:
        0.06820087 = weight(_text_:b in 262) [ClassicSimilarity], result of:
          0.06820087 = score(doc=262,freq=2.0), product of:
            0.1244487 = queryWeight, product of:
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.03512561 = queryNorm
            0.54802394 = fieldWeight in 262, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.109375 = fieldNorm(doc=262)
        0.016656622 = product of:
          0.06662649 = sum of:
            0.06662649 = weight(_text_:22 in 262) [ClassicSimilarity], result of:
              0.06662649 = score(doc=262,freq=2.0), product of:
                0.1230039 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03512561 = queryNorm
                0.5416616 = fieldWeight in 262, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=262)
          0.25 = coord(1/4)
      0.4 = coord(2/5)
    
    Date
    20.10.2000 12:22:23
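    Hits matching several clauses combine in the same way. A sketch of hit no. 2's total, using the two clause scores from the tree above and the coord factors it reports (1 of 4 sub-clauses matched inside the nested "22" query, 2 of 5 top-level clauses matched overall):

      # Clause scores copied from the explain tree for hit no. 2 (doc 262).
      clause_b  = 0.06820087                    # weight(_text_:b)
      clause_22 = 0.06662649 * (1 / 4)          # weight(_text_:22) * coord(1/4)

      # Two of the five top-level clauses matched, hence coord(2/5) = 0.4.
      print((clause_b + clause_22) * (2 / 5))   # 0.033942997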
  3. Silvester, J.P.: Computer supported indexing : a history and evaluation of NASA's MAI system (1998) 0.03
    0.033739675 = product of:
      0.16869837 = sum of:
        0.16869837 = weight(_text_:91 in 1302) [ClassicSimilarity], result of:
          0.16869837 = score(doc=1302,freq=2.0), product of:
            0.19572705 = queryWeight, product of:
              5.5722036 = idf(docFreq=456, maxDocs=44218)
              0.03512561 = queryNorm
            0.86190623 = fieldWeight in 1302, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5722036 = idf(docFreq=456, maxDocs=44218)
              0.109375 = fieldNorm(doc=1302)
      0.2 = coord(1/5)
    
    Pages
    S.76-91
  4. Kutschekmanesch, S.; Lutes, B.; Moelle, K.; Thiel, U.; Tzeras, K.: Automated multilingual indexing : a synthesis of rule-based and thesaurus-based methods (1998) 0.02
    0.024245 = product of:
      0.0606125 = sum of:
        0.04871491 = weight(_text_:b in 4157) [ClassicSimilarity], result of:
          0.04871491 = score(doc=4157,freq=2.0), product of:
            0.1244487 = queryWeight, product of:
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.03512561 = queryNorm
            0.3914457 = fieldWeight in 4157, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.078125 = fieldNorm(doc=4157)
        0.011897588 = product of:
          0.047590353 = sum of:
            0.047590353 = weight(_text_:22 in 4157) [ClassicSimilarity], result of:
              0.047590353 = score(doc=4157,freq=2.0), product of:
                0.1230039 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03512561 = queryNorm
                0.38690117 = fieldWeight in 4157, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4157)
          0.25 = coord(1/4)
      0.4 = coord(2/5)
    
    Source
    Information und Märkte: 50. Deutscher Dokumentartag 1998, Kongreß der Deutschen Gesellschaft für Dokumentation e.V. (DGD), Rheinische Friedrich-Wilhelms-Universität Bonn, 22.-24. September 1998. Hrsg. von Marlies Ockenfeld u. Gerhard J. Mantwill
  5. Nohr, H.: Grundlagen der automatischen Indexierung : ein Lehrbuch (2003) 0.02
    0.02118343 = product of:
      0.05295857 = sum of:
        0.048199534 = weight(_text_:91 in 1767) [ClassicSimilarity], result of:
          0.048199534 = score(doc=1767,freq=2.0), product of:
            0.19572705 = queryWeight, product of:
              5.5722036 = idf(docFreq=456, maxDocs=44218)
              0.03512561 = queryNorm
            0.24625893 = fieldWeight in 1767, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5722036 = idf(docFreq=456, maxDocs=44218)
              0.03125 = fieldNorm(doc=1767)
        0.004759035 = product of:
          0.01903614 = sum of:
            0.01903614 = weight(_text_:22 in 1767) [ClassicSimilarity], result of:
              0.01903614 = score(doc=1767,freq=2.0), product of:
                0.1230039 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03512561 = queryNorm
                0.15476047 = fieldWeight in 1767, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1767)
          0.25 = coord(1/4)
      0.4 = coord(2/5)
    
    Date
    22. 6.2009 12:46:51
    Footnote
    Rez. in: nfd 54(2003) H.5, S.314 (W. Ratzek): "To extract decision-relevant data from the constantly growing flood of more or less relevant documents, companies, public administrations, and specialist information institutions must develop, deploy, and maintain effective and efficient filtering systems. Holger Nohr's textbook offers the first fundamental introduction to the topic of automatic indexing. For, as it says at the outset: 'How you gather, manage, and use information will determine whether you win or lose' (Bill Gates). The first chapter, 'Introduction', focuses on the fundamentals and describes the connections between document management systems, information retrieval, and indexing for planning, decision-making, and innovation processes in both for-profit and non-profit organizations. At the end of the introductory chapter, Nohr takes up the debate over intellectual versus automatic indexing, leading into the second chapter, 'Automatic Indexing'. Here the author gives an overview of, among other things, problems of automatic language processing and indexing, and the various automatic indexing methods, e.g. simple keyword extraction / full-text inversion, statistical methods, and pattern-matching methods. Nohr then treats the methods of automatic indexing in depth, with many examples, in the extensive third chapter. The fourth chapter, 'Keyphrase Extraction', has the status of a passe-partout: 'Approaches that extract key phrases from documents (keyphrase extraction) represent an intermediate stage on the way from automatic indexing to the automatic generation of textual summaries (automatic text summarization). The boundaries between automatic indexing methods and those of text summarization are fluid.' (p. 91). Nohr describes how this works using NCR's Extractor / Copernic Summarizer as an example."
  6. Clavel, G.; Walther, F.; Walther, J.: Indexation automatique de fonds bibliothéconomiques (1993) 0.02
    0.016869837 = product of:
      0.084349185 = sum of:
        0.084349185 = weight(_text_:91 in 6610) [ClassicSimilarity], result of:
          0.084349185 = score(doc=6610,freq=2.0), product of:
            0.19572705 = queryWeight, product of:
              5.5722036 = idf(docFreq=456, maxDocs=44218)
              0.03512561 = queryNorm
            0.43095312 = fieldWeight in 6610, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5722036 = idf(docFreq=456, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6610)
      0.2 = coord(1/5)
    
    Abstract
    A discussion of developments to date in the field of computerized indexing, based on presentations given at a seminar held at the Institute of Policy Studies in Paris in Nov 91. The methods tested so far, based on a linguistic approach, whether using natural language or special thesauri, encounter the same central problem - they are only successful when applied to collections of similar types of documents covering very specific subject areas. Despite this, the search for some sort of universal indexing metalanguage continues. In the end, computerized indexing works best when used in conjunction with manual indexing - ideally in the hands of a trained library science professional, who can extract the maximum value from a collection of documents for a particular user population
  7. Yusuff, A.: Automatisches Indexing and Abstracting : Grundlagen und Beispiele (2002) 0.01
    0.013640175 = product of:
      0.06820087 = sum of:
        0.06820087 = weight(_text_:b in 1577) [ClassicSimilarity], result of:
          0.06820087 = score(doc=1577,freq=2.0), product of:
            0.1244487 = queryWeight, product of:
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.03512561 = queryNorm
            0.54802394 = fieldWeight in 1577, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.109375 = fieldNorm(doc=1577)
      0.2 = coord(1/5)
    
    Imprint
    Potsdam : Fachhochschule, FB A-B-D
  8. Martins, A.L.; Souza, R.R.; Ribeiro de Mello, H.: The use of noun phrases in information retrieval : proposing a mechanism for automatic classification (2014) 0.01
    0.01292654 = product of:
      0.03231635 = sum of:
        0.027557315 = weight(_text_:b in 1441) [ClassicSimilarity], result of:
          0.027557315 = score(doc=1441,freq=4.0), product of:
            0.1244487 = queryWeight, product of:
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.03512561 = queryNorm
            0.22143513 = fieldWeight in 1441, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.03125 = fieldNorm(doc=1441)
        0.004759035 = product of:
          0.01903614 = sum of:
            0.01903614 = weight(_text_:22 in 1441) [ClassicSimilarity], result of:
              0.01903614 = score(doc=1441,freq=2.0), product of:
                0.1230039 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03512561 = queryNorm
                0.15476047 = fieldWeight in 1441, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1441)
          0.25 = coord(1/4)
      0.4 = coord(2/5)
    
    Abstract
    This paper presents research on using syntactic structures known as noun phrases (NPs) to increase the effectiveness and efficiency of document classification mechanisms. Our hypothesis is that NPs can be used instead of single words as semantic aggregators, reducing the number of words the classification system must handle without losing semantic coverage and thus increasing its efficiency. The experiment divided the document classification process into three phases: a) NP preprocessing; b) system training; and c) classification experiments. In the first phase, a corpus of digitized texts was submitted to a natural language processing platform for part-of-speech tagging, and Perl scripts from the PALAVRAS package were then used to extract the noun phrases. The preprocessing also involved a) removing low-meaning NP pre-modifiers, such as quantifiers; b) identifying synonyms and substituting the corresponding common hyperonyms; and c) stemming the relevant words contained in each NP, for similarity checking against other NPs. The first tests on the resulting documents demonstrated the approach's effectiveness: comparing the structural similarity of the documents before and after the preprocessing steps of phase one, the texts remained consistent with the originals and kept their readability. The second phase involves submitting the modified documents to an SVM algorithm to identify clusters and classify the documents, with the classification rules established using a machine learning approach (see the sketch after this entry). Finally, tests will be conducted to check the effectiveness of the whole process.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
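    The three-phase pipeline described in the abstract above maps onto standard text-classification tooling. A minimal, hypothetical sketch of phases b) and c), assuming scikit-learn, with toy data standing in for the NP-reduced corpus of phase a):

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import LinearSVC

      # Toy stand-ins for phase a) output: each token is one stemmed,
      # hyperonym-normalized noun phrase.
      train_docs = [
          "information_retrieval noun_phrase document_classification",
          "plant_morphology leaf_shape species_description",
      ]
      train_labels = ["information science", "botany"]

      # Phase b): train a linear SVM over TF-IDF-weighted noun phrases.
      model = make_pipeline(TfidfVectorizer(token_pattern=r"\S+"), LinearSVC())
      model.fit(train_docs, train_labels)

      # Phase c): classify an unseen NP-reduced document.
      print(model.predict(["noun_phrase semantic_aggregation retrieval"]))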
  9. Thirion, B.; Leroy, J.P.; Baudic, F.; Douyère, M.; Piot, J.; Darmoni, S.J.: SDI selecting, decribing, and indexing : did you mean automatically? (2001) 0.01
    0.011691578 = product of:
      0.05845789 = sum of:
        0.05845789 = weight(_text_:b in 6198) [ClassicSimilarity], result of:
          0.05845789 = score(doc=6198,freq=2.0), product of:
            0.1244487 = queryWeight, product of:
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.03512561 = queryNorm
            0.46973482 = fieldWeight in 6198, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.09375 = fieldNorm(doc=6198)
      0.2 = coord(1/5)
    
  10. Wiesenmüller, H.: DNB-Sacherschließung : Neues für die Reihen A und B (2019) 0.01
    0.010125204 = product of:
      0.05062602 = sum of:
        0.05062602 = weight(_text_:b in 5212) [ClassicSimilarity], result of:
          0.05062602 = score(doc=5212,freq=6.0), product of:
            0.1244487 = queryWeight, product of:
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.03512561 = queryNorm
            0.40680233 = fieldWeight in 5212, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.046875 = fieldNorm(doc=5212)
      0.2 = coord(1/5)
    
    Abstract
    "Alle paar Jahre wird die Bibliothekscommunity mit Veränderungen in der inhaltlichen Erschließung durch die Deutsche Nationalbibliothek konfrontiert. Sicher werden sich viele noch an die Einschnitte des Jahres 2014 für die Reihe A erinnern: Seither werden u.a. Ratgeber, Sprachwörterbücher, Reiseführer und Kochbücher nicht mehr mit Schlagwörtern erschlossen (vgl. das DNB-Konzept von 2014). Das Jahr 2017 brachte die Einführung der maschinellen Indexierung für die Reihen B und H bei gleichzeitigem Verlust der DDC-Tiefenerschließung (vgl. DNB-Informationen von 2017). Virulent war seither die Frage, was mit der Reihe A passieren würde. Seit wenigen Tagen kann man dies nun auf der Website der DNB nachlesen. (Nebenbei: Es ist zu befürchten, dass viele Links in diesem Blog-Beitrag in absehbarer Zeit nicht mehr funktionieren werden, da ein Relaunch der DNB-Website angekündigt ist. Wie beim letzten Mal wird es vermutlich auch diesmal keine Weiterleitungen von den alten auf die neuen URLs geben.)"
    Source
    https://www.basiswissen-rda.de/dnb-sacherschliessung-reihen-a-und-b/
  11. Thönssen, B.: Automatische Indexierung und Schnittstellen zu Thesauri (1988) 0.01
    0.009742982 = product of:
      0.04871491 = sum of:
        0.04871491 = weight(_text_:b in 30) [ClassicSimilarity], result of:
          0.04871491 = score(doc=30,freq=2.0), product of:
            0.1244487 = queryWeight, product of:
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.03512561 = queryNorm
            0.3914457 = fieldWeight in 30, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.078125 = fieldNorm(doc=30)
      0.2 = coord(1/5)
    
  12. Biebricher, P.; Fuhr, N.; Niewelt, B.: Der AIR-Retrievaltest (1986) 0.01
    0.009742982 = product of:
      0.04871491 = sum of:
        0.04871491 = weight(_text_:b in 4040) [ClassicSimilarity], result of:
          0.04871491 = score(doc=4040,freq=2.0), product of:
            0.1244487 = queryWeight, product of:
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.03512561 = queryNorm
            0.3914457 = fieldWeight in 4040, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.078125 = fieldNorm(doc=4040)
      0.2 = coord(1/5)
    
  13. Greiner-Petter, A.; Schubotz, M.; Cohl, H.S.; Gipp, B.: Semantic preserving bijective mappings for expressions involving special functions between computer algebra systems and document preparation systems (2019) 0.01
    0.009697999 = product of:
      0.024244998 = sum of:
        0.019485964 = weight(_text_:b in 5499) [ClassicSimilarity], result of:
          0.019485964 = score(doc=5499,freq=2.0), product of:
            0.1244487 = queryWeight, product of:
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.03512561 = queryNorm
            0.15657827 = fieldWeight in 5499, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.03125 = fieldNorm(doc=5499)
        0.004759035 = product of:
          0.01903614 = sum of:
            0.01903614 = weight(_text_:22 in 5499) [ClassicSimilarity], result of:
              0.01903614 = score(doc=5499,freq=2.0), product of:
                0.1230039 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03512561 = queryNorm
                0.15476047 = fieldWeight in 5499, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5499)
          0.25 = coord(1/4)
      0.4 = coord(2/5)
    
    Date
    20. 1.2015 18:30:22
  14. Rasmussen, E.M.: Indexing and retrieval for the Web (2002) 0.01
    0.008434919 = product of:
      0.042174593 = sum of:
        0.042174593 = weight(_text_:91 in 4285) [ClassicSimilarity], result of:
          0.042174593 = score(doc=4285,freq=2.0), product of:
            0.19572705 = queryWeight, product of:
              5.5722036 = idf(docFreq=456, maxDocs=44218)
              0.03512561 = queryNorm
            0.21547656 = fieldWeight in 4285, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5722036 = idf(docFreq=456, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4285)
      0.2 = coord(1/5)
    
    Source
    Annual review of information science and technology. 37(2003), S.91-126
  15. SIGIR'92 : Proceedings of the 15th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (1992) 0.01
    0.0076250895 = product of:
      0.038125448 = sum of:
        0.038125448 = weight(_text_:b in 6671) [ClassicSimilarity], result of:
          0.038125448 = score(doc=6671,freq=10.0), product of:
            0.1244487 = queryWeight, product of:
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.03512561 = queryNorm
            0.30635473 = fieldWeight in 6671, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.02734375 = fieldNorm(doc=6671)
      0.2 = coord(1/5)
    
    Content
    HARMAN, D.: Relevance feedback revisited; AALBERSBERG, I.J.: Incremental relevance feedback; TAGUE-SUTCLIFFE, J.: Measuring the informativeness of a retrieval process; LEWIS, D.D.: An evaluation of phrasal and clustered representations on a text categorization task; BLOSSEVILLE, M.J., G. HÉBRAIL, M.G. MONTEIL u. N. PÉNOT: Automatic document classification: natural language processing, statistical analysis, and expert system techniques used together; MASAND, B., G. LINOFF u. D. WALTZ: Classifying news stories using memory based reasoning; KEEN, E.M.: Term position ranking: some new test results; CROUCH, C.J. u. B. YANG: Experiments in automatic statistical thesaurus construction; GREFENSTETTE, G.: Use of syntactic context to produce term association lists for text retrieval; ANICK, P.G. u. R.A. FLYNN: Versioning a full-text information retrieval system; BURKOWSKI, F.J.: Retrieval activities in a database consisting of heterogeneous collections; DEERWESTER, S.C., K. WACLENA u. M. LaMAR: A textual object management system; NIE, J.-Y.: Towards a probabilistic modal logic for semantic-based information retrieval; WANG, A.W., S.K.M. WONG u. Y.Y. YAO: An analysis of vector space models based on computational geometry; BARTELL, B.T., G.W. COTTRELL u. R.K. BELEW: Latent semantic indexing is an optimal special case of multidimensional scaling; GLAVITSCH, U. u. P. SCHÄUBLE: A system for retrieving speech documents; MARGULIS, E.L.: N-Poisson document modelling; HESS, M.: An incrementally extensible document retrieval system based on linguistic and logical principles; COOPER, W.S., F.C. GEY u. D.P. DABNEY: Probabilistic retrieval based on staged logistic regression; FUHR, N.: Integration of probabilistic fact and text retrieval; CROFT, B., L.A. SMITH u. H. TURTLE: A loosely-coupled integration of a text retrieval system and an object-oriented database system; DUMAIS, S.T. u. J. NIELSEN: Automating the assignment of submitted manuscripts to reviewers; GOST, M.A. u. M. MASOTTI: Design of an OPAC database to permit different subject searching accesses; ROBERTSON, A.M. u. P. WILLETT: Searching for historical word forms in a database of 17th century English text using spelling correction methods; FOX, E.A., Q.F. CHEN u. L.S. HEATH: A faster algorithm for constructing minimal perfect hash functions; MOFFAT, A. u. J. ZOBEL: Parameterised compression for sparse bitmaps; GRANDI, F., P. TIBERIO u. P. ZEZULA: Frame-sliced partitioned parallel signature files; ALLEN, B.: Cognitive differences in end user searching of a CD-ROM index; SONNENWALD, D.H.: Developing a theory to guide the process of designing information retrieval systems; CUTTING, D.R., J.O. PEDERSEN, D. KARGER u. J.W. TUKEY: Scatter/Gather: a cluster-based approach to browsing large document collections; CHALMERS, M. u. P. CHITSON: Bead: Explorations in information visualization; WILLIAMSON, C. u. B. SHNEIDERMAN: The dynamic HomeFinder: evaluating dynamic queries in a real-estate information exploration system
  16. Krutulis, J.D.; Jacob, E.K.: ¬A theoretical model for the study of emergent structure in adaptive information networks (1995) 0.01
    0.0068200873 = product of:
      0.034100436 = sum of:
        0.034100436 = weight(_text_:b in 3353) [ClassicSimilarity], result of:
          0.034100436 = score(doc=3353,freq=2.0), product of:
            0.1244487 = queryWeight, product of:
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.03512561 = queryNorm
            0.27401197 = fieldWeight in 3353, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3353)
      0.2 = coord(1/5)
    
    Source
    Connectedness: information, systems, people, organizations. Proceedings of CAIS/ACSI 95, the proceedings of the 23rd Annual Conference of the Canadian Association for Information Science. Ed. by Hope A. Olson and Denis B. Ward
  17. Siebenkäs, A.; Markscheffel, B.: Conception of a workflow for the semi-automatic construction of a thesaurus for the German printing industry (2015) 0.01
    0.0068200873 = product of:
      0.034100436 = sum of:
        0.034100436 = weight(_text_:b in 2091) [ClassicSimilarity], result of:
          0.034100436 = score(doc=2091,freq=2.0), product of:
            0.1244487 = queryWeight, product of:
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.03512561 = queryNorm
            0.27401197 = fieldWeight in 2091, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2091)
      0.2 = coord(1/5)
    
  18. Wiesenmüller, H.: Maschinelle Indexierung am Beispiel der DNB : Analyse und Entwicklungsmöglichkeiten (2018) 0.01
    0.0068200873 = product of:
      0.034100436 = sum of:
        0.034100436 = weight(_text_:b in 5209) [ClassicSimilarity], result of:
          0.034100436 = score(doc=5209,freq=2.0), product of:
            0.1244487 = queryWeight, product of:
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.03512561 = queryNorm
            0.27401197 = fieldWeight in 5209, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5209)
      0.2 = coord(1/5)
    
    Abstract
    The article examines the results of the procedure for the automatic assignment of subject headings used at the Deutsche Nationalbibliothek (DNB), which since 2017 has also been applied to the print editions of series B and H of the Deutsche Nationalbibliografie. The central problem areas are presented and illustrated with examples - for instance, that not every word appearing in a table of contents actually expresses a thematic aspect, and that the software very often fails to recognize corporate bodies and other named entities. The machine-generated results are currently very unsatisfactory. The article concludes with considerations on possible improvements and sensible strategies.
  19. Experimentelles und praktisches Information Retrieval : Festschrift für Gerhard Lustig (1992) 0.01
    0.005845789 = product of:
      0.029228944 = sum of:
        0.029228944 = weight(_text_:b in 4) [ClassicSimilarity], result of:
          0.029228944 = score(doc=4,freq=2.0), product of:
            0.1244487 = queryWeight, product of:
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.03512561 = queryNorm
            0.23486741 = fieldWeight in 4, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.046875 = fieldNorm(doc=4)
      0.2 = coord(1/5)
    
    Content
    Contains the contributions: SALTON, G.: Effective text understanding in information retrieval; KRAUSE, J.: Intelligentes Information Retrieval; FUHR, N.: Konzepte zur Gestaltung zukünftiger Information-Retrieval-Systeme; HÜTHER, H.: Überlegungen zu einem mathematischen Modell für die Type-Token-, die Grundform-Token und die Grundform-Type-Relation; KNORZ, G.: Automatische Generierung inferentieller Links in und zwischen Hyperdokumenten; KONRAD, E.: Zur Effektivitätsbewertung von Information-Retrieval-Systemen; HENRICHS, N.: Retrievalunterstützung durch automatisch generierte Wortfelder; LÜCK, W., W. RITTBERGER u. M. SCHWANTNER: Der Einsatz des Automatischen Indexierungs- und Retrieval-Systems (AIR) im Fachinformationszentrum Karlsruhe; REIMER, U.: Verfahren der Automatischen Indexierung. Benötigtes Vorwissen und Ansätze zu seiner automatischen Akquisition: Ein Überblick; ENDRES-NIGGEMEYER, B.: Dokumentrepräsentation: Ein individuelles prozedurales Modell des Abstracting, des Indexierens und Klassifizierens; SEELBACH, D.: Zur Entwicklung von zwei- und mehrsprachigen lexikalischen Datenbanken und Terminologiedatenbanken; ZIMMERMANN, H.: Der Einfluß der Sprachbarrieren in Europa und Möglichkeiten zu ihrer Minderung; LENDERS, W.: Wörter zwischen Welt und Wissen; PANYR, J.: Frames, Thesauri und automatische Klassifikation (Clusteranalyse); HAHN, U.: Forschungsstrategien und Erkenntnisinteressen in der anwendungsorientierten automatischen Sprachverarbeitung. Überlegungen zu einer ingenieurorientierten Computerlinguistik; KUHLEN, R.: Hypertext und Information Retrieval - mehr als Browsing und Suche.
  20. Cui, H.; Boufford, D.; Selden, P.: Semantic annotation of biosystematics literature without training examples (2010) 0.01
    0.005845789 = product of:
      0.029228944 = sum of:
        0.029228944 = weight(_text_:b in 3422) [ClassicSimilarity], result of:
          0.029228944 = score(doc=3422,freq=2.0), product of:
            0.1244487 = queryWeight, product of:
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.03512561 = queryNorm
            0.23486741 = fieldWeight in 3422, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.046875 = fieldNorm(doc=3422)
      0.2 = coord(1/5)
    
    Abstract
    This article presents an unsupervised algorithm for semantic annotation of morphological descriptions of whole organisms. The algorithm is able to annotate plain text descriptions with high accuracy at the clause level by exploiting the corpus itself. In other words, the algorithm does not need lexicons, syntactic parsers, training examples, or annotation templates. The evaluation on two real-life description collections in botany and paleontology shows that the algorithm has the following desirable features: (a) reduces/eliminates manual labor required to compile dictionaries and prepare source documents; (b) improves annotation coverage: the algorithm annotates what appears in documents and is not limited by predefined and often incomplete templates; (c) learns clean and reusable concepts: the algorithm learns organ names and character states that can be used to construct reusable domain lexicons, as opposed to collection-dependent patterns whose applicability is often limited to a particular collection; (d) insensitive to collection size; and (e) runs in linear time with respect to the number of clauses to be annotated.

Languages

  • e 26
  • d 25
  • f 1
  • m 1
  • ru 1

Types

  • a 44
  • el 7
  • s 4
  • x 4
  • m 2
  • p 1