Search (47 results, page 1 of 3)

  • theme_ss:"Automatisches Indexieren"
  1. Hodges, P.R.: Keyword in title indexes : effectiveness of retrieval in computer searches (1983) 0.06
    0.05708161 = product of:
      0.085622415 = sum of:
        0.061794292 = weight(_text_:social in 5001) [ClassicSimilarity], result of:
          0.061794292 = score(doc=5001,freq=2.0), product of:
            0.20037155 = queryWeight, product of:
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.050248925 = queryNorm
            0.30839854 = fieldWeight in 5001, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5001)
        0.023828125 = product of:
          0.04765625 = sum of:
            0.04765625 = weight(_text_:22 in 5001) [ClassicSimilarity], result of:
              0.04765625 = score(doc=5001,freq=2.0), product of:
                0.17596318 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050248925 = queryNorm
                0.2708308 = fieldWeight in 5001, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5001)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    A study was done to test the effectiveness of retrieval using title word searching. It was based on actual search profiles used in the Mechanized Information Center at Ohio State University, in order to replicate actual searching conditions as closely as possible. Fewer than 50% of the relevant titles were retrieved by keywords in titles. The low rate of retrieval can be attributed to three sources: the titles themselves, user and information specialist ignorance of the subject vocabulary in use, and general language problems. Across fields it was found that the social sciences had the best retrieval rate, science the next best, and arts and humanities the lowest. Ways to enhance and supplement keyword-in-title searching on the computer and in printed indexes are discussed.
    Date
    14. 3.1996 13:22:21
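
The explain tree above each abstract is Lucene's ClassicSimilarity (TF-IDF) output, and its numbers can be reproduced directly. The sketch below recomputes the 0.05708161 of result 1 from the values in its tree, using Lucene's ClassicSimilarity definitions tf = sqrt(freq) and idf = 1 + ln(maxDocs/(docFreq + 1)); queryNorm, fieldNorm, and the coord factors are copied verbatim from the tree. It is a cross-check added for illustration, not part of the search output.

```python
import math

QUERY_NORM = 0.050248925  # queryNorm, copied from the explain tree above

def idf(doc_freq, max_docs):
    # ClassicSimilarity: idf(t) = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def tf(freq):
    # ClassicSimilarity: tf(t in d) = sqrt(freq)
    return math.sqrt(freq)

def term_score(freq, doc_freq, max_docs, field_norm):
    # Each leaf of the tree is queryWeight * fieldWeight.
    query_weight = idf(doc_freq, max_docs) * QUERY_NORM
    field_weight = tf(freq) * idf(doc_freq, max_docs) * field_norm
    return query_weight * field_weight

# Result 1 (doc 5001): two of three query clauses match, hence coord(2/3);
# the "22" term sits in a nested clause with its own coord(1/2).
social = term_score(freq=2.0, doc_freq=2228, max_docs=44218, field_norm=0.0546875)
t22    = term_score(freq=2.0, doc_freq=3622, max_docs=44218, field_norm=0.0546875)

score = (social + t22 * 0.5) * (2.0 / 3.0)
print(f"{score:.8f}")  # ~0.05708161, matching the listed score
```

Every other explain tree in this list follows the same pattern: each matching term contributes queryWeight × fieldWeight, the contributions are summed, and the sum is scaled by coord(matching clauses / query clauses).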
  2. Moreno, J.M.T.: Automatic text summarization (2014) 0.05
    0.0501267 = product of:
      0.07519005 = sum of:
        0.04413878 = weight(_text_:social in 1518) [ClassicSimilarity], result of:
          0.04413878 = score(doc=1518,freq=2.0), product of:
            0.20037155 = queryWeight, product of:
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.050248925 = queryNorm
            0.22028469 = fieldWeight in 1518, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1518)
        0.03105127 = product of:
          0.06210254 = sum of:
            0.06210254 = weight(_text_:networks in 1518) [ClassicSimilarity], result of:
              0.06210254 = score(doc=1518,freq=2.0), product of:
                0.23767339 = queryWeight, product of:
                  4.72992 = idf(docFreq=1060, maxDocs=44218)
                  0.050248925 = queryNorm
                0.26129362 = fieldWeight in 1518, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.72992 = idf(docFreq=1060, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1518)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This new textbook examines the motivations for and the different algorithms of automatic document summarization (ADS), presenting an up-to-date survey of the state of the art. The book shows the main problems of ADS, the difficulties involved, and the solutions provided by the community. It presents recent advances in ADS, as well as current applications and trends. The approaches covered are statistical, linguistic, and symbolic, and several examples are included to clarify the theoretical concepts. The books currently available in the area of automatic document summarization are not recent, while powerful algorithms with several applications of ADS have been developed in recent years; the development of recent technology has shaped both the algorithms and their applications. The massive use of social networks and new forms of technology requires the adaptation of the classical text-summarization methods. This is a new textbook on automatic text summarization, based on teaching materials used in one- and two-semester courses. It presents an extensive state of the art and describes new systems on the subject. Previous automatic summarization books have been either collections of specialized papers, or authored books with only a chapter or two devoted to the field as a whole; moreover, the classic books on the subject are no longer current.
  3. Mesquita, L.A.P.; Souza, R.R.; Baracho Porto, R.M.A.: Noun phrases in automatic indexing : a structural analysis of the distribution of relevant terms in doctoral theses (2014) 0.05
    0.049851045 = product of:
      0.07477657 = sum of:
        0.061160497 = weight(_text_:social in 1442) [ClassicSimilarity], result of:
          0.061160497 = score(doc=1442,freq=6.0), product of:
            0.20037155 = queryWeight, product of:
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.050248925 = queryNorm
            0.30523545 = fieldWeight in 1442, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.03125 = fieldNorm(doc=1442)
        0.013616072 = product of:
          0.027232144 = sum of:
            0.027232144 = weight(_text_:22 in 1442) [ClassicSimilarity], result of:
              0.027232144 = score(doc=1442,freq=2.0), product of:
                0.17596318 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050248925 = queryNorm
                0.15476047 = fieldWeight in 1442, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1442)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The main objective of this research was to analyze whether there is a characteristic distribution behavior of relevant terms over a scientific text that could serve as a criterion for their automatic indexing. The terms considered in this study were only the full noun phrases contained in the texts themselves. The texts analyzed were 98 doctoral theses from the eight areas of knowledge of a single university. Initially, 20 full noun phrases were automatically extracted from each text as candidates for its most relevant terms, and the author of each text assigned each of the 20 noun phrases a relevance value from 0 (not relevant) to 6 (highly relevant). Only 22.1% of the noun phrases were considered not relevant. The relevance values assigned by the authors were then associated with the terms' positions in the text, with each full noun phrase found in the text counted as a valid linear position. The resulting distributions were examined for two types of position: linear, with values consolidated into ten equal consecutive parts; and structural, considering parts of the text (such as introduction, development, and conclusion). A result of considerable importance is that all areas of knowledge belonging to the natural sciences showed one characteristic distribution of relevant terms, while all areas belonging to the social sciences showed another, shared among themselves but distinct from that of the natural sciences. The difference in distribution behavior between the natural and social sciences can be clearly visualized through graphs. All behaviors, including the general behavior of all areas of knowledge together, were characterized in polynomial equations and can be applied in the future as criteria for automatic indexing. To date this work is novel for two reasons: it presents a method for characterizing the distribution of relevant terms in a scientific text, and, through this method, it points out a quantitative difference between the natural and social sciences.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
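
As an illustration of the positional analysis described in the abstract above, the sketch below maps noun-phrase occurrences to normalized linear positions, consolidates them into ten equal consecutive parts, and fits the mean relevance per part with a polynomial, mirroring the paper's linear-position analysis as we read it. The input data, the decile count of ten, and the polynomial degree are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def decile_profile(positions, relevances, bins=10):
    """Mean author-assigned relevance (0..6) per consecutive tenth of a text."""
    positions = np.asarray(positions, dtype=float)
    relevances = np.asarray(relevances, dtype=float)
    norm = positions / positions.max()                      # linear position in (0, 1]
    part = np.minimum((norm * bins).astype(int), bins - 1)  # decile index 0..9
    return np.array([relevances[part == b].mean() if np.any(part == b) else np.nan
                     for b in range(bins)])

# Illustrative (made-up) data: 20 candidate noun phrases from one thesis,
# each with a linear position in the text and an author relevance score.
rng = np.random.default_rng(0)
pos = np.sort(rng.integers(1, 500, size=20))
rel = rng.integers(0, 7, size=20)

profile = decile_profile(pos, rel)
x = np.arange(10)
ok = ~np.isnan(profile)                          # skip deciles with no noun phrases
coeffs = np.polyfit(x[ok], profile[ok], deg=3)   # polynomial characterization
print(profile.round(2))
print(coeffs.round(4))
```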
  4. Wolfekuhler, M.R.; Punch, W.F.: Finding salient features for personal Web pages categories (1997) 0.04
    0.044866603 = product of:
      0.1345998 = sum of:
        0.1345998 = sum of:
          0.08694356 = weight(_text_:networks in 2673) [ClassicSimilarity], result of:
            0.08694356 = score(doc=2673,freq=2.0), product of:
              0.23767339 = queryWeight, product of:
                4.72992 = idf(docFreq=1060, maxDocs=44218)
                0.050248925 = queryNorm
              0.36581108 = fieldWeight in 2673, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.72992 = idf(docFreq=1060, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2673)
          0.04765625 = weight(_text_:22 in 2673) [ClassicSimilarity], result of:
            0.04765625 = score(doc=2673,freq=2.0), product of:
              0.17596318 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050248925 = queryNorm
              0.2708308 = fieldWeight in 2673, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2673)
      0.33333334 = coord(1/3)
    
    Date
    1. 8.1996 22:08:06
    Source
    Computer networks and ISDN systems. 29(1997) no.8, S.1147-1156
  5. Golub, K.; Lykke, M.; Tudhope, D.: Enhancing social tagging with automated keywords from the Dewey Decimal Classification (2014) 0.04
    0.041614447 = product of:
      0.12484334 = sum of:
        0.12484334 = weight(_text_:social in 2918) [ClassicSimilarity], result of:
          0.12484334 = score(doc=2918,freq=16.0), product of:
            0.20037155 = queryWeight, product of:
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.050248925 = queryNorm
            0.6230592 = fieldWeight in 2918, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2918)
      0.33333334 = coord(1/3)
    
    Abstract
    Purpose - The purpose of this paper is to explore the potential of applying the Dewey Decimal Classification (DDC) as an established knowledge organization system (KOS) for enhancing social tagging, with the ultimate purpose of improving subject indexing and information retrieval. Design/methodology/approach - Over 11,000 Intute metadata records in politics were used. In total, 28 politics students were each given four tasks, in which a total of 60 resources were tagged in two different configurations: one with uncontrolled social tags only, and another with uncontrolled social tags as well as suggestions from a controlled vocabulary. The controlled vocabulary was the DDC, which also comprised mappings from the Library of Congress Subject Headings. Findings - The results demonstrate the importance of controlled vocabulary suggestions for indexing and retrieval: they help produce ideas of which tags to use, make it easier to find focus for the tagging, ensure consistency, and increase the number of access points in retrieval. The value and usefulness of the suggestions proved to depend on their quality, both in conceptual relevance to the user and in appropriateness of the terminology. Originality/value - No research has investigated the enhancement of social tagging with suggestions from the DDC, an established KOS, in a user trial comparing social tagging alone with social tagging enhanced by such suggestions. This paper is a final reflection on all aspects of the study.
    Theme
    Social tagging
  6. Garfield, E.: KeyWords Plus takes you beyond title words (1990) 0.04
    0.041196197 = product of:
      0.123588584 = sum of:
        0.123588584 = weight(_text_:social in 4344) [ClassicSimilarity], result of:
          0.123588584 = score(doc=4344,freq=2.0), product of:
            0.20037155 = queryWeight, product of:
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.050248925 = queryNorm
            0.6167971 = fieldWeight in 4344, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.109375 = fieldNorm(doc=4344)
      0.33333334 = coord(1/3)
    
    Issue
    Pt.2: Expanded journal coverage for Current Contents on Diskette, includes social and behavioral sciences
  7. Mao, J.; Xu, W.; Yang, Y.; Wang, J.; Yuille, A.L.: Explain images with multimodal recurrent neural networks (2014) 0.02
    0.021512954 = product of:
      0.06453886 = sum of:
        0.06453886 = product of:
          0.12907772 = sum of:
            0.12907772 = weight(_text_:networks in 1557) [ClassicSimilarity], result of:
              0.12907772 = score(doc=1557,freq=6.0), product of:
                0.23767339 = queryWeight, product of:
                  4.72992 = idf(docFreq=1060, maxDocs=44218)
                  0.050248925 = queryNorm
                0.5430886 = fieldWeight in 1557, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.72992 = idf(docFreq=1060, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1557)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    In this paper, we present a multimodal Recurrent Neural Network (m-RNN) model for generating novel sentence descriptions to explain the content of images. It directly models the probability distribution of generating a word given previous words and the image. Image descriptions are generated by sampling from this distribution. The model consists of two sub-networks: a deep recurrent neural network for sentences and a deep convolutional network for images. These two sub-networks interact with each other in a multimodal layer to form the whole m-RNN model. The effectiveness of our model is validated on three benchmark datasets (IAPR TC-12 [8], Flickr 8K [28], and Flickr 30K [13]). Our model outperforms the state-of-the-art generative method. In addition, the m-RNN model can be applied to retrieval tasks for retrieving images or sentences, and achieves significant performance improvement over the state-of-the-art methods which directly optimize the ranking objective function for retrieval.
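
As a toy illustration of the fusion step described in this abstract, the sketch below projects a word representation, a recurrent state, and a CNN image feature into a shared multimodal layer and reads out a next-word distribution. All dimensions, the tanh nonlinearity, the weight names, and the random initialization are assumptions made for this sketch; the actual m-RNN is trained end to end on image-sentence pairs.

```python
import numpy as np

rng = np.random.default_rng(0)
d_w, d_r, d_img, d_m, vocab = 128, 256, 512, 256, 10_000

# Randomly initialized projection weights (training is out of scope here).
W_w = rng.normal(0, 0.01, (d_m, d_w))    # word embedding   -> multimodal layer
W_r = rng.normal(0, 0.01, (d_m, d_r))    # recurrent state  -> multimodal layer
W_v = rng.normal(0, 0.01, (d_m, d_img))  # image feature    -> multimodal layer
W_o = rng.normal(0, 0.01, (vocab, d_m))  # multimodal layer -> vocabulary logits

def next_word_distribution(w_t, r_t, img):
    """Fuse the text and image sub-network outputs; return P(next word)."""
    m = np.tanh(W_w @ w_t + W_r @ r_t + W_v @ img)  # multimodal layer
    logits = W_o @ m
    e = np.exp(logits - logits.max())                # numerically stable softmax
    return e / e.sum()

p = next_word_distribution(rng.normal(size=d_w),
                           rng.normal(size=d_r),
                           rng.normal(size=d_img))
print(p.shape, round(p.sum(), 6))  # (10000,) 1.0
```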
  8. Krutulis, J.D.; Jacob, E.K.: ¬A theoretical model for the study of emergent structure in adaptive information networks (1995) 0.02
    0.020492794 = product of:
      0.06147838 = sum of:
        0.06147838 = product of:
          0.12295676 = sum of:
            0.12295676 = weight(_text_:networks in 3353) [ClassicSimilarity], result of:
              0.12295676 = score(doc=3353,freq=4.0), product of:
                0.23767339 = queryWeight, product of:
                  4.72992 = idf(docFreq=1060, maxDocs=44218)
                  0.050248925 = queryNorm
                0.517335 = fieldWeight in 3353, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.72992 = idf(docFreq=1060, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3353)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Attempts to automate classification have focused on mimicking the intellectual processes whereby human classifiers assign entities to mutually exclusive groups that exhibit one or more shared characteristics. A more viable approach might be to construct an adaptive retrieval system that produces groupings of related entities by generating dynamic categories based on document content and on the system's emergent structure as it adapts to modifications in the database and to observed patterns of access. Presents a theoretical model for adaptive information networks using relevance feedback and genetic algorithms to generate emergent structure.
  9. Voorhees, E.M.: Implementing agglomerative hierarchic clustering algorithms for use in document retrieval (1986) 0.02
    0.018154763 = product of:
      0.054464288 = sum of:
        0.054464288 = product of:
          0.108928576 = sum of:
            0.108928576 = weight(_text_:22 in 402) [ClassicSimilarity], result of:
              0.108928576 = score(doc=402,freq=2.0), product of:
                0.17596318 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050248925 = queryNorm
                0.61904186 = fieldWeight in 402, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=402)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Information processing and management. 22(1986) no.6, S.465-476
  10. Ma, N.; Zheng, H.T.; Xiao, X.: ¬An ontology-based latent semantic indexing approach using long short-term memory networks (2017) 0.02
    0.01792746 = product of:
      0.05378238 = sum of:
        0.05378238 = product of:
          0.10756476 = sum of:
            0.10756476 = weight(_text_:networks in 3810) [ClassicSimilarity], result of:
              0.10756476 = score(doc=3810,freq=6.0), product of:
                0.23767339 = queryWeight, product of:
                  4.72992 = idf(docFreq=1060, maxDocs=44218)
                  0.050248925 = queryNorm
                0.45257387 = fieldWeight in 3810, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.72992 = idf(docFreq=1060, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3810)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Online data is increasing at an astonishing rate, and the issue of semantic indexing remains an open question. Ontologies and knowledge bases have been widely used to optimize performance; however, researchers have placed increasing emphasis on the internal relations of ontologies while neglecting the latent semantic relations between ontologies and documents. They generally annotate instances mentioned in documents that are related to concepts in the ontologies. In this paper, we propose an Ontology-based Latent Semantic Indexing approach utilizing Long Short-Term Memory networks (LSTM-OLSI). We utilize an importance-aware topic model to extract document-level semantic features and leverage ontologies to extract word-level contextual features. Then we encode these two levels of features and match their embedding vectors utilizing LSTM networks. Finally, the experimental results reveal that LSTM-OLSI outperforms existing techniques and demonstrates deep comprehension of instances and articles.
  11. Karpathy, A.; Fei-Fei, L.: Deep visual-semantic alignments for generating image descriptions (2015) 0.02
    0.017565252 = product of:
      0.052695755 = sum of:
        0.052695755 = product of:
          0.10539151 = sum of:
            0.10539151 = weight(_text_:networks in 1868) [ClassicSimilarity], result of:
              0.10539151 = score(doc=1868,freq=4.0), product of:
                0.23767339 = queryWeight, product of:
                  4.72992 = idf(docFreq=1060, maxDocs=44218)
                  0.050248925 = queryNorm
                0.44343 = fieldWeight in 1868, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.72992 = idf(docFreq=1060, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1868)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    We present a model that generates free-form natural language descriptions of image regions. Our model leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between text and visual data. Our approach is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate the effectiveness of our alignment model with ranking experiments on Flickr8K, Flickr30K and COCO datasets, where we substantially improve on the state of the art. We then show that the sentences created by our generative model outperform retrieval baselines on the three aforementioned datasets and a new dataset of region-level annotations.
  12. Fuhr, N.; Niewelt, B.: ¬Ein Retrievaltest mit automatisch indexierten Dokumenten (1984) 0.02
    0.015885416 = product of:
      0.04765625 = sum of:
        0.04765625 = product of:
          0.0953125 = sum of:
            0.0953125 = weight(_text_:22 in 262) [ClassicSimilarity], result of:
              0.0953125 = score(doc=262,freq=2.0), product of:
                0.17596318 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050248925 = queryNorm
                0.5416616 = fieldWeight in 262, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=262)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    20.10.2000 12:22:23
  13. Hlava, M.M.K.: Automatic indexing : comparing rule-based and statistics-based indexing systems (2005) 0.02
    0.015885416 = product of:
      0.04765625 = sum of:
        0.04765625 = product of:
          0.0953125 = sum of:
            0.0953125 = weight(_text_:22 in 6265) [ClassicSimilarity], result of:
              0.0953125 = score(doc=6265,freq=2.0), product of:
                0.17596318 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050248925 = queryNorm
                0.5416616 = fieldWeight in 6265, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6265)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Information outlook. 9(2005) no.8, S.22-23
  14. Vilares, D.; Alonso, M.A.; Gómez-Rodríguez, C.: On the usefulness of lexical and syntactic processing in polarity classification of Twitter messages (2015) 0.01
    0.014712928 = product of:
      0.04413878 = sum of:
        0.04413878 = weight(_text_:social in 2161) [ClassicSimilarity], result of:
          0.04413878 = score(doc=2161,freq=2.0), product of:
            0.20037155 = queryWeight, product of:
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.050248925 = queryNorm
            0.22028469 = fieldWeight in 2161, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2161)
      0.33333334 = coord(1/3)
    
    Abstract
    Millions of micro texts are published every day on Twitter. Identifying the sentiment present in them can be helpful for measuring the frame of mind of the public, their satisfaction with respect to a product, or their support of a social event. In this context, polarity classification is a subfield of sentiment analysis focused on determining whether the content of a text is objective or subjective, and in the latter case, if it conveys a positive or a negative opinion. Most polarity detection techniques tend to take into account individual terms in the text and even some degree of linguistic knowledge, but they do not usually consider syntactic relations between words. This article explores how relating lexical, syntactic, and psychometric information can be helpful to perform polarity classification on Spanish tweets. We provide an evaluation for both shallow and deep linguistic perspectives. Empirical results show an improved performance of syntactic approaches over pure lexical models when using large training sets to create a classifier, but this tendency is reversed when small training collections are used.
  15. Smiraglia, R.P.; Cai, X.: Tracking the evolution of clustering, machine learning, automatic indexing and automatic classification in knowledge organization (2017) 0.01
    0.014712928 = product of:
      0.04413878 = sum of:
        0.04413878 = weight(_text_:social in 3627) [ClassicSimilarity], result of:
          0.04413878 = score(doc=3627,freq=2.0), product of:
            0.20037155 = queryWeight, product of:
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.050248925 = queryNorm
            0.22028469 = fieldWeight in 3627, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3627)
      0.33333334 = coord(1/3)
    
    Abstract
    A very important extension of the traditional domain of knowledge organization (KO) arises from attempts to incorporate techniques devised in the computer science domain for automatic concept extraction and for grouping, categorizing, clustering and otherwise organizing knowledge using mechanical means. Four specific terms have emerged to identify the most prevalent techniques: machine learning, clustering, automatic indexing, and automatic classification. Our study presents three domain-analytical case studies in search of answers. The first case relies on citations located using the ISKO-supported "Knowledge Organization Bibliography." The second case relies on works in both Web of Science and SCOPUS. The third case applies co-word analysis and citation analysis to the contents of the papers in the present special issue. We observe scholars involved in "clustering" and "automatic classification" who share common thematic emphases. But we have found no coherence, no common activity and no social semantics. We have not found a research front, or a common teleology within the KO domain. We also have found a lively group of authors who have succeeded in submitting papers to this special issue, and their work quite interestingly aligns with the case studies we report. There is an emphasis on KO for information retrieval; there is much work on clustering (which involves conceptual points within texts) and automatic classification (which involves semantic groupings at the meta-document level).
  16. Giesselbach, S.; Estler-Ziegler, T.: Dokumente schneller analysieren mit Künstlicher Intelligenz (2021) 0.01
    0.014712928 = product of:
      0.04413878 = sum of:
        0.04413878 = weight(_text_:social in 128) [ClassicSimilarity], result of:
          0.04413878 = score(doc=128,freq=2.0), product of:
            0.20037155 = queryWeight, product of:
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.050248925 = queryNorm
            0.22028469 = fieldWeight in 128, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.0390625 = fieldNorm(doc=128)
      0.33333334 = coord(1/3)
    
    Abstract
    Artificial intelligence (AI) and natural language understanding (NLU) are changing many aspects of our everyday life and the way we work. NLU gained particular prominence through voice assistants such as Siri, Alexa, and Google Now. NLU offers companies and institutions the potential to make processes more efficient and to derive value from textual content: NLU solutions are able to analyze the content of complex, unstructured documents. For semantic text analysis, the NLU team at IAIS has developed language models trained with deep learning methods. The NLU suite analyzes documents, extracts key data, and, if required, even produces a structured summary. With these results, and also via the content of the documents themselves, documents can be compared or texts with similar information can be found. AI-based language models are clearly superior to classical keyword indexing, because they not only find texts containing predefined keywords but also search intelligently for terms that occur in similar contexts or are used as synonyms. The talk situates the terms "artificial intelligence" and "natural language understanding" and outlines possibilities, limits, current research directions, and methods. Practical examples then demonstrate how NLU can be used for automated document processing, for cataloguing large collections such as news and patents, and for the automated thematic grouping of social media posts and publications.
  17. Donahue, J.; Hendricks, L.A.; Guadarrama, S.; Rohrbach, M.; Venugopalan, S.; Saenko, K.; Darrell, T.: Long-term recurrent convolutional networks for visual recognition and description (2014) 0.01
    0.014637709 = product of:
      0.043913126 = sum of:
        0.043913126 = product of:
          0.08782625 = sum of:
            0.08782625 = weight(_text_:networks in 1873) [ClassicSimilarity], result of:
              0.08782625 = score(doc=1873,freq=4.0), product of:
                0.23767339 = queryWeight, product of:
                  4.72992 = idf(docFreq=1060, maxDocs=44218)
                  0.050248925 = queryNorm
                0.369525 = fieldWeight in 1873, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.72992 = idf(docFreq=1060, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1873)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent, or "temporally deep", are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large-scale visual learning which is end-to-end trainable, and demonstrate the value of these models on benchmark video recognition tasks, image description and retrieval problems, and video narration challenges. In contrast to current models which assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are "doubly deep" in that they can be compositional in spatial and temporal "layers". Such models may have advantages when target concepts are complex and/or training data are limited. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Long-term RNN models are appealing in that they can directly map variable-length inputs (e.g., video frames) to variable-length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent long-term models are directly connected to modern visual convnet models and can be jointly trained to simultaneously learn temporal dynamics and convolutional perceptual representations. Our results show such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined and/or optimized.
  18. Pfeifer, U.; Fuhr, N.; Huynh, T.: Searching structured documents with the enhanced retrieval functionality of freeWAIS-sf and SFgate (1995) 0.01
    0.014490593 = product of:
      0.04347178 = sum of:
        0.04347178 = product of:
          0.08694356 = sum of:
            0.08694356 = weight(_text_:networks in 2214) [ClassicSimilarity], result of:
              0.08694356 = score(doc=2214,freq=2.0), product of:
                0.23767339 = queryWeight, product of:
                  4.72992 = idf(docFreq=1060, maxDocs=44218)
                  0.050248925 = queryNorm
                0.36581108 = fieldWeight in 2214, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.72992 = idf(docFreq=1060, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2214)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Computer networks and ISDN systems. 27(1995) no.6, S.1027-36
  19. Fuhr, N.: Ranking-Experimente mit gewichteter Indexierung (1986) 0.01
    0.013616072 = product of:
      0.040848214 = sum of:
        0.040848214 = product of:
          0.08169643 = sum of:
            0.08169643 = weight(_text_:22 in 58) [ClassicSimilarity], result of:
              0.08169643 = score(doc=58,freq=2.0), product of:
                0.17596318 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050248925 = queryNorm
                0.46428138 = fieldWeight in 58, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=58)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    14. 6.2015 22:12:44
  20. Hauer, M.: Automatische Indexierung (2000) 0.01
    0.013616072 = product of:
      0.040848214 = sum of:
        0.040848214 = product of:
          0.08169643 = sum of:
            0.08169643 = weight(_text_:22 in 5887) [ClassicSimilarity], result of:
              0.08169643 = score(doc=5887,freq=2.0), product of:
                0.17596318 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050248925 = queryNorm
                0.46428138 = fieldWeight in 5887, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5887)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Wissen in Aktion: Wege des Knowledge Managements. 22. Online-Tagung der DGI, Frankfurt am Main, 2.-4.5.2000. Proceedings. Hrsg.: R. Schmidt

Languages

  • e 28
  • d 17
  • m 1
  • ru 1

Types

  • a 41
  • el 6
  • m 2
  • x 2
