Search (58 results, page 1 of 3)

  • theme_ss:"Automatisches Klassifizieren"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.24
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf
    Date
    8. 1.2013 10:22:32
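    The Hotho/Bloehdorn paper above boosts weak learners over term and concept features. As an illustration of the general boosting technique only, not the authors' specific method, here is a minimal AdaBoost with one-term decision stumps; the toy documents, labels, and all names are invented:

      import math

      # Toy corpus: documents as binary term vectors (concept features could be
      # appended as extra columns in the same way); labels are +1/-1.
      X = [(1,0,1,0), (1,1,0,0), (0,1,1,1), (0,0,1,1), (1,0,0,1), (0,1,0,0)]
      y = [ 1,         1,        -1,        -1,         1,        -1]

      def boost(X, y, rounds=10):
          """AdaBoost with one-feature decision stumps as weak learners."""
          n = len(X)
          w = [1.0 / n] * n                       # per-document weights
          ensemble = []                           # (alpha, feature, polarity)
          for _ in range(rounds):
              # Weak learner h(x) = polarity * (+1 if term f present else -1);
              # pick the (f, polarity) pair with the least weighted error.
              f, pol, err = min(
                  ((f, pol, sum(wi for wi, xi, yi in zip(w, X, y)
                                if pol * (1 if xi[f] else -1) != yi))
                   for f in range(len(X[0])) for pol in (1, -1)),
                  key=lambda t: t[2])
              err = max(err, 1e-9)
              alpha = 0.5 * math.log((1 - err) / err)
              ensemble.append((alpha, f, pol))
              # Re-weight: misclassified documents gain weight for the next round.
              w = [wi * math.exp(-alpha * yi * pol * (1 if xi[f] else -1))
                   for wi, xi, yi in zip(w, X, y)]
              s = sum(w)
              w = [wi / s for wi in w]
          return ensemble

      def predict(ensemble, x):
          vote = sum(a * p * (1 if x[f] else -1) for a, f, p in ensemble)
          return 1 if vote > 0 else -1

      model = boost(X, y)
      print([predict(model, x) for x in X])  # reproduces y on this toy set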
  2. Automatische Klassifikation und Extraktion in Documentum (2005) 0.02
    
    Content
    "LCI Comprend ist ab sofort als integriertes Modul für EMCs Content Management System Documentum verfügbar. LCI (Learning Computers International GmbH) hat mit Unterstützung von neeb & partner diese Technologie zur Dokumentenautomation transparent in Documentum integriert. Dies ist die erste bekannte Lösung für automatische, lernende Klassifikation und Extraktion, die direkt auf dem Documentum Datenbestand arbeitet und ohne zusätzliche externe Steuerung auskommt. Die LCI Information Capture Services (ICS) dienen dazu, jegliche Art von Dokument zu klassifizieren und Information daraus zu extrahieren. Das Dokument kann strukturiert, halbstrukturiert oder unstrukturiert sein. Somit können beispielsweise gescannte Formulare genauso verarbeitet werden wie Rechnungen oder E-Mails. Die Extraktions- und Klassifikationsvorschriften und die zu lernenden Beispieldokumente werden einfach interaktiv zusammengestellt und als XML-Struktur gespeichert. Zur Laufzeit wird das Projekt angewendet, um unbekannte Dokumente aufgrund von Regeln und gelernten Beispielen automatisch zu indexieren. Dokumente können damit entweder innerhalb von Documentum oder während des Imports verarbeitet werden. Der neue Server erlaubt das Einlesen von Dateien aus dem Dateisystem oder direkt von POPS-Konten, die Analyse der Dokumente und die automatische Erzeugung von Indexwerten bei der Speicherung in einer Documentum Ablageumgebung. Diese Indexwerte, die durch inhaltsbasierte, auch mehrthematische Klassifikation oder durch Extraktion gewonnen wurden, werden als vordefinierte Attribute mit dem Documentum-Objekt abgelegt. Handelt es sich um ein gescanntes Dokument oder ein Fax, wird automatisch die integrierte Volltext-Texterkennung durchgeführt."
  3. Koch, T.; Vizine-Goetz, D.: Automatic classification and content navigation support for Web services : DESIRE II cooperates with OCLC (1998) 0.02
    
    Abstract
    Emerging standards in knowledge representation and organization are preparing the way for distributed vocabulary support in Internet search services. NetLab researchers are exploring several innovative solutions for searching and browsing in the subject-based Internet gateway, Electronic Engineering Library, Sweden (EELS). The implementation of the EELS service is described, specifically, the generation of the robot-gathered database 'All' engineering and the automated application of the Ei thesaurus and classification scheme. NetLab and OCLC researchers are collaborating to investigate advanced solutions to automated classification in the DESIRE II context. A plan for furthering the development of distributed vocabulary support in Internet search services is offered.
  4. Koch, T.; Ardö, A.; Brümmer, A.: The building and maintenance of robot based internet search services : A review of current indexing and data collection methods. Prepared to meet the requirements of Work Package 3 of EU Telematics for Research, project DESIRE. Version D3.11v0.3 (Draft version 3) (1996) 0.02
    
    Abstract
    After a short outline of the problems, possibilities and difficulties of systematic information retrieval on the Internet, and a description of development efforts in this area, the terminology for this report is specified. Although retrieval is generally seen as an iterative process of browsing and information retrieval, and several important services on the net have taken this fact into consideration, the emphasis of this report lies on the general retrieval tools for the whole of the Internet. In order to evaluate the differences, possibilities and restrictions of the different services, it is necessary to begin by organizing the existing varieties in a typological/taxonomical survey. The possibilities and weaknesses of the most important services are briefly compared and described in the categories robot-based WWW catalogues of different types, list- or form-based catalogues, and simultaneous or collected search services respectively. It will, however, for various reasons not be possible to rank them in order of "best" services. Still more important are the weaknesses and problems common to all attempts at indexing the Internet. The quality of the input, the technical performance and the general problem of indexing virtual hypertext are shown to be at least as difficult as the different aspects of harvesting, indexing and information retrieval. Some of the attempts at further development of retrieval services are mentioned in relation to descriptions of document contents and standardization efforts. Internet harvesting and indexing technology and retrieval software are thoroughly reviewed. Details about all services and software are listed in analytical forms in Annex 1-3.
  5. Liu, R.-L.: Context recognition for hierarchical text classification (2009) 0.02
    
    Abstract
    Information is often organized as a text hierarchy. A hierarchical text-classification system is thus essential for the management, sharing, and dissemination of information. It aims to automatically classify each incoming document into zero, one, or several categories in the text hierarchy. In this paper, we present a technique called CRHTC (context recognition for hierarchical text classification) that performs hierarchical text classification by recognizing the context of discussion (COD) of each category. A category's COD is governed by its ancestor categories, whose contents indicate contextual backgrounds of the category. A document may be classified into a category only if its content matches the category's COD. CRHTC does not require any trials to manually set parameters, and hence is more portable and easier to implement than other methods. It is empirically evaluated under various conditions. The results show that CRHTC achieves both better and more stable performance than several hierarchical and nonhierarchical text-classification methodologies.
    Date
    22. 3.2009 19:11:54
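    A rough sketch of the idea behind CRHTC as the abstract states it (not the authors' actual algorithm): a document enters a category only if it also matches the context of discussion (COD) contributed by the category's ancestors. The hierarchy, term sets, and threshold below are invented for illustration:

      # Toy hierarchy: each category has its own terms and a parent.
      HIERARCHY = {
          "science":        {"parent": None,        "terms": {"study", "method", "analysis"}},
          "computing":      {"parent": "science",   "terms": {"algorithm", "software", "data"}},
          "classification": {"parent": "computing", "terms": {"category", "label", "classifier"}},
      }

      def context_of_discussion(cat):
          """Union of ancestor terms: the contextual background of a category."""
          terms, parent = set(), HIERARCHY[cat]["parent"]
          while parent is not None:
              terms |= HIERARCHY[parent]["terms"]
              parent = HIERARCHY[parent]["parent"]
          return terms

      def admits(cat, doc_terms, threshold=1):
          """Classify into cat only if the doc matches the category AND its COD."""
          own = len(doc_terms & HIERARCHY[cat]["terms"]) >= threshold
          cod = context_of_discussion(cat)
          in_context = not cod or len(doc_terms & cod) >= threshold
          return own and in_context

      doc = {"classifier", "label", "algorithm", "study"}
      print([c for c in HIERARCHY if admits(c, doc)])
      # ['science', 'computing', 'classification'] - zero, one, or several categories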
  6. McKiernan, G.: Automated categorisation of Web resources : a profile of selected projects, research, products, and services (1996) 0.02
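    The relevance figure after each title is a raw Lucene score computed with ClassicSimilarity (TF-IDF). Result 6 above decomposes especially simply; a minimal sketch of the arithmetic, using the statistics this index reports for the term "services" (the function name is ours, not Lucene's):

      import math

      def classic_term_score(freq, doc_freq, max_docs, query_norm, field_norm):
          """One query term's contribution under Lucene's ClassicSimilarity."""
          tf = math.sqrt(freq)                              # term-frequency component
          idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # inverse document frequency
          return (idf * query_norm) * (tf * idf * field_norm)  # queryWeight * fieldWeight

      # Index statistics for "services": docFreq=3057 of maxDocs=44218, idf ~ 3.67;
      # queryNorm and fieldNorm are the values the engine reports for this document.
      term = classic_term_score(freq=2.0, doc_freq=3057, max_docs=44218,
                                query_norm=0.046906993, field_norm=0.078125)
      coord = 1 / 4          # only one of four query terms matched this document
      print(term * coord)    # ~0.0175, displayed as 0.02 for result 6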
    
  7. Mukhopadhyay, S.; Peng, S.; Raje, R.; Palakal, M.; Mostafa, J.: Multi-agent information classification using dynamic acquaintance lists (2003) 0.01
    
    Abstract
    There has been considerable interest in recent years in providing automated information services, such as information classification, by means of a society of collaborative agents. These agents augment each other's knowledge structures (e.g., the vocabularies) and assist each other in providing efficient information services to a human user. However, when the number of agents present in the society increases, exhaustive communication and collaboration among agents result in a large communication overhead and increased delays in response time. This paper introduces a method to achieve selective interaction with a relatively small number of potentially useful agents, based on simple agent modeling and acquaintance lists. The key idea presented here is that the acquaintance list of an agent, representing a small number of other agents to be collaborated with, is dynamically adjusted. The best acquaintances are automatically discovered using a learning algorithm, based on the past history of collaboration. Experimental results are presented to demonstrate that such dynamically learned acquaintance lists can lead to high quality of classification, while significantly reducing the delay in response time.
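    A minimal sketch of the dynamic-acquaintance idea described above; the usefulness score and its update rule are simplistic stand-ins for the paper's learning algorithm, and all names are invented:

      import heapq

      class Agent:
          """Keeps a small acquaintance list, adjusted from collaboration history."""
          def __init__(self, name, peers, k=3):
              self.name = name
              self.k = k                              # acquaintance list size
              self.usefulness = {p: 0.0 for p in peers}

          def acquaintances(self):
              # Collaborate only with the k peers that proved most useful so far.
              return heapq.nlargest(self.k, self.usefulness, key=self.usefulness.get)

          def record_outcome(self, peer, helped, rate=0.3):
              # Exponential moving average over the past history of collaboration.
              self.usefulness[peer] += rate * ((1.0 if helped else 0.0) - self.usefulness[peer])

      a = Agent("classifier-1", peers=[f"agent-{i}" for i in range(10)])
      for _ in range(5):
          a.record_outcome("agent-7", helped=True)
          a.record_outcome("agent-2", helped=False)
      print(a.acquaintances())  # 'agent-7' rises to the top; most peers are never contacted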
  8. Koch, T.; Vizine-Goetz, D.: DDC and knowledge organization in the digital library : Research and development. Demonstration pages (1999) 0.01
    
    Content
    1. Increased Importance of Knowledge Organization in Internet Services
    2. Quality Subject Service and the role of classification
    3. Developing the DDC into a knowledge organization instrument for the digital library. OCLC site
    4. DESIRE's Barefoot Solutions of Automatic Classification
    5. Advanced Classification Solutions in DESIRE and CORC
    6. Future directions of research and development
    7. General references
  9. Koch, T.; Ardö, A.; Noodén, L.: The construction of a robot-generated subject index : DESIRE II D3.6a, Working Paper 1 (1999) 0.01
    
    Abstract
    This working paper describes the creation of a test database on which to carry out the automatic classification tasks of the DESIRE II work package D3.6a. It is an improved version of NetLab's existing "All" Engineering database, created after a comparative study of the outcome of two different approaches to collecting the documents. These two methods were selected from seven different general methodologies for building robot-generated subject indices, presented in this paper. We found a surprisingly low overlap between the Engineering link collections we used as seed pages for the robot and, subsequently, an even more surprisingly low overlap between the resources collected by the two different approaches, in spite of starting the harvesting process from basically the same services. An intellectual evaluation of the contents of both databases showed almost exactly the same percentage of relevant documents (77%), indicating that the main difference between the approaches was the coverage of the resulting database.
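    The working paper does not spell out its overlap measure; one plausible reading is plain set overlap between the URL sets the two harvesting approaches collected, e.g.:

      def overlap(a: set, b: set) -> float:
          """Share of items common to both collections (Jaccard coefficient)."""
          return len(a & b) / len(a | b) if a or b else 0.0

      harvested_by_links = {"http://ex.org/1", "http://ex.org/2", "http://ex.org/3"}
      harvested_by_query = {"http://ex.org/3", "http://ex.org/4"}
      print(overlap(harvested_by_links, harvested_by_query))  # 0.25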
  10. Golub, K.: Automated subject classification of textual Web pages, based on a controlled vocabulary : challenges and recommendations (2006) 0.01
    
    Content
    Contribution to a special issue on "Knowledge organization systems and services"
  11. Kwok, K.L.: The use of titles and cited titles as document representations for automatic classification (1975) 0.01
    
    Source
    Information processing and management. 11(1975), S.201-206
  12. Wu, M.; Fuller, M.; Wilkinson, R.: Using clustering and classification approaches in interactive retrieval (2001) 0.01
    
    Source
    Information processing and management. 37(2001) no.3, S.459-484
  13. Subramanian, S.; Shafer, K.E.: Clustering (2001) 0.01
    
    Date
    5. 5.2003 14:17:22
  14. Major, R.L.; Ragsdale, C.T.: An aggregation approach to the classification problem using multiple prediction experts (2000) 0.01
    
    Source
    Information processing and management. 36(2000) no.4, S.683-696
  15. Krellenstein, M.: Document classification at Northern Light (1999) 0.01
    
    Footnote
    Presented at: Search engines and beyond: developing efficient knowledge management systems; 1999 Search Engine Meeting, Boston, MA, April 19-20, 1999
  16. AlQenaei, Z.M.; Monarchi, D.E.: ¬The use of learning techniques to analyze the results of a manual classification system (2016) 0.01
    
    Abstract
    Classification is the process of assigning objects to pre-defined classes based on observations or characteristics of those objects, and there are many approaches to performing this task. The overall objective of this study is to demonstrate the use of two learning techniques to analyze the results of a manual classification system. Our sample consisted of 1,026 documents, from the ACM Computing Classification System, classified by their authors as belonging to one of the groups of the classification system: "H.3 Information Storage and Retrieval." A singular value decomposition of the documents' weighted term-frequency matrix was used to represent each document in a 50-dimensional vector space. The analysis of the representation using both supervised (decision tree) and unsupervised (clustering) techniques suggests that two pairs of the ACM classes are closely related to each other in the vector space. Class 1 (Content Analysis and Indexing) is closely related to Class 3 (Information Search and Retrieval), and Class 4 (Systems and Software) is closely related to Class 5 (Online Information Services). Further analysis was performed to test the diffusion of the words in the two classes using both cosine and Euclidean distance.
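    A compressed sketch of the pipeline the abstract describes: SVD of a weighted term-frequency matrix, documents projected into a low-dimensional space (50 dimensions in the study, 2 here), then class relatedness judged by centroid distance. The matrix and labels below are synthetic stand-ins for the study's 1,026 ACM documents:

      import numpy as np

      rng = np.random.default_rng(0)
      # Synthetic weighted term-frequency matrix: 8 documents x 6 terms.
      A = rng.random((8, 6))
      labels = np.array([1, 1, 3, 3, 4, 4, 5, 5])   # ACM H.3 subclass per document

      # Truncated SVD: keep the top-k singular vectors (k=50 in the paper).
      U, s, Vt = np.linalg.svd(A, full_matrices=False)
      k = 2
      docs_k = U[:, :k] * s[:k]                      # documents in the reduced space

      # Compare classes by centroid distance in the reduced space
      # (the study uses both cosine and Euclidean distance).
      centroids = {c: docs_k[labels == c].mean(axis=0) for c in np.unique(labels)}
      for c1 in centroids:
          for c2 in centroids:
              if c1 < c2:
                  d = np.linalg.norm(centroids[c1] - centroids[c2])
                  print(f"class {c1} vs {c2}: Euclidean distance {d:.3f}")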
  17. Savic, D.: Designing an expert system for classifying office documents (1994) 0.01
    
    Abstract
    Can records management benefit from artificial intelligence technology, in particular from expert systems? Gives an answer to this question by showing an example of a small-scale prototype project for the automatic classification of office documents. Project methodology and the basic elements of an expert system approach are elaborated to give guidelines to potential users of this promising technology.
    Source
    Records management quarterly. 28(1994) no.3, S.20-29
  18. Reiner, U.: Automatische DDC-Klassifizierung von bibliografischen Titeldatensätzen (2009) 0.01
    
    Date
    22. 8.2009 12:54:24
  19. HaCohen-Kerner, Y. et al.: Classification using various machine learning methods and combinations of key-phrases and visual features (2016) 0.01
    
    Date
    1. 2.2016 18:25:22
  20. Guerrero-Bote, V.P.; Moya Anegón, F. de; Herrero Solana, V.: Document organization using Kohonen's algorithm (2002) 0.01
    
    Source
    Information processing and management. 38(2002) no.1, S.79-89
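    Entry 20 gives only the title, but Kohonen's algorithm itself is standard: a self-organizing map places similar document vectors on nearby cells of a 2-D grid. A bare-bones sketch (grid size, rates, and data are arbitrary):

      import numpy as np

      def train_som(docs, grid=(4, 4), epochs=200, lr=0.5, sigma=1.5, seed=0):
          """Kohonen SOM: map high-dimensional document vectors onto a 2-D grid."""
          rng = np.random.default_rng(seed)
          h, w = grid
          weights = rng.random((h, w, docs.shape[1]))
          coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
          for t in range(epochs):
              frac = t / epochs
              for x in docs[rng.permutation(len(docs))]:
                  # Best-matching unit: grid cell whose weight vector is closest to x.
                  bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)), (h, w))
                  # Pull the BMU and its grid neighbours toward x, decaying over time.
                  dist2 = ((coords - np.array(bmu)) ** 2).sum(-1)
                  nb = np.exp(-dist2 / (2 * (sigma * (1 - frac) + 0.1) ** 2))
                  weights += (lr * (1 - frac)) * nb[..., None] * (x - weights)
          return weights

      docs = np.random.default_rng(1).random((30, 10))   # 30 docs, 10 term weights each
      som = train_som(docs)
      print(som.shape)  # (4, 4, 10): each grid cell now represents a cluster of documents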

Languages

  • e 52
  • d 5

Types

  • a 49
  • el 9
  • m 1
  • r 1
  • s 1