Search (22 results, page 1 of 2)

  • × theme_ss:"Automatisches Klassifizieren"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.10
    0.10109107 = sum of:
      0.080492064 = product of:
        0.24147618 = sum of:
          0.24147618 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.24147618 = score(doc=562,freq=2.0), product of:
              0.42965913 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.050679237 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.33333334 = coord(1/3)
      0.020599011 = product of:
        0.041198023 = sum of:
          0.041198023 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.041198023 = score(doc=562,freq=2.0), product of:
              0.17747006 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050679237 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
    
    Content
     Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
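
Each score above is a Lucene "explain" breakdown from ClassicSimilarity (tf-idf with coordination factors), and the same arithmetic recurs in every entry below. As a sanity check, here is a minimal Python sketch (our own, not Lucene code) that reproduces entry 1's total from the numbers shown in its tree:

```python
import math

# Lucene ClassicSimilarity, as displayed in the explain trees:
#   idf(t)       = 1 + ln(maxDocs / (docFreq + 1))
#   queryWeight  = idf * queryNorm
#   fieldWeight  = sqrt(freq) * idf * fieldNorm
#   clause score = queryWeight * fieldWeight * coord
def clause_score(freq, doc_freq, max_docs, query_norm, field_norm, coord):
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))
    query_weight = idf * query_norm
    field_weight = math.sqrt(freq) * idf * field_norm
    return query_weight * field_weight * coord

# Entry 1 (doc 562): the "_text_:3a" clause carries coord(1/3),
# the "_text_:22" clause carries coord(1/2).
total = (clause_score(2.0, 24, 44218, 0.050679237, 0.046875, 1 / 3)
         + clause_score(2.0, 3622, 44218, 0.050679237, 0.046875, 1 / 2))
print(f"{total:.6f}")  # 0.101091 -- entry 1's 0.10109107 to six decimals
```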
  2. Zhu, W.Z.; Allen, R.B.: Document clustering using the LSI subspace signature model (2013) 0.07
    0.073640086 = product of:
      0.14728017 = sum of:
        0.14728017 = sum of:
          0.10608215 = weight(_text_:maps in 690) [ClassicSimilarity], result of:
            0.10608215 = score(doc=690,freq=2.0), product of:
              0.28477904 = queryWeight, product of:
                5.619245 = idf(docFreq=435, maxDocs=44218)
                0.050679237 = queryNorm
              0.37250686 = fieldWeight in 690, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.619245 = idf(docFreq=435, maxDocs=44218)
                0.046875 = fieldNorm(doc=690)
          0.041198023 = weight(_text_:22 in 690) [ClassicSimilarity], result of:
            0.041198023 = score(doc=690,freq=2.0), product of:
              0.17747006 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050679237 = queryNorm
              0.23214069 = fieldWeight in 690, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=690)
      0.5 = coord(1/2)
    
    Abstract
     We describe the latent semantic indexing subspace signature model (LSISSM) for semantic content representation of unstructured text. Grounded in singular value decomposition, the model represents terms and documents by the distribution signatures of their statistical contribution across the top-ranking latent concept dimensions. LSISSM matches term signatures with document signatures according to their mapping coherence between the latent semantic indexing (LSI) term subspace and the LSI document subspace. LSISSM performs feature reduction and finds a low-rank approximation of scalable, sparse term-document matrices. Experiments demonstrate that this approach significantly improves the performance of major clustering algorithms such as standard K-means and self-organizing maps compared with the vector space model and the traditional LSI model. The unique contribution-ranking mechanism in LSISSM also improves the initialization of standard K-means compared with the random seeding procedure, which sometimes causes low clustering efficiency and effectiveness. A two-stage initialization strategy based on LSISSM significantly reduces the running time of standard K-means procedures. (A loose sketch of SVD-guided seeding follows this entry.)
    Date
    23. 3.2013 13:22:36
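
The seeding idea in the abstract lends itself to a loose illustration. The sketch below is our reading of the general approach only (truncated SVD "signatures" used to pick K-means seeds, on invented toy data), not the authors' exact LSISSM procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.poisson(0.3, size=(500, 40)).astype(float)  # toy term-document matrix (terms x docs)

k = 5
U, S, Vt = np.linalg.svd(X, full_matrices=False)
docs_lsi = (Vt[:k, :] * S[:k, None]).T       # documents projected into the k-dim LSI subspace
signature = np.abs(Vt[:k, :])                # each document's contribution per latent dimension
seeds = signature.argmax(axis=1)             # strongest-loading document per dimension

centroids = docs_lsi[seeds]                  # SVD-guided initial centroids instead of random ones
for _ in range(20):                          # plain Lloyd iterations
    labels = ((docs_lsi[:, None, :] - centroids[None]) ** 2).sum(-1).argmin(1)
    centroids = np.stack([docs_lsi[labels == j].mean(0) if (labels == j).any() else centroids[j]
                          for j in range(k)])
print(np.bincount(labels, minlength=k))      # cluster sizes
```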
  3. Frank, E.; Paynter, G.W.: Predicting Library of Congress Classifications from Library of Congress Subject Headings (2004) 0.03
    0.026520537 = product of:
      0.053041074 = sum of:
        0.053041074 = product of:
          0.10608215 = sum of:
            0.10608215 = weight(_text_:maps in 2218) [ClassicSimilarity], result of:
              0.10608215 = score(doc=2218,freq=2.0), product of:
                0.28477904 = queryWeight, product of:
                  5.619245 = idf(docFreq=435, maxDocs=44218)
                  0.050679237 = queryNorm
                0.37250686 = fieldWeight in 2218, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.619245 = idf(docFreq=435, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2218)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     This paper addresses the problem of automatically assigning a Library of Congress Classification (LCC) to a work given its set of Library of Congress Subject Headings (LCSH). LCCs are organized in a tree: The root node of this hierarchy comprises all possible topics, and leaf nodes correspond to the most specialized topic areas defined. We describe a procedure that, given a resource identified by its LCSH, automatically places that resource in the LCC hierarchy. The procedure uses machine learning techniques and training data from a large library catalog to learn a model that maps from sets of LCSH to classifications from the LCC tree. We present empirical results for our technique showing its accuracy on an independent collection of 50,000 LCSH/LCC pairs. (A toy sketch of a flat variant of this mapping follows this entry.)
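
As an illustration only: the flat version of this task (a set of LCSH mapped to a single top-level LCC class) fits a standard text-classification mold. The sketch below uses scikit-learn with invented training pairs; the paper's actual procedure descends the full LCC tree using a much larger catalog:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented LCSH-set -> top-level-LCC training pairs; headings are ';'-separated.
train = [
    ("Machine learning; Text processing (Computer science)", "Q"),   # Science
    ("Classification, Library of Congress; Cataloging", "Z"),        # Library science
    ("Information retrieval; Electronic information resources", "Z"),
    ("Neural networks (Computer science); Algorithms", "Q"),
]
docs, labels = zip(*train)

# Each whole heading is one feature, so heading sets become bags of headings.
vec = CountVectorizer(tokenizer=lambda s: [t.strip() for t in s.split(";")],
                      token_pattern=None)
model = make_pipeline(vec, MultinomialNB())
model.fit(docs, labels)
print(model.predict(["Automatic classification; Machine learning"]))  # -> ['Q'] on this toy data
```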
  4. Mu, T.; Goulermas, J.Y.; Korkontzelos, I.; Ananiadou, S.: Descriptive document clustering via discriminant learning in a co-embedded space of multilevel similarities (2016) 0.02
    0.022100445 = product of:
      0.04420089 = sum of:
        0.04420089 = product of:
          0.08840178 = sum of:
            0.08840178 = weight(_text_:maps in 2496) [ClassicSimilarity], result of:
              0.08840178 = score(doc=2496,freq=2.0), product of:
                0.28477904 = queryWeight, product of:
                  5.619245 = idf(docFreq=435, maxDocs=44218)
                  0.050679237 = queryNorm
                0.31042236 = fieldWeight in 2496, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.619245 = idf(docFreq=435, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2496)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     Descriptive document clustering aims at discovering clusters of semantically interrelated documents together with meaningful labels to summarize the content of each document cluster. In this work, we propose a novel descriptive clustering framework, referred to as CEDL. It relies on the formulation and generation of 2 types of heterogeneous objects, which correspond to documents and candidate phrases, using multilevel similarity information. CEDL is composed of 5 main processing stages. First, it simultaneously maps the documents and candidate phrases into a common co-embedded space that preserves higher-order, neighbor-based proximities between the combined sets of documents and phrases. Then, it discovers an approximate cluster structure of documents in the common space. The third stage extracts promising topic phrases by constructing a discriminant model where documents along with their cluster memberships are used as training instances. Subsequently, the final cluster labels are selected from the topic phrases by a ranking scheme that combines multiple scores based on the extracted co-embedding information and the discriminant output. The final stage polishes the initial clusters to reduce noise and accommodate the multitopic nature of documents. The effectiveness and competitiveness of CEDL are demonstrated qualitatively and quantitatively with experiments using document databases from different application fields. (A toy sketch of the co-embedding-and-labeling idea follows this entry.)
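
Not CEDL itself, but its core co-embedding idea can be illustrated compactly: place documents and candidate phrases in one space, group the documents, and label each group by its nearest phrases. Everything below (the association matrix, the crude two-way split) is an invented toy, not the authors' five-stage pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((30, 12))                     # toy document x candidate-phrase association matrix

U, S, Vt = np.linalg.svd(A, full_matrices=False)
d = 3
docs = U[:, :d] * S[:d]                      # documents in the shared co-embedded space
phrases = Vt[:d, :].T * S[:d]                # candidate phrases in the same space

labels = (docs[:, 0] > np.median(docs[:, 0])).astype(int)   # crude 2-way split of documents
for c in range(2):
    centroid = docs[labels == c].mean(axis=0)
    nearest = np.linalg.norm(phrases - centroid, axis=1).argsort()[:3]
    print(f"cluster {c}: nearest candidate phrases {nearest.tolist()}")
```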
  5. Subramanian, S.; Shafer, K.E.: Clustering (2001) 0.02
    0.020599011 = product of:
      0.041198023 = sum of:
        0.041198023 = product of:
          0.082396045 = sum of:
            0.082396045 = weight(_text_:22 in 1046) [ClassicSimilarity], result of:
              0.082396045 = score(doc=1046,freq=2.0), product of:
                0.17747006 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050679237 = queryNorm
                0.46428138 = fieldWeight in 1046, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1046)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    5. 5.2003 14:17:22
  6. Hoffmann, R.: Entwicklung einer benutzerunterstützten automatisierten Klassifikation von Web-Dokumenten : Untersuchung gegenwärtiger Methoden zur automatisierten Dokumentklassifikation und Implementierung eines Prototyps zum verbesserten Information Retrieval für das xFIND System (2002) 0.02
    0.017680356 = product of:
      0.035360713 = sum of:
        0.035360713 = product of:
          0.070721425 = sum of:
            0.070721425 = weight(_text_:maps in 4197) [ClassicSimilarity], result of:
              0.070721425 = score(doc=4197,freq=2.0), product of:
                0.28477904 = queryWeight, product of:
                  5.619245 = idf(docFreq=435, maxDocs=44218)
                  0.050679237 = queryNorm
                0.2483379 = fieldWeight in 4197, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.619245 = idf(docFreq=435, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4197)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     The unmanageable and constantly growing supply of information on the Internet makes it impossible for people to grasp its content or to search for information in a targeted way. One approach to improving information discovery is to categorize or classify this information on the basis of its thematic content. Such thematic classification can be performed both by manual (intellectual) methods and by automated procedures. To date, however, neither approach on its own has adequately met the expectations placed on it. This thesis therefore investigates the obvious next step: combining the two methods in a sensible way. The first part of the thesis, the survey section, introduces the problem of information oversupply in our society and shows why categorizing or classifying this information appears especially worthwhile on the Internet. It describes the principal options for assigning topics to documents in order to improve knowledge management and knowledge discovery, introducing among other things various classification schemes, Topic Maps, and semantic networks. The focus of the survey section is a description of automated methods for topic assignment. Alongside an overview of the most common classification algorithms, it presents commercial systems, research approaches, and freely available modules for automatic classification, including systems that at least partially support the aforementioned combination of manual and automatic methods. The problems that arise in connection with classifying documents on the Internet are also outlined. The findings from the survey section feed into the development of a module for user-supported automatic document classification within the xFIND system (extended Framework for Information Discovery). This framework, designed at Graz University of Technology, forms the basis for a range of new ideas for improving information retrieval. The solution developed in the design section first uses documents, servers, or server areas already classified manually in the system as the basis for automatic classification. Once automatic classification has been performed, authors and administrators can then adjust the results in a user-support step. Collective user behavior can exert influence through a voting option, i.e. approval or rejection of the classification results; the knowledge of domain experts and users thus ultimately contributes to improving the automatic classification. The design section describes the basic concepts, structure, and operation of the developed module and presents a series of suggestions and ideas for the further development of user-supported automatic document classification. (One possible reading of the voting mechanism is sketched after this entry.)
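
The abstract specifies no formula for how votes enter the classification, so the following is only one possible reading, with the function name and weighting constant invented here: approvals and rejections shift a class's automatic confidence score before the final assignment is made.

```python
def adjusted_score(auto_score: float, approvals: int, rejections: int,
                   weight: float = 0.05) -> float:
    """Shift an automatic classification score by collective user votes (hypothetical scheme)."""
    # clamp into [0, 1] so heavy voting cannot push the score out of range
    return min(1.0, max(0.0, auto_score + weight * (approvals - rejections)))

print(round(adjusted_score(0.62, approvals=4, rejections=1), 2))  # 0.77
```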
  7. Reiner, U.: Automatische DDC-Klassifizierung von bibliografischen Titeldatensätzen (2009) 0.02
    0.017165843 = product of:
      0.034331687 = sum of:
        0.034331687 = product of:
          0.06866337 = sum of:
            0.06866337 = weight(_text_:22 in 611) [ClassicSimilarity], result of:
              0.06866337 = score(doc=611,freq=2.0), product of:
                0.17747006 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050679237 = queryNorm
                0.38690117 = fieldWeight in 611, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=611)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 8.2009 12:54:24
  8. HaCohen-Kerner, Y. et al.: Classification using various machine learning methods and combinations of key-phrases and visual features (2016) 0.02
    0.017165843 = product of:
      0.034331687 = sum of:
        0.034331687 = product of:
          0.06866337 = sum of:
            0.06866337 = weight(_text_:22 in 2748) [ClassicSimilarity], result of:
              0.06866337 = score(doc=2748,freq=2.0), product of:
                0.17747006 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050679237 = queryNorm
                0.38690117 = fieldWeight in 2748, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2748)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    1. 2.2016 18:25:22
  9. Dolin, R.; Agrawal, D.; El Abbadi, A.; Pearlman, J.: Using automated classification for summarizing and selecting heterogeneous information sources (1998) 0.01
    0.013260269 = product of:
      0.026520537 = sum of:
        0.026520537 = product of:
          0.053041074 = sum of:
            0.053041074 = weight(_text_:maps in 1253) [ClassicSimilarity], result of:
              0.053041074 = score(doc=1253,freq=2.0), product of:
                0.28477904 = queryWeight, product of:
                  5.619245 = idf(docFreq=435, maxDocs=44218)
                  0.050679237 = queryNorm
                0.18625343 = fieldWeight in 1253, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.619245 = idf(docFreq=435, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1253)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     Information retrieval over the Internet increasingly requires the filtering of thousands of heterogeneous information sources. Important sources of information include not only traditional databases with structured data and queries, but also increasing numbers of non-traditional, semi- or unstructured collections such as Web sites, FTP archives, etc. As the number and variability of sources increases, new ways of automatically summarizing, discovering, and selecting collections relevant to a user's query are needed. One such method involves the use of classification schemes, such as the Library of Congress Classification (LCC), within which a collection may be represented based on its content, irrespective of the structure of the actual data or documents. For such a system to be useful in a large-scale distributed environment, it must be easy to use for both collection managers and users. As a result, it must be possible to classify documents automatically within a classification scheme. Furthermore, there must be a straightforward and intuitive interface with which the user can apply the scheme to assist in information retrieval (IR). Our work with the Alexandria Digital Library (ADL) Project focuses on geo-referenced information, whether text, maps, aerial photographs, or satellite images. As a result, we have emphasized techniques which work with both text and non-text, such as combined textual and graphical queries, multi-dimensional indexing, and IR methods which are not solely dependent on words or phrases. Part of this work involves locating relevant online sources of information. In particular, we have designed and are currently testing aspects of an architecture, Pharos, which we believe will scale up to 1,000,000 heterogeneous sources. Pharos accommodates heterogeneity in content and format, both among multiple sources as well as within a single source. That is, we consider sources to include Web sites, FTP archives, newsgroups, and full digital libraries; all of these systems can include a wide variety of content and multimedia data formats. Pharos is based on the use of hierarchical classification schemes. These include not only well-known 'subject' (or 'concept') based schemes such as the Dewey Decimal System and the LCC, but also, for example, geographic classifications, which might be constructed as layers of smaller and smaller hierarchical longitude/latitude boxes. Pharos is designed to work with sophisticated queries which utilize subjects, geographical locations, temporal specifications, and other types of information domains. The Pharos architecture requires that hierarchically structured collection metadata be extracted so that it can be partitioned in such a way as to greatly enhance scalability. Automated classification is important to Pharos because it allows information sources to extract automatically the requisite collection metadata that must be distributed. (A toy sketch of such rolled-up collection metadata follows this entry.)
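
The hierarchical collection metadata Pharos depends on can be sketched briefly. This is our own illustration of the roll-up idea, not the project's actual data model, and prefix slicing is a crude stand-in for real LCC notation parsing: per-source document counts are accumulated at every ancestor of each class, so a broker can rule out whole sources by comparing a query's class against each source's compact summary.

```python
from collections import Counter

def rollup(doc_classes: list[str]) -> Counter:
    """Count docs at every ancestor of their class notation, e.g. 'QA76' -> QA76, QA7, QA, Q."""
    summary = Counter()
    for notation in doc_classes:
        for i in range(1, len(notation) + 1):
            summary[notation[:i]] += 1
    return summary

source = rollup(["QA76", "QA76", "QA9", "Z699"])
print(source["QA"])   # 3 documents somewhere under QA
print(source["Z"])    # 1 document under Z
```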
  10. Bock, H.-H.: Datenanalyse zur Strukturierung und Ordnung von Information (1989) 0.01
    0.01201609 = product of:
      0.02403218 = sum of:
        0.02403218 = product of:
          0.04806436 = sum of:
            0.04806436 = weight(_text_:22 in 141) [ClassicSimilarity], result of:
              0.04806436 = score(doc=141,freq=2.0), product of:
                0.17747006 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050679237 = queryNorm
                0.2708308 = fieldWeight in 141, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=141)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Pages
    S.1-22
  11. Dubin, D.: Dimensions and discriminability (1998) 0.01
    0.01201609 = product of:
      0.02403218 = sum of:
        0.02403218 = product of:
          0.04806436 = sum of:
            0.04806436 = weight(_text_:22 in 2338) [ClassicSimilarity], result of:
              0.04806436 = score(doc=2338,freq=2.0), product of:
                0.17747006 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050679237 = queryNorm
                0.2708308 = fieldWeight in 2338, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2338)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 9.1997 19:16:05
  12. Automatic classification research at OCLC (2002) 0.01
    0.01201609 = product of:
      0.02403218 = sum of:
        0.02403218 = product of:
          0.04806436 = sum of:
            0.04806436 = weight(_text_:22 in 1563) [ClassicSimilarity], result of:
              0.04806436 = score(doc=1563,freq=2.0), product of:
                0.17747006 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050679237 = queryNorm
                0.2708308 = fieldWeight in 1563, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1563)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    5. 5.2003 9:22:09
  13. Jenkins, C.: Automatic classification of Web resources using Java and Dewey Decimal Classification (1998) 0.01
    0.01201609 = product of:
      0.02403218 = sum of:
        0.02403218 = product of:
          0.04806436 = sum of:
            0.04806436 = weight(_text_:22 in 1673) [ClassicSimilarity], result of:
              0.04806436 = score(doc=1673,freq=2.0), product of:
                0.17747006 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050679237 = queryNorm
                0.2708308 = fieldWeight in 1673, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1673)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    1. 8.1996 22:08:06
  14. Yoon, Y.; Lee, C.; Lee, G.G.: ¬An effective procedure for constructing a hierarchical text classification system (2006) 0.01
    0.01201609 = product of:
      0.02403218 = sum of:
        0.02403218 = product of:
          0.04806436 = sum of:
            0.04806436 = weight(_text_:22 in 5273) [ClassicSimilarity], result of:
              0.04806436 = score(doc=5273,freq=2.0), product of:
                0.17747006 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050679237 = queryNorm
                0.2708308 = fieldWeight in 5273, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5273)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 7.2006 16:24:52
  15. Yi, K.: Automatic text classification using library classification schemes : trends, issues and challenges (2007) 0.01
    0.01201609 = product of:
      0.02403218 = sum of:
        0.02403218 = product of:
          0.04806436 = sum of:
            0.04806436 = weight(_text_:22 in 2560) [ClassicSimilarity], result of:
              0.04806436 = score(doc=2560,freq=2.0), product of:
                0.17747006 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050679237 = queryNorm
                0.2708308 = fieldWeight in 2560, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2560)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 9.2008 18:31:54
  16. Liu, R.-L.: Context recognition for hierarchical text classification (2009) 0.01
    0.010299506 = product of:
      0.020599011 = sum of:
        0.020599011 = product of:
          0.041198023 = sum of:
            0.041198023 = weight(_text_:22 in 2760) [ClassicSimilarity], result of:
              0.041198023 = score(doc=2760,freq=2.0), product of:
                0.17747006 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050679237 = queryNorm
                0.23214069 = fieldWeight in 2760, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2760)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 3.2009 19:11:54
  17. Pfeffer, M.: Automatische Vergabe von RVK-Notationen mittels fallbasiertem Schließen (2009) 0.01
    0.010299506 = product of:
      0.020599011 = sum of:
        0.020599011 = product of:
          0.041198023 = sum of:
            0.041198023 = weight(_text_:22 in 3051) [ClassicSimilarity], result of:
              0.041198023 = score(doc=3051,freq=2.0), product of:
                0.17747006 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050679237 = queryNorm
                0.23214069 = fieldWeight in 3051, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3051)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 8.2009 19:51:28
  18. Egbert, J.; Biber, D.; Davies, M.: Developing a bottom-up, user-based method of web register classification (2015) 0.01
    0.010299506 = product of:
      0.020599011 = sum of:
        0.020599011 = product of:
          0.041198023 = sum of:
            0.041198023 = weight(_text_:22 in 2158) [ClassicSimilarity], result of:
              0.041198023 = score(doc=2158,freq=2.0), product of:
                0.17747006 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050679237 = queryNorm
                0.23214069 = fieldWeight in 2158, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2158)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    4. 8.2015 19:22:04
  19. Mengle, S.; Goharian, N.: Passage detection using text classification (2009) 0.01
    0.008582922 = product of:
      0.017165843 = sum of:
        0.017165843 = product of:
          0.034331687 = sum of:
            0.034331687 = weight(_text_:22 in 2765) [ClassicSimilarity], result of:
              0.034331687 = score(doc=2765,freq=2.0), product of:
                0.17747006 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050679237 = queryNorm
                0.19345059 = fieldWeight in 2765, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2765)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 3.2009 19:14:43
  20. Liu, R.-L.: ¬A passage extractor for classification of disease aspect information (2013) 0.01
    0.008582922 = product of:
      0.017165843 = sum of:
        0.017165843 = product of:
          0.034331687 = sum of:
            0.034331687 = weight(_text_:22 in 1107) [ClassicSimilarity], result of:
              0.034331687 = score(doc=1107,freq=2.0), product of:
                0.17747006 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050679237 = queryNorm
                0.19345059 = fieldWeight in 1107, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1107)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    28.10.2013 19:22:57