Search (59 results, page 3 of 3)

  • theme_ss:"Data Mining"
  • type_ss:"a"
  • year_i:[2000 TO 2010}
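The filter chips above correspond to Lucene/Solr filter queries; the half-open range [2000 TO 2010} (2000 inclusive, 2010 exclusive) is standard Solr syntax. As a hedged sketch, the request behind this page might look roughly like the following. Host, core name and handler are assumptions; rows=20 and start=40 follow from 59 hits displayed as page 3 of 3.

```python
# Hypothetical reconstruction of the Solr request behind this page.
# The fq values are copied verbatim from the filter chips above;
# endpoint, core and paging are illustrative assumptions.
import requests

params = [
    ("q", "*:*"),
    ("fq", 'theme_ss:"Data Mining"'),
    ("fq", 'type_ss:"a"'),
    ("fq", "year_i:[2000 TO 2010}"),
    ("rows", "20"),
    ("start", "40"),          # page 3 of 3 for 59 hits at 20 per page
    ("debugQuery", "true"),   # yields the score breakdowns shown below
]
response = requests.get("http://localhost:8983/solr/literature/select",
                        params=params)
print(response.json()["response"]["numFound"])  # expected: 59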
  1. Ohly, H.P.: Bibliometric mining : added value from document analysis and retrieval (2008) 0.00
    0.004313929 = product of:
      0.0107848225 = sum of:
        0.004086692 = weight(_text_:a in 2386) [ClassicSimilarity], result of:
          0.004086692 = score(doc=2386,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.07643694 = fieldWeight in 2386, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=2386)
        0.0066981306 = product of:
          0.013396261 = sum of:
            0.013396261 = weight(_text_:information in 2386) [ClassicSimilarity], result of:
              0.013396261 = score(doc=2386,freq=4.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.16457605 = fieldWeight in 2386, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2386)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Bibliometrics is understood as the statistical analysis of scientific structures and processes. The data analyzed result from information and administrative processes. The demand for quality judgments and for the discovery of new structures and information means that bibliometrics takes on an exploratory and decision-supporting role. To the extent that it has acquired important features of data mining, the analysis of text and Internet material can be viewed as an additional challenge. Understood as an evaluative approach, bibliometrics can also be seen to apply inference procedures as well as navigation tools.
    Type
    a
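The indented breakdown under each hit is Lucene "explain" output for ClassicSimilarity, i.e. classic tf-idf scoring. As a sanity check, the arithmetic for entry 1 (doc 2386) can be reproduced in a few lines; all constants are read off the tree above, and only the standard ClassicSimilarity formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1))) are assumed.

```python
import math

# Reproduce the ClassicSimilarity score for entry 1 (doc 2386)
# using the constants from the explain tree above.

def idf(doc_freq, max_docs):
    return 1.0 + math.log(max_docs / (doc_freq + 1))   # idf(docFreq, maxDocs)

def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    tf = math.sqrt(freq)                     # tf(freq) = sqrt(termFreq)
    w = idf(doc_freq, max_docs)
    query_weight = w * query_norm            # queryWeight = idf * queryNorm
    field_weight = tf * w * field_norm       # fieldWeight = tf * idf * fieldNorm
    return query_weight * field_weight

QUERY_NORM, MAX_DOCS, FIELD_NORM = 0.046368346, 44218, 0.046875

s_a    = term_score(2.0, 37942, MAX_DOCS, QUERY_NORM, FIELD_NORM)  # _text_:a
s_info = term_score(4.0, 20772, MAX_DOCS, QUERY_NORM, FIELD_NORM)  # _text_:information

# coord(1/2) on the inner clause, coord(2/5) on the overall query
total = (s_a + s_info * (1 / 2)) * (2 / 5)
print(round(total, 9))   # -> 0.004313929, matching the listed score
```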
  2. Schwartz, F.; Fang, Y.C.: Citation data analysis on hydrogeology (2007) 0.00
    0.0034425803 = product of:
      0.008606451 = sum of:
        0.005448922 = weight(_text_:a in 433) [ClassicSimilarity], result of:
          0.005448922 = score(doc=433,freq=8.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.10191591 = fieldWeight in 433, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=433)
        0.003157529 = product of:
          0.006315058 = sum of:
            0.006315058 = weight(_text_:information in 433) [ClassicSimilarity], result of:
              0.006315058 = score(doc=433,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.0775819 = fieldWeight in 433, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.03125 = fieldNorm(doc=433)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    This article explores the status of research in hydrogeology using data mining techniques. First, we explain what citation analysis is and review some of the previous work on citation analysis. The main idea in this article is to address some common issues about citation numbers and the use of these data. To validate the use of citation numbers, we compare the citation patterns for Water Resources Research papers in the 1980s with those in the 1990s. The citation growth for highly cited authors from the 1980s is used to examine whether it is possible to predict the citation patterns for highly cited authors in the 1990s. If the citation data prove to be steady and stable, these numbers can then be used to explore the evolution of science in hydrogeology. The famous quotation, "If you are not the lead dog, the scenery never changes," attributed to Lee Iacocca, points to the importance of an entrepreneurial spirit in all forms of endeavor. In the case of hydrogeological research, impact analysis makes it clear how important it is to be a pioneer. Statistical correlation coefficients are used to retrieve papers among a collection of 2,847 papers before and after 1991 sharing the same topics with 273 papers in 1991 in Water Resources Research. The numbers of papers before and after 1991 are then plotted against various levels of citations for papers in 1991 to compare the distributions of the paper population before and after that year. The similarity metrics based on word counts can ensure that the "before" papers are like ancestors and "after" papers are descendants in the same type of research. This exercise gives us an idea of how the paper population is distributed before and after 1991 (1991 is chosen based on balanced numbers of papers before and after that year). In addition, the impact of papers is measured in terms of citations, presented as a "percentile," a relative measure based on rankings in one year, in order to minimize the effect of time.
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.4, S.518-525
    Type
    a
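The "percentile" measure mentioned at the end of this abstract can be illustrated with a minimal sketch: rank the papers of one publication year by citation count and express each paper's impact relative to that cohort. The data and function below are toy assumptions, not the article's actual computation.

```python
# Illustrative percentile-within-year impact measure (an assumption,
# sketched from the abstract's description, not the published formula).

def citation_percentile(count, cohort):
    """Share of the year's papers cited no more often than `count`, in %."""
    return 100.0 * sum(c <= count for c in cohort) / len(cohort)

papers_1991 = [0, 2, 2, 5, 11, 40]   # hypothetical citation counts
for c in papers_1991:
    print(c, "citations ->", round(citation_percentile(c, papers_1991), 1))
```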
  3. Keim, D.A.: Datenvisualisierung und Data Mining (2004) 0.00
    0.002940995 = product of:
      0.007352487 = sum of:
        0.0034055763 = weight(_text_:a in 2931) [ClassicSimilarity], result of:
          0.0034055763 = score(doc=2931,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.06369744 = fieldWeight in 2931, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2931)
        0.003946911 = product of:
          0.007893822 = sum of:
            0.007893822 = weight(_text_:information in 2931) [ClassicSimilarity], result of:
              0.007893822 = score(doc=2931,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.09697737 = fieldWeight in 2931, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2931)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Source
    Grundlagen der praktischen Information und Dokumentation. 5., völlig neu gefaßte Ausgabe. 2 Bde. Hrsg. von R. Kuhlen, Th. Seeger u. D. Strauch. Begründet von Klaus Laisiepen, Ernst Lutterbeck, Karl-Heinrich Meyer-Uhlenried. Bd.1: Handbuch zur Einführung in die Informationswissenschaft und -praxis
    Type
    a
  4. Fong, A.C.M.: Mining a Web citation database for document clustering (2002) 0.00
    0.0026970792 = product of:
      0.013485395 = sum of:
        0.013485395 = weight(_text_:a in 3940) [ClassicSimilarity], result of:
          0.013485395 = score(doc=3940,freq=4.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.25222903 = fieldWeight in 3940, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.109375 = fieldNorm(doc=3940)
      0.2 = coord(1/5)
    
    Type
    a
  5. Liu, W.; Weichselbraun, A.; Scharl, A.; Chang, E.: Semi-automatic ontology extension using spreading activation (2005) 0.00
    0.0025228865 = product of:
      0.012614433 = sum of:
        0.012614433 = weight(_text_:a in 3028) [ClassicSimilarity], result of:
          0.012614433 = score(doc=3028,freq=14.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.23593865 = fieldWeight in 3028, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3028)
      0.2 = coord(1/5)
    
    Abstract
    This paper describes a system to semi-automatically extend and refine ontologies by mining textual data from the Web sites of international online media. A seed ontology is expanded into a semantic network through co-occurrence analysis, trigger-phrase analysis, and disambiguation based on the WordNet lexical dictionary. Spreading activation then processes this semantic network to find the most probable candidates for inclusion in the extended ontology. Approaches to identifying hierarchical relationships, such as subsumption, head-noun analysis, and WordNet consultation, are used to confirm and classify the relationships found. Using a seed ontology on "climate change" as an example, the paper demonstrates how spreading activation improves the results by naturally integrating the methods mentioned.
    Type
    a
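A minimal sketch of the spreading-activation step described above: activation starts at the seed concepts and propagates along weighted co-occurrence edges, and strongly activated non-seed terms become candidates for ontology extension. The toy graph, weights, decay factor, and iteration count are illustrative assumptions, not the authors' parameters.

```python
# Spreading activation over a small co-occurrence network (toy example).

def spread_activation(graph, seeds, decay=0.5, iterations=3):
    activation = {node: 0.0 for node in graph}
    for s in seeds:
        activation[s] = 1.0                 # seed concepts start fully active
    for _ in range(iterations):
        incoming = {node: 0.0 for node in graph}
        for node, edges in graph.items():
            for neighbour, weight in edges.items():
                incoming[neighbour] += activation[node] * weight * decay
        for node in graph:
            activation[node] += incoming[node]
    return activation

graph = {
    "climate change": {"emission": 0.8, "glacier": 0.6},
    "emission": {"carbon tax": 0.7, "climate change": 0.8},
    "glacier": {"sea level": 0.9, "climate change": 0.6},
    "carbon tax": {"emission": 0.7},
    "sea level": {"glacier": 0.9},
}
scores = spread_activation(graph, seeds=["climate change"])
candidates = sorted(((s, n) for n, s in scores.items()
                     if n != "climate change"), reverse=True)
print(candidates)   # strongest candidates for ontology extension first
```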
  6. Seidenfaden, U.: Schürfen in Datenbergen : Data-Mining soll möglichst viel Information zu Tage fördern (2001) 0.00
    0.0025164585 = product of:
      0.0062911464 = sum of:
        0.0023839036 = weight(_text_:a in 6923) [ClassicSimilarity], result of:
          0.0023839036 = score(doc=6923,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.044588212 = fieldWeight in 6923, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.02734375 = fieldNorm(doc=6923)
        0.003907243 = product of:
          0.007814486 = sum of:
            0.007814486 = weight(_text_:information in 6923) [ClassicSimilarity], result of:
              0.007814486 = score(doc=6923,freq=4.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.0960027 = fieldWeight in 6923, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=6923)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Content
    "Fast alles wird heute per Computer erfasst. Kaum einer überblickt noch die enormen Datenmengen, die sich in Unternehmen, Universitäten und Verwaltung ansammeln. Allein in den öffentlich zugänglichen Datenbanken der Genforscher fallen pro Woche rund 4,5 Gigabyte an neuer Information an. "Vom potentiellen Wissen in den Datenbanken wird bislang aber oft nur ein Teil genutzt", meint Stefan Wrobel vom Lehrstuhl für Wissensentdeckung und Maschinelles Lernen der Otto-von-Guericke-Universität in Magdeburg. Sein Doktorand Mark-Andre Krogel hat soeben mit einem neuen Verfahren zur Datenbankrecherche in San Francisco einen inoffiziellen Weltmeister-Titel in der Disziplin "Data-Mining" gewonnen. Dieser Daten-Bergbau arbeitet im Unterschied zur einfachen Datenbankabfrage, die sich einfacher statistischer Methoden bedient, zusätzlich mit künstlicher Intelligenz und Visualisierungsverfahren, um Querverbindungen zu finden. "Das erleichtert die Suche nach verborgenen Zusammenhängen im Datenmaterial ganz erheblich", so Wrobel. Die Wirtschaft setzt Data-Mining bereits ein, um das Kundenverhalten zu untersuchen und vorherzusagen. "Stellen sie sich ein Unternehmen mit einer breiten Produktpalette und einem großen Kundenstamm vor", erklärt Wrobel. "Es kann seinen Erfolg maximieren, wenn es Marketing-Post zielgerichtet an seine Kunden verschickt. Wer etwa gerade einen PC gekauft hat, ist womöglich auch an einem Drucker oder Scanner interessiert." In einigen Jahren könnte ein Analysemodul den Manager eines Unternehmens selbständig informieren, wenn ihm etwas Ungewöhnliches aufgefallen ist. Das muss nicht immer positiv für den Kunden sein. Data-Mining ließe sich auch verwenden, um die Lebensdauer von Geschäftsbeziehungen zu prognostizieren. Für Kunden mit geringen Kaufinteressen würden Reklamationen dann längere Bearbeitungszeiten nach sich ziehen. Im konkreten Projekt von Mark-Andre Krogel ging es um die Vorhersage von Protein-Funktionen. Proteine sind Eiweißmoleküle, die fast alle Stoffwechselvorgänge im menschlichen Körper steuern. Sie sind daher die primären Ziele von Wirkstoffen zur Behandlung von Erkrankungen. Das erklärt das große Interesse der Pharmaindustrie. Experimentelle Untersuchungen, die Aufschluss über die Aufgaben der über 100 000 Eiweißmoleküle im menschlichen Körper geben können, sind mit einem hohen Zeitaufwand verbunden. Die Forscher möchten deshalb die Zeit verkürzen, indem sie das vorhandene Datenmaterial mit Hilfe von Data-Mining auswerten. Aus der im Humangenomprojekt bereits entschlüsselten Abfolge der Erbgut-Bausteine lässt sich per Datenbankanalyse die Aneinanderreihung bestimmter Aminosäuren zu einem Protein vorhersagen. Andere Datenbanken wiederum enthalten Informationen, welche Struktur ein Protein mit einer bestimmten vorgegebenen Funktion haben könnte. Aus bereits bekannten Strukturelementen versuchen die Genforscher dann, auf die mögliche Funktion eines bislang noch unbekannten Eiweißmoleküls zu schließen.- Fakten Verschmelzen - Bei diesem theoretischen Ansatz kommt es darauf an, die in Datenbanken enthaltenen Informationen so zu verknüpfen, dass die Ergebnisse mit hoher Wahrscheinlichkeit mit der Realität übereinstimmen. "Im Rahmen des Wettbewerbs erhielten wir Tabellen als Vorgabe, in denen Gene und Chromosomen nach bestimmten Gesichtspunkten klassifiziert waren", erläutert Krogel. Von einigen Genen war bekannt, welche Proteine sie produzieren und welche Aufgabe diese Eiweißmoleküle besitzen. 
Diese Beispiele dienten dem von Krogel entwickelten Programm dann als Hilfe, für andere Gene vorherzusagen, welche Funktionen die von ihnen erzeugten Proteine haben. "Die Genauigkeit der Vorhersage lag bei den gestellten Aufgaben bei über 90 Prozent", stellt Krogel fest. Allerdings könne man in der Praxis nicht davon ausgehen, dass alle Informationen aus verschiedenen Datenbanken in einem einheitlichen Format vorliegen. Es gebe verschiedene Abfragesprachen der Datenbanken, und die Bezeichnungen von Eiweißmolekülen mit gleicher Aufgabe seien oftmals uneinheitlich. Die Magdeburger Informatiker arbeiten deshalb in der DFG-Forschergruppe "Informationsfusion" an Methoden, um die verschiedenen Datenquellen besser zu erschließen."
    Type
    a
  7. Kulathuramaiyer, N.; Maurer, H.: Implications of emerging data mining (2009) 0.00
    0.0021624742 = product of:
      0.010812371 = sum of:
        0.010812371 = weight(_text_:a in 3144) [ClassicSimilarity], result of:
          0.010812371 = score(doc=3144,freq=14.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.20223314 = fieldWeight in 3144, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=3144)
      0.2 = coord(1/5)
    
    Abstract
    Data mining describes a technology that discovers non-trivial hidden patterns in large collections of data. Although this technology has a tremendous impact on our lives, the invaluable contributions of this invisible technology often go unnoticed. This paper discusses advances in data mining while focusing on the emerging data mining capability. Such data mining applications perform multidimensional mining on a wide variety of heterogeneous data sources, providing solutions to many unresolved problems. The paper also highlights the advantages and disadvantages arising from the ever-expanding scope of data mining. Data mining augments human intelligence by equipping us with a wealth of knowledge and by empowering us to perform our daily tasks better. As the mining scope and capacity increase, users and organizations become more willing to compromise privacy. The huge data stores of the 'master miners' allow them to gain deep insights into individual lifestyles and social and behavioural patterns. The capability to integrate and analyse data, combining business and financial trends with the ability to deterministically track market changes, will drastically affect our lives.
    Source
    Social Semantic Web: Web 2.0, was nun? Hrsg.: A. Blumauer u. T. Pellegrini
    Type
    a
  8. Maaten, L. van den: Learning a parametric embedding by preserving local structure (2009) 0.00
    0.0021322283 = product of:
      0.010661141 = sum of:
        0.010661141 = weight(_text_:a in 3883) [ClassicSimilarity], result of:
          0.010661141 = score(doc=3883,freq=10.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.19940455 = fieldWeight in 3883, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3883)
      0.2 = coord(1/5)
    
    Abstract
    The paper presents a new unsupervised dimensionality reduction technique, called parametric t-SNE, that learns a parametric mapping between the high-dimensional data space and the low-dimensional latent space. Parametric t-SNE learns the parametric mapping in such a way that the local structure of the data is preserved as well as possible in the latent space. We evaluate the performance of parametric t-SNE in experiments on three datasets, in which we compare it to the performance of two other unsupervised parametric dimensionality reduction techniques. The results of experiments illustrate the strong performance of parametric t-SNE, in particular, in learning settings in which the dimensionality of the latent space is relatively low.
    Type
    a
  9. Maaten, L. van den; Hinton, G.: Visualizing data using t-SNE (2008) 0.00
    0.0019264851 = product of:
      0.009632425 = sum of:
        0.009632425 = weight(_text_:a in 3888) [ClassicSimilarity], result of:
          0.009632425 = score(doc=3888,freq=16.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.18016359 = fieldWeight in 3888, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3888)
      0.2 = coord(1/5)
    
    Abstract
    We present a new technique called "t-SNE" that visualizes high-dimensional data by giving each datapoint a location in a two- or three-dimensional map. The technique is a variation of Stochastic Neighbor Embedding (Hinton and Roweis, 2002) that is much easier to optimize, and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map. t-SNE is better than existing techniques at creating a single map that reveals structure at many different scales. This is particularly important for high-dimensional data that lie on several different, but related, low-dimensional manifolds, such as images of objects from multiple classes seen from multiple viewpoints. For visualizing the structure of very large data sets, we show how t-SNE can use random walks on neighborhood graphs to allow the implicit structure of all of the data to influence the way in which a subset of the data is displayed. We illustrate the performance of t-SNE on a wide variety of data sets and compare it with many other non-parametric visualization techniques, including Sammon mapping, Isomap, and Locally Linear Embedding. The visualizations produced by t-SNE are significantly better than those produced by the other techniques on almost all of the data sets.
    Type
    a
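For readers who want to try the technique described above, t-SNE is available in scikit-learn as sklearn.manifold.TSNE; a brief usage sketch follows. The dataset and parameter choices are illustrative, not those of the paper's experiments.

```python
# Usage sketch: embed the 64-dimensional digits dataset into a
# two-dimensional t-SNE map and plot it.
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

digits = load_digits()
embedding = TSNE(
    n_components=2,      # two-dimensional map, as in the paper
    perplexity=30.0,     # balances local vs. global structure
    init="pca",
    random_state=0,
).fit_transform(digits.data)

plt.scatter(embedding[:, 0], embedding[:, 1], c=digits.target, s=5)
plt.title("t-SNE map of the digits dataset")
plt.show()
```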
  10. Loh, S.; Oliveira, J.P.M. de; Gastal, F.L.: Knowledge discovery in textual documentation : qualitative and quantitative analyses (2001) 0.00
    0.0018276243 = product of:
      0.009138121 = sum of:
        0.009138121 = weight(_text_:a in 4482) [ClassicSimilarity], result of:
          0.009138121 = score(doc=4482,freq=10.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.1709182 = fieldWeight in 4482, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=4482)
      0.2 = coord(1/5)
    
    Abstract
    This paper presents an approach for performing knowledge discovery in texts through qualitative and quantitative analyses of high-level textual characteristics. Instead of applying mining techniques to attribute values, terms, or keywords extracted from texts, the discovery process works over concepts identified in texts. Concepts represent real-world events and objects, and they help the user to understand the ideas, trends, thoughts, opinions and intentions present in texts. The approach combines a quasi-automatic categorisation task (for qualitative analysis) with a mining process (for quantitative analysis). The goal is to find new and useful knowledge inside a textual collection through the use of mining techniques applied over concepts (representing text content). In this paper, an application of the approach to the medical records of a psychiatric hospital is presented. The approach helps physicians to extract knowledge about patients and diseases. This knowledge may be used for epidemiological studies and for training professionals, and it may also be used to support physicians in diagnosing and evaluating diseases.
    Type
    a
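The two-stage design described above, quasi-automatic categorisation into concepts followed by mining over concept frequencies, can be sketched minimally as follows. The concept dictionary and the records are toy assumptions; the authors' system identifies concepts far more robustly than simple cue-word matching.

```python
# Toy sketch: map texts to concepts (qualitative step), then mine
# concept frequencies and co-occurrences (quantitative step).
from collections import Counter
from itertools import combinations

CONCEPTS = {                     # hypothetical concept -> cue terms
    "depression": {"sad", "sleepless", "apathy"},
    "anxiety": {"panic", "worry", "restless"},
}

def concepts_in(text):
    words = set(text.lower().split())
    return {c for c, cues in CONCEPTS.items() if words & cues}

records = [
    "patient reports panic attacks and worry",
    "apathy and sleepless nights, episodes of worry",
]

per_record = [concepts_in(r) for r in records]
frequencies = Counter(c for cs in per_record for c in cs)
co_occurrence = Counter(
    pair for cs in per_record for pair in combinations(sorted(cs), 2)
)
print(frequencies)      # how often each concept appears in the collection
print(co_occurrence)    # which concepts tend to appear together
```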
  11. Kruse, R.; Borgelt, C.: Suche im Datendschungel (2002) 0.00
    0.0016346768 = product of:
      0.008173384 = sum of:
        0.008173384 = weight(_text_:a in 1087) [ClassicSimilarity], result of:
          0.008173384 = score(doc=1087,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.15287387 = fieldWeight in 1087, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.09375 = fieldNorm(doc=1087)
      0.2 = coord(1/5)
    
    Type
    a
  12. Wrobel, S.: Lern- und Entdeckungsverfahren (2002) 0.00
    0.0016346768 = product of:
      0.008173384 = sum of:
        0.008173384 = weight(_text_:a in 1105) [ClassicSimilarity], result of:
          0.008173384 = score(doc=1105,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.15287387 = fieldWeight in 1105, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.09375 = fieldNorm(doc=1105)
      0.2 = coord(1/5)
    
    Type
    a
  13. Borgelt, C.; Kruse, R.: Unsicheres Wissen nutzen (2002) 0.00
    0.0013622305 = product of:
      0.0068111527 = sum of:
        0.0068111527 = weight(_text_:a in 1104) [ClassicSimilarity], result of:
          0.0068111527 = score(doc=1104,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.12739488 = fieldWeight in 1104, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.078125 = fieldNorm(doc=1104)
      0.2 = coord(1/5)
    
    Type
    a
  14. Baumgartner, R.: Methoden und Werkzeuge zur Webdatenextraktion (2006) 0.00
    0.0013485396 = product of:
      0.0067426977 = sum of:
        0.0067426977 = weight(_text_:a in 5808) [ClassicSimilarity], result of:
          0.0067426977 = score(doc=5808,freq=4.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.12611452 = fieldWeight in 5808, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5808)
      0.2 = coord(1/5)
    
    Source
    Semantic Web: Wege zur vernetzten Wissensgesellschaft. Hrsg.: T. Pellegrini, u. A. Blumauer
    Type
    a
  15. Sperlich, T.: Die Zukunft hat schon begonnen : Visualisierungssoftware in der praktischen Anwendung (2000) 0.00
    0.0010897844 = product of:
      0.005448922 = sum of:
        0.005448922 = weight(_text_:a in 5059) [ClassicSimilarity], result of:
          0.005448922 = score(doc=5059,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.10191591 = fieldWeight in 5059, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=5059)
      0.2 = coord(1/5)
    
    Type
    a
  16. Brückner, T.; Dambeck, H.: Sortierautomaten : Grundlagen der Textklassifizierung (2003) 0.00
    0.0010897844 = product of:
      0.005448922 = sum of:
        0.005448922 = weight(_text_:a in 2398) [ClassicSimilarity], result of:
          0.005448922 = score(doc=2398,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.10191591 = fieldWeight in 2398, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=2398)
      0.2 = coord(1/5)
    
    Type
    a
  17. Schwartz, D.: Graphische Datenanalyse für digitale Bibliotheken : Leistungs- und Funktionsumfang moderner Analyse- und Visualisierungsinstrumente (2006) 0.00
    9.5356145E-4 = product of:
      0.004767807 = sum of:
        0.004767807 = weight(_text_:a in 30) [ClassicSimilarity], result of:
          0.004767807 = score(doc=30,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.089176424 = fieldWeight in 30, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=30)
      0.2 = coord(1/5)
    
    Type
    a
  18. Heyer, G.; Läuter, M.; Quasthoff, U.; Wolff, C.: Texttechnologische Anwendungen am Beispiel Text Mining (2000) 0.00
    8.173384E-4 = product of:
      0.004086692 = sum of:
        0.004086692 = weight(_text_:a in 5565) [ClassicSimilarity], result of:
          0.004086692 = score(doc=5565,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.07643694 = fieldWeight in 5565, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=5565)
      0.2 = coord(1/5)
    
    Type
    a
  19. Klein, H.: Web Content Mining (2004) 0.00
    5.448922E-4 = product of:
      0.002724461 = sum of:
        0.002724461 = weight(_text_:a in 3154) [ClassicSimilarity], result of:
          0.002724461 = score(doc=3154,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.050957955 = fieldWeight in 3154, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=3154)
      0.2 = coord(1/5)
    
    Type
    a

Languages

  • e 43
  • d 16