Search (20 results, page 1 of 1)

  • theme_ss:"Data Mining"
  1. Zhang, Z.; Li, Q.; Zeng, D.; Gao, H.: Extracting evolutionary communities in community question answering (2014) 0.03
    0.028653976 = product of:
      0.1146159 = sum of:
        0.1146159 = weight(_text_:evolution in 1286) [ClassicSimilarity], result of:
          0.1146159 = score(doc=1286,freq=8.0), product of:
            0.19585751 = queryWeight, product of:
              5.29663 = idf(docFreq=601, maxDocs=44218)
              0.03697776 = queryNorm
            0.5852004 = fieldWeight in 1286, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              5.29663 = idf(docFreq=601, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1286)
      0.25 = coord(1/4)
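
    The indented tree under each hit is Lucene's ClassicSimilarity "explain" output. As a sanity check, the following minimal Python sketch (our illustration, not code from the search engine) reproduces the arithmetic of the first tree from the constants it displays.

    ```python
    import math

    def classic_similarity(freq, doc_freq, max_docs, query_norm, field_norm,
                           coords=(0.25,)):
        """Recompute a single-term hit score as the explain tree presents it."""
        idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # ClassicSimilarity idf
        tf = math.sqrt(freq)                             # tf = sqrt(termFreq)
        query_weight = idf * query_norm                  # query-side normalization
        field_weight = tf * idf * field_norm             # document-side weight
        score = query_weight * field_weight
        for c in coords:                                 # coord(1/4); the "22"
            score *= c                                   # entries add coord(1/3)
        return score

    # Entry 1: freq=8, docFreq=601, maxDocs=44218, queryNorm=0.03697776,
    # fieldNorm=0.0390625, coord(1/4)
    print(classic_similarity(8.0, 601, 44218, 0.03697776, 0.0390625))
    # ~0.028653976, matching the displayed score up to float rounding
    ```

    With coords=(1/3, 1/4) and the constants from the "22" explain trees further down (freq=2.0, docFreq=3622, fieldNorm=0.109375), the same function reproduces entry 5's 0.0058449726 up to the same rounding.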
    
    Abstract
    With the rapid growth of Web 2.0, community question answering (CQA) has become a prevalent information seeking channel, in which users form interactive communities by posting questions and providing answers. Communities may evolve over time because of changes in users' interests and activities and because new users join the network. To better understand user interactions in CQA communities, it is necessary to analyze the community structures and track community evolution over time. Existing work in CQA focuses on question searching or content quality detection, and the important problems of community extraction and evolutionary pattern detection have not been studied. In this article, we propose a probabilistic community model (PCM) to extract overlapping community structures and capture their evolution patterns in CQA. The empirical results show that our algorithm appears to improve community extraction quality. We show empirically, using the iPhone data set, that interesting community evolution patterns can be discovered, with each evolution pattern reflecting the variation of users' interests over time. Our analysis suggests that individual users could benefit from tracking the transition of products to gain comprehensive information. We also show that the communities provide a decision-making basis for business.
  2. Song, J.; Huang, Y.; Qi, X.; Li, Y.; Li, F.; Fu, K.; Huang, T.: Discovering hierarchical topic evolution in time-stamped documents (2016) 0.02
    0.024313705 = product of:
      0.09725482 = sum of:
        0.09725482 = weight(_text_:evolution in 2853) [ClassicSimilarity], result of:
          0.09725482 = score(doc=2853,freq=4.0), product of:
            0.19585751 = queryWeight, product of:
              5.29663 = idf(docFreq=601, maxDocs=44218)
              0.03697776 = queryNorm
            0.49655905 = fieldWeight in 2853, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.29663 = idf(docFreq=601, maxDocs=44218)
              0.046875 = fieldNorm(doc=2853)
      0.25 = coord(1/4)
    
    Abstract
    The objective of this paper is to propose a hierarchical topic evolution model (HTEM) that can organize time-varying topics in a hierarchy and discover their evolution at multiple timescales. In the proposed HTEM, topics near the root of the hierarchy are more abstract and evolve on longer timescales than those near the leaves. To achieve this goal, the distance-dependent Chinese restaurant process (ddCRP) is extended to a new nested process that simultaneously models the dependencies among data and the relationships between clusters. The HTEM is built on this new process for time-stamped documents, with the timestamp used to measure the dependencies among documents. Moreover, an efficient Gibbs sampler is developed for the HTEM. Our experimental results on two popular real-world data sets verify that the HTEM captures coherent topics and discovers their hierarchical evolution. It also outperforms the baseline model in terms of likelihood on held-out data.
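
    Since the abstract's key ingredient is the ddCRP, a toy Python sketch of the base (non-nested) distance-dependent Chinese restaurant process may help readers place the model; the exponential decay, parameters, and toy timestamps are our assumptions, and this is not the authors' nested extension or their Gibbs sampler.

    ```python
    import numpy as np

    def sample_ddcrp_links(timestamps, alpha=1.0, decay=1.0, seed=0):
        """Each document links to another with probability proportional to
        exp(-|t_i - t_j| / decay); with mass alpha it links to itself."""
        rng = np.random.default_rng(seed)
        t = np.asarray(timestamps, dtype=float)
        n = len(t)
        links = np.empty(n, dtype=int)
        for i in range(n):
            w = np.exp(-np.abs(t[i] - t) / decay)  # closer in time -> likelier link
            w[i] = alpha                           # self-link concentration
            links[i] = rng.choice(n, p=w / w.sum())
        return links

    def clusters(links):
        """Clusters (tables) are the connected components of the link graph."""
        parent = list(range(len(links)))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for i, j in enumerate(links):
            parent[find(i)] = find(int(j))
        return [find(i) for i in range(len(links))]

    print(clusters(sample_ddcrp_links([0, 1, 2, 10, 11, 12])))
    # documents close in time tend to land in the same cluster
    ```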
  3. Teich, E.; Degaetano-Ortlieb, S.; Fankhauser, P.; Kermes, H.; Lapshinova-Koltunski, E.: ¬The linguistic construal of disciplinarity : a data-mining approach using register features (2016) 0.02
    0.017192384 = product of:
      0.06876954 = sum of:
        0.06876954 = weight(_text_:evolution in 3015) [ClassicSimilarity], result of:
          0.06876954 = score(doc=3015,freq=2.0), product of:
            0.19585751 = queryWeight, product of:
              5.29663 = idf(docFreq=601, maxDocs=44218)
              0.03697776 = queryNorm
            0.35112026 = fieldWeight in 3015, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.29663 = idf(docFreq=601, maxDocs=44218)
              0.046875 = fieldNorm(doc=3015)
      0.25 = coord(1/4)
    
    Abstract
    We analyze the linguistic evolution of selected scientific disciplines over a 30-year time span (1970s to 2000s). Our focus is on four highly specialized disciplines at the boundaries of computer science that emerged during that time: computational linguistics, bioinformatics, digital construction, and microelectronics. Our analysis is driven by the question of whether these disciplines develop a distinctive language use, both individually and collectively, over the given time period. The data set is the English Scientific Text Corpus (SciTex), which includes texts from the 1970s/1980s and early 2000s. Our theoretical basis is register theory. In terms of methods, we combine corpus-based methods of feature extraction (various aggregated features [part-of-speech based], n-grams, lexico-grammatical patterns) and automatic text classification. The results of our research are directly relevant to the study of linguistic variation and languages for specific purposes (LSP) and have implications for various natural language processing (NLP) tasks, for example, authorship attribution, text mining, or training NLP tools.
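
    As an illustration of the kind of pipeline the abstract describes (n-gram feature extraction plus automatic text classification), here is a hedged scikit-learn sketch; the toy texts and labels are invented stand-ins, not the SciTex corpus, and the authors' actual feature set is richer (part-of-speech aggregates, lexico-grammatical patterns).

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    texts = [
        "the parser assigns a syntactic structure to the input sentence",
        "we train a statistical model on part-of-speech tagged corpora",
        "gene expression profiles were clustered across tissue samples",
        "sequence alignment reveals conserved protein domains",
    ]
    labels = ["comp_ling", "comp_ling", "bioinf", "bioinf"]

    clf = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),   # word unigrams and bigrams
        LogisticRegression(max_iter=1000),
    )
    clf.fit(texts, labels)
    print(clf.predict(["dependency parsing of tagged sentences"]))
    # expected: ['comp_ling'] on this toy data; a cleanly separable
    # classification would signal distinctive, register-specific language use
    ```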
  4. Schwartz, F.; Fang, Y.C.: Citation data analysis on hydrogeology (2007) 0.01
    0.01146159 = product of:
      0.04584636 = sum of:
        0.04584636 = weight(_text_:evolution in 433) [ClassicSimilarity], result of:
          0.04584636 = score(doc=433,freq=2.0), product of:
            0.19585751 = queryWeight, product of:
              5.29663 = idf(docFreq=601, maxDocs=44218)
              0.03697776 = queryNorm
            0.23408018 = fieldWeight in 433, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.29663 = idf(docFreq=601, maxDocs=44218)
              0.03125 = fieldNorm(doc=433)
      0.25 = coord(1/4)
    
    Abstract
    This article explores the status of research in hydrogeology using data mining techniques. First we explain what citation analysis is and review some of the previous work on citation analysis. The main idea in this article is to address some common issues about citation numbers and the use of these data. To validate the use of citation numbers, we compare the citation patterns for Water Resources Research papers in the 1980s with those in the 1990s. The citation growth for highly cited authors from the 1980s is used to examine whether it is possible to predict the citation patterns for highly cited authors in the 1990s. If the citation data prove to be steady and stable, these numbers can then be used to explore the evolution of science in hydrogeology. The famous quotation, "If you are not the lead dog, the scenery never changes," attributed to Lee Iacocca, points to the importance of an entrepreneurial spirit in all forms of endeavor. In the case of hydrogeological research, impact analysis makes it clear how important it is to be a pioneer. Statistical correlation coefficients are used to retrieve papers among a collection of 2,847 papers before and after 1991 sharing the same topics with 273 papers in 1991 in Water Resources Research. The numbers of papers before and after 1991 are then plotted against various levels of citations for papers in 1991 to compare the distributions of the paper population before and after that year. The similarity metrics based on word counts ensure that the "before" papers are like ancestors and the "after" papers are like descendants in the same type of research. This exercise gives us an idea of how many papers are populated before and after 1991 (1991 is chosen because the numbers of papers before and after that year are balanced). In addition, the impact of papers is measured in terms of citations, presented as a "percentile," a relative measure based on rankings in one year, in order to minimize the effect of time.
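
    The retrieval step the abstract sketches, relating papers by statistical correlation of their word counts, can be illustrated in a few lines of Python; the vocabulary and counts below are invented for illustration, not taken from the study's data.

    ```python
    import numpy as np

    def pearson_similarity(a, b):
        """Pearson correlation between two word-count vectors."""
        a, b = np.asarray(a, float), np.asarray(b, float)
        a -= a.mean()
        b -= b.mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b / denom) if denom else 0.0

    target_1991 = [4, 0, 2, 7, 1]   # word counts for a 1991 paper (toy)
    candidate   = [3, 1, 2, 6, 0]   # a paper published before/after 1991 (toy)
    print(pearson_similarity(target_1991, candidate))
    # ~0.95: a high correlation marks the candidate as topical kin
    # ("ancestor" or "descendant") of the 1991 paper
    ```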
  5. Chowdhury, G.G.: Template mining for information extraction from digital documents (1999) 0.01
    0.0058449726 = product of:
      0.02337989 = sum of:
        0.02337989 = product of:
          0.07013967 = sum of:
            0.07013967 = weight(_text_:22 in 4577) [ClassicSimilarity], result of:
              0.07013967 = score(doc=4577,freq=2.0), product of:
                0.12948982 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03697776 = queryNorm
                0.5416616 = fieldWeight in 4577, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4577)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Date
    2. 4.2000 18:01:22
  6. KDD : techniques and applications (1998) 0.01
    0.005009976 = product of:
      0.020039905 = sum of:
        0.020039905 = product of:
          0.060119715 = sum of:
            0.060119715 = weight(_text_:22 in 6783) [ClassicSimilarity], result of:
              0.060119715 = score(doc=6783,freq=2.0), product of:
                0.12948982 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03697776 = queryNorm
                0.46428138 = fieldWeight in 6783, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6783)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Footnote
    A special issue of selected papers from the Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD'97), held in Singapore, 22-23 Feb 1997
  7. Matson, L.D.; Bonski, D.J.: Do digital libraries need librarians? (1997) 0.00
    0.0033399842 = product of:
      0.013359937 = sum of:
        0.013359937 = product of:
          0.04007981 = sum of:
            0.04007981 = weight(_text_:22 in 1737) [ClassicSimilarity], result of:
              0.04007981 = score(doc=1737,freq=2.0), product of:
                0.12948982 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03697776 = queryNorm
                0.30952093 = fieldWeight in 1737, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1737)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Date
    22.11.1998 18:57:22
  8. Lusti, M.: Data Warehousing and Data Mining : Eine Einführung in entscheidungsunterstützende Systeme (1999) 0.00
    0.0033399842 = product of:
      0.013359937 = sum of:
        0.013359937 = product of:
          0.04007981 = sum of:
            0.04007981 = weight(_text_:22 in 4261) [ClassicSimilarity], result of:
              0.04007981 = score(doc=4261,freq=2.0), product of:
                0.12948982 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03697776 = queryNorm
                0.30952093 = fieldWeight in 4261, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4261)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Date
    17. 7.2002 19:22:06
  9. Amir, A.; Feldman, R.; Kashi, R.: ¬A new and versatile method for association generation (1997) 0.00
    0.0033399842 = product of:
      0.013359937 = sum of:
        0.013359937 = product of:
          0.04007981 = sum of:
            0.04007981 = weight(_text_:22 in 1270) [ClassicSimilarity], result of:
              0.04007981 = score(doc=1270,freq=2.0), product of:
                0.12948982 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03697776 = queryNorm
                0.30952093 = fieldWeight in 1270, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1270)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Source
    Information systems. 22(1997) nos.5/6, S.333-347
  10. Hofstede, A.H.M. ter; Proper, H.A.; Van der Weide, T.P.: Exploiting fact verbalisation in conceptual information modelling (1997) 0.00
    0.0029224863 = product of:
      0.011689945 = sum of:
        0.011689945 = product of:
          0.035069834 = sum of:
            0.035069834 = weight(_text_:22 in 2908) [ClassicSimilarity], result of:
              0.035069834 = score(doc=2908,freq=2.0), product of:
                0.12948982 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03697776 = queryNorm
                0.2708308 = fieldWeight in 2908, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2908)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Source
    Information systems. 22(1997) nos.5/6, S.349-385
  11. Lackes, R.; Tillmanns, C.: Data Mining für die Unternehmenspraxis : Entscheidungshilfen und Fallstudien mit führenden Softwarelösungen (2006) 0.00
    0.002504988 = product of:
      0.010019952 = sum of:
        0.010019952 = product of:
          0.030059857 = sum of:
            0.030059857 = weight(_text_:22 in 1383) [ClassicSimilarity], result of:
              0.030059857 = score(doc=1383,freq=2.0), product of:
                0.12948982 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03697776 = queryNorm
                0.23214069 = fieldWeight in 1383, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1383)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Date
    22. 3.2008 14:46:06
  12. Hallonsten, O.; Holmberg, D.: Analyzing structural stratification in the Swedish higher education system : data contextualization with policy-history analysis (2013) 0.00
    0.0020874902 = product of:
      0.008349961 = sum of:
        0.008349961 = product of:
          0.025049882 = sum of:
            0.025049882 = weight(_text_:22 in 668) [ClassicSimilarity], result of:
              0.025049882 = score(doc=668,freq=2.0), product of:
                0.12948982 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03697776 = queryNorm
                0.19345059 = fieldWeight in 668, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=668)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Date
    22. 3.2013 19:43:01
  13. Vaughan, L.; Chen, Y.: Data mining from web search queries : a comparison of Google trends and Baidu index (2015) 0.00
    0.0020874902 = product of:
      0.008349961 = sum of:
        0.008349961 = product of:
          0.025049882 = sum of:
            0.025049882 = weight(_text_:22 in 1605) [ClassicSimilarity], result of:
              0.025049882 = score(doc=1605,freq=2.0), product of:
                0.12948982 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03697776 = queryNorm
                0.19345059 = fieldWeight in 1605, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1605)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.1, S.13-22
  14. Fonseca, F.; Marcinkowski, M.; Davis, C.: Cyber-human systems of thought and understanding (2019) 0.00
    0.0020874902 = product of:
      0.008349961 = sum of:
        0.008349961 = product of:
          0.025049882 = sum of:
            0.025049882 = weight(_text_:22 in 5011) [ClassicSimilarity], result of:
              0.025049882 = score(doc=5011,freq=2.0), product of:
                0.12948982 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03697776 = queryNorm
                0.19345059 = fieldWeight in 5011, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5011)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Date
    7. 3.2019 16:32:22
  15. Peters, G.; Gaese, V.: ¬Das DocCat-System in der Textdokumentation von G+J (2003) 0.00
    0.0016699921 = product of:
      0.0066799684 = sum of:
        0.0066799684 = product of:
          0.020039905 = sum of:
            0.020039905 = weight(_text_:22 in 1507) [ClassicSimilarity], result of:
              0.020039905 = score(doc=1507,freq=2.0), product of:
                0.12948982 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03697776 = queryNorm
                0.15476047 = fieldWeight in 1507, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1507)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Date
    22. 4.2003 11:45:36
  16. Hölzig, C.: Google spürt Grippewellen auf : Die neue Anwendung ist bisher auf die USA beschränkt (2008) 0.00
    0.0016699921 = product of:
      0.0066799684 = sum of:
        0.0066799684 = product of:
          0.020039905 = sum of:
            0.020039905 = weight(_text_:22 in 2403) [ClassicSimilarity], result of:
              0.020039905 = score(doc=2403,freq=2.0), product of:
                0.12948982 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03697776 = queryNorm
                0.15476047 = fieldWeight in 2403, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2403)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Date
    3. 5.1997 8:44:22
  17. Jäger, L.: Von Big Data zu Big Brother (2018) 0.00
    0.0016699921 = product of:
      0.0066799684 = sum of:
        0.0066799684 = product of:
          0.020039905 = sum of:
            0.020039905 = weight(_text_:22 in 5234) [ClassicSimilarity], result of:
              0.020039905 = score(doc=5234,freq=2.0), product of:
                0.12948982 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03697776 = queryNorm
                0.15476047 = fieldWeight in 5234, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5234)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Date
    22. 1.2018 11:33:49
  18. Lischka, K.: Spurensuche im Datenwust : Data-Mining-Software fahndet nach kriminellen Mitarbeitern, guten Kunden - und bald vielleicht auch nach Terroristen (2002) 0.00
    0.001252494 = product of:
      0.005009976 = sum of:
        0.005009976 = product of:
          0.015029929 = sum of:
            0.015029929 = weight(_text_:22 in 1178) [ClassicSimilarity], result of:
              0.015029929 = score(doc=1178,freq=2.0), product of:
                0.12948982 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03697776 = queryNorm
                0.116070345 = fieldWeight in 1178, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1178)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Content
    "Ob man als Terrorist einen Anschlag gegen die Vereinigten Staaten plant, als Kassierer Scheine aus der Kasse unterschlägt oder für bestimmte Produkte besonders gerne Geld ausgibt - einen Unterschied macht Data-Mining-Software da nicht. Solche Programme analysieren riesige Daten- mengen und fällen statistische Urteile. Mit diesen Methoden wollen nun die For- scher des "Information Awaren in den Vereinigten Staaten Spuren von Terroristen in den Datenbanken von Behörden und privaten Unternehmen wie Kreditkartenfirmen finden. 200 Millionen Dollar umfasst der Jahresetat für die verschiedenen Forschungsprojekte. Dass solche Software in der Praxis funktioniert, zeigen die steigenden Umsätze der Anbieter so genannter Customer-Relationship-Management-Software. Im vergangenen Jahr ist das Potenzial für analytische CRM-Anwendungen laut dem Marktforschungsinstitut IDC weltweit um 22 Prozent gewachsen, bis zum Jahr 2006 soll es in Deutschland mit einem jährlichen Plus von 14,1 Prozent so weitergehen. Und das trotz schwacher Konjunktur - oder gerade deswegen. Denn ähnlich wie Data-Mining der USRegierung helfen soll, Terroristen zu finden, entscheiden CRM-Programme heute, welche Kunden für eine Firma profitabel sind. Und welche es künftig sein werden, wie Manuela Schnaubelt, Sprecherin des CRM-Anbieters SAP, beschreibt: "Die Kundenbewertung ist ein zentraler Bestandteil des analytischen CRM. Sie ermöglicht es Unternehmen, sich auf die für sie wichtigen und richtigen Kunden zu fokussieren. Darüber hinaus können Firmen mit speziellen Scoring- Verfahren ermitteln, welche Kunden langfristig in welchem Maße zum Unternehmenserfolg beitragen." Die Folgen der Bewertungen sind für die Betroffenen nicht immer positiv: Attraktive Kunden profitieren von individuellen Sonderangeboten und besonderer Zuwendung. Andere hängen vielleicht so lauge in der Warteschleife des Telefonservice, bis die profitableren Kunden abgearbeitet sind. So könnte eine praktische Umsetzung dessen aussehen, was SAP-Spreche-rin Schnaubelt abstrakt beschreibt: "In vielen Unternehmen wird Kundenbewertung mit der klassischen ABC-Analyse durchgeführt, bei der Kunden anhand von Daten wie dem Umsatz kategorisiert werden. A-Kunden als besonders wichtige Kunden werden anders betreut als C-Kunden." Noch näher am geplanten Einsatz von Data-Mining zur Terroristenjagd ist eine Anwendung, die heute viele Firmen erfolgreich nutzen: Sie spüren betrügende Mitarbeiter auf. Werner Sülzer vom großen CRM-Anbieter NCR Teradata beschreibt die Möglichkeiten so: "Heute hinterlässt praktisch jeder Täter - ob Mitarbeiter, Kunde oder Lieferant - Datenspuren bei seinen wirtschaftskriminellen Handlungen. Es muss vorrangig darum gehen, einzelne Spuren zu Handlungsmustern und Täterprofilen zu verdichten. Das gelingt mittels zentraler Datenlager und hoch entwickelter Such- und Analyseinstrumente." Von konkreten Erfolgen sprich: Entlas-sungen krimineller Mitarbeiter-nach Einsatz solcher Programme erzählen Unternehmen nicht gerne. Matthias Wilke von der "Beratungsstelle für Technologiefolgen und Qualifizierung" (BTQ) der Gewerkschaft Verdi weiß von einem Fall 'aus der Schweiz. Dort setzt die Handelskette "Pick Pay" das Programm "Lord Lose Prevention" ein. Zwei Monate nach Einfüh-rung seien Unterschlagungen im Wert von etwa 200 000 Franken ermittelt worden. Das kostete mehr als 50 verdächtige Kassiererinnen und Kassierer den Job.
  19. Medien-Informationsmanagement : Archivarische, dokumentarische, betriebswirtschaftliche, rechtliche und Berufsbild-Aspekte ; [Frühjahrstagung der Fachgruppe 7 im Jahr 2000 in Weimar und Folgetagung 2001 in Köln] (2003) 0.00
    0.001252494 = product of:
      0.005009976 = sum of:
        0.005009976 = product of:
          0.015029929 = sum of:
            0.015029929 = weight(_text_:22 in 1833) [ClassicSimilarity], result of:
              0.015029929 = score(doc=1833,freq=2.0), product of:
                0.12948982 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03697776 = queryNorm
                0.116070345 = fieldWeight in 1833, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1833)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Date
    11. 5.2008 19:49:22
  20. Information visualization in data mining and knowledge discovery (2002) 0.00
    8.3499606E-4 = product of:
      0.0033399842 = sum of:
        0.0033399842 = product of:
          0.010019952 = sum of:
            0.010019952 = weight(_text_:22 in 1789) [ClassicSimilarity], result of:
              0.010019952 = score(doc=1789,freq=2.0), product of:
                0.12948982 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03697776 = queryNorm
                0.07738023 = fieldWeight in 1789, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1789)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Date
    23. 3.2008 19:10:22

Languages

  • e 13
  • d 7

Types