Search (72 results, page 1 of 4)

  • theme_ss:"Data Mining"
  1. Medien-Informationsmanagement : Archivarische, dokumentarische, betriebswirtschaftliche, rechtliche und Berufsbild-Aspekte ; [Frühjahrstagung der Fachgruppe 7 im Jahr 2000 in Weimar und Folgetagung 2001 in Köln] (2003) 0.02
    0.022207038 = product of:
      0.059218768 = sum of:
        0.0097262105 = weight(_text_:retrieval in 1833) [ClassicSimilarity], result of:
          0.0097262105 = score(doc=1833,freq=2.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.10026272 = fieldWeight in 1833, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1833)
        0.005272118 = product of:
          0.010544236 = sum of:
            0.010544236 = weight(_text_:system in 1833) [ClassicSimilarity], result of:
              0.010544236 = score(doc=1833,freq=2.0), product of:
                0.10100432 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.032069415 = queryNorm
                0.104393914 = fieldWeight in 1833, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1833)
          0.5 = coord(1/2)
        0.04422044 = sum of:
          0.031185552 = weight(_text_:etc in 1833) [ClassicSimilarity], result of:
            0.031185552 = score(doc=1833,freq=2.0), product of:
              0.17370372 = queryWeight, product of:
                5.4164915 = idf(docFreq=533, maxDocs=44218)
                0.032069415 = queryNorm
              0.17953302 = fieldWeight in 1833, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.4164915 = idf(docFreq=533, maxDocs=44218)
                0.0234375 = fieldNorm(doc=1833)
          0.013034889 = weight(_text_:22 in 1833) [ClassicSimilarity], result of:
            0.013034889 = score(doc=1833,freq=2.0), product of:
              0.112301625 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.032069415 = queryNorm
              0.116070345 = fieldWeight in 1833, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0234375 = fieldNorm(doc=1833)
      0.375 = coord(3/8)
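
     The tree above is Lucene's ClassicSimilarity "explain" output: each matching query term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf(freq) × idf × fieldNorm, and the per-term contributions are summed and scaled by the coordination factor coord(matching clauses/total clauses). A minimal sketch that re-derives the 0.02 shown for this entry from the listed values (assuming exactly these standard formulas; the helper function below is illustrative, not part of the catalogue software):

       from math import sqrt

       def term_score(freq, idf, field_norm, query_norm):
           # ClassicSimilarity per-term contribution, as shown in the explain tree:
           # queryWeight = idf * queryNorm, fieldWeight = sqrt(freq) * idf * fieldNorm
           return (idf * query_norm) * (sqrt(freq) * idf * field_norm)

       QUERY_NORM = 0.032069415   # queryNorm shown throughout this result list
       FIELD_NORM = 0.0234375     # fieldNorm(doc=1833)

       retrieval = term_score(2.0, 3.024915, FIELD_NORM, QUERY_NORM)
       system    = 0.5 * term_score(2.0, 3.1495528, FIELD_NORM, QUERY_NORM)  # inner coord(1/2)
       etc_and_22 = (term_score(2.0, 5.4164915, FIELD_NORM, QUERY_NORM)
                     + term_score(2.0, 3.5018296, FIELD_NORM, QUERY_NORM))

       print((retrieval + system + etc_and_22) * 0.375)  # coord(3/8) -> ~0.0222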
    
    Abstract
     When, in the 1970s, the designation "information manager" came to be propagated more and more often for people who had until then gone by the title of documentalists, this was occasionally smiled at in the established circles of archivists and librarians and taken as the sign of an identity crisis, or at least of an insecurity, in the profession so labelled. For the professional community of media archivists/media documentalists, who have been organized since 1960 in Fachgruppe 7 of the Verein, later Verband deutscher Archivare (VdA), this positioning was, however, an everyday matter of course early on, in the face of new substantive challenges (the information flood) and new technologies (electronic data processing). "Stop, it won't work without us!" ran the headline of an article in the association's journal "Info 7" that dealt with the installation of ever more powerful networks and ever faster data highways. Information, information society: at the time these terms were understood almost exclusively in a technical sense. The informatized, not the informed, society stood in the foreground - which in turn called critics onto the scene, from Joseph Weizenbaum in the USA to the information ecologists in Bremen. In the national, sometimes merely regional, projects and pilot schemes with data highways - including the early Btx - it had never become quite clear which contents, in what form, were to be sent racing through these networks and roads, and who was actually supposed to select, portion, position - in short, manage - these contents. With the World Wide Web at the latest, these projects became obsolete, at least as far as hardware and software were concerned. What has remained is the topic of contents (in today's parlance: content). And - ever more pressing, and not only in a technical sense - the topic of information management. "MedienInformationsManagement" was the title of Fachgruppe 7's spring conference in Weimar in 2000, and the follow-up conference in Köln in 2001, which set a documentary pragmatism against multimedia production, likewise dealt with the business field of content and with content management systems. The lectures and discussion contributions from these two conferences, collected in this sixth volume of the series Beiträge zur Mediendokumentation, examine the title topic from a wide range of perspectives: archival, documentary, commercial, professional and legal. What becomes clear is that the job title media archivist/media documentalist stands quite precisely for everything that is done today with so-called old and new media in an organizational sense, that is, in the sense of ordering and mediating. This applies in particular to the Internet and the intranets born of it. Both need the same ordering hand that has been trained on the old media - books, newspapers, sound recordings, film, etc. - for they live to a large extent from them. That the Internet is nevertheless a medium sui generis and confronts the old information professions with entirely new challenges - this, too, runs through the contributions from Weimar and Köln.
    Content
     Contains, among others, the following contributions (documentary aspects): Günter Peters/Volker Gaese: Das DocCat-System in der Textdokumentation von G+J (Weimar 2000); Thomas Gerick: Finden statt suchen. Knowledge Retrieval in Wissensbanken. Mit organisiertem Wissen zu mehr Erfolg (Weimar 2000); Winfried Gödert: Aufbereitung und Rezeption von Information (Weimar 2000); Elisabeth Damen: Klassifikation als Ordnungssystem im elektronischen Pressearchiv (Köln 2001); Clemens Schlenkrich: Aspekte neuer Regelwerksarbeit - Multimediales Datenmodell für ARD und ZDF (Köln 2001); Josef Wandeler: Comprenez-vous only Bahnhof? - Mehrsprachigkeit in der Mediendokumentation (Köln 2001)
    Date
    11. 5.2008 19:49:22
  2. Hofstede, A.H.M. ter; Proper, H.A.; Van der Weide, T.P.: Exploiting fact verbalisation in conceptual information modelling (1997) 0.02
    0.017625352 = product of:
      0.07050141 = sum of:
        0.05529404 = sum of:
          0.024603218 = weight(_text_:system in 2908) [ClassicSimilarity], result of:
            0.024603218 = score(doc=2908,freq=2.0), product of:
              0.10100432 = queryWeight, product of:
                3.1495528 = idf(docFreq=5152, maxDocs=44218)
                0.032069415 = queryNorm
              0.2435858 = fieldWeight in 2908, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.1495528 = idf(docFreq=5152, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2908)
          0.030690823 = weight(_text_:29 in 2908) [ClassicSimilarity], result of:
            0.030690823 = score(doc=2908,freq=2.0), product of:
              0.11281017 = queryWeight, product of:
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.032069415 = queryNorm
              0.27205724 = fieldWeight in 2908, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2908)
        0.01520737 = product of:
          0.03041474 = sum of:
            0.03041474 = weight(_text_:22 in 2908) [ClassicSimilarity], result of:
              0.03041474 = score(doc=2908,freq=2.0), product of:
                0.112301625 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.032069415 = queryNorm
                0.2708308 = fieldWeight in 2908, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2908)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
     Focuses on the information modelling side of conceptual modelling. Deals with the exploitation of fact verbalisations after finishing the actual information system. Verbalisations are used as input for the design of the so-called information model. Exploits these verbalisations in four directions: considers their use for a conceptual query language, the verbalisation of instances, the description of the contents of a database, and the verbalisation of queries in a computer-supported query environment. Provides an example session with an envisioned tool for end-user query formulation that exploits the verbalisations.
    Date
    5. 4.1996 15:29:15
    Source
    Information systems. 22(1997) nos.5/6, S.349-385
  3. Knowledge management in fuzzy databases (2000) 0.02
    0.0171195 = product of:
      0.068478 = sum of:
        0.032094855 = weight(_text_:retrieval in 4260) [ClassicSimilarity], result of:
          0.032094855 = score(doc=4260,freq=4.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.33085006 = fieldWeight in 4260, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4260)
        0.036383145 = product of:
          0.07276629 = sum of:
            0.07276629 = weight(_text_:etc in 4260) [ClassicSimilarity], result of:
              0.07276629 = score(doc=4260,freq=2.0), product of:
                0.17370372 = queryWeight, product of:
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.032069415 = queryNorm
                0.41891038 = fieldWeight in 4260, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4260)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
     The volume presents recent developments in the introduction of fuzzy, probabilistic and rough elements into basic components of fuzzy databases, and their use (notably in querying and information retrieval), from the point of view of data mining and knowledge discovery. The main novel aspect of the volume is that issues related to the use of fuzzy elements in databases, database querying, information retrieval, etc. are presented and discussed from the point of view of, and for the purpose of, data mining and knowledge discovery, which have been 'hot topics' in recent years.
  4. Sánchez, D.; Chamorro-Martínez, J.; Vila, M.A.: Modelling subjectivity in visual perception of orientation for image retrieval (2003) 0.01
    0.01236227 = product of:
      0.04944908 = sum of:
        0.038904842 = weight(_text_:retrieval in 1067) [ClassicSimilarity], result of:
          0.038904842 = score(doc=1067,freq=8.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.40105087 = fieldWeight in 1067, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=1067)
        0.010544236 = product of:
          0.021088472 = sum of:
            0.021088472 = weight(_text_:system in 1067) [ClassicSimilarity], result of:
              0.021088472 = score(doc=1067,freq=2.0), product of:
                0.10100432 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.032069415 = queryNorm
                0.20878783 = fieldWeight in 1067, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1067)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
     In this paper we combine computer vision and data mining techniques to model high-level concepts for image retrieval, on the basis of basic perceptual features of the human visual system. High-level concepts related to these features are learned and represented by means of a set of fuzzy association rules. The concepts so acquired can be used for image retrieval, with the advantage that it is not necessary to provide an image as a query. Instead, a query is formulated by using the labels that identify the learned concepts as search terms, and the retrieval process calculates the relevance of an image to the query by an inference mechanism. An additional feature of our methodology is that it can capture the user's subjectivity. For that purpose, fuzzy set theory is employed to measure the user's assessments about the fulfillment of a concept by an image.
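
     As a toy illustration of the fuzzy-set machinery mentioned in this abstract (not the authors' rule-mining method), the relevance of an image to a label query can be computed from membership degrees of learned concepts, aggregated with a t-norm such as the minimum; all concept labels and degrees below are invented:

       def fuzzy_and(*degrees):
           # Minimum t-norm: the query is satisfied only as far as its weakest label.
           return min(degrees)

       # Hypothetical membership degrees of learned high-level concepts per image.
       image_concepts = {
           "img_001": {"horizontal_orientation": 0.8, "strong_edges": 0.6},
           "img_002": {"horizontal_orientation": 0.2, "strong_edges": 0.9},
       }

       def relevance(image_id, query_labels):
           degrees = [image_concepts[image_id].get(label, 0.0) for label in query_labels]
           return fuzzy_and(*degrees) if degrees else 0.0

       print(relevance("img_001", ["horizontal_orientation", "strong_edges"]))  # 0.6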
  5. Amir, A.; Feldman, R.; Kashi, R.: ¬A new and versatile method for association generation (1997) 0.01
    0.008729367 = product of:
      0.034917466 = sum of:
        0.017537614 = product of:
          0.03507523 = sum of:
            0.03507523 = weight(_text_:29 in 1270) [ClassicSimilarity], result of:
              0.03507523 = score(doc=1270,freq=2.0), product of:
                0.11281017 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.032069415 = queryNorm
                0.31092256 = fieldWeight in 1270, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1270)
          0.5 = coord(1/2)
        0.017379852 = product of:
          0.034759704 = sum of:
            0.034759704 = weight(_text_:22 in 1270) [ClassicSimilarity], result of:
              0.034759704 = score(doc=1270,freq=2.0), product of:
                0.112301625 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.032069415 = queryNorm
                0.30952093 = fieldWeight in 1270, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1270)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Date
    5. 4.1996 15:29:15
    Source
    Information systems. 22(1997) nos.5/6, S.333-347
  6. Matson, L.D.; Bonski, D.J.: Do digital libraries need librarians? (1997) 0.01
    0.007859709 = product of:
      0.031438835 = sum of:
        0.014058981 = product of:
          0.028117962 = sum of:
            0.028117962 = weight(_text_:system in 1737) [ClassicSimilarity], result of:
              0.028117962 = score(doc=1737,freq=2.0), product of:
                0.10100432 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.032069415 = queryNorm
                0.27838376 = fieldWeight in 1737, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1737)
          0.5 = coord(1/2)
        0.017379852 = product of:
          0.034759704 = sum of:
            0.034759704 = weight(_text_:22 in 1737) [ClassicSimilarity], result of:
              0.034759704 = score(doc=1737,freq=2.0), product of:
                0.112301625 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.032069415 = queryNorm
                0.30952093 = fieldWeight in 1737, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1737)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
     Defines digital libraries and discusses the effects of new technology on librarians. Examines the different viewpoints of librarians and information technologists on digital libraries. Describes the development of a digital library at the National Drug Intelligence Center, USA, which was carried out in collaboration with information technology experts. The system is based on Web-enabled search technology to find information, data visualization and data mining to visualize it, and the use of SGML as an information standard to store it.
    Date
    22.11.1998 18:57:22
  7. Fenstermacher, K.D.; Ginsburg, M.: Client-side monitoring for Web mining (2003) 0.01
    0.007499164 = product of:
      0.029996656 = sum of:
        0.019452421 = weight(_text_:retrieval in 1611) [ClassicSimilarity], result of:
          0.019452421 = score(doc=1611,freq=2.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.20052543 = fieldWeight in 1611, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=1611)
        0.010544236 = product of:
          0.021088472 = sum of:
            0.021088472 = weight(_text_:system in 1611) [ClassicSimilarity], result of:
              0.021088472 = score(doc=1611,freq=2.0), product of:
                0.10100432 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.032069415 = queryNorm
                0.20878783 = fieldWeight in 1611, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1611)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    "Garbage in, garbage out" is a well-known phrase in computer analysis, and one that comes to mind when mining Web data to draw conclusions about Web users. The challenge is that data analysts wish to infer patterns of client-side behavior from server-side data. However, because only a fraction of the user's actions ever reaches the Web server, analysts must rely an incomplete data. In this paper, we propose a client-side monitoring system that is unobtrusive and supports flexible data collection. Moreover, the proposed framework encompasses client-side applications beyond the Web browser. Expanding monitoring beyond the browser to incorporate standard office productivity tools enables analysts to derive a much richer and more accurate picture of user behavior an the Web.
    Footnote
     Part of a special issue: "Web retrieval and mining: A machine learning perspective"
  8. Liu, Y.; Huang, X.; An, A.: Personalized recommendation with adaptive mixture of markov models (2007) 0.01
    0.0071592135 = product of:
      0.028636854 = sum of:
        0.016210351 = weight(_text_:retrieval in 606) [ClassicSimilarity], result of:
          0.016210351 = score(doc=606,freq=2.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.16710453 = fieldWeight in 606, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=606)
        0.012426502 = product of:
          0.024853004 = sum of:
            0.024853004 = weight(_text_:system in 606) [ClassicSimilarity], result of:
              0.024853004 = score(doc=606,freq=4.0), product of:
                0.10100432 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.032069415 = queryNorm
                0.24605882 = fieldWeight in 606, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=606)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
     With more and more information available on the Internet, the task of making personalized recommendations to assist the user's navigation has become increasingly important. Considering that there might be millions of users with different backgrounds accessing a Web site every day, it is infeasible to build a separate recommendation system for each user. To address this problem, clustering techniques can first be employed to discover user groups. Then, user navigation patterns for each group can be discovered, to allow the adaptation of a Web site to the interests of each individual group. In this paper, we propose to model user access sequences as stochastic processes, and a mixture of Markov models based approach is taken to cluster users and to capture the sequential relationships inherent in user access histories. Several important issues that arise in constructing the Markov models are also addressed. The first issue lies in the complexity of the mixture of Markov models. To improve the efficiency of building/maintaining the mixture of Markov models, we develop a lightweight adaptive algorithm to update the model parameters without recomputing them from scratch. The second issue concerns the proper selection of training data for building the mixture of Markov models. We investigate two different training data selection strategies and perform extensive experiments to compare their effectiveness on a real dataset that is generated by a Web-based knowledge management system, Livelink.
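
     To make the modelling step concrete, the sketch below fits a plain first-order Markov model to hypothetical click sequences by maximum likelihood; the paper's adaptive mixture of Markov models, its clustering of users, and its incremental parameter updates are not reproduced here:

       from collections import defaultdict

       def fit_markov(sequences):
           # Count page-to-page transitions and normalise them into probabilities.
           counts = defaultdict(lambda: defaultdict(int))
           for seq in sequences:
               for current_page, next_page in zip(seq, seq[1:]):
                   counts[current_page][next_page] += 1
           return {page: {nxt: c / sum(nexts.values()) for nxt, c in nexts.items()}
                   for page, nexts in counts.items()}

       # Hypothetical user sessions (page identifiers are invented for illustration).
       sessions = [["home", "search", "doc_12"],
                   ["home", "search", "doc_40", "doc_12"]]
       print(fit_markov(sessions)["search"])  # {'doc_12': 0.5, 'doc_40': 0.5}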
    Footnote
     Contribution to a special topic section "Mining Web resources for enhancing information retrieval"
  9. Hallonsten, O.; Holmberg, D.: Analyzing structural stratification in the Swedish higher education system : data contextualization with policy-history analysis (2013) 0.01
    0.007109033 = product of:
      0.028436132 = sum of:
        0.017573725 = product of:
          0.03514745 = sum of:
            0.03514745 = weight(_text_:system in 668) [ClassicSimilarity], result of:
              0.03514745 = score(doc=668,freq=8.0), product of:
                0.10100432 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.032069415 = queryNorm
                0.3479797 = fieldWeight in 668, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=668)
          0.5 = coord(1/2)
        0.010862407 = product of:
          0.021724815 = sum of:
            0.021724815 = weight(_text_:22 in 668) [ClassicSimilarity], result of:
              0.021724815 = score(doc=668,freq=2.0), product of:
                0.112301625 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.032069415 = queryNorm
                0.19345059 = fieldWeight in 668, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=668)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
     The 20th-century massification of higher education and research in academia is said to have produced structurally stratified higher education systems in many countries. Most manifestly, the research mission of universities appears to be divisive. Authors have claimed that the Swedish system, while formally unified, has developed into a binary state, and statistics seem to support this conclusion. This article makes use of a comprehensive statistical data source on Swedish higher education institutions to illustrate stratification, and uses literature on Swedish research policy history to contextualize the statistics. Highlighting the opportunities as well as the constraints of the data, the article argues that there is great merit in combining statistics with a qualitative analysis when studying the structural characteristics of national higher education systems. Not least, the article shows that it is an over-simplification to describe the Swedish system as binary; the stratification is more complex. On the basis of the analysis, the article also argues that while global trends certainly influence national developments, higher education systems have country-specific features that may enrich the understanding of how systems evolve and therefore should be analyzed as part of a broader study of the increasingly globalized academic system.
    Date
    22. 3.2013 19:43:01
  10. Liu, Y.; Zhang, M.; Cen, R.; Ru, L.; Ma, S.: Data cleansing for Web information retrieval using query independent features (2007) 0.01
    0.0053610695 = product of:
      0.042888556 = sum of:
        0.042888556 = weight(_text_:retrieval in 607) [ClassicSimilarity], result of:
          0.042888556 = score(doc=607,freq=14.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.442117 = fieldWeight in 607, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=607)
      0.125 = coord(1/8)
    
    Abstract
     Understanding what kinds of Web pages are the most useful for Web search engine users is a critical task in Web information retrieval (IR). Most previous works used hyperlink analysis algorithms to solve this problem. However, little research has been focused on query-independent Web data cleansing for Web IR. In this paper, we first provide an analysis of the differences between retrieval target pages and ordinary ones based on more than 30 million Web pages obtained from both the Text Retrieval Conference (TREC) and a widely used Chinese search engine, SOGOU (www.sogou.com). We further propose a learning-based data cleansing algorithm for reducing Web pages that are unlikely to be useful for user requests. We found that there exists a large proportion of low-quality Web pages in both the English and the Chinese Web page corpus, and that retrieval target pages can be identified using query-independent features and cleansing algorithms. The experimental results showed that our algorithm is effective in reducing a large portion of Web pages with only a small loss in retrieval target pages. This makes it possible for Web IR tools to meet a large fraction of users' needs with only a small portion of the pages on the Web. These results may help Web search engines make better use of their limited storage and computation resources to improve search performance.
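
     As a rough illustration of query-independent cleansing (not the learning-based algorithm of the paper), one can score each page from features available without any query and keep only pages above a threshold; the feature names, weights and threshold below are invented for the example:

       def cleansing_score(page):
           # Combine query-independent quality signals into a single score in [0, 1].
           return (0.4 * min(page["in_links"] / 100.0, 1.0)        # popularity proxy
                   + 0.3 * (1.0 / (1.0 + page["url_depth"]))        # shallow URLs preferred
                   + 0.3 * min(page["text_length"] / 2000.0, 1.0))  # enough visible text

       pages = [
           {"url": "http://example.org/", "in_links": 250, "url_depth": 0, "text_length": 3400},
           {"url": "http://example.org/tmp/x/y/z.html", "in_links": 1, "url_depth": 3, "text_length": 120},
       ]
       kept = [p["url"] for p in pages if cleansing_score(p) >= 0.5]
       print(kept)  # only the shallow, well-linked, text-rich page survives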
    Footnote
     Contribution to a special topic section "Mining Web resources for enhancing information retrieval"
  11. Ayadi, H.; Torjmen-Khemakhem, M.; Daoud, M.; Huang, J.X.; Jemaa, M.B.: Mining correlations between medically dependent features and image retrieval models for query classification (2017) 0.01
    0.0053610695 = product of:
      0.042888556 = sum of:
        0.042888556 = weight(_text_:retrieval in 3607) [ClassicSimilarity], result of:
          0.042888556 = score(doc=3607,freq=14.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.442117 = fieldWeight in 3607, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3607)
      0.125 = coord(1/8)
    
    Abstract
     The abundance of medical resources has encouraged the development of systems that allow for efficient searches of information in large medical image data sets. State-of-the-art image retrieval models are classified into three categories: content-based (visual) models, textual models, and combined models. Content-based models use visual features to answer image queries, textual image retrieval models use word matching to answer textual queries, and combined image retrieval models use both textual and visual features to answer queries. Nevertheless, most previous works in this field have used the same image retrieval model independently of the query type. In this article, we define a list of generic and specific medical query features and exploit them in an association rule mining technique to discover correlations between query features and image retrieval models. Based on these rules, we propose to use an associative classifier (NaiveClass) to find the best suitable retrieval model given a new textual query. We also propose a second associative classifier (SmartClass) to select the most appropriate default class for the query. Experiments are performed on Medical ImageCLEF queries from 2008 to 2012 to evaluate the impact of the proposed query features on the classification performance. The results show that combining our proposed specific and generic query features is effective in query classification.
  12. Sarnikar, S.; Zhang, Z.; Zhao, J.L.: Query-performance prediction for effective query routing in domain-specific repositories (2014) 0.00
    0.004211573 = product of:
      0.033692583 = sum of:
        0.033692583 = weight(_text_:retrieval in 1326) [ClassicSimilarity], result of:
          0.033692583 = score(doc=1326,freq=6.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.34732026 = fieldWeight in 1326, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=1326)
      0.125 = coord(1/8)
    
    Abstract
    The effective use of corporate memory is becoming increasingly important because every aspect of e-business requires access to information repositories. Unfortunately, less-than-satisfying effectiveness in state-of-the-art information-retrieval techniques is well known, even for some of the best search engines such as Google. In this study, the authors resolve this retrieval ineffectiveness problem by developing a new framework for predicting query performance, which is the first step toward better retrieval effectiveness. Specifically, they examine the relationship between query performance and query context. A query context consists of the query itself, the document collection, and the interaction between the two. The authors first analyze the characteristics of query context and develop various features for predicting query performance. Then, they propose a context-sensitive model for predicting query performance based on the characteristics of the query and the document collection. Finally, they validate this model with respect to five real-world collections of documents and demonstrate its utility in routing queries to the correct repository with high accuracy.
  13. Saz, J.T.: Perspectivas en recuperacion y explotacion de informacion electronica : el 'data mining' (1997) 0.00
    0.004052588 = product of:
      0.032420702 = sum of:
        0.032420702 = weight(_text_:retrieval in 3723) [ClassicSimilarity], result of:
          0.032420702 = score(doc=3723,freq=2.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.33420905 = fieldWeight in 3723, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.078125 = fieldNorm(doc=3723)
      0.125 = coord(1/8)
    
    Footnote
     Translation of the title: Perspectives on the retrieval and exploitation of electronic information: data mining
  14. Lam, W.; Yang, C.C.; Menczer, F.: Introduction to the special topic section on mining Web resources for enhancing information retrieval (2007) 0.00
    0.004011857 = product of:
      0.032094855 = sum of:
        0.032094855 = weight(_text_:retrieval in 600) [ClassicSimilarity], result of:
          0.032094855 = score(doc=600,freq=4.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.33085006 = fieldWeight in 600, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=600)
      0.125 = coord(1/8)
    
    Footnote
     Introduction to a special topic section "Mining Web resources for enhancing information retrieval"
  15. Peters, G.; Gaese, V.: ¬Das DocCat-System in der Textdokumentation von G+J (2003) 0.00
    0.0039298544 = product of:
      0.015719417 = sum of:
        0.0070294905 = product of:
          0.014058981 = sum of:
            0.014058981 = weight(_text_:system in 1507) [ClassicSimilarity], result of:
              0.014058981 = score(doc=1507,freq=2.0), product of:
                0.10100432 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.032069415 = queryNorm
                0.13919188 = fieldWeight in 1507, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1507)
          0.5 = coord(1/2)
        0.008689926 = product of:
          0.017379852 = sum of:
            0.017379852 = weight(_text_:22 in 1507) [ClassicSimilarity], result of:
              0.017379852 = score(doc=1507,freq=2.0), product of:
                0.112301625 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.032069415 = queryNorm
                0.15476047 = fieldWeight in 1507, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1507)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Date
    22. 4.2003 11:45:36
  16. Budzik, J.; Hammond, K.J.; Birnbaum, L.: Information access in context (2001) 0.00
    0.0038363528 = product of:
      0.030690823 = sum of:
        0.030690823 = product of:
          0.061381646 = sum of:
            0.061381646 = weight(_text_:29 in 3835) [ClassicSimilarity], result of:
              0.061381646 = score(doc=3835,freq=2.0), product of:
                0.11281017 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.032069415 = queryNorm
                0.5441145 = fieldWeight in 3835, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3835)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Date
    29. 3.2002 17:31:17
  17. Chowdhury, G.G.: Template mining for information extraction from digital documents (1999) 0.00
    0.0038018425 = product of:
      0.03041474 = sum of:
        0.03041474 = product of:
          0.06082948 = sum of:
            0.06082948 = weight(_text_:22 in 4577) [ClassicSimilarity], result of:
              0.06082948 = score(doc=4577,freq=2.0), product of:
                0.112301625 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.032069415 = queryNorm
                0.5416616 = fieldWeight in 4577, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4577)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Date
    2. 4.2000 18:01:22
  18. Survey of text mining : clustering, classification, and retrieval (2004) 0.00
    0.003509644 = product of:
      0.028077152 = sum of:
        0.028077152 = weight(_text_:retrieval in 804) [ClassicSimilarity], result of:
          0.028077152 = score(doc=804,freq=6.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.28943354 = fieldWeight in 804, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=804)
      0.125 = coord(1/8)
    
    LCSH
    Data mining ; Information retrieval
    Subject
    Data mining ; Information retrieval
  19. Ku, L.-W.; Chen, H.-H.: Mining opinions from the Web : beyond relevance retrieval (2007) 0.00
    0.003509644 = product of:
      0.028077152 = sum of:
        0.028077152 = weight(_text_:retrieval in 605) [ClassicSimilarity], result of:
          0.028077152 = score(doc=605,freq=6.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.28943354 = fieldWeight in 605, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=605)
      0.125 = coord(1/8)
    
    Abstract
     Documents discussing public affairs, common themes, interesting products, and so on, are reported and distributed on the Web. Positive and negative opinions embedded in documents are useful references and feedback for governments to improve their services, for companies to market their products, and for customers to decide what to purchase. Web opinion mining aims to extract, summarize, and track various aspects of subjective information on the Web. Mining subjective information enables traditional information retrieval (IR) systems to retrieve more data from human viewpoints and provide information with finer granularity. Opinion extraction identifies opinion holders, extracts the relevant opinion sentences, and decides their polarities. Opinion summarization recognizes the major events embedded in documents and summarizes the supportive and the non-supportive evidence. Opinion tracking captures subjective information from various genres and monitors the development of opinions along spatial and temporal dimensions. To demonstrate and evaluate the proposed opinion mining algorithms, news and bloggers' articles are used. Documents in the evaluation corpora are tagged at different granularities, from words and sentences to documents. In the experiments, positive and negative sentiment words and their weights are mined on the basis of Chinese word structures. The f-measure is 73.18% and 63.75% for verbs and nouns, respectively. Utilizing the sentiment words mined together with topical words, we achieve an f-measure of 62.16% at the sentence level and 74.37% at the document level.
    Footnote
     Contribution to a special topic section "Mining Web resources for enhancing information retrieval"
  20. Gaizauskas, R.; Wilks, Y.: Information extraction : beyond document retrieval (1998) 0.00
    0.0034387347 = product of:
      0.027509877 = sum of:
        0.027509877 = weight(_text_:retrieval in 4716) [ClassicSimilarity], result of:
          0.027509877 = score(doc=4716,freq=4.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.2835858 = fieldWeight in 4716, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=4716)
      0.125 = coord(1/8)
    
    Abstract
     In this paper we give a synoptic view of the growth of the text processing technology of information extraction (IE), whose function is to extract information about a pre-specified set of entities, relations or events from natural language texts and to record this information in structured representations called templates. Here we describe the nature of the IE task, review the history of the area from its origins in AI work in the 1960s and 70s until the present, discuss the techniques being used to carry out the task, describe application areas where IE systems are or are about to be at work, and conclude with a discussion of the challenges facing the area. What emerges is a picture of an exciting new text processing technology with a host of new applications, both on its own and in conjunction with other technologies, such as information retrieval, machine translation and data mining.

Languages

  • e 57
  • d 14
  • sp 1

Types

  • a 61
  • m 9
  • s 6
  • el 3