Search (9 results, page 1 of 1)

  • × author_ss:"Mostafa, J."
  • × year_i:[2000 TO 2010}
  1. Mukhopadhyay, S.; Peng, S.; Raje, R.; Mostafa, J.; Palakal, M.: Distributed multi-agent information filtering : a comparative study (2005) 0.01
    0.007724557 = product of:
      0.030898228 = sum of:
        0.030898228 = weight(_text_:information in 3559) [ClassicSimilarity], result of:
          0.030898228 = score(doc=3559,freq=18.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.34911853 = fieldWeight in 3559, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=3559)
      0.25 = coord(1/4)
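    The tree above is Lucene's ClassicSimilarity explanation of this record's TF-IDF score. A minimal Python sketch, with the constants copied straight from the tree, reproduces the arithmetic (the idf comment restates ClassicSimilarity's formula; nothing here comes from the indexed collection itself):
      # Reproduce the explain tree for document 3559; constants copied from above.
      tf = 18.0 ** 0.5                           # tf(freq=18) = sqrt(18) = 4.2426405
      idf = 1.7554779                            # 1 + ln(maxDocs/(docFreq+1)) = 1 + ln(44218/20773)
      field_norm = 0.046875                      # length normalization stored at index time
      query_norm = 0.050415643

      query_weight = idf * query_norm            # 0.08850355
      field_weight = tf * idf * field_norm       # 0.34911853
      term_score = query_weight * field_weight   # 0.030898228
      print(term_score * 0.25)                   # coord(1/4): only 1 of 4 query clauses matched
                                                 # -> 0.007724557, the displayed score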
    
    Abstract
    Information filtering is a technique to identify, in large collections, information that is relevant according to some criteria (e.g., a user's personal interests or a research project objective). As such, it is a key technology for providing efficient user services in any large-scale information infrastructure, e.g., digital libraries. To provide large-scale information filtering services, both computational and knowledge management issues need to be addressed. A centralized (single-agent) approach to information filtering suffers from serious drawbacks in terms of speed, accuracy, and economic considerations, and becomes unrealistic even for medium-scale applications. In this article, we discuss two distributed (multi-agent) information filtering approaches, distributed with respect to either knowledge or functionality, that overcome the limitations of single-agent centralized information filtering. Large-scale experimental studies involving the well-known TREC data set are also presented to illustrate the advantages of distributed filtering as well as to compare the different distributed approaches.
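    As a schematic illustration of the knowledge-distributed variant discussed in the abstract, each agent can hold its own interest profile and score incoming documents against it, with a coordinator keeping whatever some agent judges relevant. The Python sketch below is illustrative only; the agent names, the overlap-based scoring, and the threshold are assumptions, not the authors' implementation:
      # Sketch of knowledge-distributed filtering: each agent owns one profile
      # (a term set here, for brevity) and a coordinator routes each document to
      # the best-matching agent. Scoring and threshold are illustrative assumptions.
      class FilterAgent:
          def __init__(self, name, profile_terms):
              self.name = name
              self.profile = set(profile_terms)

          def score(self, document):
              terms = set(document.lower().split())
              return len(terms & self.profile) / len(self.profile)

      def distributed_filter(agents, documents, threshold=0.3):
          accepted = []
          for doc in documents:
              best = max(agents, key=lambda a: a.score(doc))
              if best.score(doc) >= threshold:
                  accepted.append((best.name, doc))
          return accepted

      agents = [FilterAgent("genomics", ["gene", "protein", "sequence"]),
                FilterAgent("retrieval", ["query", "index", "relevance"])]
      print(distributed_filter(agents, ["relevance feedback improves query results"]))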
    Source
    Journal of the American Society for Information Science and Technology. 56(2005) no.8, S.834-842
  2. Mostafa, J.: Bessere Suchmaschinen für das Web (2006) 0.01
    0.007620028 = product of:
      0.015240056 = sum of:
        0.008409433 = weight(_text_:information in 4871) [ClassicSimilarity], result of:
          0.008409433 = score(doc=4871,freq=12.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.09501803 = fieldWeight in 4871, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.015625 = fieldNorm(doc=4871)
        0.0068306234 = product of:
          0.013661247 = sum of:
            0.013661247 = weight(_text_:22 in 4871) [ClassicSimilarity], result of:
              0.013661247 = score(doc=4871,freq=2.0), product of:
                0.17654699 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050415643 = queryNorm
                0.07738023 = fieldWeight in 4871, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=4871)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
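    Here two query clauses match ("information" and the token "22"), so their weights are summed and scaled by coord(2/4); the "22" clause is itself scaled by the nested coord(1/2) first. A short Python sketch of that combination, again using the constants shown above:
      # Combine the two matching clauses for document 4871 as in the explain tree.
      w_information = 0.008409433              # weight(_text_:information in 4871)
      w_22 = 0.013661247 * 0.5                 # weight(_text_:22 ...) times coord(1/2)
      print((w_information + w_22) * 0.5)      # sum of clauses times coord(2/4) -> 0.007620028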
    
    Content
    "Seit wenigen Jahren haben Suchmaschinen die Recherche im Internet revolutioniert. Statt in Büchereien zu gehen, um dort mühsam etwas nachzuschlagen, erhalten wir die gewünschten Dokumente heute mit ein paar Tastaturanschlägen und Mausklicks. »Googeln«, nach dem Namen der weltweit dominierenden Suchmaschine, ist zum Synonym für die Online-Recherche geworden. Künftig werden verbesserte Suchmaschinen die gewünschten Informationen sogar noch zielsicherer aufspüren. Die neuen Programme dringen dazu tiefer in die Online-Materie ein. Sie sortieren und präsentieren ihre Ergebnisse besser, und zur Optimierung der Suche merken sie sich die persönlichen Präferenzen der Nutzer, die sie in vorherigen Anfragen ermittelt haben. Zudem erweitern sie den inhaltlichen Horizont, da sie mehr leisten, als nur eingetippte Schlüsselwörter zu verarbeiten. Einige der neuen Systeme berücksichtigen automatisch, an welchem Ort die Anfrage gestellt wurde. Dadurch kann beispielsweise ein PDA (Personal Digital Assistant) über seine Funknetzverbindung das nächstgelegene Restaurant ausfindig machen. Auch Bilder spüren die neuen Suchmaschinen besser auf, indem sie Vorlagen mit ähnlichen, bereits abgespeicherten Mustern vergleichen. Sie können sogar den Namen eines Musikstücks herausfinden, wenn man ihnen nur ein paar Takte daraus vorsummt. Heutige Suchmaschinen basieren auf den Erkenntnissen aus dem Bereich des information retrieval (Wiederfinden von Information), mit dem sich Computerwissenschaftler schon seit über 50 Jahren befassen. Bereits 1966 schrieb Ben Ami Lipetz im Scientific American einen Artikel über das »Speichern und Wiederfinden von Information«. Damalige Systeme konnten freilich nur einfache Routine- und Büroanfragen bewältigen. Lipetz zog den hellsichtigen Schluss, dass größere Durchbrüche im information retrieval erst dann erreichbar sind, wenn Forscher die Informationsverarbeitung im menschlichen Gehirn besser verstanden haben und diese Erkenntnisse auf Computer übertragen. Zwar können Computer dabei auch heute noch nicht mit Menschen mithalten, aber sie berücksichtigen bereits weit besser die persönlichen Interessen, Gewohnheiten und Bedürfnisse ihrer Nutzer. Bevor wir uns neuen Entwicklungen bei den Suchmaschinen zuwenden, ist es hilfreich, sich ein Bild davon zu machen, wie die bisherigen funktionieren: Was genau ist passiert, wenn »Google« auf dem Bildschirm meldet, es habe in 0,32 Sekunden einige Milliarden Dokumente durchsucht? Es würde wesentlich länger dauern, wenn dabei die Schlüsselwörter der Anfrage nacheinander mit den Inhalten all dieser Webseiten verglichen werden müssten. Um lange Suchzeiten zu vermeiden, führen die Suchmaschinen viele ihrer Kernoperationen bereits lange vor dem Zeitpunkt der Nutzeranfrage aus.
    Many digital contents cannot be indexed by search engines because the systems that manage them store web pages differently from the way users view them. The current version of a page only comes into being in response to a user's request. Typical web crawlers are overwhelmed by such pages and cannot index their contents. As a result, a large share of the information - by some estimates 500 times as much as the conventional Web contains - remains hidden from users. Efforts are now under way, however, to make this »hidden Web« as easily searchable as its hitherto accessible part. To this end, programmers have developed a new kind of software, so-called wrappers. It exploits the fact that information available online contains standardized grammatical structures. Wrappers do their work in various ways. Some use the usual syntax of search queries and the standard formats of online sources to access hidden content. Others use so-called application programming interfaces (APIs), which enable software to carry out standardized operations and commands. One example of a program that can access hidden network content is the »Deep Query Manager« developed by BrightPlanet. This wrapper-based query manager provides portals and search forms for more than 70,000 hidden web sources. If a system uses links or words to generate its ranking without taking into account which types of pages are being compared, there is a risk of spoofing: pranksters or wrongdoers deliberately set up web pages with cleverly chosen words in order to mislead the ranking system. Even today, a query for »miserable failure« returns, in first place, an official White House web page with the biography of President Bush.
    Date
    22. 1.2006 18:34:49
  3. Mukhopadhyay, S.; Peng, S.; Raje, R.; Palakal, M.; Mostafa, J.: Multi-agent information classification using dynamic acquaintance lists (2003) 0.01
    0.005757545 = product of:
      0.02303018 = sum of:
        0.02303018 = weight(_text_:information in 1755) [ClassicSimilarity], result of:
          0.02303018 = score(doc=1755,freq=10.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.2602176 = fieldWeight in 1755, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=1755)
      0.25 = coord(1/4)
    
    Abstract
    There has been considerable interest in recent years in providing automated information services, such as information classification, by means of a society of collaborative agents. These agents augment each other's knowledge structures (e.g., the vocabularies) and assist each other in providing efficient information services to a human user. However, when the number of agents present in the society increases, exhaustive communication and collaboration among agents result in a large communication overhead and increased delays in response time. This paper introduces a method to achieve selective interaction with a relatively small number of potentially useful agents, based on simple agent modeling and acquaintance lists. The key idea presented here is that the acquaintance list of an agent, representing a small number of other agents to be collaborated with, is dynamically adjusted. The best acquaintances are automatically discovered using a learning algorithm, based on the past history of collaboration. Experimental results are presented to demonstrate that such dynamically learned acquaintance lists can lead to high-quality classification, while significantly reducing the delay in response time.
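    To make the acquaintance-list idea concrete: each agent keeps only the k peers whose past collaborations were most useful and re-ranks that list as new outcomes arrive. The Python sketch below uses a simple running success rate as the ranking criterion; the paper employs its own learning algorithm, so treat the update rule as an assumption:
      # Toy dynamically adjusted acquaintance list: keep the k peers with the best
      # collaboration history. The success-rate ranking is illustrative only.
      class Agent:
          def __init__(self, name, k=2):
              self.name = name
              self.k = k
              self.history = {}                  # peer -> (successes, attempts)

          def record_collaboration(self, peer, useful):
              s, n = self.history.get(peer, (0, 0))
              self.history[peer] = (s + int(useful), n + 1)

          def acquaintances(self):
              rate = lambda p: self.history[p][0] / self.history[p][1]
              return sorted(self.history, key=rate, reverse=True)[:self.k]

      a = Agent("classifier-1")
      for peer, useful in [("b", True), ("c", False), ("d", True), ("b", True)]:
          a.record_collaboration(peer, useful)
      print(a.acquaintances())                   # ['b', 'd']: contact only these next time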
    Source
    Journal of the American Society for Information Science and Technology. 54(2003) no.10, S.966-975
  4. Mongin, L.; Fu, Y.Y.; Mostafa, J.: Open Archives data Service prototype and automated subject indexing using D-Lib archive content as a testbed (2003) 0.01
    0.005757545 = product of:
      0.02303018 = sum of:
        0.02303018 = weight(_text_:information in 1167) [ClassicSimilarity], result of:
          0.02303018 = score(doc=1167,freq=10.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.2602176 = fieldWeight in 1167, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=1167)
      0.25 = coord(1/4)
    
    Abstract
    The Indiana University School of Library and Information Science opened a new research laboratory in January 2003: the Indiana University School of Library and Information Science Information Processing Laboratory [IU IP Lab]. The purpose of the new laboratory is to facilitate collaboration between scientists in the department in the areas of information retrieval (IR) and information visualization (IV) research. The lab has several areas of focus, including grid and cluster computing, and a standard Java-based software platform to support plug-and-play research datasets, a selection of standard IR modules, and standard IV algorithms. Future development includes software to enable researchers to contribute datasets, IR algorithms, and visualization algorithms to the standard environment. We decided early on to use OAI-PMH as a resource discovery tool because it is consistent with our mission.
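    OAI-PMH, the protocol named above, is harvested over plain HTTP: a client issues requests such as verb=ListRecords with a metadata prefix and follows resumption tokens to page through the repository. A minimal harvesting sketch in Python (the endpoint URL is a placeholder, and error handling is omitted):
      import urllib.request
      import urllib.parse
      import xml.etree.ElementTree as ET

      BASE = "https://example.org/oai"           # placeholder OAI-PMH endpoint
      NS = {"oai": "http://www.openarchives.org/OAI/2.0/"}

      def harvest(metadata_prefix="oai_dc"):
          params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
          while True:
              url = BASE + "?" + urllib.parse.urlencode(params)
              with urllib.request.urlopen(url) as resp:
                  root = ET.fromstring(resp.read())
              yield from root.iter("{http://www.openarchives.org/OAI/2.0/}record")
              token = root.find(".//oai:resumptionToken", NS)
              if token is None or not (token.text or "").strip():
                  break                          # last page reached
              params = {"verb": "ListRecords", "resumptionToken": token.text.strip()}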
  5. Quiroga, L.M.; Mostafa, J.: ¬An experiment in building profiles in information filtering : the role of context of user relevance feedback (2002) 0.00
    0.004797954 = product of:
      0.019191816 = sum of:
        0.019191816 = weight(_text_:information in 2579) [ClassicSimilarity], result of:
          0.019191816 = score(doc=2579,freq=10.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.21684799 = fieldWeight in 2579, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2579)
      0.25 = coord(1/4)
    
    Abstract
    An experiment was conducted to see how relevance feedback could be used to build and adjust profiles to improve the performance of filtering systems. Data was collected during the system interaction of 18 graduate students with SIFTER (Smart Information Filtering Technology for Electronic Resources), a filtering system that ranks incoming information based on users' profiles. The data set came from a collection of 6000 records concerning consumer health. In the first phase of the study, three different modes of profile acquisition were compared. The explicit mode allowed users to directly specify the profile; the implicit mode utilized relevance feedback to create and refine the profile; and the combined mode allowed users to initialize the profile and to continuously refine it using relevance feedback. Filtering performance, measured in terms of Normalized Precision, showed that the three approaches were significantly different (α = 0.05 and p = 0.012). The explicit mode of profile acquisition consistently produced superior results. Exclusive reliance on relevance feedback in the implicit mode resulted in inferior performance. The low performance obtained by the implicit acquisition mode motivated the second phase of the study, which aimed to clarify the role of context in relevance feedback judgments. An inductive content analysis of thinking aloud protocols showed dimensions that were highly situational, establishing the importance context plays in feedback relevance assessments. Results suggest the need for better representation of documents, profiles, and relevance feedback mechanisms that incorporate dimensions identified in this research.
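    For readers unfamiliar with how relevance feedback is folded into a profile, the standard Rocchio-style update moves the profile vector toward judged-relevant documents and away from non-relevant ones. The Python sketch below shows that textbook rule; it is not claimed to be SIFTER's exact algorithm, and the weights are conventional defaults:
      import numpy as np

      # Rocchio-style profile refinement from relevance feedback (textbook rule).
      def update_profile(profile, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
          p = alpha * profile
          if len(relevant):
              p += beta * np.mean(relevant, axis=0)
          if len(nonrelevant):
              p -= gamma * np.mean(nonrelevant, axis=0)
          return np.clip(p, 0, None)             # keep term weights non-negative

      profile = np.array([0.2, 0.0, 0.5])        # toy three-term vocabulary
      relevant = np.array([[0.9, 0.1, 0.4]])
      nonrelevant = np.array([[0.0, 0.8, 0.1]])
      print(update_profile(profile, relevant, nonrelevant))   # [0.875, 0.0, 0.785]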
    Footnote
    Contribution to a special issue: "Issues of context in information retrieval (IR)"
    Source
    Information processing and management. 38(2002) no.5, S.671-694
  6. Zhang, J.; Mostafa, J.; Tripathy, H.: Information retrieval by semantic analysis and visualization of the concept space of D-Lib® magazine (2002) 0.00
    0.004551739 = product of:
      0.018206956 = sum of:
        0.018206956 = weight(_text_:information in 1211) [ClassicSimilarity], result of:
          0.018206956 = score(doc=1211,freq=36.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.20572007 = fieldWeight in 1211, product of:
              6.0 = tf(freq=36.0), with freq of:
                36.0 = termFreq=36.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1211)
      0.25 = coord(1/4)
    
    Abstract
    In this article we present a method for retrieving documents from a digital library through a visual interface based on automatically generated concepts. We used a vocabulary generation algorithm to generate a set of concepts for the digital library and a technique called the max-min distance technique to cluster them. Additionally, the concepts were visualized in a spring embedding graph layout to depict the semantic relationship among them. The resulting graph layout serves as an aid to users for retrieving documents. An online archive containing the contents of D-Lib Magazine from July 1995 to May 2002 was used to test the utility of an implemented retrieval and visualization system. We believe that the method developed and tested can be applied to many different domains to help users get a better understanding of online document collections and to minimize users' cognitive load during execution of search tasks. Over the past few years, the volume of information available through the World Wide Web has been expanding exponentially. Never has so much information been so readily available and shared among so many people. Unfortunately, the unstructured nature and huge volume of information accessible over networks have made it hard for users to sift through and find relevant information. To deal with this problem, information retrieval (IR) techniques have gained more intensive attention from both industrial and academic researchers. Numerous IR techniques have been developed to help deal with the information overload problem. These techniques concentrate on mathematical models and algorithms for retrieval. Popular IR models such as the Boolean model, the vector-space model, the probabilistic model and their variants are well established.
    From the user's perspective, however, it is still difficult to use current information retrieval systems. Users frequently have problems expressing their information needs and translating those needs into queries. This is partly due to the fact that information needs cannot be expressed appropriately in system terms. It is not unusual for users to input search terms that are different from the index terms information systems use. Various methods have been proposed to help users choose search terms and articulate queries. One widely used approach is to incorporate into the information system a thesaurus-like component that represents both the important concepts in a particular subject area and the semantic relationships among those concepts. Unfortunately, the development and use of thesauri is not without its own problems. The thesaurus employed in a specific information system has often been developed for a general subject area and needs significant enhancement to be tailored to the information system where it is to be used. This thesaurus development process, if done manually, is both time-consuming and labor-intensive. Usage of a thesaurus in searching is complex and may raise barriers for the user. For illustration purposes, let us consider two scenarios of thesaurus usage. In the first scenario the user inputs a search term and the thesaurus then displays a matching set of related terms. Without an overview of the thesaurus - and without the ability to see the matching terms in the context of other terms - it may be difficult to assess the quality of the related terms in order to select the correct term. In the second scenario the user browses the whole thesaurus, which is organized as an alphabetically ordered list. The problem with this approach is that the list may be long, and it does not show users the global semantic relationships among all the listed terms.
    Nevertheless, because thesaurus use has been shown to improve retrieval, for our method we integrate functions in the search interface that permit users to explore built-in search vocabularies to improve retrieval from digital libraries. Our method automatically generates the terms and their semantic relationships representing relevant topics covered in a digital library. We call these generated terms the "concepts", and the generated terms and their semantic relationships we call the "concept space". Additionally, we used a visualization technique to display the concept space and allow users to interact with this space. The automatically generated term set is considered to be more representative of the subject area of a corpus than an "externally" imposed thesaurus, and our method has the potential of saving a significant amount of time and labor for those who have been manually creating thesauri as well. Information visualization is an emerging discipline that has developed very quickly over the last decade. With growing volumes of documents and associated complexities, information visualization has become increasingly important. Researchers have found information visualization to be an effective way to use and understand information while minimizing a user's cognitive load. Our work was based on an algorithmic approach to concept discovery and association. Concepts are discovered using an algorithm based on an automated thesaurus generation procedure. Subsequently, similarities among terms are computed using the cosine measure, and the associations among terms are established using a method known as max-min distance clustering. The concept space is then visualized in a spring embedding graph, which roughly shows the semantic relationships among concepts in a 2-D visual representation. The semantic space of the visualization is used as a medium for users to retrieve the desired documents. In the remainder of this article, we present our algorithmic approach to concept generation and clustering, followed by a description of the visualization technique and interactive interface. The paper ends with key conclusions and discussions on future work.
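    The two computational steps named above can be sketched compactly: cosine similarity between term vectors, followed by a max-min (maximin) selection that repeatedly picks the term least similar to all seeds chosen so far. The threshold and the toy vectors in this Python sketch are illustrative assumptions, not the parameters used for the D-Lib corpus:
      import numpy as np

      def cosine(u, v):
          return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

      # Max-min seed selection: keep adding the term farthest (least similar) from
      # every seed picked so far, until no remaining term is far enough away.
      def maxmin_seeds(vectors, threshold=0.5):
          seeds = [0]                            # start from an arbitrary term
          while True:
              dists = [min(1 - cosine(vectors[i], vectors[s]) for s in seeds)
                       for i in range(len(vectors))]
              candidate = int(np.argmax(dists))
              if dists[candidate] < threshold:
                  return seeds
              seeds.append(candidate)

      terms = np.array([[1.0, 0.0, 0.2],        # e.g. "archive"
                        [0.9, 0.1, 0.3],        # "repository", close to "archive"
                        [0.0, 1.0, 0.1]])       # "visualization", a separate concept
      print(maxmin_seeds(terms))                # [0, 2]: two well-separated concept seeds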
  7. Mostafa, J.: Document search interface design : background and introduction to special topic section (2004) 0.00
    0.0044597755 = product of:
      0.017839102 = sum of:
        0.017839102 = weight(_text_:information in 2503) [ClassicSimilarity], result of:
          0.017839102 = score(doc=2503,freq=6.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.20156369 = fieldWeight in 2503, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=2503)
      0.25 = coord(1/4)
    
    Abstract
    A library user searching for high-quality and authoritative information today is confronted with thousands of resources that cover a wide variety of topics. The heterogeneity factor alone can be a major obstacle for the user to select appropriate resources to search. Depending on the information need, the user may have to navigate among resources that are in different formats (bibliographic versus full-text), are stored in different media (text versus images), have different levels of coverage (news versus scholarly reports), or are published in different languages. Beyond the heterogeneity factor, the user faces specific challenges related to the search experience itself. These factors and their impact on searching can best be described using a four-phase framework, namely: formulation, action, presentation, and refinement (Shneiderman, Byrd, & Croft, 1998). Certain key functions for document search interfaces are described below in the context of these four phases. Following the description, highlights from the contributed papers are discussed.
    Source
    Journal of the American Society for Information Science and Technology. 55(2004) no.10, S.869-872
  8. Lam, W.; Mostafa, J.: Modeling user interest shift using a Bayesian approach (2001) 0.00
    0.00424829 = product of:
      0.01699316 = sum of:
        0.01699316 = weight(_text_:information in 2658) [ClassicSimilarity], result of:
          0.01699316 = score(doc=2658,freq=4.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.1920054 = fieldWeight in 2658, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2658)
      0.25 = coord(1/4)
    
    Abstract
    We investigate the modeling of changes in user interest in information filtering systems. A new technique for tracking user interest shifts based on a Bayesian approach is developed. The interest tracker is integrated into a profile learning module of a filtering system. We present an analytical study to establish the rate of convergence for the profile learning with and without the user interest tracking component. We examine the relationship among degree of shift, cost of detection error, and time needed for detection. To study the effect of different patterns of interest shift on system performance we also conducted several filtering experiments. Generally, the findings show that the Bayesian approach is a feasible and effective technique for modeling user interest shift
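    One simple way to see the Bayesian idea at work is to update the probability that the user's interest has shifted after each relevance judgment in the stream. The likelihoods in the Python sketch below are invented purely for illustration and do not reproduce the paper's model or its convergence analysis:
      # Toy sequential Bayesian update of P(interest has shifted) from binary
      # relevance judgments; the two likelihoods are illustrative assumptions.
      def update_shift_belief(judgments, prior=0.1,
                              p_rel_if_stable=0.8,     # profile still matches documents
                              p_rel_if_shift=0.3):     # matches become rare after a shift
          belief = prior
          for relevant in judgments:
              like_shift = p_rel_if_shift if relevant else 1 - p_rel_if_shift
              like_stable = p_rel_if_stable if relevant else 1 - p_rel_if_stable
              evidence = like_shift * belief + like_stable * (1 - belief)
              belief = like_shift * belief / evidence  # Bayes' rule
          return belief

      print(update_shift_belief([True, True, False, False, False]))  # ~0.40, rising with misses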
    Source
    Journal of the American Society for Information Science and Technology. 52(2001) no.5, S.416-429
  9. Seki, K.; Mostafa, J.: Gene ontology annotation as text categorization : an empirical study (2008) 0.00
    0.0021457102 = product of:
      0.008582841 = sum of:
        0.008582841 = weight(_text_:information in 2123) [ClassicSimilarity], result of:
          0.008582841 = score(doc=2123,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.09697737 = fieldWeight in 2123, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2123)
      0.25 = coord(1/4)
    
    Source
    Information processing and management. 44(2008) no.5, S.1754-1770