Search (9 results, page 1 of 1)

  • author_ss:"Mostafa, J."
  1. Quiroga, L.M.; Mostafa, J.: An experiment in building profiles in information filtering : the role of context of user relevance feedback (2002) 0.05
    0.045458954 = product of:
      0.11364739 = sum of:
        0.080696784 = weight(_text_:context in 2579) [ClassicSimilarity], result of:
          0.080696784 = score(doc=2579,freq=8.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.45792344 = fieldWeight in 2579, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2579)
        0.032950602 = weight(_text_:system in 2579) [ClassicSimilarity], result of:
          0.032950602 = score(doc=2579,freq=4.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.24605882 = fieldWeight in 2579, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2579)
      0.4 = coord(2/5)
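    The explain tree above is ordinary Lucene ClassicSimilarity TF-IDF arithmetic, with queryWeight(t) = idf(t) · queryNorm and fieldWeight(t,d) = √tf · idf(t) · fieldNorm(d). As a worked check, using only the values shown in the tree:

    \begin{align*}
    \text{"context"}:&\quad \sqrt{8}\cdot 4.14465\cdot 0.0390625 = 0.45792, & 0.45792\cdot(4.14465\cdot 0.04251826) &= 0.08070\\
    \text{"system"}:&\quad \sqrt{4}\cdot 3.1495528\cdot 0.0390625 = 0.24606, & 0.24606\cdot(3.1495528\cdot 0.04251826) &= 0.03295\\
    \text{score}:&\quad \tfrac{2}{5}\cdot(0.08070 + 0.03295) \approx 0.04546 &&
    \end{align*}

    The same decomposition underlies every score breakdown in this result list.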
    
    Abstract
    An experiment was conducted to see how relevance feedback could be used to build and adjust profiles to improve the performance of filtering systems. Data were collected during the system interaction of 18 graduate students with SIFTER (Smart Information Filtering Technology for Electronic Resources), a filtering system that ranks incoming information based on users' profiles. The data set came from a collection of 6000 records concerning consumer health. In the first phase of the study, three different modes of profile acquisition were compared. The explicit mode allowed users to directly specify the profile; the implicit mode utilized relevance feedback to create and refine the profile; and the combined mode allowed users to initialize the profile and to continuously refine it using relevance feedback. Filtering performance, measured in terms of Normalized Precision, showed that the three approaches were significantly different (α = 0.05 and p = 0.012). The explicit mode of profile acquisition consistently produced superior results. Exclusive reliance on relevance feedback in the implicit mode resulted in inferior performance. The low performance obtained by the implicit acquisition mode motivated the second phase of the study, which aimed to clarify the role of context in relevance feedback judgments. An inductive content analysis of think-aloud protocols revealed dimensions that were highly situational, establishing the importance context plays in relevance feedback assessments. Results suggest the need for better representation of documents, profiles, and relevance feedback mechanisms that incorporate the dimensions identified in this research.
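    The abstract does not reproduce SIFTER's actual update rule. Purely as a minimal sketch of how implicit relevance feedback can build and refine a vector-space profile, here is a Rocchio-style increment in Python; the weights, names, and toy vectors are illustrative assumptions, not the authors' method.

    import numpy as np

    def update_profile(profile, doc, relevant, alpha=1.0, beta=0.5, gamma=0.25):
        """Rocchio-style profile refinement from one relevance judgment.
        profile, doc: term-weight vectors over the same vocabulary."""
        if relevant:
            updated = alpha * profile + beta * doc    # pull toward relevant docs
        else:
            updated = alpha * profile - gamma * doc   # push away from non-relevant
        updated = np.clip(updated, 0.0, None)         # keep weights non-negative
        norm = np.linalg.norm(updated)
        return updated / norm if norm > 0 else updated

    # Implicit mode: start from an empty profile and learn from feedback alone.
    profile = np.zeros(5)
    for doc, rel in [(np.array([0.9, 0.1, 0.0, 0.0, 0.0]), True),
                     (np.array([0.0, 0.0, 0.8, 0.6, 0.0]), False)]:
        profile = update_profile(profile, doc, rel)
    print(profile.round(3))   # the profile now leans toward the relevant document

    In the combined mode described above, the loop would simply start from a user-specified profile instead of the zero vector.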
    Footnote
    Contribution to a special issue: "Issues of context in information retrieval (IR)"
  2. Zhang, J.; Mostafa, J.; Tripathy, H.: Information retrieval by semantic analysis and visualization of the concept space of D-Lib® magazine (2002) 0.04
    0.039539397 = product of:
      0.06589899 = sum of:
        0.020174196 = weight(_text_:context in 1211) [ClassicSimilarity], result of:
          0.020174196 = score(doc=1211,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.11448086 = fieldWeight in 1211, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1211)
        0.022425208 = weight(_text_:index in 1211) [ClassicSimilarity], result of:
          0.022425208 = score(doc=1211,freq=2.0), product of:
            0.18579477 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.04251826 = queryNorm
            0.12069881 = fieldWeight in 1211, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1211)
        0.023299592 = weight(_text_:system in 1211) [ClassicSimilarity], result of:
          0.023299592 = score(doc=1211,freq=8.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.17398985 = fieldWeight in 1211, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1211)
      0.6 = coord(3/5)
    
    Abstract
    In this article we present a method for retrieving documents from a digital library through a visual interface based on automatically generated concepts. We used a vocabulary generation algorithm to generate a set of concepts for the digital library and a technique called the max-min distance technique to cluster them. Additionally, the concepts were visualized in a spring embedding graph layout to depict the semantic relationship among them. The resulting graph layout serves as an aid to users for retrieving documents. An online archive containing the contents of D-Lib Magazine from July 1995 to May 2002 was used to test the utility of an implemented retrieval and visualization system. We believe that the method developed and tested can be applied to many different domains to help users get a better understanding of online document collections and to minimize users' cognitive load during execution of search tasks. Over the past few years, the volume of information available through the World Wide Web has been expanding exponentially. Never has so much information been so readily available and shared among so many people. Unfortunately, the unstructured nature and huge volume of information accessible over networks have made it hard for users to sift through and find relevant information. To deal with this problem, information retrieval (IR) techniques have gained more intensive attention from both industrial and academic researchers. Numerous IR techniques have been developed to help deal with the information overload problem. These techniques concentrate on mathematical models and algorithms for retrieval. Popular IR models such as the Boolean model, the vector-space model, the probabilistic model and their variants are well established.
    From the user's perspective, however, it is still difficult to use current information retrieval systems. Users frequently have problems expressing their information needs and translating those needs into queries. This is partly due to the fact that information needs cannot be expressed appropriately in system terms. It is not unusual for users to input search terms that are different from the index terms information systems use. Various methods have been proposed to help users choose search terms and articulate queries. One widely used approach is to incorporate into the information system a thesaurus-like component that represents both the important concepts in a particular subject area and the semantic relationships among those concepts. Unfortunately, the development and use of thesauri is not without its own problems. The thesaurus employed in a specific information system has often been developed for a general subject area and needs significant enhancement to be tailored to the information system where it is to be used. This thesaurus development process, if done manually, is both time consuming and labor intensive. Use of a thesaurus in searching is complex and may raise barriers for the user. For illustration purposes, let us consider two scenarios of thesaurus usage. In the first scenario, the user inputs a search term and the thesaurus then displays a matching set of related terms. Without an overview of the thesaurus - and without the ability to see the matching terms in the context of other terms - it may be difficult to assess the quality of the related terms in order to select the correct term. In the second scenario, the user browses the whole thesaurus, which is organized as an alphabetically ordered list. The problem with this approach is that the list may be long, and it does not show users the global semantic relationships among the listed terms.
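    The article's own algorithms are not reproduced in this record. Purely to illustrate the max-min distance idea named in the abstract, the sketch below seeds clusters by farthest-first traversal; the toy data and all names are assumptions, not the authors' code.

    import numpy as np

    def max_min_seeds(vectors, k):
        # Farthest-first traversal: each new seed maximizes its minimum
        # distance to the seeds already chosen (the max-min criterion).
        seeds = [0]                      # start from an arbitrary concept
        dist = np.linalg.norm(vectors - vectors[0], axis=1)
        for _ in range(k - 1):
            nxt = int(dist.argmax())     # concept farthest from all current seeds
            seeds.append(nxt)
            dist = np.minimum(dist, np.linalg.norm(vectors - vectors[nxt], axis=1))
        return seeds

    def assign_to_seeds(vectors, seeds):
        # Attach every concept vector to its nearest seed to form clusters.
        d = np.stack([np.linalg.norm(vectors - vectors[s], axis=1) for s in seeds])
        return d.argmin(axis=0)

    concepts = np.random.rand(100, 20)   # toy concept-by-term vectors
    labels = assign_to_seeds(concepts, max_min_seeds(concepts, k=5))

    The resulting clusters could then be laid out with a spring-embedding algorithm, as the article describes.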
  3. Mostafa, J.: Bessere Suchmaschinen für das Web (2006) 0.03
    0.031206759 = product of:
      0.052011263 = sum of:
        0.017940165 = weight(_text_:index in 4871) [ClassicSimilarity], result of:
          0.017940165 = score(doc=4871,freq=2.0), product of:
            0.18579477 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.04251826 = queryNorm
            0.09655905 = fieldWeight in 4871, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.015625 = fieldNorm(doc=4871)
        0.018639674 = weight(_text_:system in 4871) [ClassicSimilarity], result of:
          0.018639674 = score(doc=4871,freq=8.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.13919188 = fieldWeight in 4871, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.015625 = fieldNorm(doc=4871)
        0.015431422 = product of:
          0.023147132 = sum of:
            0.011625857 = weight(_text_:29 in 4871) [ClassicSimilarity], result of:
              0.011625857 = score(doc=4871,freq=2.0), product of:
                0.14956595 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04251826 = queryNorm
                0.07773064 = fieldWeight in 4871, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.015625 = fieldNorm(doc=4871)
            0.011521274 = weight(_text_:22 in 4871) [ClassicSimilarity], result of:
              0.011521274 = score(doc=4871,freq=2.0), product of:
                0.1488917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04251826 = queryNorm
                0.07738023 = fieldWeight in 4871, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=4871)
          0.6666667 = coord(2/3)
      0.6 = coord(3/5)
    
    Content
    At the root of the index tree: In the first step, potentially interesting content is identified and continuously collected. Special programs known as web crawlers can locate pages published on the Internet, scan them (including the links they contain), and store the collected pages in one place. In the second step, the system extracts the relevant words on these pages and determines their importance with statistical methods. Third, a highly efficient tree-like data structure is built from the relevant terms, mapping those terms to particular web pages. When a user enters a query, only this tree - also called the index - is searched, not every individual web page. The search begins at the root of the index tree, and at each step a branch of the tree (each holding many terms and their associated web pages) is either followed further or discarded as irrelevant. This shortens search times dramatically. To place the relevant hits (or links) at the top of the results list, the search algorithm draws on various ranking strategies. One widespread method - term frequency - examines the occurrence of words and computes numerical weights from it that represent the importance of the words in the individual documents. Frequent words (such as "or", "to", "with") that appear in many documents receive much lower weights than words with higher semantic relevance that occur in comparatively few documents. Web pages can also be indexed by other strategies, however. Link analysis, for example, examines web pages by the criterion of which other pages they are linked with, analyzing how many links point to a page and how many originate from the page itself. Google, for instance, uses this link analysis to optimize its search results. Google needed six years to establish itself as the leading search engine. Two advantages over the competition contributed most to this success: first, Google can carry out extremely large web-crawling operations; second, its indexing and weighting methods deliver outstanding results. Recently, however, other search engine developers have built some new systems that are similarly powerful, or in places even better.
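    As a minimal sketch of steps two and three described above - term extraction, weighting, and an index consulted in place of the pages themselves - the following Python fragment builds a tiny weighted inverted index. A dictionary stands in for the article's tree structure; the pages and numbers are invented.

    import math
    from collections import Counter, defaultdict

    pages = {
        "page1": "search engines index the web",
        "page2": "the web grows and grows",
        "page3": "engines rank pages by term weights",
    }

    # Map each term to the pages containing it, with a TF-IDF weight,
    # so that queries consult only the index, never the raw pages.
    N = len(pages)
    df = Counter(t for text in pages.values() for t in set(text.split()))
    index = defaultdict(dict)
    for page, text in pages.items():
        for term, freq in Counter(text.split()).items():
            # common words ("the") get low weights, rare content words high ones
            index[term][page] = freq * math.log(N / df[term])

    def search(query):
        scores = Counter()
        for term in query.split():
            for page, weight in index.get(term, {}).items():
                scores[page] += weight
        return scores.most_common()

    print(search("web engines"))   # page1 ranks first: it matches both terms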
    Many digital resources cannot be reached by search engines, because the systems that manage them store web pages differently from the way users view them: the actual web page is generated only in response to the user's request. Typical web crawlers are overwhelmed by such pages and cannot index their contents. As a result, a large portion of the available information - an estimated 500 times as much as the conventional web contains - remains hidden from users. Efforts are now under way, however, to make this "hidden web" as easily searchable as its hitherto accessible part. To this end, programmers have developed a new kind of software, so-called wrappers, which exploit the fact that information available online follows standardized grammatical structures. Wrappers do their work in a variety of ways. Some use the ordinary syntax of search queries and the standard formats of online sources to access hidden content. Others use so-called application programming interfaces (APIs), which enable software to execute standardized operations and commands. One example of a program that can access hidden web content is the "Deep Query Manager" developed by BrightPlanet; this wrapper-based query manager provides portals and search forms for more than 70,000 hidden web sources. When a system uses links or words to compute its ranking without considering which page types are being compared, there is a danger of spoofing: pranksters or bad actors deliberately set up web pages with cleverly chosen words in order to mislead the ranking system. Even today, the query "miserable failure" returns as its top result an official White House web page with the biography of President Bush.
    Pre-sorted and presented as a wheel: Instead of simply presenting the weighted results list (which can be manipulated relatively easily by spoofing), some search engines try to find similarities and differences among the web pages that best match the query and to present the results divided into groups. These patterns can be words, synonyms, or even broader topic areas determined by special rules. Such systems assign a characteristic term to each group of links found. The user can then refine the search by selecting a subgroup of results. The search engines "Northern Light" (the pioneer in this field) and "Clusty", for example, deliver results ordered into groups (clusters). "Mooter", an innovative search engine that also uses this grouping technique, additionally displays the groups graphically (see graphic at lower left). The system arranges the subgroup buttons in a wheel around a central button that contains all results. Clicking a subgroup button produces lists of relevant links and reveals new, related groups. Mooter remembers which subgroups were chosen. The user obtains even more precise results with the refinement option, which combines groups selected in earlier searches with the current query. A similar system that likewise uses visual effects is "Kartoo", a so-called meta search engine: it passes the user's queries on to other search engines and presents the collected results in graphical form. Kartoo delivers a list of key terms from the various web pages and generates a "map" from them, on which important pages are shown as icons and relationships between the pages are annotated with labels and paths. Each label can be used to refine the search further. Some new computer tools extend searching by combing not only the web but also the hard disk of the user's own computer. At present this still requires stand-alone programs, but Google, for example, has recently announced its "Desktop Search", which combines the two functions: the user can specify whether the Internet, the hard disk, or both should be searched. The next version of Microsoft Windows (code name "Longhorn") is to be equipped with similar capabilities: Longhorn is intended to support implicit search, in which users can find relevant information without entering specific queries. (This applies techniques developed in another Microsoft project called "Stuff I've Seen".) In implicit search, keywords are extracted from the text the user has recently processed or modified on the computer - such as e-mails or Word documents - in order to retrieve information stored on the hard disk. Microsoft may also extend this search function to web pages. In addition, users are to be able to turn text shown on the screen into search queries more easily." ...
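    As a deliberately naive sketch of the grouping-and-labeling idea behind Northern Light, Clusty, and Mooter (real systems use far more elaborate rules; the result sets and terms here are invented):

    from collections import Counter

    results = {
        "r1": {"jaguar", "car", "speed"},
        "r2": {"jaguar", "cat", "habitat"},
        "r3": {"car", "engine", "speed"},
        "r4": {"cat", "habitat", "prey"},
    }

    # Choose the most frequent terms as group labels, then attach each result
    # to every label it contains; groups may overlap, as in clustered UIs.
    counts = Counter(t for terms in results.values() for t in terms)
    labels = sorted(counts, key=lambda t: (-counts[t], t))[:3]
    groups = {lab: [r for r, terms in sorted(results.items()) if lab in terms]
              for lab in labels}
    print(groups)   # {'car': ['r1', 'r3'], 'cat': ['r2', 'r4'], 'habitat': ['r2', 'r4']}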
    Date
    31.12.1996 19:29:41
    22. 1.2006 18:34:49
  4. Mostafa, J.; Quiroga, L.M.; Palakal, M.: Filtering medical documents using automated and human classification methods (1998) 0.03
    0.030551035 = product of:
      0.076377586 = sum of:
        0.04841807 = weight(_text_:context in 2326) [ClassicSimilarity], result of:
          0.04841807 = score(doc=2326,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.27475408 = fieldWeight in 2326, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.046875 = fieldNorm(doc=2326)
        0.027959513 = weight(_text_:system in 2326) [ClassicSimilarity], result of:
          0.027959513 = score(doc=2326,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.20878783 = fieldWeight in 2326, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=2326)
      0.4 = coord(2/5)
    
    Abstract
    The goal of this research is to clarify the role of document classification in information filtering. An important function of classification, in managing computational complexity, is described and illustrated in the context of an existing filtering system. A parameter called classification homogeneity is presented for analyzing unsupervised automated classification by employing human classification as a control. 2 significant components of the automated classification approach, vocabulary discovery and classification scheme generation, are described in detail. Results of classification performance revealed considerable variability in the homogeneity of automatically produced classes. Based on the classification performance, different types of interest profiles were created. Subsequently, these profiles were used to perform filtering sessions. The filtering results showed that with increasing homogeneity, filtering performance improves, and, conversely, with decreasing homogeneity, filtering performance degrades
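    The abstract does not define the homogeneity parameter itself. The purity-style measure below is only one plausible reading - the fraction of documents in each automated class that share that class's majority human label, with human classification as the control - and should not be taken as the authors' formula.

    from collections import Counter

    def homogeneity(auto_classes, human_label):
        """Purity-style stand-in for 'classification homogeneity' (an
        illustrative assumption, not the paper's exact definition)."""
        total = agree = 0
        for docs in auto_classes.values():
            votes = Counter(human_label[d] for d in docs)
            agree += votes.most_common(1)[0][1]   # majority human label per class
            total += len(docs)
        return agree / total

    auto = {"c1": ["d1", "d2", "d3"], "c2": ["d4", "d5"]}
    human = {"d1": "A", "d2": "A", "d3": "B", "d4": "B", "d5": "B"}
    print(homogeneity(auto, human))   # 0.8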
  5. Lam, W.; Mostafa, J.: Modeling user interest shift using a Bayesian approach (2001) 0.02
    0.023877738 = product of:
      0.059694342 = sum of:
        0.04613084 = weight(_text_:system in 2658) [ClassicSimilarity], result of:
          0.04613084 = score(doc=2658,freq=4.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.34448233 = fieldWeight in 2658, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2658)
        0.013563501 = product of:
          0.0406905 = sum of:
            0.0406905 = weight(_text_:29 in 2658) [ClassicSimilarity], result of:
              0.0406905 = score(doc=2658,freq=2.0), product of:
                0.14956595 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04251826 = queryNorm
                0.27205724 = fieldWeight in 2658, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2658)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Abstract
    We investigate the modeling of changes in user interest in information filtering systems. A new technique for tracking user interest shifts based on a Bayesian approach is developed. The interest tracker is integrated into a profile learning module of a filtering system. We present an analytical study to establish the rate of convergence for the profile learning with and without the user interest tracking component. We examine the relationship among degree of shift, cost of detection error, and time needed for detection. To study the effect of different patterns of interest shift on system performance we also conducted several filtering experiments. Generally, the findings show that the Bayesian approach is a feasible and effective technique for modeling user interest shift
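    The paper's Bayesian model is not given in the abstract. As a toy illustration of the general idea - maintaining a posterior probability that the user's interest has shifted and updating it with each feedback observation - consider the following sketch; the likelihoods, prior, and threshold are invented for the example.

    def bayes_shift_tracker(judgments, p_match_stable=0.8, p_match_shifted=0.3,
                            prior_shift=0.1, threshold=0.9):
        """Two-hypothesis tracker: `judgments` records whether each piece of
        user feedback matched the current profile's prediction."""
        p_shift = prior_shift
        for matched in judgments:
            like_stable = p_match_stable if matched else 1 - p_match_stable
            like_shift = p_match_shifted if matched else 1 - p_match_shifted
            num = like_shift * p_shift                        # Bayes' rule
            p_shift = num / (num + like_stable * (1 - p_shift))
            if p_shift > threshold:
                return True, p_shift   # shift detected: relearn the profile
        return False, p_shift

    # Feedback stops matching the profile, so the posterior of a shift rises.
    print(bayes_shift_tracker([True, True] + [False] * 6))   # -> (True, ~0.97)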
    Date
    29. 9.2001 13:58:28
  6. Zhang, Y.; Wu, D.; Hagen, L.; Song, I.-Y.; Mostafa, J.; Oh, S.; Anderson, T.; Shah, C.; Bishop, B.W.; Hopfgartner, F.; Eckert, K.; Federer, L.; Saltz, J.S.: Data science curriculum in the iField (2023) 0.02
    0.020014644 = product of:
      0.05003661 = sum of:
        0.040348392 = weight(_text_:context in 964) [ClassicSimilarity], result of:
          0.040348392 = score(doc=964,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.22896172 = fieldWeight in 964, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.0390625 = fieldNorm(doc=964)
        0.009688215 = product of:
          0.029064644 = sum of:
            0.029064644 = weight(_text_:29 in 964) [ClassicSimilarity], result of:
              0.029064644 = score(doc=964,freq=2.0), product of:
                0.14956595 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04251826 = queryNorm
                0.19432661 = fieldWeight in 964, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=964)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Abstract
    Many disciplines, including the broad Field of Information (iField), offer Data Science (DS) programs. There have been significant efforts exploring an individual discipline's identity and unique contributions to the broader DS education landscape. To advance DS education in the iField, the iSchool Data Science Curriculum Committee (iDSCC) was formed and charged with building and recommending a DS education framework for iSchools. This paper reports on the research process and findings of a series of studies to address important questions: What is the iField identity in the multidisciplinary DS education landscape? What is the status of DS education in iField schools? What knowledge and skills should be included in the core curriculum for iField DS education? What are the jobs available for DS graduates from the iField? What are the differences between graduate-level and undergraduate-level DS education? Answers to these questions will not only distinguish an iField approach to DS education but also define critical components of DS curriculum. The results will inform individual DS programs in the iField to develop curriculum to support undergraduate and graduate DS education in their local context.
    Date
    12. 5.2023 14:29:42
  7. Sugimoto, C.R.; Mostafa, J.: A note of concern and context : on careful use of terminologies (2018) 0.02
    0.01936723 = product of:
      0.09683614 = sum of:
        0.09683614 = weight(_text_:context in 7278) [ClassicSimilarity], result of:
          0.09683614 = score(doc=7278,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.54950815 = fieldWeight in 7278, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.09375 = fieldNorm(doc=7278)
      0.2 = coord(1/5)
    
  8. Mostafa, J.: Digital image representation and access (1994) 0.02
    0.018473173 = product of:
      0.04618293 = sum of:
        0.03261943 = weight(_text_:system in 1102) [ClassicSimilarity], result of:
          0.03261943 = score(doc=1102,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.2435858 = fieldWeight in 1102, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1102)
        0.013563501 = product of:
          0.0406905 = sum of:
            0.0406905 = weight(_text_:29 in 1102) [ClassicSimilarity], result of:
              0.0406905 = score(doc=1102,freq=2.0), product of:
                0.14956595 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04251826 = queryNorm
                0.27205724 = fieldWeight in 1102, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1102)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Abstract
    State-of-the-art review of techniques used to generate, store, and retrieve digital images. Explains basic terms and concepts related to image representation and describes the differences between bilevel, greyscale, and colour images. Introduces additional image-related data, specifically colour standards, correction values, resolution parameters, and lookup tables. Illustrates the use of data compression techniques and various image data formats that have been used. Identifies 4 branches of imaging research related to data indexing and modelling: verbal indexing; visual surrogates; image indexing; and data structures. Concludes with a discussion of the state of the art in networking technology, with consideration of image distribution, local system requirements, and data integrity.
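    To make the bit-depth distinctions concrete, a small worked calculation (the resolution and figures are ours, not the review's):

    def raw_size_bytes(width, height, bits_per_pixel):
        """Uncompressed size of a raster image at the given bit depth."""
        return width * height * bits_per_pixel // 8

    w, h = 1024, 768
    for name, bpp in [("bilevel", 1), ("greyscale", 8), ("RGB colour", 24)]:
        print(f"{name:10s}: {raw_size_bytes(w, h, bpp) / 1024:,.0f} KiB")
    # bilevel: 96 KiB, greyscale: 768 KiB, RGB colour: 2,304 KiB - a gap that
    # motivates the compression techniques and formats the review surveys.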
    Source
    Annual review of information science and technology. 29(1994), S.91-135
  9. Mostafa, J.: Document search interface design : background and introduction to special topic section (2004) 0.01
    0.009683615 = product of:
      0.04841807 = sum of:
        0.04841807 = weight(_text_:context in 2503) [ClassicSimilarity], result of:
          0.04841807 = score(doc=2503,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.27475408 = fieldWeight in 2503, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.046875 = fieldNorm(doc=2503)
      0.2 = coord(1/5)
    
    Abstract
    A library user searching for high-quality and authoritative information today is confronted with thousands of resources that cover a wide variety of topics. The heterogeneity factor alone can be a major obstacle for the user in selecting appropriate resources to search. Depending on the information need, the user may have to navigate among resources that are in different formats (bibliographic versus full-text), are stored in different media (text versus images), have different levels of coverage (news versus scholarly reports), or are published in different languages. Beyond the heterogeneity factor, the user faces specific challenges related to the search experience itself. These factors and their impact on searching can best be described using a four-phase framework, namely: formulation, action, presentation, and refinement (Shneiderman, Byrd, & Croft, 1998). Certain key functions for document search interfaces are described below in the context of these four phases. Following the description, highlights from the contributed papers are discussed.