Search (23 results, page 1 of 2)

  • theme_ss:"Inhaltsanalyse"
  1. Schulzki-Haddouti, C.; Brückner, A.: Die Suche nach dem Sinn : Automatische Inhaltsanalyse nicht nur für Geheimdienste (2001) 0.01
    0.013936955 = product of:
      0.19511735 = sum of:
        0.19511735 = weight(_text_:beteiligung in 3133) [ClassicSimilarity], result of:
          0.19511735 = score(doc=3133,freq=2.0), product of:
            0.2379984 = queryWeight, product of:
              7.4202213 = idf(docFreq=71, maxDocs=44218)
              0.0320743 = queryNorm
            0.81982636 = fieldWeight in 3133, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.4202213 = idf(docFreq=71, maxDocs=44218)
              0.078125 = fieldNorm(doc=3133)
      0.071428575 = coord(1/14)
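
    The nested breakdown above (and each one that follows) is standard Lucene "explain" output for the ClassicSimilarity ranking model. The page never states the formula itself, so the following reconstruction from the labels is an assumption, though it matches Lucene's documented classic TF-IDF scoring:

      score(q,d) = coord(q,d) \cdot \sum_{t \in q} \underbrace{\mathrm{idf}(t)\,\mathrm{queryNorm}}_{\mathrm{queryWeight}} \cdot \underbrace{\sqrt{\mathrm{freq}_{t,d}}\,\mathrm{idf}(t)\,\mathrm{fieldNorm}(d)}_{\mathrm{fieldWeight}}

      \mathrm{idf}(t) = 1 + \ln\!\left(\frac{\mathrm{maxDocs}}{\mathrm{docFreq}_t + 1}\right)

    The reported numbers are consistent with this reading: idf = 1 + ln(44218/72) ≈ 7.4202213 for "beteiligung", and coord(1/14) = 1/14 ≈ 0.071428575 because one of fourteen query clauses matched.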
    
    Abstract
    The intelligence services face a flood of information that they cannot cope with by conventional means. New capabilities that emerged from software projects with BND participation (BND-Beteiligung) are intended to remedy this deficit and make arbitrarily linked information accessible to analysis.
  2. Wilson, M.J.; Wilson, M.L.: A comparison of techniques for measuring sensemaking and learning within participant-generated summaries (2013) 0.01
    0.011354187 = product of:
      0.0794793 = sum of:
        0.035931468 = weight(_text_:open in 612) [ClassicSimilarity], result of:
          0.035931468 = score(doc=612,freq=2.0), product of:
            0.14443703 = queryWeight, product of:
              4.5032015 = idf(docFreq=1330, maxDocs=44218)
              0.0320743 = queryNorm
            0.24876907 = fieldWeight in 612, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.5032015 = idf(docFreq=1330, maxDocs=44218)
              0.0390625 = fieldNorm(doc=612)
        0.04354783 = weight(_text_:source in 612) [ClassicSimilarity], result of:
          0.04354783 = score(doc=612,freq=2.0), product of:
            0.15900996 = queryWeight, product of:
              4.9575505 = idf(docFreq=844, maxDocs=44218)
              0.0320743 = queryNorm
            0.27386856 = fieldWeight in 612, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.9575505 = idf(docFreq=844, maxDocs=44218)
              0.0390625 = fieldNorm(doc=612)
      0.14285715 = coord(2/14)
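
    As a sanity check, both breakdowns above can be reproduced in a few lines of Python. All constants (maxDocs, docFreq, freq, fieldNorm, queryNorm) are copied from the explain output; the tf, idf, and coord formulas are the ClassicSimilarity defaults, which is an assumption about the indexer rather than something this page states:

      import math

      MAX_DOCS = 44218        # index size, from the explain output
      QUERY_NORM = 0.0320743  # query normalization factor, from the explain output

      def idf(doc_freq: int) -> float:
          """ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))."""
          return 1.0 + math.log(MAX_DOCS / (doc_freq + 1))

      def term_weight(freq: float, doc_freq: int, field_norm: float) -> float:
          """queryWeight * fieldWeight = sqrt(freq) * idf^2 * queryNorm * fieldNorm."""
          return math.sqrt(freq) * idf(doc_freq) ** 2 * QUERY_NORM * field_norm

      # Result 1: one of 14 query clauses matched -> coord(1/14).
      score1 = (1 / 14) * term_weight(2.0, 71, 0.078125)        # "beteiligung"
      # Result 2: two of 14 clauses matched -> coord(2/14).
      score2 = (2 / 14) * (term_weight(2.0, 1330, 0.0390625)    # "open"
                           + term_weight(2.0, 844, 0.0390625))  # "source"
      print(f"{score1:.9f}  {score2:.9f}")  # ~0.013936955  ~0.011354187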
    
    Abstract
    While it is easy to identify whether someone has found a piece of information during a search task, it is much harder to measure how much someone has learned during the search process. Searchers who are learning often exhibit exploratory behaviors, and so current research is often focused on improving support for exploratory search. Consequently, we need effective measures of learning to demonstrate better support for exploratory search. Some approaches, such as quizzes, measure recall when learning from a fixed source of information. This research, however, focuses on techniques for measuring open-ended learning, which often involve analyzing handwritten summaries produced by participants after a task. There are two common techniques for analyzing such summaries: (a) counting facts and statements and (b) judging topic coverage. Both of these techniques, however, can be easily confounded by simple variables such as summary length. This article presents a new technique that measures depth of learning within written summaries based on Bloom's taxonomy (B.S. Bloom & M.D. Engelhart, 1956). This technique was generated using grounded theory and is designed to be less susceptible to such confounding variables. Together, these three categories of measure were compared by applying them to a large collection of written summaries produced in a task-based study, and our results provide insights into each of their strengths and weaknesses. Both fact-to-statement ratio and our own measure of depth of learning were effective while being less affected by confounding variables. Recommendations and clear areas of future work are provided to help continued research into supporting sensemaking and learning.
  3. Bertola, F.; Patti, V.: Ontology-based affective models to organize artworks in the social semantic web (2016) 0.01
    0.010890559 = product of:
      0.07623391 = sum of:
        0.04354783 = weight(_text_:source in 2669) [ClassicSimilarity], result of:
          0.04354783 = score(doc=2669,freq=2.0), product of:
            0.15900996 = queryWeight, product of:
              4.9575505 = idf(docFreq=844, maxDocs=44218)
              0.0320743 = queryNorm
            0.27386856 = fieldWeight in 2669, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.9575505 = idf(docFreq=844, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2669)
        0.03268608 = weight(_text_:web in 2669) [ClassicSimilarity], result of:
          0.03268608 = score(doc=2669,freq=6.0), product of:
            0.10467481 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0320743 = queryNorm
            0.3122631 = fieldWeight in 2669, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2669)
      0.14285715 = coord(2/14)
    
    Abstract
    In this paper, we focus on applying sentiment analysis to resources from online art collections, exploiting as an information source the tags, intended as textual traces, that visitors leave to comment on artworks on social platforms. We present a framework where methods and tools from a set of disciplines, ranging from the Semantic and Social Web to Natural Language Processing, provide us with the building blocks for creating a semantic social space to organize artworks according to an ontology of emotions. The ontology is inspired by Plutchik's circumplex model, a well-founded psychological model of human emotions. Users can be involved in the creation of the emotional space through a graphical interactive interface. The development of such a semantic space enables new ways of accessing and exploring art collections. The affective categorization model and the emotion detection output are encoded in W3C ontology languages. This gives us the twofold advantage of enabling tractable reasoning on detected emotions and related artworks, and of fostering the interoperability and integration of tools developed in the Semantic Web and Linked Data community. The proposal has been evaluated against a real-world case study, a dataset of tagged multimedia artworks from the ArsMeteo Italian online collection, and validated through a user study.
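
    The encoding step described here lends itself to a small illustration. The sketch below, using the Python rdflib library, shows the general shape of such an annotation: an artwork linked to a Plutchik emotion so that an OWL/RDFS reasoner can work with it. All IRIs, class names, and property names are hypothetical placeholders; the paper's actual ontology differs.

      from rdflib import Graph, Namespace, RDF

      EMO = Namespace("http://example.org/emotion#")   # hypothetical namespace
      ART = Namespace("http://example.org/artwork#")   # hypothetical namespace

      g = Graph()
      # "Artwork a42 evokes joy", joy being one of Plutchik's basic emotions.
      g.add((ART.a42, RDF.type, EMO.Artwork))
      g.add((ART.a42, EMO.evokes, EMO.Joy))
      g.add((EMO.Joy, RDF.type, EMO.PlutchikBasicEmotion))
      print(g.serialize(format="turtle"))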
  4. Hauff-Hartig, S.: Automatische Transkription von Videos : Fernsehen 3.0: Automatisierte Sentimentanalyse und Zusammenstellung von Kurzvideos mit hohem Aufregungslevel KI-generierte Metadaten: Von der Technologiebeobachtung bis zum produktiven Einsatz (2021) 0.01
    0.0106961215 = product of:
      0.074872844 = sum of:
        0.057490345 = weight(_text_:open in 251) [ClassicSimilarity], result of:
          0.057490345 = score(doc=251,freq=2.0), product of:
            0.14443703 = queryWeight, product of:
              4.5032015 = idf(docFreq=1330, maxDocs=44218)
              0.0320743 = queryNorm
            0.39803052 = fieldWeight in 251, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.5032015 = idf(docFreq=1330, maxDocs=44218)
              0.0625 = fieldNorm(doc=251)
        0.017382499 = product of:
          0.034764998 = sum of:
            0.034764998 = weight(_text_:22 in 251) [ClassicSimilarity], result of:
              0.034764998 = score(doc=251,freq=2.0), product of:
                0.11231873 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0320743 = queryNorm
                0.30952093 = fieldWeight in 251, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=251)
          0.5 = coord(1/2)
      0.14285715 = coord(2/14)
    
    Date
    22. 5.2021 12:43:05
    Source
    Open Password. 2021, Nr.947 vom 14.07.2021 [https://www.password-online.de/?mailpoet_router&endpoint=view_in_browser&action=view&data=WzMxOCwiNjczMmIwMzRlMDdmIiwwLDAsMjg4LDFd]
  5. Riesthuis, G.J.A.; Stuurman, P.: Tendenzen in de onderwerpsontsluiting : T.1: Inhoudsanalyse (1989) 0.01
    0.0071862936 = product of:
      0.1006081 = sum of:
        0.1006081 = weight(_text_:open in 1841) [ClassicSimilarity], result of:
          0.1006081 = score(doc=1841,freq=2.0), product of:
            0.14443703 = queryWeight, product of:
              4.5032015 = idf(docFreq=1330, maxDocs=44218)
              0.0320743 = queryNorm
            0.6965534 = fieldWeight in 1841, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.5032015 = idf(docFreq=1330, maxDocs=44218)
              0.109375 = fieldNorm(doc=1841)
      0.071428575 = coord(1/14)
    
    Source
    Open. 21(1989) no.6, S.214-219
  6. Früh, W.: Inhaltsanalyse (2001) 0.01
    0.0067287693 = product of:
      0.094202764 = sum of:
        0.094202764 = weight(_text_:medien in 1751) [ClassicSimilarity], result of:
          0.094202764 = score(doc=1751,freq=8.0), product of:
            0.15096188 = queryWeight, product of:
              4.7066307 = idf(docFreq=1085, maxDocs=44218)
              0.0320743 = queryNorm
            0.62401694 = fieldWeight in 1751, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.7066307 = idf(docFreq=1085, maxDocs=44218)
              0.046875 = fieldNorm(doc=1751)
      0.071428575 = coord(1/14)
    
    Classification (RVK)
    AP 13550 Allgemeines / Medien- und Kommunikationswissenschaften, Kommunikationsdesign / Theorie und Methodik / Grundlagen, Methodik, Theorie
    AP 13500 Allgemeines / Medien- und Kommunikationswissenschaften, Kommunikationsdesign / Theorie und Methodik / Allgemeines
  7. Sauperl, A.: Subject determination during the cataloging process : the development of a system based on theoretical principles (2002) 0.00
    0.0046638763 = product of:
      0.032647133 = sum of:
        0.026128696 = weight(_text_:source in 2293) [ClassicSimilarity], result of:
          0.026128696 = score(doc=2293,freq=2.0), product of:
            0.15900996 = queryWeight, product of:
              4.9575505 = idf(docFreq=844, maxDocs=44218)
              0.0320743 = queryNorm
            0.16432112 = fieldWeight in 2293, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.9575505 = idf(docFreq=844, maxDocs=44218)
              0.0234375 = fieldNorm(doc=2293)
        0.006518437 = product of:
          0.013036874 = sum of:
            0.013036874 = weight(_text_:22 in 2293) [ClassicSimilarity], result of:
              0.013036874 = score(doc=2293,freq=2.0), product of:
                0.11231873 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0320743 = queryNorm
                0.116070345 = fieldWeight in 2293, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=2293)
          0.5 = coord(1/2)
      0.14285715 = coord(2/14)
    
    Date
    27. 9.2005 14:22:19
    Footnote
    Rez. in: Knowledge organization 30(2003) no.2, S.114-115 (M. Hudon); "This most interesting contribution to the literature of subject cataloguing originates in the author's doctoral dissertation, prepared under the direction of Jerry Saye at the University of North Carolina at Chapel Hill. In seven highly readable chapters, Alenka Sauperl develops possible answers to her principal research question: How do cataloguers determine or identify the topic of a document and choose appropriate subject representations? Specific questions at the source of this research on a process which has not been a frequent object of study include: Where do cataloguers look for an overall sense of what a document is about? How do they get an overall sense of what a document is about, especially when they are not familiar with the discipline? Do they consider only one or several possible interpretations? How do they translate meanings into appropriate and valid class numbers and subject headings? Using a strictly qualitative methodology, Dr. Sauperl's research is a study of twelve cataloguers in a real-life situation. The author insists on the holistic rather than purely theoretical understanding of the process she is targeting. Participants in the study were professional cataloguers with at least one year's experience in their current job at one of three large academic libraries in the Southeastern United States. All three libraries have a large central cataloguing department, and use OCLC sources and the same automated system; the context of cataloguing tasks is thus considered to be reasonably comparable. All participants were volunteers in this study, which combined two data-gathering techniques: the think-aloud method and time-line interviews. A model of the subject cataloguing process was first developed from observations of a group of six cataloguers who were asked to independently perform original cataloguing on three nonfiction, non-serial items selected from materials regularly assigned to them for processing. The model was then used for follow-up interviews. Each participant in the second group of cataloguers was invited to reflect on his/her work process for a recent challenging document they had catalogued. Results are presented in 12 stories describing as many personal approaches to subject cataloguing. From these stories a summarization is offered, and a theoretical model of subject cataloguing is developed which, according to the author, represents a realistic approach to subject cataloguing. Stories alternate comments from the researcher and direct quotations from the observed or interviewed cataloguers. Not surprisingly, the participants' stories reveal similarities in the sequence and accomplishment of several tasks in the process of subject cataloguing. Sauperl's proposed model, described in Chapter 5, includes as main stages: 1) Examination of the book and subject identification; 2) Search for subject headings; 3) Classification. Chapter 6 is a hypothetical case study, using the proposed model to describe the various stages of cataloguing a hypothetical resource. ...
  8. Diao, J.: Conceptualizations of catalogers' judgment through content analysis : a preliminary investigation (2018) 0.00
    0.004354783 = product of:
      0.060966957 = sum of:
        0.060966957 = weight(_text_:source in 5170) [ClassicSimilarity], result of:
          0.060966957 = score(doc=5170,freq=2.0), product of:
            0.15900996 = queryWeight, product of:
              4.9575505 = idf(docFreq=844, maxDocs=44218)
              0.0320743 = queryNorm
            0.38341597 = fieldWeight in 5170, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.9575505 = idf(docFreq=844, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5170)
      0.071428575 = coord(1/14)
    
    Abstract
    Catalogers' judgment has been frequently mentioned but rarely researched in formal studies. The purpose of this article is to investigate catalogers' judgment through an exploration of the texts collected in the database of Library and Information Science Source. Verbs, adjectives, and nouns intimately associated with catalogers' judgment were extracted, analyzed, and grouped into 16 categories, which led to 5 conceptual descriptions. The results of this study provide cataloging professionals with an overall picture of aspects of catalogers' judgment, which may help library school students, recent graduates, and novice catalogers become independent and confident decision makers in cataloging work.
  9. Bös, K.: Aspektorientierte Inhaltserschließung von Romanen und Bildern : ein Vergleich der Ansätze von Annelise Mark Pejtersen und Sara Shatford (2012) 0.00
    0.0039649657 = product of:
      0.055509515 = sum of:
        0.055509515 = weight(_text_:medien in 400) [ClassicSimilarity], result of:
          0.055509515 = score(doc=400,freq=4.0), product of:
            0.15096188 = queryWeight, product of:
              4.7066307 = idf(docFreq=1085, maxDocs=44218)
              0.0320743 = queryNorm
            0.36770552 = fieldWeight in 400, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.7066307 = idf(docFreq=1085, maxDocs=44218)
              0.0390625 = fieldNorm(doc=400)
      0.071428575 = coord(1/14)
    
    Abstract
    Established procedures and standards are available today for the subject indexing of non-fiction and specialist literature. The situation is different for fiction and for images. The two media (Medien) are very different and yet have one thing in common: neither can be satisfactorily indexed with the rules devised for non-fiction. Both authors compared here recognized this problem in the 1970s and 80s. Annelise Mark Pejtersen sought a solution for fiction and chose an empirical approach; Sara Shatford attempted to work out a solution for images through theoretical considerations. The empirical and the theoretical approach each led to a method that examines the respective medium under several aspects. In both cases these aspects rest on the same questions, yet they differ considerably from one another, both in the content they can capture and in their structure. Applying one of the methods to the other medium therefore does not appear sensible. This thesis first explains the methods of Pejtersen and Shatford individually, then sets the aspects of the two methods side by side in a comparison, indexing selected examples with both methods. Finally, it examines whether the cross-application used in the comparison is sensible in practice and whether there are media whose indexing with both methods would be of interest.
  10. Rosso, M.A.: User-based identification of Web genres (2008) 0.00
    0.003566344 = product of:
      0.049928814 = sum of:
        0.049928814 = weight(_text_:web in 1863) [ClassicSimilarity], result of:
          0.049928814 = score(doc=1863,freq=14.0), product of:
            0.10467481 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0320743 = queryNorm
            0.47698978 = fieldWeight in 1863, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1863)
      0.071428575 = coord(1/14)
    
    Abstract
    This research explores the use of genre as a document descriptor in order to improve the effectiveness of Web searching. A major issue to be resolved is the identification of what document categories should be used as genres. As genre is a kind of folk typology, document categories must enjoy widespread recognition by their intended user groups in order to qualify as genres. Three user studies were conducted to develop a genre palette and show that it is recognizable to users. (Palette is a term used to denote a classification, attributable to Karlgren, Bretan, Dewe, Hallberg, and Wolkert, 1998.) To simplify the users' classification task, it was decided to focus on Web pages from the edu domain. The first study was a survey of user terminology for Web pages. Three participants separated 100 Web page printouts into stacks according to genre, assigning names and definitions to each genre. The second study aimed to refine the resulting set of 48 (often conceptually and lexically similar) genre names and definitions into a smaller palette of user-preferred terminology. Ten participants classified the same 100 Web pages. A set of five principles for creating a genre palette from individuals' sortings was developed, and the list of 48 was trimmed to 18 genres. The third study aimed to show that users would agree on the genres of Web pages when choosing from the genre palette. In an online experiment in which 257 participants categorized a new set of 55 pages using the 18 genres, on average, over 70% agreed on the genre of each page. Suggestions for improving the genre palette and future directions for the work are discussed.
  11. Thelwall, M.; Buckley, K.; Paltoglou, G.: Sentiment strength detection for the social web (2012) 0.00
    0.0033017928 = product of:
      0.046225097 = sum of:
        0.046225097 = weight(_text_:web in 4972) [ClassicSimilarity], result of:
          0.046225097 = score(doc=4972,freq=12.0), product of:
            0.10467481 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0320743 = queryNorm
            0.4416067 = fieldWeight in 4972, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4972)
      0.071428575 = coord(1/14)
    
    Abstract
    Sentiment analysis is concerned with the automatic extraction of sentiment-related information from text. Although most sentiment analysis addresses commercial tasks, such as extracting opinions from product reviews, there is increasing interest in the affective dimension of the social web, and Twitter in particular. Most sentiment analysis algorithms are not ideally suited to this task because they exploit indirect indicators of sentiment that can reflect genre or topic instead. Hence, such algorithms used to process social web texts can identify spurious sentiment patterns caused by topics rather than affective phenomena. This article assesses an improved version of the algorithm SentiStrength for sentiment strength detection across the social web that primarily uses direct indications of sentiment. The results from six diverse social web data sets (MySpace, Twitter, YouTube, Digg, Runners World, BBC Forums) indicate that SentiStrength 2 is successful in the sense of performing better than a baseline approach for all data sets in both supervised and unsupervised cases. SentiStrength is not always better than machine-learning approaches that exploit indirect indicators of sentiment, however, and is particularly weaker for positive sentiment in news-related discussions. Overall, the results suggest that, even unsupervised, SentiStrength is robust enough to be applied to a wide variety of different social web contexts.
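
    SentiStrength reports two scores per text: a positive strength from 1 to 5 and a negative strength from -1 to -5. The abstract does not reproduce the algorithm, so the following is only a minimal lexicon-based sketch in that spirit; the toy lexicon and the max-strength aggregation are illustrative assumptions, and the real tool adds booster-word, negation, idiom, and emoticon rules.

      # Toy sentiment lexicon: term -> strength (positive 1..5, negative -1..-5).
      LEXICON = {"love": 3, "great": 2, "good": 1, "hate": -4, "awful": -3, "bad": -2}

      def sentiment_strength(text: str) -> tuple[int, int]:
          """Return (positive, negative) sentiment strengths for a text."""
          scores = [LEXICON.get(w.strip(".,!?").lower(), 0) for w in text.split()]
          pos = max((s for s in scores if s > 0), default=0)
          neg = min((s for s in scores if s < 0), default=0)
          return (pos or 1, neg or -1)  # dual scale; neutral text is (1, -1)

      print(sentiment_strength("I love this song but the video is awful"))  # (3, -3)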
  12. Sauperl, A.: Catalogers' common ground and shared knowledge (2004) 0.00
    0.0031105594 = product of:
      0.04354783 = sum of:
        0.04354783 = weight(_text_:source in 2069) [ClassicSimilarity], result of:
          0.04354783 = score(doc=2069,freq=2.0), product of:
            0.15900996 = queryWeight, product of:
              4.9575505 = idf(docFreq=844, maxDocs=44218)
              0.0320743 = queryNorm
            0.27386856 = fieldWeight in 2069, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.9575505 = idf(docFreq=844, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2069)
      0.071428575 = coord(1/14)
    
    Abstract
    The problem of multiple interpretations of meaning in the indexing process has been mostly avoided by information scientists. Among the few who have addressed this question are Clare Beghtol and Jens Erik Mai. Their findings and findings of other researchers in the area of information science, social psychology, and psycholinguistics indicate that the source of the problem might lie in the background and culture of each indexer or cataloger. Are the catalogers aware of the problem? A general model of the indexing process was developed from observations and interviews of 12 catalogers in three American academic libraries. The model is illustrated with a hypothetical cataloger's process. The study with catalogers revealed that catalogers are aware of the author's, the user's, and their own meaning, but do not try to accommodate them all. On the other hand, they make every effort to build common ground with catalog users by studying documents related to the document being cataloged, and by considering catalog records and subject headings related to the subject identified in the document being cataloged. They try to build common ground with other catalogers by using cataloging tools and by inferring unstated rules of cataloging from examples in the catalogs.
  13. Bade, D.: The creation and persistence of misinformation in shared library catalogs : language and subject knowledge in a technological era (2002) 0.00
    0.003109251 = product of:
      0.021764755 = sum of:
        0.017419131 = weight(_text_:source in 1858) [ClassicSimilarity], result of:
          0.017419131 = score(doc=1858,freq=2.0), product of:
            0.15900996 = queryWeight, product of:
              4.9575505 = idf(docFreq=844, maxDocs=44218)
              0.0320743 = queryNorm
            0.10954742 = fieldWeight in 1858, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.9575505 = idf(docFreq=844, maxDocs=44218)
              0.015625 = fieldNorm(doc=1858)
        0.0043456247 = product of:
          0.008691249 = sum of:
            0.008691249 = weight(_text_:22 in 1858) [ClassicSimilarity], result of:
              0.008691249 = score(doc=1858,freq=2.0), product of:
                0.11231873 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0320743 = queryNorm
                0.07738023 = fieldWeight in 1858, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1858)
          0.5 = coord(1/2)
      0.14285715 = coord(2/14)
    
    Date
    22. 9.1997 19:16:05
    Footnote
    Rez. in JASIST 54(2003) no.4, S.356-357 (S.J. Lincicum): "Reliance upon shared cataloging in academic libraries in the United States has been driven largely by the need to reduce the expense of cataloging operations without much regard for the impact that this approach might have on the quality of the records included in local catalogs. In recent years, ever increasing pressures have prompted libraries to adopt practices such as "rapid" copy cataloging that purposely reduce the scrutiny applied to bibliographic records downloaded from shared databases, possibly increasing the number of errors that slip through unnoticed. Errors in bibliographic records can lead to serious problems for library catalog users. If the data contained in bibliographic records is inaccurate, users will have difficulty discovering and recognizing resources in a library's collection that are relevant to their needs. Thus, it has become increasingly important to understand the extent and nature of errors that occur in the records found in large shared bibliographic databases, such as OCLC WorldCat, to develop cataloging practices optimized for the shared cataloging environment. Although this monograph raises a few legitimate concerns about recent trends in cataloging practice, it fails to provide the "detailed look" at misinformation in library catalogs arising from linguistic errors and mistakes in subject analysis promised by the publisher. A basic premise advanced throughout the text is that a certain amount of linguistic and subject knowledge is required to catalog library materials effectively. The author emphasizes repeatedly that most catalogers today are asked to catalog an increasingly diverse array of materials, and that they are often required to work in languages or subject areas of which they have little or no knowledge. He argues that the records contributed to shared databases are increasingly being created by catalogers with inadequate linguistic or subject expertise. This adversely affects the quality of individual library catalogs because errors often go uncorrected as records are downloaded from shared databases to local catalogs by copy catalogers who possess even less knowledge. Calling misinformation an "evil phenomenon," Bade states that his main goal is to discuss "two fundamental types of misinformation found in bibliographic and authority records in library catalogs: that arising from linguistic errors, and that caused by errors in subject analysis, including missing or wrong subject headings" (p. 2). After a superficial discussion of "other" types of errors that can occur in bibliographic records, such as typographical errors and errors in the application of descriptive cataloging rules, Bade begins his discussion of linguistic errors. He asserts that sharing bibliographic records created by catalogers with inadequate linguistic or subject knowledge has "disastrous effects on the library community" (p. 6). To support this bold assertion, Bade provides as evidence little more than a laundry list of errors that he has personally observed in bibliographic records over the years. When he eventually cites several studies that have addressed the availability and quality of records available for materials in languages other than English, he fails to describe the findings of these studies in any detail, let alone relate the findings to his own observations in a meaningful way.
Bade claims that a lack of linguistic expertise among catalogers is the "primary source for linguistic misinformation in our databases" (p. 10), but he neither cites substantive data from existing studies nor provides any new data regarding the overall level of linguistic knowledge among catalogers to support this claim. The section concludes with a brief list of eight sensible, if unoriginal, suggestions for coping with the challenge of cataloging materials in unfamiliar languages.
  14. Ackermann, A.: Zur Rolle der Inhaltsanalyse bei der Sacherschließung : theoretischer Anspruch und praktische Wirklichkeit in der RSWK (2001) 0.00
    0.0016464919 = product of:
      0.023050886 = sum of:
        0.023050886 = weight(_text_:benutzer in 2061) [ClassicSimilarity], result of:
          0.023050886 = score(doc=2061,freq=2.0), product of:
            0.18291734 = queryWeight, product of:
              5.7029257 = idf(docFreq=400, maxDocs=44218)
              0.0320743 = queryNorm
            0.12601805 = fieldWeight in 2061, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7029257 = idf(docFreq=400, maxDocs=44218)
              0.015625 = fieldNorm(doc=2061)
      0.071428575 = coord(1/14)
    
    Content
    What is actually surprising when examining the examples that the RSWK themselves give to illustrate their provisions is something else entirely: with few exceptions, the RSWK examples always demonstrate an appropriate content analysis; what is frequently highly problematic, however, is the representation of these results in the subject headings and subject heading strings. Various conclusions can be drawn from this, both for the problem of content analysis and for the RSWK. However difficult content analysis may be as a theoretical concept, indexers trained in a subject discipline evidently have little trouble describing the contents of documents appropriately. This suggests that, for a description of documents that is appropriate from the perspective of content analysis, indexers' long practice in dealing with texts in general matters more than the presence of a coherent concept of content analysis in a set of subject cataloging rules. This does not mean that a coherent concept of content analysis becomes obsolete; rather, it underscores the need to give content-analytical considerations adequate weight in the rules for subject heading cataloging. That the RSWK, contrary to their claim of delivering informative short abstracts with the subject heading strings, are in some cases insufficiently informative in their description of documents stems from a misunderstood notion of precision, embodied in practice in the narrow subject heading. In the cases treated in this thesis, the descriptions of a document lack, owing to overly specific subject indexing, important orientation knowledge for the user (Benutzer), knowledge that is usually hidden in the cross-references of the subject heading authority file (Schlagwortnormdatei). Placing documents in a larger systematic context, which can also be achieved, for example, by assigning broader terms that do not occur in the document, is an important concern of content analysis, and one that naturally also has consequences for the findability of documents.
  15. Marsh, E.E.; White, M.D.: ¬A taxonomy of relationships between images and text (2003) 0.00
    0.0016175414 = product of:
      0.02264558 = sum of:
        0.02264558 = weight(_text_:web in 4444) [ClassicSimilarity], result of:
          0.02264558 = score(doc=4444,freq=2.0), product of:
            0.10467481 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0320743 = queryNorm
            0.21634221 = fieldWeight in 4444, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4444)
      0.071428575 = coord(1/14)
    
    Abstract
    The paper establishes a taxonomy of image-text relationships that reflects the ways that images and text interact. It is applicable to all subject areas and document types. The taxonomy was developed to answer the research question: how does an illustration relate to the text with which it is associated, or, what are the functions of illustration? Developed in a two-stage process - first, analysis of relevant research in children's literature, dictionary development, education, journalism, and library and information design and, second, subsequent application of the first version of the taxonomy to 954 image-text pairs in 45 Web pages (pages with educational content for children, online newspapers, and retail business pages) - the taxonomy identifies 49 relationships and groups them in three categories according to the closeness of the conceptual relationship between image and text. The paper uses qualitative content analysis to illustrate use of the taxonomy to analyze four image-text pairs in government publications and discusses the implications of the research for information retrieval and document design.
  16. Allen, R.B.; Wu, Y.: Metrics for the scope of a collection (2005) 0.00
    0.0016175414 = product of:
      0.02264558 = sum of:
        0.02264558 = weight(_text_:web in 4570) [ClassicSimilarity], result of:
          0.02264558 = score(doc=4570,freq=2.0), product of:
            0.10467481 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0320743 = queryNorm
            0.21634221 = fieldWeight in 4570, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4570)
      0.071428575 = coord(1/14)
    
    Abstract
    Some collections cover many topics, while others are narrowly focused on a limited number of topics. We introduce the concept of the "scope" of a collection of documents and we compare two ways of measuring it. These measures are based on the distances between documents. The first uses the overlap of words between pairs of documents. The second measure uses a novel method that calculates the semantic relatedness of pairs of words from the documents. Those values are combined to obtain an overall distance between the documents. The main validation for the measures compared Web pages categorized by Yahoo. Sets of pages sampled from broad categories were determined to have a higher scope than sets derived from subcategories. The measure was significant and confirmed the expected difference in scope. Finally, we discuss other measures related to scope.
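
    The first, word-overlap variant described above is concrete enough to sketch. Below is a minimal reading, assuming Jaccard distance between document word sets and the mean pairwise distance as the collection's scope; the paper's exact distance and aggregation may differ.

      from itertools import combinations

      def jaccard_distance(a: set[str], b: set[str]) -> float:
          """1 - |A∩B|/|A∪B|: word-overlap distance between two documents."""
          return 1.0 - len(a & b) / len(a | b)

      def scope(docs: list[str]) -> float:
          """Mean pairwise distance over a collection; broad collections score closer to 1."""
          word_sets = [set(d.lower().split()) for d in docs]
          pairs = list(combinations(word_sets, 2))
          return sum(jaccard_distance(a, b) for a, b in pairs) / len(pairs)

      # Per the Yahoo validation: pages sampled from one subcategory should
      # yield a lower scope than pages drawn from several broad categories.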
  17. Pejtersen, A.M.: Design of a classification scheme for fiction based on an analysis of actual user-librarian communication, and use of the scheme for control of librarians' search strategies (1980) 0.00
    0.0015520089 = product of:
      0.021728124 = sum of:
        0.021728124 = product of:
          0.04345625 = sum of:
            0.04345625 = weight(_text_:22 in 5835) [ClassicSimilarity], result of:
              0.04345625 = score(doc=5835,freq=2.0), product of:
                0.11231873 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0320743 = queryNorm
                0.38690117 = fieldWeight in 5835, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5835)
          0.5 = coord(1/2)
      0.071428575 = coord(1/14)
    
    Date
    5. 8.2006 13:22:44
  18. Beghtol, C.: Toward a theory of fiction analysis for information storage and retrieval (1992) 0.00
    0.0012416071 = product of:
      0.017382499 = sum of:
        0.017382499 = product of:
          0.034764998 = sum of:
            0.034764998 = weight(_text_:22 in 5830) [ClassicSimilarity], result of:
              0.034764998 = score(doc=5830,freq=2.0), product of:
                0.11231873 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0320743 = queryNorm
                0.30952093 = fieldWeight in 5830, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5830)
          0.5 = coord(1/2)
      0.071428575 = coord(1/14)
    
    Date
    5. 8.2006 13:22:08
  19. Raieli, R.: The semantic hole : enthusiasm and caution around multimedia information retrieval (2012) 0.00
    0.0010974361 = product of:
      0.015364104 = sum of:
        0.015364104 = product of:
          0.030728208 = sum of:
            0.030728208 = weight(_text_:22 in 4888) [ClassicSimilarity], result of:
              0.030728208 = score(doc=4888,freq=4.0), product of:
                0.11231873 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0320743 = queryNorm
                0.27358043 = fieldWeight in 4888, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4888)
          0.5 = coord(1/2)
      0.071428575 = coord(1/14)
    
    Date
    22. 1.2012 13:02:10
    Source
    Knowledge organization. 39(2012) no.1, S.13-22
  20. Xie, H.; Li, X.; Wang, T.; Lau, R.Y.K.; Wong, T.-L.; Chen, L.; Wang, F.L.; Li, Q.: Incorporating sentiment into tag-based user profiles and resource profiles for personalized search in folksonomy (2016) 0.00
    0.0010783611 = product of:
      0.015097054 = sum of:
        0.015097054 = weight(_text_:web in 2671) [ClassicSimilarity], result of:
          0.015097054 = score(doc=2671,freq=2.0), product of:
            0.10467481 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0320743 = queryNorm
            0.14422815 = fieldWeight in 2671, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=2671)
      0.071428575 = coord(1/14)
    
    Abstract
    In recent years, there has been rapid growth of user-generated data in collaborative tagging (a.k.a. folksonomy-based) systems due to the prevalence of Web 2.0 communities. To effectively assist users in finding their desired resources, it is critical to understand user behaviors and preferences. Tag-based profile techniques, which model users and resources by a vector of relevant tags, are widely employed in folksonomy-based systems. This is mainly because personalized search and recommendations can be facilitated by measuring relevance between user profiles and resource profiles. However, conventional measurements neglect the sentiment aspect of user-generated tags. In fact, tags can be very emotional and subjective, as users usually express their perceptions and feelings about the resources by tags. Therefore, it is necessary to take sentiment relevance into account in such measurements. In this paper, we present a novel generic framework, SenticRank, to incorporate various kinds of sentiment information into personalized search based on user profiles and resource profiles. In this framework, content-based sentiment ranking and collaborative sentiment ranking methods are proposed to obtain sentiment-based personalized ranking. To the best of our knowledge, this is the first work to integrate sentiment information into personalized tag-based search in collaborative tagging systems. Moreover, we compare the proposed sentiment-based personalized search with baselines in experiments, the results of which verify the effectiveness of the proposed framework. In addition, we study the influence of popular sentiment dictionaries, and find SenticNet to be the most prominent knowledge base for boosting the performance of personalized search in folksonomy.
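
    The profile-matching idea in this abstract (users and resources each modeled as a vector of weighted tags, relevance as a similarity between the two vectors) can be sketched as below. Cosine similarity and the pre-scaled sentiment weights are illustrative assumptions, and the profiles are hypothetical; SenticRank itself layers content-based and collaborative sentiment ranking on top of such a relevance measure.

      import math

      def cosine(u: dict[str, float], v: dict[str, float]) -> float:
          """Cosine similarity between two sparse tag vectors."""
          dot = sum(w * v.get(t, 0.0) for t, w in u.items())
          norm = (math.sqrt(sum(w * w for w in u.values()))
                  * math.sqrt(sum(w * w for w in v.values())))
          return dot / norm if norm else 0.0

      # Hypothetical profiles: tag -> weight, here assumed already scaled by a
      # per-tag sentiment score so that emotionally loaded tags count for more.
      user = {"sunset": 0.9, "calm": 0.7, "beach": 0.4}
      resource = {"sunset": 1.0, "beach": 0.6, "crowded": 0.3}
      print(f"{cosine(user, resource):.3f}")  # relevance of resource to user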
