Search (3980 results, page 3 of 199)

  1. Bidwell, S.: Curiosities of light and sight (1899) 0.06
    0.06371069 = product of:
      0.12742138 = sum of:
        0.12742138 = sum of:
          0.09316782 = weight(_text_:light in 5783) [ClassicSimilarity], result of:
            0.09316782 = score(doc=5783,freq=2.0), product of:
              0.2920221 = queryWeight, product of:
                5.7753086 = idf(docFreq=372, maxDocs=44218)
                0.050563898 = queryNorm
              0.31904373 = fieldWeight in 5783, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.7753086 = idf(docFreq=372, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5783)
          0.034253553 = weight(_text_:22 in 5783) [ClassicSimilarity], result of:
            0.034253553 = score(doc=5783,freq=2.0), product of:
              0.17706616 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050563898 = queryNorm
              0.19345059 = fieldWeight in 5783, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5783)
      0.5 = coord(1/2)
    
    Date
    6. 3.2020 17:58:22
  2. Ilhan, A.; Fietkiewicz, K.J.: Data privacy-related behavior and concerns of activity tracking technology users from Germany and the USA (2021) 0.06
    0.06371069 = product of:
      0.12742138 = sum of:
        0.12742138 = sum of:
          0.09316782 = weight(_text_:light in 180) [ClassicSimilarity], result of:
            0.09316782 = score(doc=180,freq=2.0), product of:
              0.2920221 = queryWeight, product of:
                5.7753086 = idf(docFreq=372, maxDocs=44218)
                0.050563898 = queryNorm
              0.31904373 = fieldWeight in 180, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.7753086 = idf(docFreq=372, maxDocs=44218)
                0.0390625 = fieldNorm(doc=180)
          0.034253553 = weight(_text_:22 in 180) [ClassicSimilarity], result of:
            0.034253553 = score(doc=180,freq=2.0), product of:
              0.17706616 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050563898 = queryNorm
              0.19345059 = fieldWeight in 180, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=180)
      0.5 = coord(1/2)
    
    Abstract
    Purpose: This investigation aims to examine the differences and similarities between activity tracking technology users from two regions (the USA and Germany) in their intended privacy-related behavior. The focus lies on data handling after hypothetical discontinuance of use, data protection and privacy policy seeking, and privacy concerns.
    Design/methodology/approach: The data was collected through an online survey in 2019. In order to identify significant differences between participants from Germany and the USA, the chi-squared test and the Mann-Whitney U test were applied.
    Findings: The intensity of several privacy-related concerns differed significantly between the two groups. The majority of the participants did not inform themselves about the respective data privacy policies or terms and conditions before installing an activity tracking application. The majority of the German participants knew that they could request the deletion of all their collected data; in contrast, only 35% of the 68 participants from the US knew about this option.
    Research limitations/implications: This study intends to raise awareness about managing the collected health and fitness data after discontinuing the use of activity tracking technologies. Furthermore, to reduce privacy and security concerns, the involvement of the government, companies and users is necessary to handle and share data more considerately and sustainably.
    Originality/value: This study sheds light on users of activity tracking technologies from a broad perspective (here, participants from the USA and Germany). It incorporates not only concerns and the privacy paradox but also (intended) user behavior, including seeking information on data protection and privacy policy and handling data after hypothetical discontinuance of use of the technology.
    Date
    20. 1.2015 18:30:22
  3. Yu, C.; Xue, H.; An, L.; Li, G.: ¬A lightweight semantic-enhanced interactive network for efficient short-text matching (2023) 0.06
    0.06371069 = product of:
      0.12742138 = sum of:
        0.12742138 = sum of:
          0.09316782 = weight(_text_:light in 890) [ClassicSimilarity], result of:
            0.09316782 = score(doc=890,freq=2.0), product of:
              0.2920221 = queryWeight, product of:
                5.7753086 = idf(docFreq=372, maxDocs=44218)
                0.050563898 = queryNorm
              0.31904373 = fieldWeight in 890, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.7753086 = idf(docFreq=372, maxDocs=44218)
                0.0390625 = fieldNorm(doc=890)
          0.034253553 = weight(_text_:22 in 890) [ClassicSimilarity], result of:
            0.034253553 = score(doc=890,freq=2.0), product of:
              0.17706616 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050563898 = queryNorm
              0.19345059 = fieldWeight in 890, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=890)
      0.5 = coord(1/2)
    
    Abstract
    Knowledge-enhanced short-text matching has been a significant task attracting much attention in recent years. However, existing approaches cannot effectively balance effectiveness and efficiency: effective models usually consist of complex network structures, which lead to slow inference and make them difficult to apply in practice. In addition, most knowledge-enhanced models try to link the mentions in the text to entities in knowledge graphs; the difficulty of entity linking reduces generalizability across datasets. To address these problems, we propose a lightweight Semantic-Enhanced Interactive Network (SEIN) model for efficient short-text matching. Unlike most current research, SEIN employs an unsupervised method to select WordNet's most appropriate paraphrase description as the external semantic knowledge. It focuses on integrating semantic and interactive information of the text while simplifying the structure of the other modules. We conduct extensive experiments on four real-world datasets, namely Quora, Twitter-URL, SciTail, and SICK-E. Compared with state-of-the-art methods, SEIN achieves the best performance on most datasets. The experimental results show that introducing external knowledge can effectively improve the performance of short-text matching models. The research sheds light on the role of lightweight models in leveraging external knowledge to improve the effectiveness of short-text matching.
    Date
    22. 1.2023 19:05:27
  4. Lavoie, B.; Connaway, L.S.; Dempsey, L.: Anatomy of aggregate collections : the example of Google print for libraries (2005) 0.06
    0.058687486 = product of:
      0.11737497 = sum of:
        0.11737497 = sum of:
          0.09682284 = weight(_text_:light in 1184) [ClassicSimilarity], result of:
            0.09682284 = score(doc=1184,freq=6.0), product of:
              0.2920221 = queryWeight, product of:
                5.7753086 = idf(docFreq=372, maxDocs=44218)
                0.050563898 = queryNorm
              0.33156 = fieldWeight in 1184, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                5.7753086 = idf(docFreq=372, maxDocs=44218)
                0.0234375 = fieldNorm(doc=1184)
          0.02055213 = weight(_text_:22 in 1184) [ClassicSimilarity], result of:
            0.02055213 = score(doc=1184,freq=2.0), product of:
              0.17706616 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050563898 = queryNorm
              0.116070345 = fieldWeight in 1184, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0234375 = fieldNorm(doc=1184)
      0.5 = coord(1/2)
    
    Abstract
    Google's December 2004 announcement of its intention to collaborate with five major research libraries - Harvard University, the University of Michigan, Stanford University, the University of Oxford, and the New York Public Library - to digitize and surface their print book collections in the Google searching universe has, predictably, stirred conflicting opinion, with some viewing the project as a welcome opportunity to enhance the visibility of library collections in new environments, and others wary of Google's prospective role as gateway to these collections. The project has been vigorously debated on discussion lists and blogs, with the participating libraries commonly referred to as "the Google 5". One point most observers seem to concede is that the questions raised by this initiative are both timely and significant. The Google Print Library Project (GPLP) has galvanized a long overdue, multi-faceted discussion about library print book collections. The print book is core to library identity and practice, but in an era of zero-sum budgeting, it is almost inevitable that print book budgets will decline as budgets for serials, digital resources, and other materials expand. As libraries re-allocate resources to accommodate changing patterns of user needs, print book budgets may be adversely impacted. Of course, the degree of impact will depend on a library's perceived mission. A public library may expect books to justify their shelf-space, with de-accession the consequence of minimal use. A national library, on the other hand, has a responsibility to the scholarly and cultural record and may seek to collect comprehensively within particular areas, with the attendant obligation to secure the long-term retention of its print book collections. The combination of limited budgets, changing user needs, and differences in library collection strategies underscores the need to think about a collective, or system-wide, print book collection - in particular, how can an inter-institutional system be organized to achieve goals that would be difficult, and/or prohibitively expensive, for any one library to undertake individually [4]? Mass digitization programs like GPLP cast new light on these and other issues surrounding the future of library print book collections, but at this early stage, it is light that illuminates only dimly. It will be some time before GPLP's implications for libraries and library print book collections can be fully appreciated and evaluated. But the strong interest and lively debate generated by this initiative suggest that some preliminary analysis - premature though it may be - would be useful, if only to undertake a rough mapping of the terrain over which GPLP potentially will extend. At the least, some early perspective helps shape interesting questions for the future, when the boundaries of GPLP become settled, workflows for producing and managing the digitized materials become systematized, and usage patterns within the GPLP framework begin to emerge.
    This article offers some perspectives on GPLP in light of what is known about library print book collections in general, and those of the Google 5 in particular, from information in OCLC's WorldCat bibliographic database and holdings file. Questions addressed include: * Coverage: What proportion of the system-wide print book collection will GPLP potentially cover? What is the degree of holdings overlap across the print book collections of the five participating libraries? * Language: What is the distribution of languages associated with the print books held by the GPLP libraries? Which languages are predominant? * Copyright: What proportion of the GPLP libraries' print book holdings are out of copyright? * Works: How many distinct works are represented in the holdings of the GPLP libraries? How does a focus on works impact coverage and holdings overlap? * Convergence: What are the effects on coverage of using a different set of five libraries? What are the effects of adding the holdings of additional libraries to those of the GPLP libraries, and how do these effects vary by library type? These questions certainly do not exhaust the analytical possibilities presented by GPLP. More in-depth analysis might look at Google 5 coverage in particular subject areas; it also would be interesting to see how many books covered by the GPLP have already been digitized in other contexts. However, these questions are left to future studies. The purpose here is to explore a few basic questions raised by GPLP, and in doing so, provide an empirical context for the debate that is sure to continue for some time to come. A secondary objective is to lay some groundwork for a general set of questions that could be used to explore the implications of any mass digitization initiative. A suggested list of questions is provided in the conclusion of the article.
    Date
    26.12.2011 14:08:22
  5. ap: Mehr als 320 Millionen Seiten im Internet (1998) 0.06
    0.055900697 = product of:
      0.11180139 = sum of:
        0.11180139 = product of:
          0.22360279 = sum of:
            0.22360279 = weight(_text_:light in 833) [ClassicSimilarity], result of:
              0.22360279 = score(doc=833,freq=2.0), product of:
                0.2920221 = queryWeight, product of:
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.050563898 = queryNorm
                0.765705 = fieldWeight in 833, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.09375 = fieldNorm(doc=833)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    Comparison of the coverage of various search engines according to a study by the computer manufacturer NEC: HotBot (34%); Altavista (28%); Northern Light (20%); Excite (14%); Lycos (3%)
  6. Krellenstein, M.: Document classification at Northern Light (1999) 0.06
    0.055900697 = product of:
      0.11180139 = sum of:
        0.11180139 = product of:
          0.22360279 = sum of:
            0.22360279 = weight(_text_:light in 4435) [ClassicSimilarity], result of:
              0.22360279 = score(doc=4435,freq=2.0), product of:
                0.2920221 = queryWeight, product of:
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.050563898 = queryNorm
                0.765705 = fieldWeight in 4435, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4435)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  7. Schrodt, R.: Tiefen und Untiefen im wissenschaftlichen Sprachgebrauch (2008) 0.05
    0.053539254 = product of:
      0.10707851 = sum of:
        0.10707851 = product of:
          0.3212355 = sum of:
            0.3212355 = weight(_text_:3a in 140) [ClassicSimilarity], result of:
              0.3212355 = score(doc=140,freq=2.0), product of:
                0.42868128 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.050563898 = queryNorm
                0.7493574 = fieldWeight in 140, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0625 = fieldNorm(doc=140)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    See also: https://studylibde.com/doc/13053640/richard-schrodt. See also: http://www.univie.ac.at/Germanistik/schrodt/vorlesung/wissenschaftssprache.doc.
  8. Popper, K.R.: Three worlds : the Tanner lecture on human values. Delivered at the University of Michigan, April 7, 1978 (1978) 0.05
    0.053539254 = product of:
      0.10707851 = sum of:
        0.10707851 = product of:
          0.3212355 = sum of:
            0.3212355 = weight(_text_:3a in 230) [ClassicSimilarity], result of:
              0.3212355 = score(doc=230,freq=2.0), product of:
                0.42868128 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.050563898 = queryNorm
                0.7493574 = fieldWeight in 230, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0625 = fieldNorm(doc=230)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    https://tannerlectures.utah.edu/_documents/a-to-z/p/popper80.pdf
  9. Kuronen, T.: Ranganathanin lait ja virtuaalikirjasto (1996) 0.05
    0.052703682 = product of:
      0.105407365 = sum of:
        0.105407365 = product of:
          0.21081473 = sum of:
            0.21081473 = weight(_text_:light in 6704) [ClassicSimilarity], result of:
              0.21081473 = score(doc=6704,freq=4.0), product of:
                0.2920221 = queryWeight, product of:
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.050563898 = queryNorm
                0.7219136 = fieldWeight in 6704, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6704)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Evaluates the potential of the electronic library (virtual library) to provide information for the public in light of Ranganathan's five laws of library science. Rephrases certain laws in the context of electronic information resources and points to opportunities to make additions to the laws in light of news services and the Internet
  10. Neumeier, F.: Verständnisprobleme : Internet Suchmaschinen (1998) 0.05
    0.052703682 = product of:
      0.105407365 = sum of:
        0.105407365 = product of:
          0.21081473 = sum of:
            0.21081473 = weight(_text_:light in 40) [ClassicSimilarity], result of:
              0.21081473 = score(doc=40,freq=4.0), product of:
                0.2920221 = queryWeight, product of:
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.050563898 = queryNorm
                0.7219136 = fieldWeight in 40, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.0625 = fieldNorm(doc=40)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    Presented and rated are: AltaVista (grade: 4); Excite (2); Hotbot (3); InfoSeek (3); Lycos (4); Northern Light (5); Open Text (5); WebCrawler (4); Yahoo (3)
    Object
    Northern Light
  11. Stock, M.; Stock, W.G.: Internet-Suchwerkzeuge im Vergleich (III) : Informationslinguistik und -statistik: AltaVista, FAST und Northern Light (2001) 0.05
    0.052703682 = product of:
      0.105407365 = sum of:
        0.105407365 = product of:
          0.21081473 = sum of:
            0.21081473 = weight(_text_:light in 5578) [ClassicSimilarity], result of:
              0.21081473 = score(doc=5578,freq=4.0), product of:
                0.2920221 = queryWeight, product of:
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.050563898 = queryNorm
                0.7219136 = fieldWeight in 5578, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5578)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Search engines on the World Wide Web work automatically: they track down documents, index them, keep the database (more or less) up to date and offer their customers retrieval interfaces. In our known-item retrieval test (Password 11/2000), Google, Alta Vista, Northern Light and FAST (All the Web) performed best, in that order. The latter three systems work with a combination of information-linguistic and information-statistical algorithms, which is why we discuss them together here. At the centre of our information-science analyses are the "highlights" of the respective search tools.
  12. ¬Die Wissenschaft und ihre Sprachen (2007) 0.05
    0.05096855 = product of:
      0.1019371 = sum of:
        0.1019371 = sum of:
          0.07453426 = weight(_text_:light in 301) [ClassicSimilarity], result of:
            0.07453426 = score(doc=301,freq=2.0), product of:
              0.2920221 = queryWeight, product of:
                5.7753086 = idf(docFreq=372, maxDocs=44218)
                0.050563898 = queryNorm
              0.255235 = fieldWeight in 301, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.7753086 = idf(docFreq=372, maxDocs=44218)
                0.03125 = fieldNorm(doc=301)
          0.027402842 = weight(_text_:22 in 301) [ClassicSimilarity], result of:
            0.027402842 = score(doc=301,freq=2.0), product of:
              0.17706616 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050563898 = queryNorm
              0.15476047 = fieldWeight in 301, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=301)
      0.5 = coord(1/2)
    
    Content
    Aus dem Inhalt: Konrad Ehlich / Dorothee Heller: Einleitung - Konrad Ehlich: Mehrsprachigkeit in der Wissenschaftskommunikation - Illusion oder Notwendigkeit? - Christian Fandrych: Bildhaftigkeit und Formelhaftigkeit in der allgemeinen Wissenschaftssprache als Herausforderung für Deutsch als Fremdsprache - Dorothee Heller: L'autore traccia un quadro... - Beobachtungen zur Versprachlichung wissenschaftlichen Handelns im Deutschen und Italienischen - Kristin Stezano Cotelo: Die studentische Seminararbeit - studentische Wissensverarbeitung zwischen Alltagswissen und wissenschaftlichem Wissen - Sabine Ylönen: Training wissenschaftlicher Kommunikation mit E-Materialien. Beispiel mündliche Hochschulprüfung - Susanne Guckelsberger: Zur kommunikativen Struktur von mündlichen Referaten in universitären Lehrveranstaltungen - Giancarmine Bongo: Asymmetrien in wissenschaftlicher Kommunikation - Klaus-Dieter Baumann: Die interdisziplinäre Analyse rhetorisch-stilistischer Mittel der Fachkommunikation als ein Zugang zum Fachdenken - Marcello Soffritti: Der übersetzungstheoretische und -kritische Diskurs als fachsprachliche Kommunikation. Ansätze zu Beschreibung und Wertung - Karl Gerhard Hempel: Nationalstile in archäologischen Fachtexten. Bemerkungen zu `Stilbeschreibungen' im Deutschen und im Italienischen - Ingrid Wiese: Zur Situation des Deutschen als Wissenschaftssprache in der Medizin - Winfried Thielmann: «...it seems that light is propagated in time... » - zur Befreiung des wissenschaftlichen Erkenntnisprozesses durch die Vernakulärsprache Englisch.
    Date
    7. 5.2007 12:16:22
  13. Hajibayova, L.; Jacob, E.K.: User-generated genre tags through the lens of genre theories (2014) 0.05
    0.05096855 = product of:
      0.1019371 = sum of:
        0.1019371 = sum of:
          0.07453426 = weight(_text_:light in 1450) [ClassicSimilarity], result of:
            0.07453426 = score(doc=1450,freq=2.0), product of:
              0.2920221 = queryWeight, product of:
                5.7753086 = idf(docFreq=372, maxDocs=44218)
                0.050563898 = queryNorm
              0.255235 = fieldWeight in 1450, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.7753086 = idf(docFreq=372, maxDocs=44218)
                0.03125 = fieldNorm(doc=1450)
          0.027402842 = weight(_text_:22 in 1450) [ClassicSimilarity], result of:
            0.027402842 = score(doc=1450,freq=2.0), product of:
              0.17706616 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050563898 = queryNorm
              0.15476047 = fieldWeight in 1450, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=1450)
      0.5 = coord(1/2)
    
    Abstract
    LIS genre studies have suggested that representing the genre of a resource could provide better knowledge representation, organization and retrieval (e.g., Andersen, 2008; Crowston & Kwasnik, 2003). Beghtol (2001) argues that genre analysis could be a useful tool for creating a "framework of analysis for a domain ... [to] structure and interpret texts, events, ideas, decisions, explanations and every other human activity in that domain" (p. 19). Although some studies of user-generated tagging vocabularies have found a preponderance of content-related tags (e.g., Munk & Mork, 2007), Lamere's (2008) study of the most frequently applied tags at Last.fm found that tags representing musical genres were favored by taggers. Studies of user-generated genre tags suggest that, unlike traditional indexing, which generally assigns a single genre, users' assignments of genre-related tags provide better representation of the fuzziness at the boundaries of genre categories (Inskip, 2009). In this way, user-generated genre tags are more in line with Bakhtin's (Bakhtin & Medvedev, 1928/1985) conceptualization of genre as an "aggregate of the means for seeing and conceptualizing reality" (p. 137). For Bakhtin (1986), genres are kinds of practice characterized by their "addressivity" (p. 95): Different genres correspond to different "conceptions of the addressee" and are "determined by that area of human activity and everyday life to which the given utterance is related" (p. 95). Miller (1984) argues that genre refers to a "conventional category of discourse based in large-scale typification of rhetorical action; as action, it acquires meaning from situation and from the social context in which that situation arose" (p. 163). Genre is part of a social context that produces, reproduces, modifies and ultimately represents a particular text, but how to reunite genre and situation (or text and context) in systems of knowledge organization has not been addressed. Based on Devitt's (1993) argument that "our construction of genre is what helps us to construct a situation" (p. 577), one way to represent genre as "typified rhetorical actions based in recurrent situations" (Miller, 1984, p. 159) would be to employ genre tags generated by a particular group or community of users. This study suggests the application of social network analysis to detect communities (Newman, 2006) of genre taggers and argues that communities of genre taggers can better define the nature and constitution of a discourse community while simultaneously shedding light on multifaceted representations of resource genres.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  14. Huvila, I.: Situational appropriation of information (2015) 0.05
    0.05096855 = product of:
      0.1019371 = sum of:
        0.1019371 = sum of:
          0.07453426 = weight(_text_:light in 2596) [ClassicSimilarity], result of:
            0.07453426 = score(doc=2596,freq=2.0), product of:
              0.2920221 = queryWeight, product of:
                5.7753086 = idf(docFreq=372, maxDocs=44218)
                0.050563898 = queryNorm
              0.255235 = fieldWeight in 2596, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.7753086 = idf(docFreq=372, maxDocs=44218)
                0.03125 = fieldNorm(doc=2596)
          0.027402842 = weight(_text_:22 in 2596) [ClassicSimilarity], result of:
            0.027402842 = score(doc=2596,freq=2.0), product of:
              0.17706616 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050563898 = queryNorm
              0.15476047 = fieldWeight in 2596, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=2596)
      0.5 = coord(1/2)
    
    Abstract
    Purpose: In contrast to the interest in describing and managing the social processes of knowing, information science and information and knowledge management research have put less emphasis on discussing how particular information becomes usable and how it is used in different contexts and situations. The purpose of this paper is to address this major gap and to introduce and discuss the applicability of the notion of situational appropriation of information for shedding light on this particular process in the context of the daily information work practices of professionals.
    Design/methodology/approach: The study is based on the analysis of 25 qualitative interviews with archives, library and museum professionals conducted in two Nordic countries.
    Findings: The study presents examples of how individuals appropriate different tangible and intangible assets as information on the basis of the situation in hand.
    Research limitations/implications: The study proposes a new conceptual tool for articulating and conducting research on the process by which information becomes useful in the situation in hand.
    Practical implications: The situational appropriation of information perspective redefines the role of information management to incorporate a comprehensive awareness of the situations in which information is useful and is being used. A better understanding of how information becomes useful in diverse situations helps to discern the active role of contextual and situational effects and to exploit and take them into account as part of the management of information and knowledge processes.
    Originality/value: In contrast to the orthodoxies of information science and information and knowledge management research, the notion of situational appropriation of information represents an alternative approach to the conceptualisation of information utilisation. It helps to frame particular types of instances of information use that are not necessarily addressed within the objectivistic, information-seeker or learning-oriented paradigms of information and knowledge management.
    Date
    20. 1.2015 18:30:22
  15. Yu, L.; Fan, Z.; Li, A.: ¬A hierarchical typology of scholarly information units : based on a deduction-verification study (2020) 0.05
    0.05096855 = product of:
      0.1019371 = sum of:
        0.1019371 = sum of:
          0.07453426 = weight(_text_:light in 5655) [ClassicSimilarity], result of:
            0.07453426 = score(doc=5655,freq=2.0), product of:
              0.2920221 = queryWeight, product of:
                5.7753086 = idf(docFreq=372, maxDocs=44218)
                0.050563898 = queryNorm
              0.255235 = fieldWeight in 5655, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.7753086 = idf(docFreq=372, maxDocs=44218)
                0.03125 = fieldNorm(doc=5655)
          0.027402842 = weight(_text_:22 in 5655) [ClassicSimilarity], result of:
            0.027402842 = score(doc=5655,freq=2.0), product of:
              0.17706616 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050563898 = queryNorm
              0.15476047 = fieldWeight in 5655, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=5655)
      0.5 = coord(1/2)
    
    Abstract
    Purpose: The purpose of this paper is to lay a theoretical foundation for identifying operational information units for library and information professional activities in the context of scholarly communication.
    Design/methodology/approach: The study adopts a deduction-verification approach to formulate a typology of units for scholarly information. It first deduces possible units from an existing conceptualization of information, which defines information as the combined product of data and meaning, and then tests the usefulness of these units via two empirical investigations, one with a group of scholarly papers and the other with a sample of scholarly information users.
    Findings: The results show that, on defining an information unit as a piece of information that is complete in both data and meaning, to such an extent that it remains meaningful to its target audience when retrieved and displayed independently in a database, it is possible to formulate a hierarchical typology of units for scholarly information. The typology proposed in this study consists of three levels, which, in turn, consist of 1, 5 and 44 units, respectively.
    Research limitations/implications: The result of this study has theoretical implications on both the philosophical and conceptual levels: on the philosophical level, it hinges on, and reinforces, the objective view of information; on the conceptual level, it challenges the conceptualization of work in IFLA's Functional Requirements for Bibliographic Records and Library Reference Model but endorses that of the Library of Congress's BIBFRAME 2.0 model.
    Practical implications: It calls for reconsideration of existing operational units in a variety of library and information activities.
    Originality/value: The study strengthens the conceptual foundation of operational information units and brings to light the primacy of "one work" as an information unit and the possibility for it to be supplemented by smaller units.
    Date
    14. 1.2020 11:15:22
  16. #2434 0.05
    0.047954973 = product of:
      0.095909946 = sum of:
        0.095909946 = product of:
          0.19181989 = sum of:
            0.19181989 = weight(_text_:22 in 2433) [ClassicSimilarity], result of:
              0.19181989 = score(doc=2433,freq=2.0), product of:
                0.17706616 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050563898 = queryNorm
                1.0833232 = fieldWeight in 2433, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.21875 = fieldNorm(doc=2433)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    4. 9.2011 12:28:22
  17. #2819 0.05
    0.047954973 = product of:
      0.095909946 = sum of:
        0.095909946 = product of:
          0.19181989 = sum of:
            0.19181989 = weight(_text_:22 in 2818) [ClassicSimilarity], result of:
              0.19181989 = score(doc=2818,freq=2.0), product of:
                0.17706616 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050563898 = queryNorm
                1.0833232 = fieldWeight in 2818, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.21875 = fieldNorm(doc=2818)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 5.1998 19:49:25
  18. #4316 0.05
    0.047954973 = product of:
      0.095909946 = sum of:
        0.095909946 = product of:
          0.19181989 = sum of:
            0.19181989 = weight(_text_:22 in 4315) [ClassicSimilarity], result of:
              0.19181989 = score(doc=4315,freq=2.0), product of:
                0.17706616 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050563898 = queryNorm
                1.0833232 = fieldWeight in 4315, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.21875 = fieldNorm(doc=4315)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 5.1998 19:49:25
  19. #7401 0.05
    0.047954973 = product of:
      0.095909946 = sum of:
        0.095909946 = product of:
          0.19181989 = sum of:
            0.19181989 = weight(_text_:22 in 7400) [ClassicSimilarity], result of:
              0.19181989 = score(doc=7400,freq=2.0), product of:
                0.17706616 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050563898 = queryNorm
                1.0833232 = fieldWeight in 7400, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.21875 = fieldNorm(doc=7400)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 5.1998 19:49:25
  20. Vetere, G.; Lenzerini, M.: Models for semantic interoperability in service-oriented architectures (2005) 0.05
    0.046846848 = product of:
      0.093693696 = sum of:
        0.093693696 = product of:
          0.28108108 = sum of:
            0.28108108 = weight(_text_:3a in 306) [ClassicSimilarity], result of:
              0.28108108 = score(doc=306,freq=2.0), product of:
                0.42868128 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.050563898 = queryNorm
                0.65568775 = fieldWeight in 306, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=306)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    See: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=5386707&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D5386707.
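
For reference, the relevance value printed after each hit is Lucene's ClassicSimilarity score, and the indented tree beneath each entry is the corresponding explain output: each matching term contributes queryWeight * fieldWeight, where queryWeight = idf * queryNorm and fieldWeight = tf * idf * fieldNorm with tf = sqrt(termFreq); the term contributions are summed and the sum is scaled by the coord factor. A minimal Python sketch, using only the figures from the explain tree of result 1 (doc 5783), reproduces that arithmetic:

import math

# Figures copied from the explain tree of result 1 (doc 5783) above.
QUERY_NORM = 0.050563898   # queryNorm, shared by every term of the query

def term_score(freq, idf, field_norm):
    """ClassicSimilarity contribution of one matching term: queryWeight * fieldWeight."""
    tf = math.sqrt(freq)                  # tf(freq=2.0) = 1.4142135
    query_weight = idf * QUERY_NORM       # e.g. 5.7753086 * queryNorm = 0.2920221
    field_weight = tf * idf * field_norm  # e.g. 1.4142135 * 5.7753086 * 0.0390625 = 0.31904373
    return query_weight * field_weight

light = term_score(freq=2.0, idf=5.7753086, field_norm=0.0390625)  # weight(_text_:light in 5783) -> 0.09316782
t22   = term_score(freq=2.0, idf=3.5018296, field_norm=0.0390625)  # weight(_text_:22 in 5783)    -> 0.034253553

coord = 1 / 2                  # coord(1/2): one of two top-level query clauses matched
score = (light + t22) * coord  # 0.12742138 * 0.5 = 0.06371069

print(round(score, 8))         # 0.06371069, the value shown for result 1

The sketch prints 0.06371069, the value at the top of result 1's explain tree (displayed as 0.06 in the hit list); the other explain trees on this page follow the same pattern, with additional nested coord factors where shown.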

Types

  • a 3346
  • m 369
  • el 180
  • s 147
  • b 39
  • x 36
  • i 23
  • r 18
  • ? 8
  • p 4
  • d 3
  • n 3
  • u 2
  • z 2
  • au 1
  • h 1
