Search (1211 results, page 1 of 61)

  • Filter: year_i:[2010 TO 2020}
  1. Verwer, K.: Freiheit und Verantwortung bei Hans Jonas (2011) 0.16
    0.16316654 = sum of:
      0.120454706 = product of:
        0.48181883 = sum of:
          0.48181883 = weight(_text_:3a in 973) [ClassicSimilarity], result of:
            0.48181883 = score(doc=973,freq=2.0), product of:
              0.42865068 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.050560288 = queryNorm
              1.1240361 = fieldWeight in 973, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.09375 = fieldNorm(doc=973)
        0.25 = coord(1/4)
      0.042711835 = product of:
        0.08542367 = sum of:
          0.08542367 = weight(_text_:k in 973) [ClassicSimilarity], result of:
            0.08542367 = score(doc=973,freq=2.0), product of:
              0.180489 = queryWeight, product of:
                3.569778 = idf(docFreq=3384, maxDocs=44218)
                0.050560288 = queryNorm
              0.47329018 = fieldWeight in 973, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.569778 = idf(docFreq=3384, maxDocs=44218)
                0.09375 = fieldNorm(doc=973)
        0.5 = coord(1/2)
    
    Content
     Cf.: http://creativechoice.org/doc/HansJonas.pdf.
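The explain tree under hit 1 composes exactly as Lucene's ClassicSimilarity arithmetic: per clause, queryWeight = idf · queryNorm and fieldWeight = sqrt(tf) · idf · fieldNorm, multiplied together and scaled by the coord factor. A minimal sketch using only the factors printed in the tree (function and variable names are illustrative) reproduces the reported total:

```python
import math

def classic_clause(freq, idf, query_norm, field_norm, coord):
    """One TF-IDF clause of Lucene's ClassicSimilarity:
    score = queryWeight * fieldWeight * coord, where
    queryWeight = idf * queryNorm and fieldWeight = sqrt(freq) * idf * fieldNorm."""
    query_weight = idf * query_norm
    field_weight = math.sqrt(freq) * idf * field_norm
    return query_weight * field_weight * coord

# Factors copied from the explain tree of hit 1 (doc 973):
clause_3a = classic_clause(2.0, 8.478011, 0.050560288, 0.09375, 0.25)  # _text_:3a, coord(1/4)
clause_k  = classic_clause(2.0, 3.569778, 0.050560288, 0.09375, 0.5)   # _text_:k, coord(1/2)
total = clause_3a + clause_k  # matches the reported 0.16316654
```

The same arithmetic accounts for every score breakdown on this page; only freq, idf, fieldNorm, and coord vary per hit.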
  2. Gödert, W.; Lepsky, K.: Informationelle Kompetenz : ein humanistischer Entwurf (2019) 0.11
    
    Classification
    OKH (FH K)
    Footnote
     Reviews in: Philosophisch-ethische Rezensionen, 09.11.2019 (Jürgen Czogalla), at: https://philosophisch-ethische-rezensionen.de/rezension/Goedert1.html. In: B.I.T. online 23(2020) H.3, S.345-347 (W. Sühl-Strohmenger) [at: https://www.b-i-t-online.de/heft/2020-03-rezensionen.pdf]. In: Open Password Nr. 805, 14.08.2020 (H.-C. Hobohm) [at: https://www.password-online.de/?mailpoet_router&endpoint=view_in_browser&action=view&data=WzE0MywiOGI3NjZkZmNkZjQ1IiwwLDAsMTMxLDFd].
    GHBS
    OKH (FH K)
  3. Deisseroth, K.: Lichtschalter im Gehirn (2011) 0.08
    
    Source
    Spektrum der Wissenschaft. 2011, H.2, S.22-29
  4. Ceynowa, K.: Research Library Reloaded? : Überlegungen zur Zukunft der geisteswissenschaftlichen Forschungsbibliothek (2018) 0.08
    
    Date
    1. 2.2019 12:50:22
  5. Ahlgren, P.; Järvelin, K.: Measuring impact of twelve information scientists using the DCI index (2010) 0.07
    
    Abstract
     The Discounted Cumulated Impact (DCI) index has recently been proposed for research evaluation. In the present work an earlier dataset by Cronin and Meho (2007) is reanalyzed, with the aim of exemplifying the salient features of the DCI index. We apply the index to, and compare our results with, the outcomes of the Cronin-Meho (2007) study. Both authors and their top publications are used as units of analysis. The results suggest that, by adjusting the parameters of evaluation to the needs of research evaluation, the DCI index delivers data on an author's (or publication's) lifetime impact or current impact at the time of evaluation, on an author's (or publication's) capability of inviting citations from highly cited later publications as an indication of impact, and on the relative impact across a set of authors (or publications) over their lifetime or currently.
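The DCI index of hit 5 adapts discounted cumulated gain from IR evaluation to citation impact. The authors' exact formulation is given in the paper; the sketch below only illustrates the core discounting idea (later citations devalued by a log-base-b factor) and its parameters are assumptions, not the published formula:

```python
import math

def dci(yearly_citations, b=2.0):
    """Illustrative discounted cumulated impact, modeled on discounted
    cumulated gain: citations arriving i years after publication are
    divided by log_b(i) once the discount exceeds 1. This is a sketch
    of the discounting idea, not the published DCI formula."""
    return sum(c / max(1.0, math.log(i, b))
               for i, c in enumerate(yearly_citations, start=1))
```

For example, with b = 2 a citation received in year 4 counts half as much as one received in year 1, so steady late impact and strong early impact become distinguishable.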
  6. Castle, C.: Getting the central RDM message across : a case study of central versus discipline-specific Research Data Services (RDS) at the University of Cambridge (2019) 0.06
    
    Abstract
    RDS are usually cross-disciplinary, centralised services, which are increasingly provided at a university by the academic library and in collaboration with other RDM stakeholders, such as the Research Office. At research-intensive universities, research data is generated in a wide range of disciplines and sub-disciplines. This paper will discuss how providing discipline-specific RDM support is approached by such universities and academic libraries, and the advantages and disadvantages of these central and discipline-specific approaches. A descriptive case study on the author's experiences of collaborating with a central RDS at the University of Cambridge, as a subject librarian embedded in an academic department, is a major component of this paper. The case study describes how centralised RDM services offered by the Office of Scholarly Communication (OSC) have been adapted to meet discipline-specific needs in the Department of Chemistry. It will introduce the department and the OSC, and describe the author's role in delivering RDM training, as well as the Data Champions programme, and their membership of the RDM Project Group. It will describe the outcomes of this collaboration for the Department of Chemistry, and for the centralised service. Centralised and discipline-specific approaches to RDS provision have their own advantages and disadvantages. Supporting the discipline-specific RDM needs of researchers is proving particularly challenging for universities to address sustainably: it requires adequate financial resources and staff skilled (or re-skilled) in RDM. A mixed approach is the most desirable, cost-effective way of providing RDS, but this still has constraints.
    Date
    7. 9.2019 21:30:22
  7. Zhang, Y.: Developing a holistic model for digital library evaluation (2010) 0.06
    
    Abstract
    This article reports the author's recent research in developing a holistic model for various levels of digital library (DL) evaluation in which perceived important criteria from heterogeneous stakeholder groups are organized and presented. To develop such a model, the author applied a three-stage research approach: exploration, confirmation, and verification. During the exploration stage, a literature review was conducted followed by an interview, along with a card sorting technique, to collect important criteria perceived by DL experts. Then the criteria identified were used for developing an online survey during the confirmation stage. Survey respondents (431 in total) from 22 countries rated the importance of the criteria. A holistic DL evaluation model was constructed using statistical techniques. Eventually, the verification stage was devised to test the reliability of the model in the context of searching and evaluating an operational DL. The proposed model fills two lacunae in the DL domain: (a) the lack of a comprehensive and flexible framework to guide and benchmark evaluations, and (b) the uncertainty about what divergence exists among heterogeneous DL stakeholders, including general users.
  8. Hjoerland, B.: The importance of theories of knowledge : indexing and information retrieval as an example (2011) 0.06
    
    Abstract
     A recent study in information science (IS) raises important issues concerning the value of human indexing and basic theories of indexing and information retrieval, as well as the use of quantitative and qualitative approaches in IS and the underlying theories of knowledge informing the field. The present article uses L&E as the point of departure for demonstrating in what way more social and interpretative understandings may provide fruitful improvements for research in indexing, knowledge organization, and information retrieval. The article is motivated by the observation that philosophical contributions tend to be ignored in IS if they are not directly formed as criticisms or invitations to dialog. It is part of the author's ongoing publication of articles about philosophical issues in IS and is intended to be followed by analyses of other examples of contributions to core issues in IS. Although it is formulated as a criticism of a specific paper, it should be seen as part of a general discussion of the philosophical foundation of IS and as support for the emerging social paradigm in this field.
    Date
    17. 3.2011 19:22:55
  9. Zhu, W.Z.; Allen, R.B.: Document clustering using the LSI subspace signature model (2013) 0.06
    
    Abstract
     We describe the latent semantic indexing subspace signature model (LSISSM) for semantic content representation of unstructured text. Grounded on singular value decomposition, the model represents terms and documents by the distribution signatures of their statistical contribution across the top-ranking latent concept dimensions. LSISSM matches term signatures with document signatures according to their mapping coherence between the latent semantic indexing (LSI) term subspace and the LSI document subspace. LSISSM performs feature reduction and finds a low-rank approximation of scalable, sparse term-document matrices. Experiments demonstrate that this approach significantly improves the performance of major clustering algorithms such as standard K-means and self-organizing maps compared with the vector space model and the traditional LSI model. The unique contribution-ranking mechanism in LSISSM also improves the initialization of standard K-means compared with a random seeding procedure, which sometimes causes low efficiency and effectiveness of clustering. A two-stage initialization strategy based on LSISSM significantly reduces the running time of standard K-means procedures.
    Date
    23. 3.2013 13:22:36
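The abstract of hit 9 combines an SVD-based LSI representation with a non-random k-means initialization. A rough numpy sketch of the two ingredients follows; the real LSISSM signature matching is considerably richer, and the largest-norm seeding rule here is a stand-in assumption, not the paper's contribution-ranking mechanism:

```python
import numpy as np

def lsi_doc_vectors(term_doc, k):
    # Rank-k LSI projection of a term-document matrix via SVD:
    # each document becomes a k-dimensional latent-concept vector.
    U, s, Vt = np.linalg.svd(term_doc, full_matrices=False)
    return (np.diag(s[:k]) @ Vt[:k]).T  # shape: (n_docs, k)

def lsi_seeds(doc_vecs, n_clusters):
    # Stand-in for LSISSM's contribution-ranking initialization:
    # seed k-means with the documents of largest latent-space norm
    # instead of drawing seeds at random.
    order = np.argsort(np.linalg.norm(doc_vecs, axis=1))[::-1]
    return doc_vecs[order[:n_clusters]]
```

Deterministic, data-informed seeds are what make the two-stage strategy cheaper than repeated random restarts of standard K-means.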
  10. Eckert, K.: SKOS: eine Sprache für die Übertragung von Thesauri ins Semantic Web (2011) 0.06
    
    Date
    15. 3.2011 19:21:22
  11. Hamm, S.; Schneider, K.: Automatische Erschließung von Universitätsdissertationen (2015) 0.06
    
    Source
    Dialog mit Bibliotheken. 27(2015) H.1, S.18-22
  12. Köttke, K.: Kansste getrost vergessen : Wer sich auf digitale Informationssysteme verlässt, setzt kognitive Fähigkeiten frei (2019) 0.06
    
    Date
    13. 2.2019 9:22:14
  13. Gödert, W.; Hubrich, J.; Nagelschmidt, M.: Semantic knowledge representation for information retrieval (2014) 0.05
    
    Classification
    BCA (FH K)
    Date
    23. 7.2017 13:49:22
    GHBS
    BCA (FH K)
  14. Leitfaden zum Forschungsdaten-Management : Handreichungen aus dem WissGrid-Projekt (2013) 0.05
    
    Classification
    ANW (FH K)
    Date
    19.12.2015 11:57:22
    GHBS
    ANW (FH K)
  15. Zanibbi, R.; Yuan, B.: Keyword and image-based retrieval for mathematical expressions (2011) 0.05
    
    Abstract
     Two new methods for retrieving mathematical expressions using conventional keyword search and expression images are presented. An expression-level TF-IDF (term frequency-inverse document frequency) approach is used for keyword search, where queries and indexed expressions are represented by keywords taken from LaTeX strings. TF-IDF is computed at the level of individual expressions rather than documents to increase the precision of matching. The second retrieval technique is a form of Content-Based Image Retrieval (CBIR). Expressions are segmented into connected components, and then components in the query expression and each expression in the collection are matched using contour and density features, aspect ratios, and relative positions. In an experiment using ten randomly sampled queries from a corpus of over 22,000 expressions, precision-at-k (k = 20) for the keyword-based approach was higher (keyword: µ = 84.0, s = 19.0; image-based: µ = 32.0, s = 30.7), but for a few of the queries better results were obtained using a combination of the two techniques.
    Date
    22. 2.2017 12:53:49
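Expression-level TF-IDF, as described in hit 15, simply shrinks the "document" unit down to a single expression. A minimal sketch with a naive whitespace tokenizer follows; the authors derive keywords from LaTeX strings with a proper tokenizer, so the function names and tokenization here are illustrative assumptions:

```python
import math
from collections import Counter

def build_index(expressions):
    # Each LaTeX expression is its own "document" for TF and IDF,
    # which is what raises the precision of expression matching.
    docs = [Counter(expr.split()) for expr in expressions]
    n = len(docs)
    df = Counter(term for d in docs for term in d)
    idf = {t: math.log(n / df[t]) for t in df}
    return docs, idf

def rank(query, docs, idf):
    # TF-IDF dot-product score of the query keywords against each expression.
    terms = query.split()
    return [sum(d[t] * idf.get(t, 0.0) for t in terms) for d in docs]
```

An expression containing none of the query keywords scores zero, while rare LaTeX commands dominate the ranking through their higher IDF.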
  16. Börner, K.: Atlas of knowledge : anyone can map (2015) 0.05
    
    Date
    22. 1.2017 16:54:03
    22. 1.2017 17:10:56
  17. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.05
    
    Source
     http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107
  18. Gursoy, A.; Wickett, K.; Feinberg, M.: Understanding tag functions in a moderated, user-generated metadata ecosystem (2018) 0.05
    
    Abstract
    Purpose
    The purpose of this paper is to investigate tag use in a metadata ecosystem that supports a fan work repository, to identify functions of tags, and to explore the system as a co-constructed communicative context.
    Design/methodology/approach
    Using modified techniques from grounded theory (Charmaz, 2007), this paper integrates humanistic and social science methods to identify kinds of tag use in a rich setting.
    Findings
    Three primary roles of tags emerge from detailed study of the metadata ecosystem: tags can identify elements in the fan work, tags can reflect on how those elements are used or adapted in the fan work, and tags can express the fan author's sense of her role in the discursive context of the fan work repository. Attending to each of these roles shifts focus away from just what tags say to include how they say it.
    Practical implications
    Instead of building metadata systems designed solely for retrieval or description, this research suggests that it may be fruitful to build systems that recognize various metadata functions and allow for expressivity. It also suggests that metadata previously considered unusable may reflect the participants' sense of the system and their role within it.
    Originality/value
    In addition to accommodating a wider range of tag functions, this research implies consideration of metadata ecosystems, where different kinds of tags do different things and work together to create a multifaceted artifact.
  19. Aldebei, K.; He, X.; Jia, W.; Yeh, W.: SUDMAD: Sequential and unsupervised decomposition of a multi-author document based on a hidden Markov model (2018) 0.05
    
    Abstract
    Decomposing a document written by more than one author into sentences based on authorship is of great significance, given the increasing demand for plagiarism detection, forensic analysis, civil law (e.g., disputed copyright), and intelligence work involving disputed anonymous documents. Among existing studies of document decomposition, some are limited to specific languages or topics, or restricted to documents with exactly two authors, and their accuracy leaves considerable room for improvement. In this paper, we consider the contextual correlation hidden among sentences and propose an algorithm for Sequential and Unsupervised Decomposition of a Multi-Author Document (SUDMAD), written in any language and regardless of topic, through the construction of a Hidden Markov Model (HMM) reflecting the authors' writing styles. To build and learn such a model, an unsupervised, statistical approach is first proposed to estimate the initial values of the HMM parameters of a preliminary model; it requires no information about the authors or the document's context other than how many authors contributed to writing the document. To further boost performance, a boosted HMM learning procedure is proposed next, in which the initial classification results are used to create labeled training data for learning a more accurate HMM. Moreover, the contextual relationship among sentences is further utilized to refine the classification results. Our proposed approach is empirically evaluated on three benchmark datasets that are widely used for authorship analysis of documents. Comparisons with recent state-of-the-art approaches are also presented to demonstrate the significance of our new ideas and the superior performance of our approach.
  20. Social Media und Web Science : das Web als Lebensraum. Proceedings, Düsseldorf, 22-23 March 2012, ed. by Marlies Ockenfeld, Isabella Peters and Katrin Weller. DGI, Frankfurt am Main 2012 (2012) 0.05
    
    Editor
    Ockenfeld, M., I. Peters and K. Weller
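Each entry above is ranked by a Lucene ClassicSimilarity relevance score, built from per-term tf-idf "fieldWeight" factors. As a minimal sketch, assuming Lucene's classic formulas (tf = sqrt(freq), idf = ln(maxDocs / (docFreq + 1)) + 1); the helper names here are illustrative, not part of any API:

```python
import math

def tf(freq):
    """Term-frequency factor: square root of the raw in-field frequency."""
    return math.sqrt(freq)

def idf(doc_freq, max_docs):
    """Inverse document frequency: ln(maxDocs / (docFreq + 1)) + 1."""
    return math.log(max_docs / (doc_freq + 1)) + 1.0

def field_weight(freq, doc_freq, max_docs, field_norm):
    """fieldWeight = tf(freq) * idf * fieldNorm, as reported by the engine."""
    return tf(freq) * idf(doc_freq, max_docs) * field_norm

# Reproduce one reported factor: a term with freq=2 in a field with
# fieldNorm=0.0546875, appearing in 3384 of 44218 documents.
w = field_weight(freq=2.0, doc_freq=3384, max_docs=44218, field_norm=0.0546875)
# w is approximately 0.2760859, matching the reported fieldWeight
```

The headline score of an entry then sums such weighted factors (scaled by queryNorm and coord), which is why rare terms (high idf, e.g. docFreq=24) dominate the ranking.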

Languages

  • e 829
  • d 364
  • a 1
  • hu 1

Types

  • a 1007
  • m 135
  • el 122
  • s 43
  • x 19
  • r 10
  • b 5
  • n 2
  • i 1
  • z 1
