Search (4221 results, page 1 of 212)

  1. Runge, S.: Wege und Möglichkeiten gemeinschaftlicher Sachkatalogisierung (1936) 0.19
    0.19417886 = product of:
      0.38835773 = sum of:
        0.38835773 = sum of:
          0.2775457 = weight(_text_:darstellung in 2007) [ClassicSimilarity], result of:
            0.2775457 = score(doc=2007,freq=2.0), product of:
              0.32376707 = queryWeight, product of:
                5.542029 = idf(docFreq=470, maxDocs=44218)
                0.05842031 = queryNorm
              0.8572388 = fieldWeight in 2007, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.542029 = idf(docFreq=470, maxDocs=44218)
                0.109375 = fieldNorm(doc=2007)
          0.11081204 = weight(_text_:22 in 2007) [ClassicSimilarity], result of:
            0.11081204 = score(doc=2007,freq=2.0), product of:
              0.20457798 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05842031 = queryNorm
              0.5416616 = fieldWeight in 2007, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.109375 = fieldNorm(doc=2007)
      0.5 = coord(1/2)
    
    Footnote
    Excerpt from the account published under the same title in 'Beiträge zur Sachkatalogisierung'. 1937. pp.1-22.
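The score breakdown above is Lucene's ClassicSimilarity (TF-IDF) explanation: tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), fieldWeight = tf · idf · fieldNorm, each matching clause contributes queryWeight · fieldWeight, and coord(1/2) halves the sum because only one of two top-level clauses matched. A minimal hand check, reading queryNorm, docFreq, maxDocs and fieldNorm off the tree (a sketch of the arithmetic, not Lucene's actual code path):

```python
import math

def classic_idf(doc_freq, max_docs):
    # Lucene ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def clause_weight(freq, doc_freq, max_docs, field_norm, query_norm):
    idf = classic_idf(doc_freq, max_docs)
    query_weight = idf * query_norm          # e.g. 0.32376707 for 'darstellung'
    field_weight = math.sqrt(freq) * idf * field_norm  # tf = sqrt(freq)
    return query_weight * field_weight

QUERY_NORM = 0.05842031   # taken from the explain tree
MAX_DOCS = 44218

w_darstellung = clause_weight(2.0, 470, MAX_DOCS, 0.109375, QUERY_NORM)
w_22 = clause_weight(2.0, 3622, MAX_DOCS, 0.109375, QUERY_NORM)

# coord(1/2): one of two top-level clauses matched
score = 0.5 * (w_darstellung + w_22)
print(round(score, 6))  # ≈ 0.194179, entry 1's score
```

The same arithmetic reproduces every tree in this list; only freq, docFreq, fieldNorm and the coord factors vary per record.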
  2. Gardner, M.: Faszinierende Mosaike (1979) 0.19
    0.19417886 = product of:
      0.38835773 = sum of:
        0.38835773 = sum of:
          0.2775457 = weight(_text_:darstellung in 4230) [ClassicSimilarity], result of:
            0.2775457 = score(doc=4230,freq=2.0), product of:
              0.32376707 = queryWeight, product of:
                5.542029 = idf(docFreq=470, maxDocs=44218)
                0.05842031 = queryNorm
              0.8572388 = fieldWeight in 4230, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.542029 = idf(docFreq=470, maxDocs=44218)
                0.109375 = fieldNorm(doc=4230)
          0.11081204 = weight(_text_:22 in 4230) [ClassicSimilarity], result of:
            0.11081204 = score(doc=4230,freq=2.0), product of:
              0.20457798 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05842031 = queryNorm
              0.5416616 = fieldWeight in 4230, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.109375 = fieldNorm(doc=4230)
      0.5 = coord(1/2)
    
    Content
    Also includes a presentation of the Penrose tilings, with several illustrations
    Source
    Spektrum der Wissenschaft. 1979, H.11, S.22-33
  3. Großjohann, K.: Gathering-, Harvesting-, Suchmaschinen (1996) 0.19
    0.1861104 = product of:
      0.3722208 = sum of:
        0.3722208 = sum of:
          0.23789631 = weight(_text_:darstellung in 3227) [ClassicSimilarity], result of:
            0.23789631 = score(doc=3227,freq=2.0), product of:
              0.32376707 = queryWeight, product of:
                5.542029 = idf(docFreq=470, maxDocs=44218)
                0.05842031 = queryNorm
              0.73477614 = fieldWeight in 3227, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.542029 = idf(docFreq=470, maxDocs=44218)
                0.09375 = fieldNorm(doc=3227)
          0.13432449 = weight(_text_:22 in 3227) [ClassicSimilarity], result of:
            0.13432449 = score(doc=3227,freq=4.0), product of:
              0.20457798 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05842031 = queryNorm
              0.6565931 = fieldWeight in 3227, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.09375 = fieldNorm(doc=3227)
      0.5 = coord(1/2)
    
    Abstract
    An overview of various approaches to content-based searching on the Internet
    Date
    7. 2.1996 22:38:41
    Pages
    22 S
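Entry 3's '22' clause shows freq=4 where entries 1 and 2 had freq=2, and its fieldNorm is 0.09375. Because ClassicSimilarity uses tf = sqrt(freq), doubling the term frequency raises the clause weight by only a factor of sqrt(2); this is visible in the fieldWeight 0.6565931 here versus 0.46428138 in entry 4 (same fieldNorm, freq=2). A quick check under the values from the trees:

```python
import math

IDF_22 = 3.5018296    # idf of term '22', from the explain trees
FIELD_NORM = 0.09375  # fieldNorm shared by entries 3 and 4

def field_weight(freq):
    # ClassicSimilarity: fieldWeight = sqrt(freq) * idf * fieldNorm
    return math.sqrt(freq) * IDF_22 * FIELD_NORM

fw2 = field_weight(2.0)   # entry 4: freq=2
fw4 = field_weight(4.0)   # entry 3: freq=4
print(fw4 / fw2)          # sqrt(2): doubling freq adds only ~41%
```

The sub-linear tf is why entry 3's extra occurrences of '22' lift it just above entry 4 rather than doubling its clause weight.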
  4. Birmingham, J.: Internet search engines (1996) 0.17
    0.16643903 = product of:
      0.33287805 = sum of:
        0.33287805 = sum of:
          0.23789631 = weight(_text_:darstellung in 5664) [ClassicSimilarity], result of:
            0.23789631 = score(doc=5664,freq=2.0), product of:
              0.32376707 = queryWeight, product of:
                5.542029 = idf(docFreq=470, maxDocs=44218)
                0.05842031 = queryNorm
              0.73477614 = fieldWeight in 5664, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.542029 = idf(docFreq=470, maxDocs=44218)
                0.09375 = fieldNorm(doc=5664)
          0.094981745 = weight(_text_:22 in 5664) [ClassicSimilarity], result of:
            0.094981745 = score(doc=5664,freq=2.0), product of:
              0.20457798 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05842031 = queryNorm
              0.46428138 = fieldWeight in 5664, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.09375 = fieldNorm(doc=5664)
      0.5 = coord(1/2)
    
    Content
    An overview of various search engines of the Internet
    Date
    10.11.1996 16:36:22
  5. Fuhr, N.: Rankingexperimente mit gewichteter Indexierung (1986) 0.17
    0.16643903 = product of:
      0.33287805 = sum of:
        0.33287805 = sum of:
          0.23789631 = weight(_text_:darstellung in 2051) [ClassicSimilarity], result of:
            0.23789631 = score(doc=2051,freq=2.0), product of:
              0.32376707 = queryWeight, product of:
                5.542029 = idf(docFreq=470, maxDocs=44218)
                0.05842031 = queryNorm
              0.73477614 = fieldWeight in 2051, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.542029 = idf(docFreq=470, maxDocs=44218)
                0.09375 = fieldNorm(doc=2051)
          0.094981745 = weight(_text_:22 in 2051) [ClassicSimilarity], result of:
            0.094981745 = score(doc=2051,freq=2.0), product of:
              0.20457798 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05842031 = queryNorm
              0.46428138 = fieldWeight in 2051, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.09375 = fieldNorm(doc=2051)
      0.5 = coord(1/2)
    
    Abstract
    The article presents the design of ranking algorithms based on weighted indexing using statistical methods.
    Date
    14. 6.2015 22:12:56
  6. Raichle, M.E.: Bildliches Erfassen von kognitiven Prozessen (1994) 0.14
    0.13869919 = product of:
      0.27739838 = sum of:
        0.27739838 = sum of:
          0.19824693 = weight(_text_:darstellung in 7282) [ClassicSimilarity], result of:
            0.19824693 = score(doc=7282,freq=2.0), product of:
              0.32376707 = queryWeight, product of:
                5.542029 = idf(docFreq=470, maxDocs=44218)
                0.05842031 = queryNorm
              0.61231345 = fieldWeight in 7282, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.542029 = idf(docFreq=470, maxDocs=44218)
                0.078125 = fieldNorm(doc=7282)
          0.07915146 = weight(_text_:22 in 7282) [ClassicSimilarity], result of:
            0.07915146 = score(doc=7282,freq=2.0), product of:
              0.20457798 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05842031 = queryNorm
              0.38690117 = fieldWeight in 7282, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=7282)
      0.5 = coord(1/2)
    
    Content
    A presentation of positron emission tomography (PET) for the analysis of brain functions
    Date
    22. 7.2000 19:11:30
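The fieldNorm values that recur in these trees (0.109375, 0.09375, 0.078125, ...) are ClassicSimilarity's length normalization, approximately 1/sqrt(number of terms in the field), stored with lossy one-byte precision. Inverting the norm therefore gives a rough field length, which explains why the same freq=2 match on 'darstellung' is worth 0.2775457 in the short records at the top of the list but only 0.19824693 here. A rough decoding (approximate because of the byte quantization):

```python
# fieldNorm ≈ 1/sqrt(terms) in ClassicSimilarity; the stored value is
# byte-quantized, so the inversion is only a rough field-length estimate.
norms = [0.109375, 0.09375, 0.078125, 0.0625, 0.0546875, 0.046875, 0.0390625]
for norm in norms:
    est_terms = 1.0 / norm ** 2
    print(f"fieldNorm={norm}: ~{est_terms:.0f} terms")
```

Shorter fields get larger norms and therefore larger fieldWeights, so otherwise identical matches rank higher in shorter records.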
  7. Dreier, T.: Urheberrecht und digitale Werkverwertung : Die aktuelle Lage des Urheberrechts im Zeitalter von Internet und Multimedia (1997) 0.14
    0.13869919 = product of:
      0.27739838 = sum of:
        0.27739838 = sum of:
          0.19824693 = weight(_text_:darstellung in 5469) [ClassicSimilarity], result of:
            0.19824693 = score(doc=5469,freq=2.0), product of:
              0.32376707 = queryWeight, product of:
                5.542029 = idf(docFreq=470, maxDocs=44218)
                0.05842031 = queryNorm
              0.61231345 = fieldWeight in 5469, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.542029 = idf(docFreq=470, maxDocs=44218)
                0.078125 = fieldNorm(doc=5469)
          0.07915146 = weight(_text_:22 in 5469) [ClassicSimilarity], result of:
            0.07915146 = score(doc=5469,freq=2.0), product of:
              0.20457798 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05842031 = queryNorm
              0.38690117 = fieldWeight in 5469, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=5469)
      0.5 = coord(1/2)
    
    Abstract
    An expert report on the effects of the new technologies on copyright law, including an account of the legislative-policy need for action on copyright
    Date
    1. 7.1997 21:02:22
  8. Miksa, S.D.: The challenges of change : a review of cataloging and classification literature, 2003-2004 (2007) 0.13
    0.13227111 = sum of:
      0.100610524 = product of:
        0.30183157 = sum of:
          0.30183157 = weight(_text_:themes in 266) [ClassicSimilarity], result of:
            0.30183157 = score(doc=266,freq=4.0), product of:
              0.37558588 = queryWeight, product of:
                6.429029 = idf(docFreq=193, maxDocs=44218)
                0.05842031 = queryNorm
              0.8036286 = fieldWeight in 266, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                6.429029 = idf(docFreq=193, maxDocs=44218)
                0.0625 = fieldNorm(doc=266)
        0.33333334 = coord(1/3)
      0.031660583 = product of:
        0.063321166 = sum of:
          0.063321166 = weight(_text_:22 in 266) [ClassicSimilarity], result of:
            0.063321166 = score(doc=266,freq=2.0), product of:
              0.20457798 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05842031 = queryNorm
              0.30952093 = fieldWeight in 266, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=266)
        0.5 = coord(1/2)
    
    Abstract
    This paper reviews the enormous changes in cataloging and classification reflected in the literature of 2003 and 2004, and discusses major themes and issues. Traditional cataloging and classification tools have been revamped and new resources have emerged. The most notable themes are: the continuing influence of the Functional Requirements for Bibliographic Records (FRBR); the struggle to understand the ever-broadening concept of an "information entity"; steady developments in metadata-encoding standards; and the globalization of information systems, including multilinguistic challenges.
    Date
    10. 9.2000 17:38:22
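Entry 8's tree has a different shape from the earlier ones: two top-level branches, one scaled by coord(1/3) and one by coord(1/2). The coord factors suggest (an inference from the tree, not stated in it) that the 'themes' clause sat in a boolean sub-query of three clauses of which one matched, while the '22' clause sat in a sub-query of two. The final score is then just the coord-weighted sum of the two clause weights:

```python
# Reconstruction of entry 8's two-branch score. Clause weights are taken
# from the tree; the surrounding query structure is inferred from coord.
w_themes = 0.30183157   # weight(_text_:themes), freq=4, idf=6.429029
w_22     = 0.063321166  # weight(_text_:22), freq=2

# coord(k/n) scales a boolean sub-query by matched/total clauses
score = w_themes * (1 / 3) + w_22 * (1 / 2)
print(round(score, 8))  # ≈ 0.13227111, entry 8's score
```

The entries below with 'themes' clauses (9, 14, 15) follow the same two-branch pattern with different freq and fieldNorm values.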
  9. Gnoli, C.: Classifying phenomena : part 4: themes and rhemes (2018) 0.13
    0.13045901 = sum of:
      0.10671358 = product of:
        0.32014072 = sum of:
          0.32014072 = weight(_text_:themes in 4152) [ClassicSimilarity], result of:
            0.32014072 = score(doc=4152,freq=8.0), product of:
              0.37558588 = queryWeight, product of:
                6.429029 = idf(docFreq=193, maxDocs=44218)
                0.05842031 = queryNorm
              0.8523769 = fieldWeight in 4152, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                6.429029 = idf(docFreq=193, maxDocs=44218)
                0.046875 = fieldNorm(doc=4152)
        0.33333334 = coord(1/3)
      0.023745436 = product of:
        0.047490872 = sum of:
          0.047490872 = weight(_text_:22 in 4152) [ClassicSimilarity], result of:
            0.047490872 = score(doc=4152,freq=2.0), product of:
              0.20457798 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05842031 = queryNorm
              0.23214069 = fieldWeight in 4152, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=4152)
        0.5 = coord(1/2)
    
    Abstract
    This is the fourth in a series of papers on classification based on phenomena instead of disciplines. Together with the types, levels and facets discussed in the previous parts, themes and rhemes are further structural components of such a classification. In a statement or in a longer document, a base theme and several particular themes can be identified. The base theme should be cited first in a classmark, followed by the particular themes, each with its own facets. In some cases rhemes can also be expressed, that is, new information provided about a theme, converting an abstract statement ("wolves, affected by cervids") into a claim that something actually occurs ("wolves are affected by cervids"). In the Integrative Levels Classification, rhemes can be expressed by special deictic classes, including those for actual specimens, anaphoras, unknown values, conjunctions and spans, the whole universe, anthropocentric favoured classes, and favoured host classes. These features, together with rules for pronunciation, make a classification of phenomena a true language that may be suitable for many uses.
    Date
    17. 2.2018 18:22:25
  10. Donsbach, W.: Wahrheit in den Medien : über den Sinn eines methodischen Objektivitätsbegriffes (2001) 0.13
    0.12688419 = sum of:
      0.07732245 = product of:
        0.23196736 = sum of:
          0.23196736 = weight(_text_:3a in 5895) [ClassicSimilarity], result of:
            0.23196736 = score(doc=5895,freq=2.0), product of:
              0.49528804 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.05842031 = queryNorm
              0.46834838 = fieldWeight in 5895, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5895)
        0.33333334 = coord(1/3)
      0.04956173 = product of:
        0.09912346 = sum of:
          0.09912346 = weight(_text_:darstellung in 5895) [ClassicSimilarity], result of:
            0.09912346 = score(doc=5895,freq=2.0), product of:
              0.32376707 = queryWeight, product of:
                5.542029 = idf(docFreq=470, maxDocs=44218)
                0.05842031 = queryNorm
              0.30615672 = fieldWeight in 5895, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.542029 = idf(docFreq=470, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5895)
        0.5 = coord(1/2)
    
    Abstract
    The problem of how the media perceive and portray truth leads to four central questions: How much truth is there in the world that journalists must report on? How does one ascertain or research this truth? How does one separate the wheat from the chaff? And how does one, as a journalist, deal with what one has recognized as truth, or believes to have recognized? Here there is an obvious parallel between journalists and scientists. Both need, first, hypotheses; second, suitable tests of those hypotheses; third, a good criterion of demarcation; and fourth, procedures for representing the facts they have established in a manner appropriate for communicating them to others, that is, for presenting them. There are two major differences between journalists and scientists. Journalists usually aim at statements bounded in space and time, scientists usually at laws unbounded in space and time. But these differences are fluid, because scientists need spatiotemporally bounded statements to test their universal statements, and journalists increasingly venture into the field of general law-like statements, or at least offer causal interpretations of social phenomena. The second difference is that science is largely professionalized (at least this holds without qualification for the natural sciences and medicine), which has given it relatively clear criteria of demarcation and quality. These are largely lacking in journalism.
    Source
    Politische Meinung. 381(2001) Nr.1, S.65-74 [https://www.dgfe.de/fileadmin/OrdnerRedakteure/Sektionen/Sek02_AEW/KWF/Publikationen_Reihe_1989-2003/Band_17/Bd_17_1994_355-406_A.pdf]
  11. Trunk, D.: Semantische Netze in Informationssystemen : Verbesserung der Suche durch Interaktion und Visualisierung (2005) 0.13
    0.12583023 = product of:
      0.25166047 = sum of:
        0.25166047 = sum of:
          0.19625445 = weight(_text_:darstellung in 2500) [ClassicSimilarity], result of:
            0.19625445 = score(doc=2500,freq=4.0), product of:
              0.32376707 = queryWeight, product of:
                5.542029 = idf(docFreq=470, maxDocs=44218)
                0.05842031 = queryNorm
              0.6061594 = fieldWeight in 2500, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                5.542029 = idf(docFreq=470, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2500)
          0.05540602 = weight(_text_:22 in 2500) [ClassicSimilarity], result of:
            0.05540602 = score(doc=2500,freq=2.0), product of:
              0.20457798 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05842031 = queryNorm
              0.2708308 = fieldWeight in 2500, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2500)
      0.5 = coord(1/2)
    
    Abstract
    Semantic networks support the search process in information retrieval. They consist of interrelated concepts and help the user find the right vocabulary for formulating a query. An easily and intuitively graspable presentation and interactive operation optimize the search process with the concept structure. As a form of interaction, hypertext with the established point-and-click approach suggests itself. A visualization supporting cognitive abilities can be achieved by presenting the information by means of points and lines. The application examples presented are the knowledge net in Brockhaus multimedial, WordSurfer by BiblioMondo, SpiderSearch by BOND, and Topic Maps Visualization in dandelon.com and in the portal Informationswissenschaft of AGI - Information Management Consultants.
    Date
    30. 1.2007 18:22:41
  12. Uhl, M.: Medien - Gehirn - Evolution : Mensch und Medienkultur verstehen ; eine transdisziplinäre Medienanthropologie (2009) 0.12
    0.118911326 = product of:
      0.23782265 = sum of:
        0.23782265 = sum of:
          0.19824693 = weight(_text_:darstellung in 510) [ClassicSimilarity], result of:
            0.19824693 = score(doc=510,freq=8.0), product of:
              0.32376707 = queryWeight, product of:
                5.542029 = idf(docFreq=470, maxDocs=44218)
                0.05842031 = queryNorm
              0.61231345 = fieldWeight in 510, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                5.542029 = idf(docFreq=470, maxDocs=44218)
                0.0390625 = fieldNorm(doc=510)
          0.03957573 = weight(_text_:22 in 510) [ClassicSimilarity], result of:
            0.03957573 = score(doc=510,freq=2.0), product of:
              0.20457798 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05842031 = queryNorm
              0.19345059 = fieldWeight in 510, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=510)
      0.5 = coord(1/2)
    
    Classification
    LC 50000: Darstellung ohne geografischen Bezug / Ethnologie / Kunst und Wissen
    LC 13000: Darstellung ohne geografischen Bezug / Ethnologie / Materielle Kultur und Wirtschaftsethnologie
    Date
    12. 2.2022 17:28:22
  13. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.12
    0.11653237 = sum of:
      0.09278694 = product of:
        0.2783608 = sum of:
          0.2783608 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.2783608 = score(doc=562,freq=2.0), product of:
              0.49528804 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.05842031 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.33333334 = coord(1/3)
      0.023745436 = product of:
        0.047490872 = sum of:
          0.047490872 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.047490872 = score(doc=562,freq=2.0), product of:
              0.20457798 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05842031 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
  14. Grivel, L.; Mutschke, P.; Polanco, X.: Thematic mapping on bibliographic databases by cluster analysis : a description of the SDOC environment with SOLIS (1995) 0.12
    0.11573722 = sum of:
      0.08803421 = product of:
        0.26410264 = sum of:
          0.26410264 = weight(_text_:themes in 1900) [ClassicSimilarity], result of:
            0.26410264 = score(doc=1900,freq=4.0), product of:
              0.37558588 = queryWeight, product of:
                6.429029 = idf(docFreq=193, maxDocs=44218)
                0.05842031 = queryNorm
              0.70317507 = fieldWeight in 1900, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                6.429029 = idf(docFreq=193, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1900)
        0.33333334 = coord(1/3)
      0.02770301 = product of:
        0.05540602 = sum of:
          0.05540602 = weight(_text_:22 in 1900) [ClassicSimilarity], result of:
            0.05540602 = score(doc=1900,freq=2.0), product of:
              0.20457798 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05842031 = queryNorm
              0.2708308 = fieldWeight in 1900, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1900)
        0.5 = coord(1/2)
    
    Abstract
    The paper presents a coword-analysis-based system called SDOC which is able to support the intellectual work of an end-user searching for information in a bibliographic database. This is done by presenting the database's thematic structure as a map of keyword clusters (themes) on a graphical user interface. These mapping facilities are demonstrated on the research field Social History, given by a set of documents from the social science literature database SOLIS. Besides the traditional way of analysing a coword map as a strategic diagram, the notion of cluster relationship analysis is introduced, which provides an adequate interpretation of the links between themes.
    Source
    Knowledge organization. 22(1995) no.2, S.70-77
  15. Lin, X.; Li, J.; Zhou, X.: Theme creation for digital collections (2008) 0.12
    0.11573722 = sum of:
      0.08803421 = product of:
        0.26410264 = sum of:
          0.26410264 = weight(_text_:themes in 2635) [ClassicSimilarity], result of:
            0.26410264 = score(doc=2635,freq=4.0), product of:
              0.37558588 = queryWeight, product of:
                6.429029 = idf(docFreq=193, maxDocs=44218)
                0.05842031 = queryNorm
              0.70317507 = fieldWeight in 2635, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                6.429029 = idf(docFreq=193, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2635)
        0.33333334 = coord(1/3)
      0.02770301 = product of:
        0.05540602 = sum of:
          0.05540602 = weight(_text_:22 in 2635) [ClassicSimilarity], result of:
            0.05540602 = score(doc=2635,freq=2.0), product of:
              0.20457798 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05842031 = queryNorm
              0.2708308 = fieldWeight in 2635, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2635)
        0.5 = coord(1/2)
    
    Abstract
    This paper presents an approach for integrating multiple sources of semantics when creating metadata. A new framework is proposed to define topics and themes with both manually and automatically generated terms. The automatically generated terms include terms from a semantic analysis of the collections and terms from previous users' queries. An interface is developed to facilitate the creation and use of such topics and themes for metadata creation. The framework and the interface promote human-computer collaboration in metadata creation. Several principles underlying this approach are also discussed.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  16. Manecke, H.-J.: Verbesserung der Indexierungsergebnisse durch fachgebietsbezogene Indexierregeln (1974) 0.11
    0.11095935 = product of:
      0.2219187 = sum of:
        0.2219187 = sum of:
          0.15859754 = weight(_text_:darstellung in 1786) [ClassicSimilarity], result of:
            0.15859754 = score(doc=1786,freq=2.0), product of:
              0.32376707 = queryWeight, product of:
                5.542029 = idf(docFreq=470, maxDocs=44218)
                0.05842031 = queryNorm
              0.48985076 = fieldWeight in 1786, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.542029 = idf(docFreq=470, maxDocs=44218)
                0.0625 = fieldNorm(doc=1786)
          0.063321166 = weight(_text_:22 in 1786) [ClassicSimilarity], result of:
            0.063321166 = score(doc=1786,freq=2.0), product of:
              0.20457798 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05842031 = queryNorm
              0.30952093 = fieldWeight in 1786, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=1786)
      0.5 = coord(1/2)
    
    Abstract
    The author justifies the necessity of, and presents a schema for, deriving subject-specific indexing rules, starting from statistical investigations of the frequency distribution of various categories of incoming information and various types of publications in the field under study. He names prerequisites that must be met before the schema, illustrated with some examples from the field of shipbuilding, can be regarded as generally valid. Examples of subject-specific indexing rules for product descriptions are given.
    Source
    Informatik. 21(1974), S.22-26
  17. Hammwöhner, R.: Offene Hypertextsysteme : Das Konstanzer Hypertextsystem (KHS) im wissenschaftlichen und technischen Kontext (1997) 0.11
    0.11095935 = product of:
      0.2219187 = sum of:
        0.2219187 = sum of:
          0.15859754 = weight(_text_:darstellung in 1508) [ClassicSimilarity], result of:
            0.15859754 = score(doc=1508,freq=2.0), product of:
              0.32376707 = queryWeight, product of:
                5.542029 = idf(docFreq=470, maxDocs=44218)
                0.05842031 = queryNorm
              0.48985076 = fieldWeight in 1508, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.542029 = idf(docFreq=470, maxDocs=44218)
                0.0625 = fieldNorm(doc=1508)
          0.063321166 = weight(_text_:22 in 1508) [ClassicSimilarity], result of:
            0.063321166 = score(doc=1508,freq=2.0), product of:
              0.20457798 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05842031 = queryNorm
              0.30952093 = fieldWeight in 1508, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=1508)
      0.5 = coord(1/2)
    
    Abstract
    Open hypertexts are complex techno-social systems which, in their current manifestation as the WWW, are in the process of profoundly changing the worldwide information landscape. The aim of this work is to present the design of an open hypertext system, the KHS, and to validate it with application examples. This includes a detailed presentation of the underlying object-oriented hypertext model, which is introduced in comparison with other hypertext models described in the literature.
    Date
    17. 7.2002 16:22:13
  18. Neurowissenschaften : eine Einführung (1996) 0.11
    
    Abstract
    The work is a further development of 'Principles of neural science', the 'bible of the neurosciences', from which the editors have now designed a genuine introductory textbook. This first integrative presentation not only of the biological foundations but also of the interdisciplinary aspects of modern brain research (findings from the cognitive and behavioural sciences, neurology and psychiatry, and neuropsychology) makes the entry into brain research attractive and easy.
    Date
    22. 7.2000 18:42:01
  19. Becker, F.: Internet-Suchmaschinen : Funktionsweise und Beurteilung (1999) 0.11
    
    Abstract
    After a brief account of the development of search services, the retrieval features of AltaVista are described in more detail. Criteria for evaluating search services are established. This is followed by a description of the capabilities of the individual search services and their evaluation. Finally, the choice of a search service is discussed.
    Date
    22. 3.2008 14:04:11
  20. Franzmeier, G.: ¬Die Zeitschriftendatenbank (ZDB) (2001) 0.11
    
    Abstract
    After a brief account of its origins and of the basic ideas according to which the ZDB was built and which still shape it today, the article describes the current situation of the ZDB in more detail from three perspectives: the union catalogue system, the underlying database, and the products generated from this database for end users. Under 'Perspectives', it then discusses questions of expanding the ZDB's coverage, the particular demands posed by electronic journals, improving online communication, linking article databases, and several further tasks.
    Date
    22. 3.2008 13:56:55

Types

  • a 3380
  • m 506
  • el 197
  • s 168
  • x 74
  • b 40
  • i 29
  • r 21
  • ? 8
  • d 7
  • n 4
  • p 4
  • u 2
  • z 2
  • au 1
  • h 1
  • l 1
  • vi 1