Search (1515 results, page 2 of 76)

  • × language_ss:"e"
  • × year_i:[2010 TO 2020}
  1. Lucarelli, A.; Viti, E.: Florence-Washington round trip : ways and intersections between semantic indexing tools in different languages (2015) 0.05
    
    Theme
    Verbale Doksprachen im Online-Retrieval
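The decimal value after each entry is a Lucene relevance score computed with the classic tf-idf similarity. As a minimal sketch of how a single term contributes to that score, here is the ClassicSimilarity per-term formula, using as an example the constants this results page reports for the term "im" in the first result. It is illustrative only: the engine's full score additionally sums over all query terms and applies coordination factors.

```python
import math

def idf(doc_freq: int, max_docs: int) -> float:
    # Lucene ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def tf(freq: float) -> float:
    # Term-frequency component: sqrt(freq)
    return math.sqrt(freq)

def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    term_idf = idf(doc_freq, max_docs)
    query_weight = term_idf * query_norm             # query-side weight
    field_weight = tf(freq) * term_idf * field_norm  # document-side weight
    return query_weight * field_weight

# Constants as reported for the term "im" in result 1 (doc 1886):
score = term_score(freq=2.0, doc_freq=7115, max_docs=44218,
                   query_norm=0.051022716, field_norm=0.046875)
print(score)  # ≈ 0.0270275 (Lucene reports 0.027027493 for this term)
```

The idf value this reproduces (≈ 2.8267863 for docFreq=7115, maxDocs=44218) matches the figure shown throughout this results page.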
  2. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.05
    
    Abstract
    The successes of information retrieval (IR) in recent decades were built upon bag-of-words representations. Effective as it is, bag-of-words is only a shallow text understanding; there is a limited amount of information for document ranking in the word space. This dissertation goes beyond words and builds knowledge based text representations, which embed the external and carefully curated information from knowledge bases, and provide richer and structured evidence for more advanced information retrieval systems. This thesis research first builds query representations with entities associated with the query. Entities' descriptions are used by query expansion techniques that enrich the query with explanation terms. Then we present a general framework that represents a query with entities that appear in the query, are retrieved by the query, or frequently show up in the top retrieved documents. A latent space model is developed to jointly learn the connections from query to entities and the ranking of documents, modeling the external evidence from knowledge bases and internal ranking features cooperatively. To further improve the quality of relevant entities, a defining factor of our query representations, we introduce learning to rank to entity search and retrieve better entities from knowledge bases. In the document representation part, this thesis research also moves one step forward with a bag-of-entities model, in which documents are represented by their automatic entity annotations, and the ranking is performed in the entity space.
    This proposal includes plans to improve the quality of relevant entities with a co-learning framework that learns from both entity labels and document labels. We also plan to develop a hybrid ranking system that combines word based and entity based representations together with their uncertainties considered. At last, we plan to enrich the text representations with connections between entities. We propose several ways to infer entity graph representations for texts, and to rank documents using their structure representations. This dissertation overcomes the limitation of word based representations with external and carefully curated information from knowledge bases. We believe this thesis research is a solid start towards the new generation of intelligent, semantic, and structured information retrieval.
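The bag-of-entities idea described above, where documents are represented by their entity annotations and ranked in the entity space, can be pictured with a toy sketch. Everything here is invented for illustration: the entity IDs, the simple tf-idf weighting, and the three-document corpus stand in for the dissertation's learned latent-space models, which they do not reproduce.

```python
import math
from collections import Counter

# Toy bag-of-entities ranking: each document is a list of entity
# annotations (e.g. produced by an entity linker), and documents are
# scored against the query's entities with tf-idf style weights.
def rank(query_entities, docs):
    n = len(docs)
    df = Counter(e for d in docs for e in set(d))  # document frequency
    def score(doc):
        tf = Counter(doc)
        return sum(tf[e] * math.log(n / df[e])
                   for e in query_entities if e in tf)
    return sorted(range(n), key=lambda i: score(docs[i]), reverse=True)

docs = [
    ["Freebase:IR", "Freebase:BagOfWords", "Freebase:IR"],  # about IR
    ["Freebase:KnowledgeBase", "Freebase:IR"],              # KB + IR
    ["Freebase:Library", "Freebase:Catalog"],               # unrelated
]
print(rank(["Freebase:IR"], docs))  # → [0, 1, 2]
```

Document 0 ranks first because it mentions the query entity most often; document 2, which never mentions it, ranks last.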
    Content
Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
  3. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.05
    
    Source
http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107
  4. Chu, H.: Information representation and retrieval in the digital age (2010) 0.04
    
    Content
    Information representation and retrieval : an overview -- Information representation I : basic approaches -- Information representation II : related topics -- Language in information representation and retrieval -- Retrieval techniques and query representation -- Retrieval approaches -- Information retrieval models -- Information retrieval systems -- Retrieval of information unique in content or format -- The user dimension in information representation and retrieval -- Evaluation of information representation and retrieval -- Artificial intelligence in information representation and retrieval.
    Footnote
Review in: JASIST 56(2005) no.2, pp.215-216 (A. Heath): "What is small, thoroughly organized, and easy to understand? Well, it's Heting Chu's latest book on information retrieval. A very welcome release, this small literary addition to the field (only 248 pages) contains a concise and well-organized discussion of every major topic in information retrieval. The often-complex field of information retrieval is presented from its origin in the early 1950s to the present day. The organization of this text is top-notch, thus making this an easy read for even the novice. Unlike other titles in this area, Chu's user-friendly style of writing is done on purpose to properly introduce newcomers to the field in a less intimidating way. As stated by the author in the Preface, the purpose of the book is to "present a systematic, thorough yet nontechnical view of the field by using plain language to explain complex subjects." Chu has definitely struck up the right combination of ingredients. In a field so broad and complex, a well-organized presentation of topics that don't trip on themselves is essential. The use of plain language where possible is also a good choice for this topic because it allows one to absorb topics that are, by nature, not as easy to grasp. For instance, Chapters 6 and 7, which cover retrieval approaches and techniques, an often painstaking topic for many students and teachers, are deftly handled with the use of tables that can be used to compare and contrast the various models discussed. I particularly loved Chu's use of Koll's 2000 article from the Bulletin of the American Society for Information Science to explain subject searching at the beginning of Chapter 6, which discusses the differences between browsing and searching. The Koll article uses the task of finding a needle in a haystack as an analogy.
Chu's intent with this book is clear throughout the entire text. With this presentation, she writes with the novice in mind or, as she puts it in the Preface, "to anyone who is interested in learning about the field, particularly those who are new to it." After reading the text, I found that this book is also an appropriate reference book for those who are somewhat advanced in the field. I found the chapters on information retrieval models and techniques, metadata, and AI very informative in that they contain information that is often rather densely presented in other texts. Although, I must say, the metadata section in Chapter 3 is pretty basic and contains more questions about the area than information. . . . It is an excellent book to have in the classroom, on your bookshelf, etc. It reads very well and is written with the reader in mind. If you are in need of a more advanced or technical text on the subject, this is not the book for you. But if you are looking for a comprehensive manual that can be used as a "flip-through," then you are in luck."
Further review in: nfd 55(2004) no.4, p.252 (D. Lewandowski): "There is no shortage of books on information retrieval, and several titles are available in German as well. Nevertheless, this new (English-language) book on the topic deserves a review here. It stands out for its brevity (only about 230 pages of text) and its clarity, which makes it particularly suitable for students in their first semesters. Heting Chu has taught at the Palmer School of Library and Information Science of Long Island University, New York, since 1994, and the book clearly benefits from the teaching experience the author has gathered in her information retrieval courses. It is written in clear, accessible language and introduces the fundamentals of knowledge representation and information retrieval. The textbook treats these topics as one integrated whole and thus goes beyond the scope of similar books, which usually confine themselves to retrieval. The book is divided into twelve chapters. The first chapter gives an overview of the topics to be covered and offers the reader a gentle introduction to the basic concepts and the history of IRR. Alongside a short chronological account of the development of IRR systems, four pioneers of the field are honoured: Mortimer Taube, Hans Peter Luhn, Calvin N. Mooers and Gerard Salton. This lends a human dimension to material that students sometimes find dry. The second and third chapters are devoted to knowledge representation, beginning with fundamental approaches such as indexing, classification and abstracting, followed by knowledge representation by means of metadata, with an emphasis on newer approaches such as Dublin Core and RDF. Further subchapters deal with the representation of full texts and of multimedia information.
The role of language in IRR is treated in a chapter of its own. Various forms of controlled vocabulary and the essential features that distinguish it from natural language are explained concisely, and the suitability of the two forms of representation for different IRR purposes is discussed from several angles.
Chapters five to nine then cover information retrieval in detail. First, fundamental retrieval techniques are introduced and their advantages and disadvantages presented. The process of formulating a query is discussed from the perspective of the user of IR systems, and the problems involved are pointed out. The sixth chapter contrasts the retrieval approaches of searching and browsing, presents corresponding search strategies, and finally discusses approaches that attempt to integrate the two. The seventh chapter then deals with what forms the core of most IR books: the IR models. These are presented briefly, largely without formulas. This is actually an advantage, since the complexity of the IR models often makes them difficult for beginning students to understand. After reading this chapter one will not know the various models in detail, but one will know them and be able to place them.
Chapter eight presents different kinds of IR systems: online IR systems, CD-ROM systems, OPACs and Internet IR systems, to which the greater part of the chapter is devoted. For each kind of system, its historical development and its particular characteristics are described; for Internet IR systems, the specific problems that arise in comparison with classical IR systems are discussed in detail. A separate chapter covers the particularities of retrieval in special document collections and special formats, with information on multilingual retrieval and on searching multimedia content, paying particular attention to the distinction between description-based and content-based approaches to indexing such content. In chapter ten the reader learns more about the user's place in IRR processes; the author presents various kinds of search interfaces and approaches to evaluating human-computer interaction in such systems. Chapter eleven deals in detail with the evaluation of IRR systems and presents the most important tests (Cranfield and TREC). A short concluding chapter covers approaches from artificial intelligence and their application in IRR systems. The structure, the concise yet precise treatment of the subject, and the accessible language make this book a very good introduction for students in their first semesters who have a command of English. Particularly commendable is the coverage of current IRR topics such as the use of metadata, the handling of multimedia information, and the emphasis on Internet IR systems.
Unfortunately there is no comparable title in German. Ferber's information retrieval book (2003) is more mathematically oriented and, given its great level of detail and the correspondingly large number of formulas, is likely to deter beginning students of information science; it is recommended rather to those who wish to engage with the topic more deeply. Much the same holds for Fuhr's lecture notes, which some like to use. Gaus's book (2003) is by now a classic, but it deals essentially with knowledge representation and offers little that is current: topics such as information retrieval on the Internet and multimedia retrieval are missing entirely. Poetzsch's collection of material (2002) likewise concentrates on IR in classical databases and does not aim at a systematic treatment of the field. One would therefore wish that the book reviewed here were also used in teaching in this country, since it gives students a concise, readable introduction to the field. Its exemplary presentation of the material should also serve as a model for future textbook authors. And finally, this reviewer would welcome a German translation of this volume."
    LCSH
    Information retrieval
    Information storage and retrieval systems
    Subject
    Information retrieval
    Information storage and retrieval systems
  5. Munkelt, J.: Erstellung einer DNB-Retrieval-Testkollektion (2018) 0.04
    
    Abstract
Since autumn 2017, subject indexing of certain classes of publications at the Deutsche Nationalbibliothek has been performed entirely by machine. The quality of this procedure, which can substantially shape the organization of library workflows, is controversial among experts. Their positions are first set out, before the need for a quality assessment of the procedure and its foundations is presented. The central component of any future assessment is a test collection; its construction and documentation are the focus of this thesis. In this context, the history of test collections and the requirements for a successful one are also discussed. Finally, a retrieval test is carried out which demonstrates that the compiled test collection is fit for use. Its results serve solely to verify that the collection functions; an assessment of the quality of automatic subject indexing, whether in this specific case or in general, is not undertaken and is not the aim of this work.
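The retrieval test mentioned in the abstract rests on standard test-collection measurements: relevance judgements (qrels) plus a system's ranked run yield precision figures. A minimal, hypothetical sketch of those measurements follows; the qrels and the run are invented for illustration and are not Munkelt's DNB data.

```python
# Scoring a retrieval run against a test collection's relevance
# judgements: precision at rank k and (uninterpolated) average precision.
def precision_at_k(ranked, relevant, k):
    return sum(1 for d in ranked[:k] if d in relevant) / k

def average_precision(ranked, relevant):
    hits, total = 0, 0.0
    for i, d in enumerate(ranked, start=1):
        if d in relevant:
            hits += 1
            total += hits / i   # precision at each relevant hit
    return total / len(relevant) if relevant else 0.0

qrels = {"q1": {"doc2", "doc5", "doc7"}}                 # judged relevant
run = {"q1": ["doc2", "doc1", "doc5", "doc3", "doc7"]}   # system ranking

print(precision_at_k(run["q1"], qrels["q1"], 5))  # 3/5 = 0.6
print(round(average_precision(run["q1"], qrels["q1"]), 3))  # 0.756
```

Averaging the second measure over all test queries gives MAP, one of the usual headline numbers for such a test.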
  6. Pirmann, C.: Tags in the catalogue : insights from a usability study of LibraryThing for libraries (2012) 0.04
    
    Abstract
    Library of Congress Subject Headings (LCSH), the standard subject language used in library catalogues, are often criticized for their lack of currency, biased language, and atypical syndetic structure. Conversely, folksonomies (or tags), which rely on the natural language of their users, offer a flexibility often lacking in controlled vocabularies and may offer a means of augmenting more rigid controlled vocabularies such as LCSH. Content analysis studies have demonstrated the potential for folksonomies to be used as a means of enhancing subject access to materials, and libraries are beginning to integrate tagging systems into their catalogues. This study examines the utility of tags as a means of enhancing subject access to materials in library online public access catalogues (OPACs) through usability testing with the LibraryThing for Libraries catalogue enhancements. Findings indicate that while they cannot replace LCSH, tags do show promise for aiding information seeking in OPACs. In the context of information systems design, the study revealed that while folksonomies have the potential to enhance subject access to materials, that potential is severely limited by the current inability of catalogue interfaces to support tag-based searches alongside standard catalogue searches.
    Theme
    Verbale Doksprachen im Online-Retrieval
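The interface limitation the study identifies, catalogues that cannot run tag-based searches alongside standard subject searches, can be pictured with a minimal sketch of the combined lookup it calls for. The record structure and field names here are hypothetical, not LibraryThing's or any OPAC's actual schema.

```python
# Hypothetical catalogue records carrying both controlled headings
# (LCSH) and user-contributed tags; a subject search consults both.
records = [
    {"title": "Cooking for geeks",
     "lcsh": ["Cooking"], "tags": ["food science", "kitchen hacks"]},
    {"title": "The food lab",
     "lcsh": ["Cooking", "Food--Research"], "tags": ["food science"]},
    {"title": "Python crash course",
     "lcsh": ["Python (Computer program language)"], "tags": ["programming"]},
]

def subject_search(query, records):
    q = query.lower()
    return [r["title"] for r in records
            if any(q in h.lower() for h in r["lcsh"])
            or any(q in t.lower() for t in r["tags"])]

print(subject_search("food science", records))
# → ['Cooking for geeks', 'The food lab']  (matched via tags only)
```

The first query matches no LCSH heading at all; only the tags surface the two cookery titles, which is precisely the augmentation the study argues tags can provide.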
  7. Slavic, A.: Classification revisited : a web of knowledge (2011) 0.04
    
    Source
    Innovations in information retrieval: perspectives for theory and practice. Eds.: A. Foster, u. P. Rafferty
    Theme
    Klassifikationssysteme im Online-Retrieval
  8. Wissensspeicher in digitalen Räumen : Nachhaltigkeit, Verfügbarkeit, semantische Interoperabilität. Proceedings der 11. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation, Konstanz, 20. bis 22. Februar 2008 (2010) 0.04
    
    Content
Contents:
A. Grundsätzliche Fragen (aus dem Umfeld) der Wissensorganisation
Markus Gottwald, Matthias Klemm und Jan Weyand: Warum ist es schwierig, Wissen zu managen? Ein soziologischer Deutungsversuch anhand eines Wissensmanagementprojekts in einem Großunternehmen
H. Peter Ohly: Wissenskommunikation und -organisation. Quo vadis?
Helmut F. Spinner: Wissenspartizipation und Wissenschaftskommunikation in drei Wissensräumen: Entwurf einer integrierten Theorie
B. Dokumentationssprachen in der Anwendung
Felix Boteram: Semantische Relationen in Dokumentationssprachen vom Thesaurus zum semantischen Netz
Jessica Hubrich: Multilinguale Wissensorganisation im Zeitalter der Globalisierung: das Projekt CrissCross
Vivien Petras: Heterogenitätsbehandlung und Terminology Mapping durch Crosskonkordanzen - eine Fallstudie
Manfred Hauer, Uwe Leissing und Karl Rädler: Query-Expansion durch Fachthesauri. Erfahrungsbericht zu dandelon.com, Vorarlberger Parlamentsinformationssystem und vorarlberg.at
C. Begriffsarbeit in der Wissensorganisation
Ingetraut Dahlberg: Begriffsarbeit in der Wissensorganisation
Claudio Gnoli, Gabriele Merli, Gianni Pavan, Elisabetta Bernuzzi, and Marco Priano: Freely faceted classification for a Web-based bibliographic archive. The BioAcoustic Reference Database
Stefan Hauser: Terminologiearbeit im Bereich Wissensorganisation - Vergleich dreier Publikationen anhand der Darstellung des Themenkomplexes Thesaurus
Daniel Kless: Erstellung eines allgemeinen Standards zur Wissensorganisation: Nutzen, Möglichkeiten, Herausforderungen, Wege
D. Kommunikation und Lernen
Gerald Beck und Simon Meissner: Strukturierung und Vermittlung von heterogenen (Nicht-)Wissensbeständen in der Risikokommunikation
Angelo Chianese, Francesca Cantone, Mario Caropreso, and Vincenzo Moscato: ARCHAEOLOGY 2.0: Cultural E-Learning tools and distributed repositories supported by SEMANTICA, a System for Learning Object Retrieval and Adaptive Courseware Generation for e-learning environments
Sonja Hierl, Lydia Bauer, Nadja Böller und Josef Herget: Kollaborative Konzeption von Ontologien in der Hochschullehre: Theorie, Chancen und mögliche Umsetzung
Marc Wilhelm Küster, Christoph Ludwig, Yahya Al-Haff und Andreas Aschenbrenner: TextGrid: eScholarship und der Fortschritt der Wissenschaft durch vernetzte Angebote
E. Metadaten und Ontologien
Thomas Baker: Dublin Core Application Profiles: current approaches
Georg Hohmann: Die Anwendung des CIDOC für die semantische Wissensrepräsentation in den Kulturwissenschaften
Elena Semenova: Ontologie als Begriffssystem. Theoretische Überlegungen und ihre praktische Umsetzung bei der Entwicklung einer Ontologie der Wissenschaftsdisziplinen
F. Repositorien und Ressourcen
Christiane Hümmer: TELOTA - Aspekte eines Wissensportals für geisteswissenschaftliche Forschung
Philipp Scham: Integration von Open-Access-Repositorien in Fachportale
Benjamin Zapilko: Dynamisches Browsing im Kontext von Informationsarchitekturen
  9. Weller, K.: Knowledge representation in the Social Semantic Web (2010) 0.04
    0.040422015 = product of:
      0.06063302 = sum of:
        0.054615162 = weight(_text_:im in 4515) [ClassicSimilarity], result of:
          0.054615162 = score(doc=4515,freq=24.0), product of:
            0.1442303 = queryWeight, product of:
              2.8267863 = idf(docFreq=7115, maxDocs=44218)
              0.051022716 = queryNorm
            0.37866634 = fieldWeight in 4515, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              2.8267863 = idf(docFreq=7115, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4515)
        0.0060178554 = product of:
          0.018053565 = sum of:
            0.018053565 = weight(_text_:retrieval in 4515) [ClassicSimilarity], result of:
              0.018053565 = score(doc=4515,freq=2.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.11697317 = fieldWeight in 4515, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=4515)
          0.33333334 = coord(1/3)
      0.6666667 = coord(2/3)
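The explain tree above is standard Lucene ClassicSimilarity (TF-IDF) output. As a sanity check, the inner `weight(_text_:im in 4515)` line can be reproduced from the values it reports; the short sketch below recomputes it (the final document score additionally applies the `coord` factors shown at the top of the tree):

```python
import math

# Recomputes the weight(_text_:im in 4515) line from the explain tree above.
# All input values (docFreq, maxDocs, freq, fieldNorm, queryNorm) are taken
# directly from that tree; the formulas are Lucene's ClassicSimilarity.

def idf(doc_freq: int, num_docs: int) -> float:
    # ClassicSimilarity: idf(t) = 1 + ln(numDocs / (docFreq + 1))
    return 1.0 + math.log(num_docs / (doc_freq + 1))

def tf(freq: float) -> float:
    # ClassicSimilarity: tf(t in d) = sqrt(freq)
    return math.sqrt(freq)

doc_freq, max_docs = 7115, 44218
freq = 24.0
field_norm = 0.02734375    # stored length norm of the field
query_norm = 0.051022716   # query-level normalization factor

term_idf = idf(doc_freq, max_docs)               # 2.8267863 in the tree
query_weight = term_idf * query_norm             # 0.1442303
field_weight = tf(freq) * term_idf * field_norm  # 0.37866634
score = query_weight * field_weight              # 0.054615162

print(f"{score:.9f}")
# The document's final score then multiplies in the coord(2/3) factor
# shown at the bottom of the tree.
```

This matches the reported term weight to within floating-point rounding, confirming that the numbers in the tree are plain TF-IDF components rather than opaque magic values.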
    
    Footnote
    Rez. in: iwp 62(2011) H.4, S.205-206 (C. Carstens): "Which kinds of knowledge representation exist on the web, how pronounced are semantic structures in this context, and how can social activities in the sense of Web 2.0 contribute to structuring knowledge on the web? These are the questions addressed by Weller's book Knowledge Representation in the Social Semantic Web. The term Social Semantic Web alludes on the one hand to the semantic structuring of data in the sense of the Semantic Web, and on the other to the increasingly collaborative creation of content in the Social Web. Weller takes up the developments in both areas and examines the opportunities and challenges that arise from combining the activities of the Semantic Web and the Social Web. The book focuses primarily on the conceptual challenges that emerge in this context. The original vision of the Semantic Web aims at annotating all web content with expressive, highly formalized ontologies; in the Social Web, by contrast, users create large amounts of data that are frequently annotated with uncontrolled tags in folksonomies. Weller sees great potential in such collaboratively created content and annotations for semantic indexing, an important prerequisite for retrieval on the web. The book's main concern is therefore to build a bridge between the knowledge representation methods of the Social Web and those of the Semantic Web. To pursue this question, the book is divided into three parts. . . .
    Overall, the book impresses above all through its broad perspective, its currency, and its wealth of references. It is thus suitable both as an overview that comprehensively reports on current developments and trends in knowledge representation in the Semantic and Social Web, and as reading for experts, for whom it is a valuable resource above all as a contextualized and very up-to-date collection of references." Further review in: Journal of Documentation. 67(2011), no.5, S.896-899 (P. Rafferty)
  10. Sidhom, S.: Numerical training for the information retrieval in medical imaginery : modeling of the Gabor filters (2014) 0.04
    
    Abstract
    We propose a method for indexing and retrieving medical images by exploiting their digital content. We represent the digital content of an image by a vector of characteristics, which we call the numerical signature of the image. Using Gabor wavelets, each image in the medical training set is indexed and represented by its texture characteristics. We thus build, offline, a database of numerical signatures, which then lets us perform, online, a numerical similarity search against a query image. To evaluate performance, we tested our application on a set of training mammography images. The results obtained show that representing the digital content of images is effective for information retrieval in imagery.
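The abstract does not give the authors' exact filter bank or similarity measure, so the following is only a minimal sketch of the general idea: a small bank of real Gabor filters, a "numerical signature" made of the mean and standard deviation of each filter response, and cosine similarity between signatures. Filter sizes, wavelengths, and orientations are illustrative choices, not the paper's parameters.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a 2-D Gabor filter."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    # Gaussian envelope times a cosine carrier along the rotated x axis
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)

def signature(image, wavelengths=(4, 8), thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Numerical signature: mean and std of every Gabor response (texture)."""
    feats = []
    for lam in wavelengths:
        for th in thetas:
            k = gabor_kernel(15, lam, th, sigma=lam / 2)
            # convolution in the frequency domain, same size as the image
            resp = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(k, s=image.shape)))
            feats += [resp.mean(), resp.std()]
    return np.array(feats)

def similarity(a, b):
    """Cosine similarity between two signatures."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

rng = np.random.default_rng(0)
query = rng.random((64, 64))          # stand-in for a mammography image
sig_q = signature(query)
print(len(sig_q), round(similarity(sig_q, sig_q), 6))
```

In an offline indexing pass, `signature` would be computed for every image in the training base; the online step then ranks stored signatures by `similarity` to the query image's signature.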
    Date
    5. 9.2014 18:22:35
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  11. Bhatia, S.; Biyani, P.; Mitra, P.: Identifying the role of individual user messages in an online discussion and its use in thread retrieval (2016) 0.04
    
    Abstract
    Online discussion forums have become a popular medium for users to discuss with and seek information from other users having similar interests. A typical discussion thread consists of a sequence of posts posted by multiple users. Each post in a thread serves a different purpose providing different types of information and, thus, may not be equally useful for all applications. Identifying the purpose and nature of each post in a discussion thread is thus an interesting research problem as it can help in improving information extraction and intelligent assistance techniques. We study the problem of classifying a given post as per its purpose in the discussion thread and employ features based on the post's content, structure of the thread, behavior of the participating users, and sentiment analysis of the post's content. We evaluate our approach on two forum data sets belonging to different genres and achieve strong classification performance. We also analyze the relative importance of different features used for the post classification task. Next, as a use case, we describe how the post class information can help in thread retrieval by incorporating this information in a state-of-the-art thread retrieval model.
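Bhatia et al.'s feature set (thread structure, user behaviour, sentiment) is richer than anything shown here. As a hedged illustration of the basic task only, the sketch below classifies posts by role with a tiny multinomial naive Bayes over content words; the posts, labels, and query are invented.

```python
import math
import re
from collections import Counter, defaultdict

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

class NaiveBayes:
    """Multinomial naive Bayes with add-one smoothing."""

    def fit(self, texts, labels):
        self.class_counts = Counter(labels)
        self.word_counts = defaultdict(Counter)
        self.vocab = set()
        for text, label in zip(texts, labels):
            for tok in tokenize(text):
                self.word_counts[label][tok] += 1
                self.vocab.add(tok)
        return self

    def predict(self, text):
        total = sum(self.class_counts.values())
        best_label, best_logp = None, -math.inf
        for label, count in self.class_counts.items():
            logp = math.log(count / total)  # class prior
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for tok in tokenize(text):
                logp += math.log((self.word_counts[label][tok] + 1) / denom)
            if logp > best_logp:
                best_label, best_logp = label, logp
        return best_label

posts = [
    ("How do I reset my router password?", "question"),
    ("Which firmware version are you on?", "question"),
    ("Try holding the reset button for ten seconds.", "answer"),
    ("Upgrade to v2.1 and the issue goes away.", "answer"),
    ("Thanks, that worked!", "feedback"),
    ("Great, problem solved.", "feedback"),
]
texts, labels = zip(*posts)
clf = NaiveBayes().fit(texts, labels)
print(clf.predict("How do I upgrade the firmware?"))  # → question
```

The post-class label predicted this way is exactly the kind of signal the article then feeds into a thread retrieval model, e.g. to weight answer-bearing posts more heavily than feedback posts.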
    Date
    22. 1.2016 11:50:46
  12. Svensson, L.G.; Jahns, Y.: PDF, CSV, RSS and other Acronyms : redefining the bibliographic services in the German National Library (2010) 0.04
    
    Abstract
    In January 2010, the German National Library discontinued the print version of the national bibliography and replaced it with an online journal. This was the first step in a longer process of redefining the National Library's bibliographic services, leaving the field of traditional media - e.g. paper or CD-ROM databases - and focusing on publishing its data over the WWW. A new business model was set up: all web resources are now published in a separate bibliography series, and the bibliographic data are freely available. Step by step, the prices of the other bibliographic data will also be reduced. In the second stage of the project, the focus is on value-added services based on the National Library's catalogue. The main purpose is to introduce alerting services based on the user's search criteria, offering different access methods such as RSS feeds, integration with tools such as Zotero, or export of the bibliographic data as a CSV or PDF file. Current cataloguing standards remain a guideline for high-value end-user retrieval, but they will be supplemented by automated indexing procedures for finding and browsing the growing number of documents. The aims are a transparent cataloguing policy and well-arranged selection menus.
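As an illustration of one of the export formats mentioned, the snippet below writes bibliographic records to CSV with the standard library. The field names and records are invented stand-ins, not the German National Library's actual schema.

```python
import csv
import io

# Hypothetical minimal record structure; a real export would carry many
# more fields (identifiers, subjects, series, etc.).
records = [
    {"title": "Knowledge representation in the Social Semantic Web",
     "creator": "Weller, K.", "year": "2010"},
    {"title": "Online retrieval history: how it all began",
     "creator": "Hall, J.L.; Bawden, D.", "year": "2011"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["title", "creator", "year"])
writer.writeheader()
writer.writerows(records)
print(buf.getvalue())
```

The same record list could just as easily be serialized as an RSS item feed or rendered to PDF; CSV is simply the cheapest machine-readable option of the three.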
    Content
    Paper presented within Session 93, Cataloguing, of the WORLD LIBRARY AND INFORMATION CONGRESS: 76TH IFLA GENERAL CONFERENCE AND ASSEMBLY, 10-15 August 2010, Gothenburg, Sweden - 91. Bibliography.
  13. Mixter, J.; Childress, E.R.: FAST (Faceted Application of Subject Terminology) users : summary and case studies (2013) 0.04
    
    Theme
    Verbale Doksprachen im Online-Retrieval
  14. Ostermann, F.O.; Tomko, M.; Purves, R.: User evaluation of automatically generated keywords and toponyms for geo-referenced images (2013) 0.04
    
    Abstract
    This article presents the results of a user evaluation of automatically generated concept keywords and place names (toponyms) for geo-referenced images. Automatically annotating images is becoming indispensable for effective information retrieval, since the number of geo-referenced images available online is growing, yet many images are insufficiently tagged or captioned to be efficiently searchable by standard information retrieval procedures. The Tripod project developed original methods for automatically annotating geo-referenced images by generating representations of the likely visible footprint of a geo-referenced image, and using this footprint to query spatial databases and web resources. These queries return raw lists of potential keywords and toponyms, which are subsequently filtered and ranked. This article reports on user experiments designed to evaluate the quality of the generated annotations. The experiments combined quantitative and qualitative approaches: To retrieve a large number of responses, participants rated the annotations in standardized online questionnaires that showed an image and its corresponding keywords. In addition, several focus groups provided rich qualitative information in open discussions. The results of the evaluation show that currently the annotation method performs better on rural images than on urban ones. Further, for each image at least one suitable keyword could be generated. The integration of heterogeneous data sources resulted in some images having a high level of noise in the form of obviously wrong or spurious keywords. The article discusses the evaluation itself and methods to improve the automatic generation of annotations.
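The footprint idea described above can be illustrated very roughly: keep candidate toponyms that fall inside a simple viewing cone in front of the camera. Tripod's actual visibility model is far more sophisticated; the coordinates, place names, and thresholds here are invented for illustration.

```python
import math

def in_footprint(cam, heading_deg, candidate, max_dist=5.0, half_angle=40.0):
    """True if candidate lies within max_dist of cam and within
    half_angle degrees of the camera heading (planar coordinates)."""
    dx, dy = candidate[0] - cam[0], candidate[1] - cam[1]
    dist = math.hypot(dx, dy)
    if dist == 0 or dist > max_dist:
        return False
    bearing = math.degrees(math.atan2(dx, dy)) % 360  # 0° = +y direction
    diff = abs((bearing - heading_deg + 180) % 360 - 180)
    return diff <= half_angle

camera, heading = (0.0, 0.0), 0.0  # camera at origin, looking along +y
toponyms = {"hilltop": (0.5, 3.0), "lake": (-4.0, -1.0), "village": (1.0, 2.0)}
visible = [name for name, pos in toponyms.items()
           if in_footprint(camera, heading, pos)]
print(sorted(visible))  # → ['hilltop', 'village']
```

The surviving candidates would then be used to query spatial databases and web resources, and the returned keywords filtered and ranked, which is the stage the user evaluation above actually assesses.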
    Date
    22. 3.2013 19:32:18
  15. Guidi, F.; Sacerdoti Coen, C.: ¬A survey on retrieval of mathematical knowledge (2015) 0.04
    
    Abstract
    We present a short survey of the literature on indexing and retrieval of mathematical knowledge, with pointers to 72 papers and tentative taxonomies of both retrieval problems and recurring techniques.
    Date
    22. 2.2017 12:51:57
  16. Paradigms and conceptual systems in knowledge organization : Proceedings of the Eleventh International ISKO Conference, 23-26 February 2010 Rome, Italy (2010) 0.03
    
    Content
    Contents: Keynote address - Order and KO - Conceptology in KO - Mathematics in KO - Psychology and KO - Science and KO - Problems in KO - KOS general questions - KOS structure and elements, facet analysis - KOS construction - KOS Maintenance, updating and storage - Compatibility, concordance, interoperability between indexing languages - Theory of classing and indexing - Taxonomies in communications engineering - Special KOSs in literature - Special KOSs in cultural sciences - General problems of natural language, derived indexing, tagging - Automatic language processing - Online retrieval systems and technologies - Problems of terminology - Subject-oriented terminology work - General problems of applied classing and indexing, catalogues, guidelines - Classing and indexing of non-book materials (images, archives, museums) - Personas and institutions in KO, cultural warrant - Organizing team - List of contributors
    Date
    22. 2.2013 12:09:34
  17. Hall, J.L.; Bawden, D.: Online retrieval history : how it all began (2011) 0.03
    
    Abstract
    Purpose - This paper aims to discuss the history of online searching through the views of one of its pioneers. Design/methodology/approach - The paper presents, and comments on, the recollections of Jim Hall, one of the earliest UK-based operators of, and writers on, online retrieval systems. Findings - The paper gives an account of the development of online searching in the UK during the 1960s and 1970s. Originality/value - The paper presents the perspective of one of the pioneers of online searching.
  18. Ménard, E.; Mas, S.; Alberts, I.: Faceted classification for museum artefacts : a methodology to support web site development of large cultural organizations (2010) 0.03
    
    Theme
    Klassifikationssysteme im Online-Retrieval
  19. Chowdhury, G.G.: Introduction to modern information retrieval (2010) 0.03
    
    Footnote
    Rez. in: iwp 62(2011) H.8, S.398-400 (A.R. Brellochs): " ... In principle there is nothing wrong with the book's de facto positioning as a textbook for information retrieval and for some areas of information services and librarianship. Unfortunately, however, the sheer number of topics covered means that, despite a length of a good 500 pages, some topics important for IR are treated only very briefly. This thematic over-breadth makes the volume unsuitable as a general introduction for readers without a background in information or library science, since a large part of the book will not be sufficiently comprehensible to them.
    The claim made in the title, to provide an introduction to information retrieval, will in any case mislead the unsuspecting reader. It would be more honest to broaden the book's title accordingly, or else actually to concentrate on the field it names. Slimming the book down by removing material that is not really necessary for understanding the IR concepts would do it a great deal of good and would also help it spread beyond the information field. The updating and extension carried out for the current edition unfortunately remains somewhat superficial in several places. One must therefore conclude that the volume's strength lies in its thematic breadth rather than in providing a truly exhaustive view of information retrieval. As a basic introduction to IR, Chowdhury leaves little to be desired in terms of content, but the coherence of the presentation and the didactic preparation of the material could certainly be improved if the book is to meet the standards of a textbook that can also be worked through in self-study. This weakness, however, must also be laid at the door of information science itself, which (in contrast to computer science, for example) has so far produced no generally accepted subject didactics. Despite the desiderata discussed, the title is already a recommendable supplement to lectures in the fields covered, provided the reader has a background in library or information science."
  20. Sandner, M.: NSW online : Elektronisches Tool zur "Liste der fachlichen Nachschlagewerke" (2010) 0.03
    
    Abstract
    Die "Liste der fachlichen Nachschlagewerke zu den Normdateien" (NSW-Liste) stellt mit ihren derzeit rund 1.660 Einträgen ein verbindliches Arbeitsinstrument für die tägliche Praxis in der kooperativen Normdatenpflege des deutsch-sprachigen Raumes, speziell für die Terminologiearbeit in der bibliothekarischen Sacherschließung dar. In jedem Normdatensatz der Schlagwortnormdatei (SWD) werden für den Nachweis und die Begründung der Ansetzungs- und Verweisungsformen eines Deskriptors im Feld "Quelle" Referenzwerke aus der so genannten Prioritätenliste (Rangfolge der Nachschlagewerke), darüber hinaus aus der gesamten NSW-Liste, festgehalten und normiert abgekürzt. In gedruckter Form erscheint sie jährlich aktuali-siert mit einem Änderungsdienst (Änderungen, Neuauflagen; Neuaufnahmen) und steht seit eini-gen Jahren auch elektronisch abrufbar bereit. Dennoch ist diese Liste "in die Jahre" ge-kommen. Vor allem die Durchnummerierung ihrer Einträge ist störend: In jeder neuen Auflage ändern sich die laufenden Nummern, während sich gleichzeitig die meisten Register gerade auf diese Zählung beziehen. - Das einzig gleichbleibende Merkmal jedes aufgelisteten Nachschlagewerks ist seine normierte Abkürzung. Deshalb haben wir uns im neuen elektronischen NSW-Tool für diese Abkürzungen als Anker entschieden. Die Entstehung dieses Tools resultiert aus einer Verkettung günstiger Umstände und hatte so gut wie keine finanzielle Basis. Es beruht auf einem starken Engagement aller Beteiligten. Aus die-sem Grund freuen wir uns ganz besonders über das erreichte Ergebnis und wagen uns nun mit einer Beta-Version an die Fachöffentlichkeit.
    Object
    NSW online
