Search (599 results, page 2 of 30)

  • language_ss:"e"
  • type_ss:"el"
  1. Jing, Y.; Croft, W.B.: An association thesaurus for information retrieval (199?) 0.01
    0.014939481 = product of:
      0.037348703 = sum of:
        0.019449355 = weight(_text_:information in 4494) [ClassicSimilarity], result of:
          0.019449355 = score(doc=4494,freq=6.0), product of:
            0.08270773 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047114085 = queryNorm
            0.23515764 = fieldWeight in 4494, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4494)
        0.017899347 = weight(_text_:und in 4494) [ClassicSimilarity], result of:
          0.017899347 = score(doc=4494,freq=2.0), product of:
            0.10442211 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.047114085 = queryNorm
            0.17141339 = fieldWeight in 4494, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4494)
      0.4 = coord(2/5)
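    The explain tree above is Lucene's ClassicSimilarity (tf-idf) scoring: tf = sqrt(termFreq), idf = 1 + ln(maxDocs/(docFreq+1)), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, and the final score multiplies the summed term scores by the coordination factor coord(2/5). As a minimal sketch, the following Python reproduces this entry's score from the quantities shown above:

      import math

      def idf(doc_freq, max_docs):
          # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
          return 1.0 + math.log(max_docs / (doc_freq + 1))

      def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
          tf = math.sqrt(freq)                # tf = sqrt(term frequency)
          i = idf(doc_freq, max_docs)
          query_weight = i * query_norm       # query-side weight
          field_weight = tf * i * field_norm  # document-side weight
          return query_weight * field_weight

      QUERY_NORM, MAX_DOCS = 0.047114085, 44218

      # Entry 1 (doc 4494): "information" (freq=6) and "und" (freq=2), fieldNorm=0.0546875
      s_info = term_score(6.0, 20772, MAX_DOCS, QUERY_NORM, 0.0546875)
      s_und = term_score(2.0, 13101, MAX_DOCS, QUERY_NORM, 0.0546875)

      print(round((s_info + s_und) * 2 / 5, 9))  # 0.014939481, matching the entry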
    
    Abstract
    Although commonly used in both commercial and experimental information retrieval systems, thesauri have not demonstrated consistent benefits for retrieval performance, and it is difficult to construct a thesaurus automatically for large text databases. In this paper, an approach called PhraseFinder is proposed to construct collection-dependent association thesauri automatically from large full-text document collections. The association thesaurus can be accessed through natural language queries in INQUERY, an information retrieval system based on a probabilistic inference network. Experiments are conducted in INQUERY to evaluate different types of association thesauri, as well as thesauri constructed for a variety of collections.
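    PhraseFinder itself is not specified in implementable detail in this abstract, but the underlying idea, deriving associated terms from co-occurrence statistics over a full-text collection, can be sketched. A hypothetical minimal version in Python (window-based co-occurrence counts, ranked by frequency; all names invented):

      from collections import Counter, defaultdict

      def build_association_thesaurus(docs, window=10, top_k=5):
          # Terms co-occurring within a sliding window count as associated.
          cooc = defaultdict(Counter)
          for doc in docs:
              tokens = doc.lower().split()
              for i, term in enumerate(tokens):
                  for other in tokens[max(0, i - window): i + window + 1]:
                      if other != term:
                          cooc[term][other] += 1
          return {t: [w for w, _ in c.most_common(top_k)] for t, c in cooc.items()}

      docs = ["thesauri support query expansion in information retrieval",
              "association thesauri are built from full text collections"]
      print(build_association_thesaurus(docs)["thesauri"])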
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  2. Internet Privacy : eine multidisziplinäre Bestandsaufnahme / a multidisciplinary analysis: acatech STUDIE (2012) 0.01
    0.014840488 = product of:
      0.03710122 = sum of:
        0.006416623 = weight(_text_:information in 3383) [ClassicSimilarity], result of:
          0.006416623 = score(doc=3383,freq=2.0), product of:
            0.08270773 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047114085 = queryNorm
            0.0775819 = fieldWeight in 3383, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=3383)
        0.030684596 = weight(_text_:und in 3383) [ClassicSimilarity], result of:
          0.030684596 = score(doc=3383,freq=18.0), product of:
            0.10442211 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.047114085 = queryNorm
            0.29385152 = fieldWeight in 3383, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.03125 = fieldNorm(doc=3383)
      0.4 = coord(2/5)
    
    Abstract
    Because privacy is of such great importance on the Internet, acatech, the German Academy of Science and Engineering (Deutsche Akademie der Technikwissenschaften), initiated a project in 2011 that examines the privacy paradox scientifically. The project develops recommendations on how a culture of privacy and trust can be established on the Internet that makes it possible to resolve the paradox. We use the term "Privatheit" (privacy) here; it indicates that not only the spatial notion of a private sphere is meant, but also the concept of informational self-determination, which is important in the European context. This volume presents the results of the first project phase: a survey of privacy on the Internet from various perspectives. Chapter 1 presents the wishes and fears of Internet users and society with regard to their privacy, investigated with social-science methods. Complementing this, the second chapter examines privacy in cyberspace from an ethical perspective. The third chapter is devoted to economic aspects: since many online services are paid for with user data, the question arises of what this means both for users and customers and for companies. Chapter 4 has a technological focus and analyses how privacy is threatened by Internet technologies and which technical options exist to protect users' privacy. Of course, protecting privacy on the Internet is not only a technical problem, so Chapter 5 examines privacy from a legal point of view. Reading the five chapters immediately makes the reader aware of the complexity of the question of privacy on the Internet (Internet privacy), from which follows the absolute necessity of an interdisciplinary approach. In this spirit, the interdisciplinary project group will jointly develop options and recommendations for dealing with privacy on the Internet that foster a culture of privacy and trust there. These options and recommendations will be published in 2013 as the second volume of this study.
    Theme
    Information
  3. Palm, F.: QVIZ : Query and context based visualization of time-spatial cultural dynamics (2007) 0.01
    0.014328319 = product of:
      0.035820797 = sum of:
        0.016670875 = weight(_text_:information in 1289) [ClassicSimilarity], result of:
          0.016670875 = score(doc=1289,freq=6.0), product of:
            0.08270773 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047114085 = queryNorm
            0.20156369 = fieldWeight in 1289, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=1289)
        0.019149924 = product of:
          0.038299847 = sum of:
            0.038299847 = weight(_text_:22 in 1289) [ClassicSimilarity], result of:
              0.038299847 = score(doc=1289,freq=2.0), product of:
                0.1649855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047114085 = queryNorm
                0.23214069 = fieldWeight in 1289, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1289)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    QVIZ will research and create a framework for visualizing and querying archival resources by a time-space interface based on maps and emergent knowledge structures. The framework will also integrate social software, such as wikis, in order to utilize knowledge in existing and new communities of practice. QVIZ will lead to improved information sharing and knowledge creation, easier access to information in a user-adapted context and innovative ways of exploring and visualizing materials over time, between countries and other administrative units. The common European framework for sharing and accessing archival information provided by the QVIZ project will open a considerably larger commercial market based on archival materials as well as a richer understanding of European history.
    Content
    Lecture given at the workshop "Extending the multilingual capacity of The European Library in the EDL project", Stockholm, Swedish National Library, 22-23 November 2007.
  4. Patriarca, S.: Information literacy gives us the tools to check sources and to verify factual statements : What does Popper's "Es gibt keine Autoritäten" mean? (2021) 0.01
    0.013649051 = product of:
      0.034122627 = sum of:
        0.016041556 = weight(_text_:information in 331) [ClassicSimilarity], result of:
          0.016041556 = score(doc=331,freq=8.0), product of:
            0.08270773 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047114085 = queryNorm
            0.19395474 = fieldWeight in 331, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=331)
        0.01808107 = weight(_text_:und in 331) [ClassicSimilarity], result of:
          0.01808107 = score(doc=331,freq=4.0), product of:
            0.10442211 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.047114085 = queryNorm
            0.17315367 = fieldWeight in 331, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=331)
      0.4 = coord(2/5)
    
    Abstract
    I wonder if you would consider an English perspective on the exchange between Bernd Jörs and Hermann Huemer. In my career in the independent education sector I can recall many discussions and Government reports about cross-curricular issues such as logical reasoning and critical thinking. In the IB system this led to the inclusion in the Diploma of "Theory of Knowledge"; in the UK we had "key skills" and "critical thinking". One such key skill is what we now call "information literacy". In his parody of information literacy, Dr Jörs seems to have confused a necessary condition with a sufficient condition. The fact that information competence may be necessary for serious academic study does not, of course, make it sufficient. When that is understood, the joke about the megalomaniac rather loses its force. (We had better pass over the rant which follows, the sneer at "earth sciences" and the German prejudice towards Austrians.)
    Content
    On: Bernd Jörs, Zukunft der Informationswissenschaft und Kritischer Rationalismus - Gegen die Selbstüberschätzung der Vertreter der "Informationskompetenz" eine Rückkehr zu Karl R. Popper geboten, in: Open Password, 30 August - Herbert Huemer, Informationskompetenz als Kompetenz für lebenslanges Lernen, in: Open Password, #965, 25 August 2021. Huemer was responding to Bernd Jörs' article "Wie sich "Informationskompetenz" methodisch-operativ untersuchen lässt", published in Open Password on 20 August 2021.
    Footnote
    Cf. the reply: Jörs, B.: Informationskompetenz ist auf domänenspezifisches Vorwissen angewiesen und kann immer nur vorläufig sein: eine Antwort auf Steve Patriarca. In: Open Password. 2021, no. 998, 15 November 2021 [https://www.password-online.de/?mailpoet_router&endpoint=view_in_browser&action=view&data=WzM3NiwiYTRlYWIxNTJhOTU4IiwwLDAsMzM5LDFd].
  5. Thesaurus software (2001) 0.01
    0.013511873 = product of:
      0.03377968 = sum of:
        0.015880331 = weight(_text_:information in 6773) [ClassicSimilarity], result of:
          0.015880331 = score(doc=6773,freq=4.0), product of:
            0.08270773 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047114085 = queryNorm
            0.1920054 = fieldWeight in 6773, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6773)
        0.017899347 = weight(_text_:und in 6773) [ClassicSimilarity], result of:
          0.017899347 = score(doc=6773,freq=2.0), product of:
            0.10442211 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.047114085 = queryNorm
            0.17141339 = fieldWeight in 6773, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6773)
      0.4 = coord(2/5)
    
    Abstract
    Members offer comments and suggest resources on programs for creating, maintaining, and publishing thesauri. Formerly a tool for writers and indexers, the thesaurus has taken on a new role as an essential component of the corporate information infrastructure. Many people are using word processor or database programs to create and maintain thesauri, while others are using specialized tools that perform consistency checks and offer special reporting capabilities. Some also use thesaurus modules integrated into another application, such as web publishing, content management, or e-commerce. This article includes material from our own experience, email responses from members, and comments from participants in our seminars and roundtables. There is also an introduction to thesauri in a corporate information management system.
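    The consistency checks such tools perform typically verify that thesaurus relations are reciprocal. A minimal sketch, assuming a thesaurus held as a dict of term records with BT/NT/RT lists (data and structure invented for illustration):

      def check_consistency(thesaurus):
          # Every BT needs an inverse NT, and RT (related term) must be symmetric.
          inverse = {"BT": "NT", "NT": "BT", "RT": "RT"}
          problems = []
          for term, rels in thesaurus.items():
              for rel, targets in rels.items():
                  for target in targets:
                      back = thesaurus.get(target, {}).get(inverse[rel], [])
                      if term not in back:
                          problems.append(f"{term} {rel} {target}: missing {inverse[rel]} back-link")
          return problems

      thesaurus = {
          "dogs": {"BT": ["mammals"], "RT": ["pets"]},
          "mammals": {"NT": ["dogs"]},
          "pets": {"RT": []},  # missing RT back-link to "dogs"
      }
      print(check_consistency(thesaurus))  # ['dogs RT pets: missing RT back-link']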
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  6. Gladun, A.; Rogushina, J.: Development of domain thesaurus as a set of ontology concepts with use of semantic similarity and elements of combinatorial optimization (2021) 0.01
    0.013511873 = product of:
      0.03377968 = sum of:
        0.015880331 = weight(_text_:information in 572) [ClassicSimilarity], result of:
          0.015880331 = score(doc=572,freq=4.0), product of:
            0.08270773 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047114085 = queryNorm
            0.1920054 = fieldWeight in 572, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=572)
        0.017899347 = weight(_text_:und in 572) [ClassicSimilarity], result of:
          0.017899347 = score(doc=572,freq=2.0), product of:
            0.10442211 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.047114085 = queryNorm
            0.17141339 = fieldWeight in 572, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=572)
      0.4 = coord(2/5)
    
    Abstract
    We consider the use of ontological background knowledge in intelligent information systems and analyze how it can be reduced to match the specifics of a particular user task. Such reduction aims at simplifying knowledge processing without loss of significant information. We propose methods for generating task thesauri from a domain ontology: a task thesaurus contains the subset of ontological concepts and relations that can be used in solving the task, and combinatorial optimization is used to minimize it. In this approach, semantic similarity estimates determine how significant a concept is for the user task. Practical examples of applying optimized thesauri to semantic retrieval and competence analysis demonstrate the efficiency of the proposed approach.
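    As a hedged illustration of the general approach (not the authors' exact algorithm), ontology concepts can be scored by semantic similarity to the task terms and only sufficiently similar concepts kept; here a simple threshold stands in for the combinatorial optimization, and all data is invented:

      def jaccard(a, b):
          a, b = set(a.lower().split()), set(b.lower().split())
          return len(a & b) / len(a | b) if a | b else 0.0

      def task_thesaurus(ontology_concepts, task_terms, threshold=0.2):
          # Keep concepts whose best similarity to any task term passes the threshold.
          kept = {}
          for concept in ontology_concepts:
              score = max(jaccard(concept, t) for t in task_terms)
              if score >= threshold:
                  kept[concept] = round(score, 2)
          return kept

      ontology = ["semantic retrieval", "competence analysis", "wine making"]
      print(task_thesaurus(ontology, ["retrieval of semantic data"]))
      # {'semantic retrieval': 0.5}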
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  7. Heflin, J.; Hendler, J.: Semantic interoperability on the Web (2000) 0.01
    0.013428268 = product of:
      0.03357067 = sum of:
        0.01122909 = weight(_text_:information in 759) [ClassicSimilarity], result of:
          0.01122909 = score(doc=759,freq=2.0), product of:
            0.08270773 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047114085 = queryNorm
            0.13576832 = fieldWeight in 759, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=759)
        0.022341577 = product of:
          0.044683155 = sum of:
            0.044683155 = weight(_text_:22 in 759) [ClassicSimilarity], result of:
              0.044683155 = score(doc=759,freq=2.0), product of:
                0.1649855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047114085 = queryNorm
                0.2708308 = fieldWeight in 759, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=759)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    XML will have a profound impact on the way data is exchanged on the Internet. An important feature of this language is the separation of content from presentation, which makes it easier to select and/or reformat the data. However, due to the likelihood of numerous industry and domain specific DTDs, those who wish to integrate information will still be faced with the problem of semantic interoperability. In this paper we discuss why this problem is not solved by XML, and then discuss why the Resource Description Framework is only a partial solution. We then present the SHOE language, which we feel has many of the features necessary to enable a semantic web, and describe an existing set of tools that make it easy to use the language.
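    The problem the authors describe can be made concrete: two DTD-specific documents encode the same fact under different element names, and integrating them requires a mapping that XML itself does not supply. A hypothetical Python sketch (element names invented):

      import xml.etree.ElementTree as ET

      doc_a = "<book><author>Heflin, J.</author></book>"
      doc_b = "<publication><creator>Hendler, J.</creator></publication>"

      # Schema-level knowledge that XML alone does not provide.
      MAPPING = {"author": "creator", "creator": "creator"}

      def extract_creators(xml_text):
          root = ET.fromstring(xml_text)
          return [el.text for el in root.iter() if MAPPING.get(el.tag) == "creator"]

      print(extract_creators(doc_a) + extract_creators(doc_b))  # unified view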
    Date
    11. 5.2013 19:22:18
  8. Priss, U.: Faceted knowledge representation (1999) 0.01
    0.013428268 = product of:
      0.03357067 = sum of:
        0.01122909 = weight(_text_:information in 2654) [ClassicSimilarity], result of:
          0.01122909 = score(doc=2654,freq=2.0), product of:
            0.08270773 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047114085 = queryNorm
            0.13576832 = fieldWeight in 2654, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2654)
        0.022341577 = product of:
          0.044683155 = sum of:
            0.044683155 = weight(_text_:22 in 2654) [ClassicSimilarity], result of:
              0.044683155 = score(doc=2654,freq=2.0), product of:
                0.1649855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047114085 = queryNorm
                0.2708308 = fieldWeight in 2654, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2654)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Faceted Knowledge Representation provides a formalism for implementing knowledge systems. The basic notions of faceted knowledge representation are "unit", "relation", "facet" and "interpretation". Units are atomic elements and can be abstract elements or refer to external objects in an application. Relations are sequences or matrices of 0 and 1's (binary matrices). Facets are relational structures that combine units and relations. Each facet represents an aspect or viewpoint of a knowledge system. Interpretations are mappings that can be used to translate between different representations. This paper introduces the basic notions of faceted knowledge representation. The formalism is applied here to an abstract modeling of a faceted thesaurus as used in information retrieval.
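    The formalism maps directly onto small data structures. A minimal sketch along the lines of the definitions above: units, a binary relation matrix, a facet combining the two, and an interpretation translating the matrix into pairs (the example relation is invented):

      units = ["query", "thesaurus", "document"]

      # relation[i][j] = 1 means unit i is related to unit j in this facet.
      uses = [[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]]

      # A facet combines units with a relation: one viewpoint of the system.
      facet = {"units": units, "relation": uses}

      def interpret(f):
          # Translate the matrix representation into explicit pairs.
          u, r = f["units"], f["relation"]
          return [(u[i], u[j]) for i in range(len(u)) for j in range(len(u)) if r[i][j]]

      print(interpret(facet))  # [('query', 'thesaurus'), ('thesaurus', 'document')]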
    Date
    22. 1.2016 17:30:31
  9. Stapleton, M.; Adams, M.: Faceted categorisation for the corporate desktop : visualisation and interaction using metadata to enhance user experience (2007) 0.01
    0.013104655 = product of:
      0.032761637 = sum of:
        0.0136117125 = weight(_text_:information in 718) [ClassicSimilarity], result of:
          0.0136117125 = score(doc=718,freq=4.0), product of:
            0.08270773 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047114085 = queryNorm
            0.16457605 = fieldWeight in 718, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=718)
        0.019149924 = product of:
          0.038299847 = sum of:
            0.038299847 = weight(_text_:22 in 718) [ClassicSimilarity], result of:
              0.038299847 = score(doc=718,freq=2.0), product of:
                0.1649855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047114085 = queryNorm
                0.23214069 = fieldWeight in 718, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=718)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Mark Stapleton and Matt Adamson began their presentation by describing how Dow Jones' Factiva range of information services processed an average of 170,000 documents every day, drawn from over 10,000 sources in 22 languages. These documents are categorized within five facets: Company, Subject, Industry, Region and Language. The digital feeds received from information providers undergo a series of processing stages, initially to prepare them for automatic categorization and then to format them ready for distribution. The categorization stage is able to handle 98% of documents automatically, the remaining 2% requiring some form of human intervention. Depending on the source, categorization can involve any combination of 'Autocoding', 'Dictionary-based Categorizing', 'Rules-based Coding' or 'Manual Coding'
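    As a hedged sketch of what the "Rules-based Coding" stage might look like for two of the five facets (trigger words and category codes invented for illustration):

      RULES = {
          "Region":   {"frankfurt": "GER", "tokyo": "JPN"},
          "Industry": {"automaker": "AUTO", "bank": "BANK"},
      }

      def categorize(text):
          # Assign facet codes when a rule's trigger word occurs in the text;
          # documents with no match would fall through to manual coding.
          text = text.lower()
          codes = {facet: sorted({code for word, code in rules.items() if word in text})
                   for facet, rules in RULES.items()}
          return {facet: c for facet, c in codes.items() if c}

      print(categorize("A Frankfurt bank financed the automaker's expansion"))
      # {'Region': ['GER'], 'Industry': ['AUTO', 'BANK']}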
  10. SimTown : baue deine eigene Stadt (1995) 0.01
    0.012526934 = product of:
      0.06263467 = sum of:
        0.06263467 = weight(_text_:und in 5478) [ClassicSimilarity], result of:
          0.06263467 = score(doc=5478,freq=12.0), product of:
            0.10442211 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.047114085 = queryNorm
            0.5998219 = fieldWeight in 5478, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.078125 = fieldNorm(doc=5478)
      0.2 = coord(1/5)
    
    Abstract
    SimTown was developed to introduce children to the most important concepts of economics (supply and demand), ecology (raw materials, pollution, and recycling), and urban planning (the balance between housing, jobs, and recreational areas) in a simple and entertaining way.
    Issue
    PC CD-ROM, Windows. Ages 8 and up.
  11. Atzbach, R.: Der Rechtschreibtrainer : Rechtschreibübungen und -spiele für die 5. bis 9. Klasse (1996) 0.01
    0.012401032 = product of:
      0.06200516 = sum of:
        0.06200516 = weight(_text_:und in 5579) [ClassicSimilarity], result of:
          0.06200516 = score(doc=5579,freq=6.0), product of:
            0.10442211 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.047114085 = queryNorm
            0.5937934 = fieldWeight in 5579, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.109375 = fieldNorm(doc=5579)
      0.2 = coord(1/5)
    
    Abstract
    Old and new spelling rules.
    Issue
    MS-DOS and Windows.
  12. Bradford, R.B.: Relationship discovery in large text collections using Latent Semantic Indexing (2006) 0.01
    0.011897362 = product of:
      0.029743403 = sum of:
        0.016976789 = weight(_text_:information in 1163) [ClassicSimilarity], result of:
          0.016976789 = score(doc=1163,freq=14.0), product of:
            0.08270773 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047114085 = queryNorm
            0.20526241 = fieldWeight in 1163, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=1163)
        0.0127666155 = product of:
          0.025533231 = sum of:
            0.025533231 = weight(_text_:22 in 1163) [ClassicSimilarity], result of:
              0.025533231 = score(doc=1163,freq=2.0), product of:
                0.1649855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047114085 = queryNorm
                0.15476047 = fieldWeight in 1163, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1163)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    This paper addresses the problem of information discovery in large collections of text. For users, one of the key problems in working with such collections is determining where to focus their attention. In selecting documents for examination, users must be able to formulate reasonably precise queries. Queries that are too broad will greatly reduce the efficiency of information discovery efforts by overwhelming the users with peripheral information. In order to formulate efficient queries, a mechanism is needed to automatically alert users regarding potentially interesting information contained within the collection. This paper presents the results of an experiment designed to test one approach to generation of such alerts. The technique of latent semantic indexing (LSI) is used to identify relationships among entities of interest. Entity extraction software is used to pre-process the text of the collection so that the LSI space contains representation vectors for named entities in addition to those for individual terms. In the LSI space, the cosine of the angle between the representation vectors for two entities captures important information regarding the degree of association of those two entities. For appropriate choices of entities, determining the entity pairs with the highest mutual cosine values yields valuable information regarding the contents of the text collection. The test database used for the experiment consists of 150,000 news articles. The proposed approach for alert generation is tested using a counterterrorism analysis example. The approach is shown to have significant potential for aiding users in rapidly focusing on information of potential importance in large text collections. The approach also has value in identifying possible use of aliases.
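    The LSI step can be sketched with a plain truncated SVD: rows of a term/entity-document matrix are projected into a low-rank space and compared by cosine. A toy sketch with numpy (the matrix is invented; the experiment described above used 150,000 news articles):

      import numpy as np

      # Toy matrix; rows are terms or extracted named entities, columns documents.
      rows = ["ENTITY:alpha", "ENTITY:beta", "attack", "harvest"]
      A = np.array([[1, 1, 0, 0],
                    [1, 1, 1, 0],
                    [1, 1, 1, 0],
                    [0, 0, 0, 1]], dtype=float)

      # Rank-2 LSI space: each row vector is U_k scaled by the singular values.
      U, S, _ = np.linalg.svd(A, full_matrices=False)
      vecs = U[:, :2] * S[:2]

      def cosine(i, j):
          a, b = vecs[i], vecs[j]
          return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

      # A high mutual cosine suggests an association between the two entities.
      print(round(cosine(0, 1), 3))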
    Source
    Proceedings of the Fourth Workshop on Link Analysis, Counterterrorism, and Security, SIAM Data Mining Conference, Bethesda, MD, 20-22 April, 2006. [http://www.siam.org/meetings/sdm06/workproceed/Link%20Analysis/15.pdf]
  13. RDA Toolkit (4) : Dezember 2017 (2017) 0.01
    0.011715028 = product of:
      0.029287571 = sum of:
        0.006416623 = weight(_text_:information in 4283) [ClassicSimilarity], result of:
          0.006416623 = score(doc=4283,freq=2.0), product of:
            0.08270773 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047114085 = queryNorm
            0.0775819 = fieldWeight in 4283, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=4283)
        0.022870949 = weight(_text_:und in 4283) [ClassicSimilarity], result of:
          0.022870949 = score(doc=4283,freq=10.0), product of:
            0.10442211 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.047114085 = queryNorm
            0.219024 = fieldWeight in 4283, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.03125 = fieldNorm(doc=4283)
      0.4 = coord(2/5)
    
    Abstract
    On 12 December 2017 a new release of the RDA Toolkit was published. Because of the 3R Project (RDA Toolkit Restructure and Redesign Project) it contains no changes to the RDA text itself; only the Finnish and French translations, along with the corresponding policy statements, were updated. For the German-speaking countries two relationship designators were changed in the translation: in Appendix I.2.2 the change from "Sponsor" to "Träger" was reversed, and in Appendix K.2.3 "Sponsor" was changed to "Person als Sponsor". In addition, the French translation of the application guidelines (D-A-CH AWR) was updated. This is the penultimate release before the rollout of the new Toolkit; the final one, in January/February 2018, will contain the Norwegian translation. In June 2018 the RDA Toolkit will be relaunched with a new interface. The relaunch comprises a redesign of the Toolkit interface, the alignment of the RDA standard with the IFLA Library Reference Model (IFLA LRM), and a stronger orientation towards current technical possibilities. The English original edition of RDA will appear in the new form first, in June 2018; all translations will be adapted during a transition period, for which the old version of the RDA Toolkit will remain available for a further year. The December 2017 version of the German edition and the D-A-CH application guidelines remain frozen until the adaptation is complete. Further information on the rollout is available at the following link <http://www.rdatoolkit.org/3Rproject/SR3>. [Inetbib, 13 December 2017]
    "das RDA Steering Committee (RSC) hat eine Verlautbarung<http://www.rda-rsc.org/sites/all/files/RSC-Chair-19.pdf> zum 3R Project und dem Release des neuen RDA Toolkits am 13. Juni 2018 herausgegeben. Außerdem wurde ein neuer Post zum Projekt auf dem RDA Toolkit Blog veröffentlicht "What to Expect from the RDA Toolkit beta site"<http://www.rdatoolkit.org/3Rproject/Beta>. Die deutsche Übersetzung folgt in Kürze auf dem RDA-Info-Wiki<https://wiki.dnb.de/display/RDAINFO/RDA-Info>. Für den deutschsprachigen Raum wird das Thema im Rahmen des Deutschen Bibliothekartags in Berlin im Treffpunkt Standardisierung am Freitag, den 15. Juni aufgegriffen. Die durch das 3R Project entstandenen Anpassungsarbeiten für den DACH-Raum werden im Rahmen eines 3R-DACH-Projekts<https://wiki.dnb.de/x/v5jpBw> in den Fachgruppen des Standardisierungsausschusses durchgeführt. Für die praktische Arbeit ändert sich bis zur Durchführung von Anpassungsschulungen nichts. Basis für die Erschließung bleibt bis dahin die aktuelle Version des RDA Toolkits in deutscher Sprache." [Mail R. Behrens an Inetbib vom 11.06.2018].
  14. Bandholtz, T.; Schulte-Coerne, T.; Glaser, R.; Fock, J.; Keller, T.: iQvoc - open source SKOS(XL) maintenance and publishing tool (2010) 0.01
    0.011651375 = product of:
      0.029128438 = sum of:
        0.01122909 = weight(_text_:information in 604) [ClassicSimilarity], result of:
          0.01122909 = score(doc=604,freq=2.0), product of:
            0.08270773 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047114085 = queryNorm
            0.13576832 = fieldWeight in 604, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=604)
        0.017899347 = weight(_text_:und in 604) [ClassicSimilarity], result of:
          0.017899347 = score(doc=604,freq=2.0), product of:
            0.10442211 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.047114085 = queryNorm
            0.17141339 = fieldWeight in 604, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=604)
      0.4 = coord(2/5)
    
    Abstract
    iQvoc is a new open source SKOS-XL vocabulary management tool developed by the Federal Environment Agency, Germany, and innoQ Deutschland GmbH. Its immediate purpose is maintaining and publishing reference vocabularies in the upcoming Linked Data cloud of environmental information, but it can easily be adapted to host any SKOS-XL compliant vocabulary. iQvoc is implemented as a Ruby on Rails application running on top of JRuby, the Java implementation of the Ruby programming language. To improve the user experience when editing content, iQvoc makes heavy use of the JavaScript library jQuery.
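    For readers unfamiliar with SKOS, a single labelled concept with one broader relation can be produced in a few lines. A minimal sketch using the Python rdflib library rather than iQvoc itself (the vocabulary URI is invented, and plain SKOS labels stand in for SKOS-XL label resources):

      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF, SKOS

      EX = Namespace("http://example.org/vocab/")  # hypothetical vocabulary URI
      g = Graph()
      g.bind("skos", SKOS)

      concept = EX["water-pollution"]
      g.add((concept, RDF.type, SKOS.Concept))
      g.add((concept, SKOS.prefLabel, Literal("water pollution", lang="en")))
      g.add((concept, SKOS.broader, EX["pollution"]))

      print(g.serialize(format="turtle"))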
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  15. Smith, A.G.: Search features of digital libraries (2000) 0.01
    0.011581604 = product of:
      0.02895401 = sum of:
        0.0136117125 = weight(_text_:information in 940) [ClassicSimilarity], result of:
          0.0136117125 = score(doc=940,freq=4.0), product of:
            0.08270773 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047114085 = queryNorm
            0.16457605 = fieldWeight in 940, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=940)
        0.015342298 = weight(_text_:und in 940) [ClassicSimilarity], result of:
          0.015342298 = score(doc=940,freq=2.0), product of:
            0.10442211 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.047114085 = queryNorm
            0.14692576 = fieldWeight in 940, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=940)
      0.4 = coord(2/5)
    
    Content
    Contains a compilation of the tools and aids of information retrieval.
    Source
    Information Research. 5(2000) no.3, April 2000
  16. Lee, M.; Baillie, S.; Dell'Oro, J.: TML: a Thesaural Markup Language (200?) 0.01
    0.011581604 = product of:
      0.02895401 = sum of:
        0.0136117125 = weight(_text_:information in 1622) [ClassicSimilarity], result of:
          0.0136117125 = score(doc=1622,freq=4.0), product of:
            0.08270773 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047114085 = queryNorm
            0.16457605 = fieldWeight in 1622, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=1622)
        0.015342298 = weight(_text_:und in 1622) [ClassicSimilarity], result of:
          0.015342298 = score(doc=1622,freq=2.0), product of:
            0.10442211 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.047114085 = queryNorm
            0.14692576 = fieldWeight in 1622, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=1622)
      0.4 = coord(2/5)
    
    Abstract
    Thesauri are used to provide controlled vocabularies for resource classification. Their use can greatly assist document discovery because thesauri mandate a consistent shared terminology for describing documents. A particular thesaurus classifies documents according to an information community's needs. As a result, there are many different thesaural schemas, which has led to a proliferation of schema-specific thesaural systems. In our research, we exploit schematic regularities to design a generic thesaural ontology and specify it as a markup language. The language provides a common representational framework in which to encode the idiosyncrasies of specific thesauri. This approach has several advantages: it offers a consistent syntax and semantics in which to express thesauri; it allows general-purpose thesaural applications to leverage many thesauri; and it supports a single thesaural user interface by which information communities can consistently organise, store and retrieve electronic documents.
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  17. Mult IK media : eine multimediale Präsentation des Fachbereichs Informations- und Kommunikationswesen der Fachhochschule Hannover (1997) 0.01
    0.011571886 = product of:
      0.05785943 = sum of:
        0.05785943 = weight(_text_:und in 7135) [ClassicSimilarity], result of:
          0.05785943 = score(doc=7135,freq=16.0), product of:
            0.10442211 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.047114085 = queryNorm
            0.55409175 = fieldWeight in 7135, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=7135)
      0.2 = coord(1/5)
    
    Abstract
    This CD-ROM contains a multimedia presentation of the Fachbereich Informations- und Kommunikationswesen (Department of Information and Communication Studies) of the FH Hannover, covering the following topics: (1) the professional profile of information specialists, their fields of work and activities; (2) the history of the department: founding, student numbers, etc.; (3) a presentation of the department's degree programmes, covering career profiles, admission requirements, the organisation of studies, and internship placements; (4) the department's facilities and capacities; (5) selected diploma and project theses; (6) the department's cooperative activities with partner universities, e.g. international programmes and projects and student summer seminars; (7) the department's presence on the WWW.
    Imprint
    Hannover : FH, Fb Informations- und Kommunikationswesen
  18. Definition of the CIDOC Conceptual Reference Model (2003) 0.01
    0.011509943 = product of:
      0.028774858 = sum of:
        0.009624934 = weight(_text_:information in 1652) [ClassicSimilarity], result of:
          0.009624934 = score(doc=1652,freq=2.0), product of:
            0.08270773 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047114085 = queryNorm
            0.116372846 = fieldWeight in 1652, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=1652)
        0.019149924 = product of:
          0.038299847 = sum of:
            0.038299847 = weight(_text_:22 in 1652) [ClassicSimilarity], result of:
              0.038299847 = score(doc=1652,freq=2.0), product of:
                0.1649855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047114085 = queryNorm
                0.23214069 = fieldWeight in 1652, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1652)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    This document is the formal definition of the CIDOC Conceptual Reference Model ("CRM"), a formal ontology intended to facilitate the integration, mediation and interchange of heterogeneous cultural heritage information. The CRM is the culmination of more than a decade of standards development work by the International Committee for Documentation (CIDOC) of the International Council of Museums (ICOM). Work on the CRM itself began in 1996 under the auspices of the ICOM-CIDOC Documentation Standards Working Group. Since 2000, development of the CRM has been officially delegated by ICOM-CIDOC to the CIDOC CRM Special Interest Group, which collaborates with the ISO working group ISO/TC46/SC4/WG9 to bring the CRM to the form and status of an International Standard.
    Date
    6. 8.2010 14:22:28
  19. Goldberga, A.: Synergy towards shared standards for ALM : Latvian scenario (2008) 0.01
    0.011509943 = product of:
      0.028774858 = sum of:
        0.009624934 = weight(_text_:information in 2322) [ClassicSimilarity], result of:
          0.009624934 = score(doc=2322,freq=2.0), product of:
            0.08270773 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047114085 = queryNorm
            0.116372846 = fieldWeight in 2322, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=2322)
        0.019149924 = product of:
          0.038299847 = sum of:
            0.038299847 = weight(_text_:22 in 2322) [ClassicSimilarity], result of:
              0.038299847 = score(doc=2322,freq=2.0), product of:
                0.1649855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047114085 = queryNorm
                0.23214069 = fieldWeight in 2322, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2322)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Content
    Paper presented at: World Library and Information Congress: 74th IFLA General Conference and Council, 10-14 August 2008, Québec, Canada.
    Date
    26.12.2011 13:33:22
  20. Priss, U.: Description logic and faceted knowledge representation (1999) 0.01
    0.011509943 = product of:
      0.028774858 = sum of:
        0.009624934 = weight(_text_:information in 2655) [ClassicSimilarity], result of:
          0.009624934 = score(doc=2655,freq=2.0), product of:
            0.08270773 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047114085 = queryNorm
            0.116372846 = fieldWeight in 2655, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=2655)
        0.019149924 = product of:
          0.038299847 = sum of:
            0.038299847 = weight(_text_:22 in 2655) [ClassicSimilarity], result of:
              0.038299847 = score(doc=2655,freq=2.0), product of:
                0.1649855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047114085 = queryNorm
                0.23214069 = fieldWeight in 2655, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2655)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The term "facet" was introduced into the field of library classification systems by Ranganathan in the 1930's [Ranganathan, 1962]. A facet is a viewpoint or aspect. In contrast to traditional classification systems, faceted systems are modular in that a domain is analyzed in terms of baseline facets which are then synthesized. In this paper, the term "facet" is used in a broader meaning. Facets can describe different aspects on the same level of abstraction or the same aspect on different levels of abstraction. The notion of facets is related to database views, multicontexts and conceptual scaling in formal concept analysis [Ganter and Wille, 1999], polymorphism in object-oriented design, aspect-oriented programming, views and contexts in description logic and semantic networks. This paper presents a definition of facets in terms of faceted knowledge representation that incorporates the traditional narrower notion of facets and potentially facilitates translation between different knowledge representation formalisms. A goal of this approach is a modular, machine-aided knowledge base design mechanism. A possible application is faceted thesaurus construction for information retrieval and data mining. Reasoning complexity depends on the size of the modules (facets). A more general analysis of complexity will be left for future research.
    Date
    22. 1.2016 17:30:31

Types

  • a 266
  • s 15
  • i 13
  • n 12
  • r 12
  • x 11
  • m 8
  • b 6
  • p 3