Search (2126 results, page 3 of 107)

  • type_ss:"a"
  • year_i:[2000 TO 2010}
  1. Priss, U.: Faceted information representation (2000) 0.04
    0.043804124 = product of:
      0.08760825 = sum of:
        0.08760825 = sum of:
          0.038116705 = weight(_text_:systems in 5095) [ClassicSimilarity], result of:
            0.038116705 = score(doc=5095,freq=2.0), product of:
              0.16037072 = queryWeight, product of:
                3.0731742 = idf(docFreq=5561, maxDocs=44218)
                0.052184064 = queryNorm
              0.23767869 = fieldWeight in 5095, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.0731742 = idf(docFreq=5561, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5095)
          0.049491543 = weight(_text_:22 in 5095) [ClassicSimilarity], result of:
            0.049491543 = score(doc=5095,freq=2.0), product of:
              0.1827397 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052184064 = queryNorm
              0.2708308 = fieldWeight in 5095, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5095)
      0.5 = coord(1/2)
    
    Abstract
    This paper presents an abstract formalization of the notion of "facets". Facets are relational structures of units, relations and other facets selected for a certain purpose. Facets can be used to structure large knowledge representation systems into a hierarchical arrangement of consistent and independent subsystems (facets) that facilitate flexibility and combinations of different viewpoints or aspects. This paper describes the basic notions, facet characteristics and construction mechanisms. It then explicates the theory in an example of a faceted information retrieval system (FaIR).
    Date
    22. 1.2016 17:47:06
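    Note: the score breakdown shown for each result is standard Lucene/Solr "explain" output for the TF-IDF ClassicSimilarity. As a minimal sketch, and assuming Lucene's usual formulas idf = ln(maxDocs / (docFreq + 1)) + 1 and tf = sqrt(freq), the displayed 0.043804124 for result 1 can be recomputed from the constants in the explanation above; the Python below is illustrative only, all numbers are copied from the explain output, and the helper name term_score is not part of the search system.
    import math

    QUERY_NORM = 0.052184064   # queryNorm shown in the explanation
    FIELD_NORM = 0.0546875     # fieldNorm(doc=5095)
    COORD = 0.5                # coord(1/2): one of two top-level query clauses matched

    def term_score(doc_freq, max_docs, freq):
        # ClassicSimilarity factors as reported by the explain output
        idf = math.log(max_docs / (doc_freq + 1)) + 1   # e.g. 3.0731742 for "systems"
        tf = math.sqrt(freq)                            # 1.4142135 for freq=2.0
        query_weight = idf * QUERY_NORM                 # 0.16037072
        field_weight = tf * idf * FIELD_NORM            # 0.23767869
        return query_weight * field_weight              # 0.038116705

    # "systems" (docFreq=5561) and "22" (docFreq=3622), each with freq=2.0 in doc 5095
    score = COORD * (term_score(5561, 44218, 2.0) + term_score(3622, 44218, 2.0))
    print(round(score, 7))  # ~0.0438041, matching the 0.043804124 shown above (up to float32 rounding)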
  2. Chen, H.-H.; Lin, W.-C.; Yang, C.; Lin, W.-H.: Translating-transliterating named entities for multilingual information access (2006) 0.04
    0.043804124 = product of:
      0.08760825 = sum of:
        0.08760825 = sum of:
          0.038116705 = weight(_text_:systems in 1080) [ClassicSimilarity], result of:
            0.038116705 = score(doc=1080,freq=2.0), product of:
              0.16037072 = queryWeight, product of:
                3.0731742 = idf(docFreq=5561, maxDocs=44218)
                0.052184064 = queryNorm
              0.23767869 = fieldWeight in 1080, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.0731742 = idf(docFreq=5561, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1080)
          0.049491543 = weight(_text_:22 in 1080) [ClassicSimilarity], result of:
            0.049491543 = score(doc=1080,freq=2.0), product of:
              0.1827397 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052184064 = queryNorm
              0.2708308 = fieldWeight in 1080, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1080)
      0.5 = coord(1/2)
    
    Date
    4. 6.2006 19:52:22
    Footnote
    Contribution to a special topic section on multilingual information systems
  3. Trotman, A.: Searching structured documents (2004) 0.04
    0.043804124 = product of:
      0.08760825 = sum of:
        0.08760825 = sum of:
          0.038116705 = weight(_text_:systems in 2538) [ClassicSimilarity], result of:
            0.038116705 = score(doc=2538,freq=2.0), product of:
              0.16037072 = queryWeight, product of:
                3.0731742 = idf(docFreq=5561, maxDocs=44218)
                0.052184064 = queryNorm
              0.23767869 = fieldWeight in 2538, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.0731742 = idf(docFreq=5561, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2538)
          0.049491543 = weight(_text_:22 in 2538) [ClassicSimilarity], result of:
            0.049491543 = score(doc=2538,freq=2.0), product of:
              0.1827397 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052184064 = queryNorm
              0.2708308 = fieldWeight in 2538, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2538)
      0.5 = coord(1/2)
    
    Abstract
    Structured document interchange formats such as XML and SGML are ubiquitous; however, information retrieval systems supporting structured searching are not. Structured searching can result in increased precision. A search for the author "Smith" in an unstructured corpus of documents specializing in iron-working could have a lower precision than a structured search for "Smith as author" in the same corpus. Analysis of XML retrieval languages identifies additional functionality that must be supported, including searching at, and broken across, multiple nodes in the document tree. A data structure is developed to support structured document searching. Application of this structure to information retrieval is then demonstrated. Document ranking is examined and adapted specifically for structured searching.
    Date
    14. 8.2004 10:39:22
  4. Carvalho, J.R. de; Cordeiro, M.I.; Lopes, A.; Vieira, M.: Meta-information about MARC : an XML framework for validation, explanation and help systems (2004) 0.04
    0.043804124 = product of:
      0.08760825 = sum of:
        0.08760825 = sum of:
          0.038116705 = weight(_text_:systems in 2848) [ClassicSimilarity], result of:
            0.038116705 = score(doc=2848,freq=2.0), product of:
              0.16037072 = queryWeight, product of:
                3.0731742 = idf(docFreq=5561, maxDocs=44218)
                0.052184064 = queryNorm
              0.23767869 = fieldWeight in 2848, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.0731742 = idf(docFreq=5561, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2848)
          0.049491543 = weight(_text_:22 in 2848) [ClassicSimilarity], result of:
            0.049491543 = score(doc=2848,freq=2.0), product of:
              0.1827397 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052184064 = queryNorm
              0.2708308 = fieldWeight in 2848, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2848)
      0.5 = coord(1/2)
    
    Source
    Library hi tech. 22(2004) no.2, S.131-137
  5. Kanaeva, Z.: Ranking: Google und CiteSeer (2005) 0.04
    0.043804124 = product of:
      0.08760825 = sum of:
        0.08760825 = sum of:
          0.038116705 = weight(_text_:systems in 3276) [ClassicSimilarity], result of:
            0.038116705 = score(doc=3276,freq=2.0), product of:
              0.16037072 = queryWeight, product of:
                3.0731742 = idf(docFreq=5561, maxDocs=44218)
                0.052184064 = queryNorm
              0.23767869 = fieldWeight in 3276, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.0731742 = idf(docFreq=5561, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3276)
          0.049491543 = weight(_text_:22 in 3276) [ClassicSimilarity], result of:
            0.049491543 = score(doc=3276,freq=2.0), product of:
              0.1827397 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052184064 = queryNorm
              0.2708308 = fieldWeight in 3276, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3276)
      0.5 = coord(1/2)
    
    Abstract
    In classical information retrieval, various methods were developed for ranking and for searching a homogeneous, unstructured collection of documents. The success of the Google search engine has shown that searching an inhomogeneous but interlinked collection of documents such as the Internet can be very effective when the links between documents are taken into account. Among the concepts realized by Google is a method for ranking search results (PageRank), which is briefly explained in this article. The article also discusses the concepts of a system called CiteSeer, which automatically indexes bibliographic references (Autonomous Citation Indexing, ACI). The latter turns a set of unconnected scientific documents into an interlinked collection and thus enables the use of ranking methods based on those employed by Google.
    Date
    20. 3.2005 16:23:22
  6. Jones, M.; Buchanan, G.; Cheng, T.-C.; Jain, P.: Changing the pace of search : supporting background information seeking (2006) 0.04
    0.043804124 = product of:
      0.08760825 = sum of:
        0.08760825 = sum of:
          0.038116705 = weight(_text_:systems in 5287) [ClassicSimilarity], result of:
            0.038116705 = score(doc=5287,freq=2.0), product of:
              0.16037072 = queryWeight, product of:
                3.0731742 = idf(docFreq=5561, maxDocs=44218)
                0.052184064 = queryNorm
              0.23767869 = fieldWeight in 5287, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.0731742 = idf(docFreq=5561, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5287)
          0.049491543 = weight(_text_:22 in 5287) [ClassicSimilarity], result of:
            0.049491543 = score(doc=5287,freq=2.0), product of:
              0.1827397 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052184064 = queryNorm
              0.2708308 = fieldWeight in 5287, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5287)
      0.5 = coord(1/2)
    
    Abstract
    Almost all Web searches are carried out while the user is sitting at a conventional desktop computer connected to the Internet. Although online, handheld, mobile search offers new possibilities, the fast-paced, focused style of interaction may not be appropriate for all user search needs. The authors explore an alternative, relaxed style for Web searching that asynchronously combines an offline handheld computer and an online desktop personal computer. They discuss the role and utility of such an approach, present a tool to meet these user needs, and discuss its relation to other systems.
    Date
    22. 7.2006 18:37:49
  7. Hickey, T.B.; Toves, J.; O'Neill, E.T.: NACO normalization : a detailed examination of the authority file comparison rules (2006) 0.04
    0.043804124 = product of:
      0.08760825 = sum of:
        0.08760825 = sum of:
          0.038116705 = weight(_text_:systems in 5760) [ClassicSimilarity], result of:
            0.038116705 = score(doc=5760,freq=2.0), product of:
              0.16037072 = queryWeight, product of:
                3.0731742 = idf(docFreq=5561, maxDocs=44218)
                0.052184064 = queryNorm
              0.23767869 = fieldWeight in 5760, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.0731742 = idf(docFreq=5561, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5760)
          0.049491543 = weight(_text_:22 in 5760) [ClassicSimilarity], result of:
            0.049491543 = score(doc=5760,freq=2.0), product of:
              0.1827397 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052184064 = queryNorm
              0.2708308 = fieldWeight in 5760, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5760)
      0.5 = coord(1/2)
    
    Abstract
    Normalization rules are essential for interoperability between bibliographic systems. In the process of working with Name Authority Cooperative Program (NACO) authority files to match records with Functional Requirements for Bibliographic Records (FRBR) and developing the Faceted Application of Subject Terminology (FAST) subject heading schema, the authors found inconsistencies in independently created NACO normalization implementations. Investigating these, the authors found ambiguities in the NACO standard that need resolution, and came to conclusions on how the procedure could be simplified with little impact on matching headings. To encourage others to test their software for compliance with the current rules, the authors have established a Web site that has test files and interactive services showing their current implementation.
    Date
    10. 9.2000 17:38:22
  8. Vellucci, S.L.: Metadata and authority control (2000) 0.04
    0.043804124 = product of:
      0.08760825 = sum of:
        0.08760825 = sum of:
          0.038116705 = weight(_text_:systems in 180) [ClassicSimilarity], result of:
            0.038116705 = score(doc=180,freq=2.0), product of:
              0.16037072 = queryWeight, product of:
                3.0731742 = idf(docFreq=5561, maxDocs=44218)
                0.052184064 = queryNorm
              0.23767869 = fieldWeight in 180, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.0731742 = idf(docFreq=5561, maxDocs=44218)
                0.0546875 = fieldNorm(doc=180)
          0.049491543 = weight(_text_:22 in 180) [ClassicSimilarity], result of:
            0.049491543 = score(doc=180,freq=2.0), product of:
              0.1827397 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052184064 = queryNorm
              0.2708308 = fieldWeight in 180, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=180)
      0.5 = coord(1/2)
    
    Abstract
    A variety of information communities have developed metadata schemes to meet the needs of their own users. The ability of libraries to incorporate and use multiple metadata schemes in current library systems will depend on the compatibility of imported data with existing catalog data. Authority control will play an important role in metadata interoperability. In this article, I discuss factors for successful authority control in current library catalogs, which include operation in a well-defined and bounded universe, application of principles and standard practices to access point creation, reference to authoritative lists, and bibliographic record creation by highly trained individuals. Metadata characteristics and environmental models are examined and the likelihood of successful authority control is explored for a variety of metadata environments.
    Date
    10. 9.2000 17:38:22
  9. Jiang, T.: Architektur und Anwendungen des kollaborativen Lernsystems K3 (2008) 0.04
    0.043804124 = product of:
      0.08760825 = sum of:
        0.08760825 = sum of:
          0.038116705 = weight(_text_:systems in 1391) [ClassicSimilarity], result of:
            0.038116705 = score(doc=1391,freq=2.0), product of:
              0.16037072 = queryWeight, product of:
                3.0731742 = idf(docFreq=5561, maxDocs=44218)
                0.052184064 = queryNorm
              0.23767869 = fieldWeight in 1391, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.0731742 = idf(docFreq=5561, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1391)
          0.049491543 = weight(_text_:22 in 1391) [ClassicSimilarity], result of:
            0.049491543 = score(doc=1391,freq=2.0), product of:
              0.1827397 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052184064 = queryNorm
              0.2708308 = fieldWeight in 1391, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1391)
      0.5 = coord(1/2)
    
    Abstract
    The K3 architecture for the technical development and implementation of net-based knowledge management in teaching is presented. The current K3 system consists of three central components: K3Forum (discourse), K3Vis (visualization), and K3Wiki (collaborative text production, e.g. for summaries). K3 uses open-source software under the LGPL license. This guarantees free use, manageable development costs, and sustainability, and secures independence from commercial software vendors. Thanks to the component-based development approach, K3 can be continuously extended in a flexible and robust way without affecting the stability of existing functionality. The article documents the main components and functions of K3 by example, so that subsequent developers can easily gain an overview of the K3 system. The requirements for transferring the system to environments outside Konstanz are described.
    Date
    10. 2.2008 14:22:00
  10. Beccaria, M.; Scott, D.: Fac-Back-OPAC : an open source interface to your library system (2007) 0.04
    0.043804124 = product of:
      0.08760825 = sum of:
        0.08760825 = sum of:
          0.038116705 = weight(_text_:systems in 2207) [ClassicSimilarity], result of:
            0.038116705 = score(doc=2207,freq=2.0), product of:
              0.16037072 = queryWeight, product of:
                3.0731742 = idf(docFreq=5561, maxDocs=44218)
                0.052184064 = queryNorm
              0.23767869 = fieldWeight in 2207, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.0731742 = idf(docFreq=5561, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2207)
          0.049491543 = weight(_text_:22 in 2207) [ClassicSimilarity], result of:
            0.049491543 = score(doc=2207,freq=2.0), product of:
              0.1827397 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052184064 = queryNorm
              0.2708308 = fieldWeight in 2207, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2207)
      0.5 = coord(1/2)
    
    Abstract
    Fac-Back-OPAC is a faceted backup OPAC. This advanced catalog offers features that compare favorably with the traditional catalogs for today's library systems. Fac-Back-OPAC represents the convergence of two prominent trends in library tools: the decoupling of discovery tools from the traditional integrated library system and the use of readily available open source components to rapidly produce leading-edge technology for meeting patron and library needs. Built on code that was originally developed by Casey Durfee in February 2007, Fac-Back-OPAC is available for no cost under an open source license to any library that wants to offer an advanced search interface or a backup catalog for its patrons.
    Date
    17. 8.2008 11:22:47
  11. Hearn, S.: Comparing catalogs : currency and consistency of controlled headings (2009) 0.04
    0.043804124 = product of:
      0.08760825 = sum of:
        0.08760825 = sum of:
          0.038116705 = weight(_text_:systems in 3600) [ClassicSimilarity], result of:
            0.038116705 = score(doc=3600,freq=2.0), product of:
              0.16037072 = queryWeight, product of:
                3.0731742 = idf(docFreq=5561, maxDocs=44218)
                0.052184064 = queryNorm
              0.23767869 = fieldWeight in 3600, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.0731742 = idf(docFreq=5561, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3600)
          0.049491543 = weight(_text_:22 in 3600) [ClassicSimilarity], result of:
            0.049491543 = score(doc=3600,freq=2.0), product of:
              0.1827397 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052184064 = queryNorm
              0.2708308 = fieldWeight in 3600, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3600)
      0.5 = coord(1/2)
    
    Abstract
    Evaluative and comparative studies of catalog data have tended to focus on methods that are labor intensive, demand expertise, and can examine only a limited number of records. This study explores an alternative approach to gathering and analyzing catalog data, focusing on the currency and consistency of controlled headings. The resulting data provide insight into libraries' use of changed headings and their success in maintaining currency and consistency, and the systems needed to support the current pace of heading changes.
    Date
    10. 9.2000 17:38:22
  12. Warr, W.A.: Social software : fun and games, or business tools? (2009) 0.04
    0.043804124 = product of:
      0.08760825 = sum of:
        0.08760825 = sum of:
          0.038116705 = weight(_text_:systems in 3663) [ClassicSimilarity], result of:
            0.038116705 = score(doc=3663,freq=2.0), product of:
              0.16037072 = queryWeight, product of:
                3.0731742 = idf(docFreq=5561, maxDocs=44218)
                0.052184064 = queryNorm
              0.23767869 = fieldWeight in 3663, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.0731742 = idf(docFreq=5561, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3663)
          0.049491543 = weight(_text_:22 in 3663) [ClassicSimilarity], result of:
            0.049491543 = score(doc=3663,freq=2.0), product of:
              0.1827397 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052184064 = queryNorm
              0.2708308 = fieldWeight in 3663, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3663)
      0.5 = coord(1/2)
    
    Abstract
    This is the era of social networking, collective intelligence, participation, collaborative creation, and borderless distribution. Every day we are bombarded with more publicity about collaborative environments, news feeds, blogs, wikis, podcasting, webcasting, folksonomies, social bookmarking, social citations, collaborative filtering, recommender systems, media sharing, massive multiplayer online games, virtual worlds, and mash-ups. This sort of anarchic environment appeals to the digital natives, but which of these so-called 'Web 2.0' technologies are going to have a real business impact? This paper addresses the impact that issues such as quality control, security, privacy and bandwidth may have on the implementation of social networking in hide-bound, large organizations.
    Date
    8. 7.2010 19:24:22
  13. Moore, R.W.: Management of very large distributed shared collections (2009) 0.04
    0.043804124 = product of:
      0.08760825 = sum of:
        0.08760825 = sum of:
          0.038116705 = weight(_text_:systems in 3845) [ClassicSimilarity], result of:
            0.038116705 = score(doc=3845,freq=2.0), product of:
              0.16037072 = queryWeight, product of:
                3.0731742 = idf(docFreq=5561, maxDocs=44218)
                0.052184064 = queryNorm
              0.23767869 = fieldWeight in 3845, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.0731742 = idf(docFreq=5561, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3845)
          0.049491543 = weight(_text_:22 in 3845) [ClassicSimilarity], result of:
            0.049491543 = score(doc=3845,freq=2.0), product of:
              0.1827397 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052184064 = queryNorm
              0.2708308 = fieldWeight in 3845, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3845)
      0.5 = coord(1/2)
    
    Abstract
    Large scientific collections may be managed as data grids for sharing data, digital libraries for publishing data, persistent archives for preserving data, or as real-time data repositories for sensor data. Despite the multiple types of data management objectives, it is possible to build each system from generic software infrastructure. This entry examines the requirements driving the management of large data collections, the concepts on which current data management systems are based, and the current research initiatives for managing distributed data collections.
    Date
    27. 8.2011 14:22:57
  14. Mas, S.; Marleau, Y.: Proposition of a faceted classification model to support corporate information organization and digital records management (2009) 0.04
    0.041441064 = product of:
      0.08288213 = sum of:
        0.08288213 = product of:
          0.24864638 = sum of:
            0.24864638 = weight(_text_:3a in 2918) [ClassicSimilarity], result of:
              0.24864638 = score(doc=2918,freq=2.0), product of:
                0.4424171 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.052184064 = queryNorm
                0.56201804 = fieldWeight in 2918, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2918)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Footnote
    Cf.: http://ieeexplore.ieee.org/Xplore/login.jsp?reload=true&url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel5%2F4755313%2F4755314%2F04755480.pdf%3Farnumber%3D4755480&authDecision=-203.
  15. Lauser, B.; Johannsen, G.; Caracciolo, C.; Hage, W.R. van; Keizer, J.; Mayr, P.: Comparing human and automatic thesaurus mapping approaches in the agricultural domain (2008) 0.04
    0.041254148 = product of:
      0.082508296 = sum of:
        0.082508296 = sum of:
          0.047157194 = weight(_text_:systems in 2627) [ClassicSimilarity], result of:
            0.047157194 = score(doc=2627,freq=6.0), product of:
              0.16037072 = queryWeight, product of:
                3.0731742 = idf(docFreq=5561, maxDocs=44218)
                0.052184064 = queryNorm
              0.29405114 = fieldWeight in 2627, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.0731742 = idf(docFreq=5561, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2627)
          0.0353511 = weight(_text_:22 in 2627) [ClassicSimilarity], result of:
            0.0353511 = score(doc=2627,freq=2.0), product of:
              0.1827397 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052184064 = queryNorm
              0.19345059 = fieldWeight in 2627, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2627)
      0.5 = coord(1/2)
    
    Abstract
    Knowledge organization systems (KOS), like thesauri and other controlled vocabularies, are used to provide subject access to information systems across the web. Due to the heterogeneity of these systems, mapping between vocabularies becomes crucial for retrieving relevant information. However, mapping thesauri is a laborious task, and thus big efforts are being made to automate the mapping process. This paper examines two mapping approaches involving the agricultural thesaurus AGROVOC, one machine-created and one human-created. We are addressing the basic question "What are the pros and cons of human and automatic mapping and how can they complement each other?" By pointing out the difficulties in specific cases or groups of cases and grouping the sample into simple and difficult types of mappings, we show the limitations of current automatic methods and come up with some basic recommendations on what approach to use when.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  16. Beghtol, C.: Naïve classification systems and the global information society (2004) 0.04
    0.041254148 = product of:
      0.082508296 = sum of:
        0.082508296 = sum of:
          0.047157194 = weight(_text_:systems in 3483) [ClassicSimilarity], result of:
            0.047157194 = score(doc=3483,freq=6.0), product of:
              0.16037072 = queryWeight, product of:
                3.0731742 = idf(docFreq=5561, maxDocs=44218)
                0.052184064 = queryNorm
              0.29405114 = fieldWeight in 3483, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.0731742 = idf(docFreq=5561, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3483)
          0.0353511 = weight(_text_:22 in 3483) [ClassicSimilarity], result of:
            0.0353511 = score(doc=3483,freq=2.0), product of:
              0.1827397 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052184064 = queryNorm
              0.19345059 = fieldWeight in 3483, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3483)
      0.5 = coord(1/2)
    
    Abstract
    Classification is an activity that transcends time and space and that bridges the divisions between different languages and cultures, including the divisions between academic disciplines. Classificatory activity, however, serves different purposes in different situations. Classifications for information retrieval can be called "professional" classifications and classifications in other fields can be called "naïve" classifications because they are developed by people who have no particular interest in classificatory issues. The general purpose of naïve classification systems is to discover new knowledge. In contrast, the general purpose of information retrieval classifications is to classify pre-existing knowledge. Different classificatory purposes may thus inform systems that are intended to span the cultural specifics of the globalized information society. This paper builds on previous research into the purposes and characteristics of naïve classifications. It describes some of the relationships between the purpose and context of a naïve classification, the units of analysis used in it, and the theory that the context and the units of analysis imply.
    Pages
    S.19-22
  17. Sauperl, A.: Precoordination or not? : a new view of the old question (2009) 0.04
    0.041254148 = product of:
      0.082508296 = sum of:
        0.082508296 = sum of:
          0.047157194 = weight(_text_:systems in 3611) [ClassicSimilarity], result of:
            0.047157194 = score(doc=3611,freq=6.0), product of:
              0.16037072 = queryWeight, product of:
                3.0731742 = idf(docFreq=5561, maxDocs=44218)
                0.052184064 = queryNorm
              0.29405114 = fieldWeight in 3611, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.0731742 = idf(docFreq=5561, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3611)
          0.0353511 = weight(_text_:22 in 3611) [ClassicSimilarity], result of:
            0.0353511 = score(doc=3611,freq=2.0), product of:
              0.1827397 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052184064 = queryNorm
              0.19345059 = fieldWeight in 3611, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3611)
      0.5 = coord(1/2)
    
    Abstract
    Purpose - This paper aims to discuss some long-standing issues of the development of a subject heading language as pre- or postcoordinated.
    Design/methodology/approach - In a review of literature on pre- and postcoordination and user behaviour, 20 criteria originally discussed by Svenonius are considered.
    Findings - The advantages and disadvantages of pre- and postcoordinated systems are on a very similar level. Most subject heading languages developed recently are precoordinated. They all require investments in highly skilled intellectual work, and are therefore expensive and difficult to maintain. Postcoordinated systems seem to have more advantages for information providers, but less for users. However, most of these disadvantages could be overcome by known information retrieval models and techniques.
    Research limitations/implications - The criteria originally discussed by Svenonius are difficult to evaluate in an exact manner. Some of them are also irrelevant because of changes in information retrieval systems.
    Practical implications - It was found that the decision on whether to use a pre- or postcoordinated system cannot be taken independent of consideration of the subject authority file and the functions of an information retrieval system, which should support users on one hand and information providers and indexers on the other.
    Originality/value - This literature review brings together some findings that have not been considered together previously.
    Date
    20. 6.2010 14:22:43
  18. RAK-NBM : Interpretationshilfe zu NBM 3b,3 (2000) 0.04
    0.03999521 = product of:
      0.07999042 = sum of:
        0.07999042 = product of:
          0.15998083 = sum of:
            0.15998083 = weight(_text_:22 in 4362) [ClassicSimilarity], result of:
              0.15998083 = score(doc=4362,freq=4.0), product of:
                0.1827397 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052184064 = queryNorm
                0.8754574 = fieldWeight in 4362, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=4362)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 1.2000 19:22:27
  19. Diederichs, A.: Wissensmanagement ist Macht : Effektiv und kostenbewußt arbeiten im Informationszeitalter (2005) 0.04
    0.03999521 = product of:
      0.07999042 = sum of:
        0.07999042 = product of:
          0.15998083 = sum of:
            0.15998083 = weight(_text_:22 in 3211) [ClassicSimilarity], result of:
              0.15998083 = score(doc=3211,freq=4.0), product of:
                0.1827397 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052184064 = queryNorm
                0.8754574 = fieldWeight in 3211, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=3211)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 2.2005 9:16:22
  20. Hawking, D.; Robertson, S.: On collection size and retrieval effectiveness (2003) 0.04
    0.03999521 = product of:
      0.07999042 = sum of:
        0.07999042 = product of:
          0.15998083 = sum of:
            0.15998083 = weight(_text_:22 in 4109) [ClassicSimilarity], result of:
              0.15998083 = score(doc=4109,freq=4.0), product of:
                0.1827397 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052184064 = queryNorm
                0.8754574 = fieldWeight in 4109, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=4109)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    14. 8.2005 14:22:22
