Search (430 results, page 1 of 22)

  • year_i:[2020 TO 2030}
  1. Smith, A.: Simple Knowledge Organization System (SKOS) (2022) 0.07
    0.07151043 = product of:
      0.14302085 = sum of:
        0.07803083 = weight(_text_:wide in 1094) [ClassicSimilarity], result of:
          0.07803083 = score(doc=1094,freq=4.0), product of:
            0.18785246 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.042397358 = queryNorm
            0.4153836 = fieldWeight in 1094, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=1094)
        0.051847253 = weight(_text_:web in 1094) [ClassicSimilarity], result of:
          0.051847253 = score(doc=1094,freq=6.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.37471575 = fieldWeight in 1094, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1094)
        0.01314276 = product of:
          0.03942828 = sum of:
            0.03942828 = weight(_text_:system in 1094) [ClassicSimilarity], result of:
              0.03942828 = score(doc=1094,freq=4.0), product of:
                0.13353272 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.042397358 = queryNorm
                0.29527056 = fieldWeight in 1094, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1094)
          0.33333334 = coord(1/3)
      0.5 = coord(3/6)
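     The explain tree above is Lucene ClassicSimilarity (TF-IDF) output. As a sanity check, the minimal Python sketch below reproduces the "wide" term weight and the final document score from the constants shown; the formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, score = coord * sum of term weights) are Lucene's documented ClassicSimilarity, and the constants are copied from the tree.

```python
import math

# Constants copied from the explain tree for doc 1094, term "wide".
query_norm = 0.042397358
doc_freq, max_docs = 1430, 44218
freq, field_norm = 4.0, 0.046875

tf = math.sqrt(freq)                           # 2.0
idf = 1 + math.log(max_docs / (doc_freq + 1))  # ~4.4307585
query_weight = idf * query_norm                # ~0.18785246
field_weight = tf * idf * field_norm           # ~0.4153836
weight = query_weight * field_weight           # ~0.07803083

# Document score: sum of the matching term weights, scaled by
# coord(matching clauses / total clauses), here coord(3/6) = 0.5.
term_weights = [0.07803083, 0.051847253, 0.01314276]
score = sum(term_weights) * (3 / 6)            # ~0.07151043
print(round(weight, 8), round(score, 8))
```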
    
    Abstract
    SKOS (Simple Knowledge Organization System) is a recommendation from the World Wide Web Consortium (W3C) for representing controlled vocabularies, taxonomies, thesauri, classifications, and similar systems for organizing and indexing information as linked data elements in the Semantic Web, using the Resource Description Framework (RDF). The SKOS data model is centered on "concepts", which can have preferred and alternate labels in any language as well as other metadata, and which are identified by addresses on the World Wide Web (URIs). Concepts are grouped into hierarchies through "broader" and "narrower" relations, with "top concepts" at the broadest conceptual level. Concepts are also organized into "concept schemes", also identified by URIs. Other relations, mappings, and groupings are also supported. This article discusses the history of the development of SKOS and provides notes on adoption, uses, and limitations.
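     The concept/label/broader-narrower model described in this abstract maps directly onto RDF triples. A minimal sketch, assuming the rdflib library; the vocabulary namespace and concepts are invented for illustration:

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

EX = Namespace("http://example.org/vocab/")  # hypothetical scheme namespace
g = Graph()

scheme = EX.animals
g.add((scheme, RDF.type, SKOS.ConceptScheme))

cat = EX.cat
g.add((cat, RDF.type, SKOS.Concept))
g.add((cat, SKOS.prefLabel, Literal("cat", lang="en")))          # preferred label
g.add((cat, SKOS.altLabel, Literal("domestic cat", lang="en")))  # alternate label
g.add((cat, SKOS.broader, EX.mammal))        # hierarchy via broader/narrower
g.add((cat, SKOS.inScheme, scheme))          # grouping into a concept scheme
g.add((scheme, SKOS.hasTopConcept, EX.mammal))

print(g.serialize(format="turtle"))
```

     Serializing the graph as Turtle or RDF/XML yields exactly the linked-data representation the abstract describes.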
  2. Singh, A.; Sinha, U.; Sharma, D.K.: Semantic Web and data visualization (2020) 0.06
    0.063822925 = product of:
      0.12764585 = sum of:
        0.036784086 = weight(_text_:wide in 79) [ClassicSimilarity], result of:
          0.036784086 = score(doc=79,freq=2.0), product of:
            0.18785246 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.042397358 = queryNorm
            0.1958137 = fieldWeight in 79, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=79)
        0.08466621 = weight(_text_:web in 79) [ClassicSimilarity], result of:
          0.08466621 = score(doc=79,freq=36.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.6119082 = fieldWeight in 79, product of:
              6.0 = tf(freq=36.0), with freq of:
                36.0 = termFreq=36.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=79)
        0.0061955573 = product of:
          0.018586671 = sum of:
            0.018586671 = weight(_text_:system in 79) [ClassicSimilarity], result of:
              0.018586671 = score(doc=79,freq=2.0), product of:
                0.13353272 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.042397358 = queryNorm
                0.13919188 = fieldWeight in 79, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.03125 = fieldNorm(doc=79)
          0.33333334 = coord(1/3)
      0.5 = coord(3/6)
    
    Abstract
     With the tremendous growth in the volume of data produced every second on millions of devices across the globe, there is a pressing need to manage the unstructured data available on web pages efficiently. The Semantic Web, also known as the Web of Trust, structures the scattered data on the Internet according to the needs of the user. It is an extension of the World Wide Web (WWW) that focuses on manipulating web data on behalf of humans. Because it can integrate data from disparate sources and thereby becomes more user-friendly, the Semantic Web is an emerging trend. Tim Berners-Lee first introduced the term Semantic Web, and since then it has come a long way toward becoming a more intelligent and intuitive web. Data visualization plays an essential role in explaining complex concepts in a universal manner through pictorial representation, and the Semantic Web broadens the potential of data visualization, making the two an apt combination. The objective of this chapter is to provide fundamental insights into semantic web technologies and to elucidate both the issues and the solutions concerning the semantic web. The chapter highlights the semantic web architecture in detail while comparing it with the traditional search system, and classifies the architecture into three major pillars, i.e., RDF, Ontology, and XML. Moreover, it describes different semantic web tools used in the framework and technology, and illustrates different approaches of semantic web search engines. Besides stating numerous challenges faced by the semantic web, it also presents their solutions.
    Theme
    Semantic Web
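     The three pillars named in the abstract come together in practice: data is modeled as RDF triples, serialized for exchange (e.g., as RDF/XML), and queried with SPARQL. A minimal hedged sketch, again assuming rdflib, over an invented example graph:

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/")  # hypothetical namespace
g = Graph()
g.add((EX.SemanticWeb, RDF.type, EX.Topic))
g.add((EX.SemanticWeb, RDFS.label, Literal("Semantic Web", lang="en")))

# RDF/XML is one serialization of the same graph (the "XML pillar").
xml = g.serialize(format="xml")

# SPARQL query over the graph (the retrieval side of the architecture).
rows = g.query(
    "SELECT ?s ?label WHERE { ?s a <http://example.org/Topic> ; "
    "<http://www.w3.org/2000/01/rdf-schema#label> ?label }"
)
for s, label in rows:
    print(s, label)
```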
  3. Peters, I.: Folksonomies & Social Tagging (2023) 0.04
    0.041620206 = product of:
      0.124860615 = sum of:
        0.06437215 = weight(_text_:wide in 796) [ClassicSimilarity], result of:
          0.06437215 = score(doc=796,freq=2.0), product of:
            0.18785246 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.042397358 = queryNorm
            0.342674 = fieldWeight in 796, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=796)
        0.060488462 = weight(_text_:web in 796) [ClassicSimilarity], result of:
          0.060488462 = score(doc=796,freq=6.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.43716836 = fieldWeight in 796, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=796)
      0.33333334 = coord(2/6)
    
    Abstract
     Research on and use of folksonomies and social tagging as user-centered forms of subject indexing and knowledge representation peaked in the roughly ten years from about 2005 onward. This was motivated by the development and spread of the Social Web and the growing use of social media platforms (see chapter E 8, Social Media and Social Web). Both led to a rapid increase in the amount of potential information findable on or via the World Wide Web, and generated strong demand for scalable methods of subject indexing.
  4. Jiang, Y.; Meng, R.; Huang, Y.; Lu, W.; Liu, J.: Generating keyphrases for readers : a controllable keyphrase generation framework (2023) 0.04
    0.03849238 = product of:
      0.07698476 = sum of:
        0.045980107 = weight(_text_:wide in 1012) [ClassicSimilarity], result of:
          0.045980107 = score(doc=1012,freq=2.0), product of:
            0.18785246 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.042397358 = queryNorm
            0.24476713 = fieldWeight in 1012, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1012)
        0.02143089 = weight(_text_:retrieval in 1012) [ClassicSimilarity], result of:
          0.02143089 = score(doc=1012,freq=2.0), product of:
            0.12824841 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042397358 = queryNorm
            0.16710453 = fieldWeight in 1012, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1012)
        0.009573761 = product of:
          0.028721282 = sum of:
            0.028721282 = weight(_text_:22 in 1012) [ClassicSimilarity], result of:
              0.028721282 = score(doc=1012,freq=2.0), product of:
                0.14846832 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042397358 = queryNorm
                0.19345059 = fieldWeight in 1012, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1012)
          0.33333334 = coord(1/3)
      0.5 = coord(3/6)
    
    Abstract
     With the wide application of keyphrases in many Information Retrieval (IR) and Natural Language Processing (NLP) tasks, automatic keyphrase prediction has become an emerging task. However, these statistically important phrases contribute increasingly little to the related tasks, because the end-to-end learning mechanism enables models to learn the important semantic information of the text directly. Similarly, keyphrases are of little help to readers in quickly grasping a paper's main idea, because the relationship between the keyphrase and the paper is not explicit to readers. Therefore, we propose generating keyphrases with specific functions for readers, to bridge the semantic gap between them and the information producers, and we verify the effectiveness of the keyphrase function in assisting users' comprehension with a user experiment. A controllable keyphrase generation framework (CKPG) that uses the keyphrase function as a control code to generate categorized keyphrases is proposed and implemented based on Transformer, BART, and T5, respectively. For the Computer Science domain, the Macro-avgs of , , and on the Paper with Code dataset are up to 0.680, 0.535, and 0.558, respectively. Our experimental results indicate the effectiveness of the CKPG models.
    Date
    22. 6.2023 14:55:20
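     Control-code conditioning of the kind CKPG builds on can be sketched with the Hugging Face transformers library by prepending a function token to the encoder input. The control token, prompt format, and base checkpoint below are illustrative assumptions, not the paper's actual scheme, and the fine-tuning step the published models rely on is omitted:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Illustrative only: an off-the-shelf T5 checkpoint, not the CKPG models.
tok = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# Invented control code marking the desired keyphrase function.
control = "<method>"
abstract = "We propose a controllable keyphrase generation framework ..."
inputs = tok(f"{control} {abstract}", return_tensors="pt", truncation=True)

# The control token is meant to steer decoding toward one keyphrase category.
out = model.generate(**inputs, max_new_tokens=16, num_beams=4)
print(tok.decode(out[0], skip_special_tokens=True))
```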
  5. Breuer, T.; Tavakolpoursaleh, N.; Schaer, P.; Hienert, D.; Schaible, J.; Castro, L.J.: Online Information Retrieval Evaluation using the STELLA Framework (2022) 0.04
    0.03779839 = product of:
      0.07559678 = sum of:
        0.029934023 = weight(_text_:web in 640) [ClassicSimilarity], result of:
          0.029934023 = score(doc=640,freq=2.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.21634221 = fieldWeight in 640, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=640)
        0.036369424 = weight(_text_:retrieval in 640) [ClassicSimilarity], result of:
          0.036369424 = score(doc=640,freq=4.0), product of:
            0.12824841 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042397358 = queryNorm
            0.2835858 = fieldWeight in 640, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=640)
        0.0092933355 = product of:
          0.027880006 = sum of:
            0.027880006 = weight(_text_:system in 640) [ClassicSimilarity], result of:
              0.027880006 = score(doc=640,freq=2.0), product of:
                0.13353272 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.042397358 = queryNorm
                0.20878783 = fieldWeight in 640, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.046875 = fieldNorm(doc=640)
          0.33333334 = coord(1/3)
      0.5 = coord(3/6)
    
    Abstract
     Involving users in early phases of software development has become a common strategy, as it enables developers to consider user needs from the beginning. Once a system is in production, new opportunities to observe, evaluate, and learn from users emerge as more information becomes available. Gathering information from users to continuously evaluate their behavior is common practice for commercial software, while the Cranfield paradigm remains the preferred option for Information Retrieval (IR) and recommendation systems in the academic world. Here we introduce the Infrastructures for Living Labs STELLA project, which aims to create an evaluation infrastructure that allows experimental systems to run alongside production web-based academic search systems with real users. STELLA combines user interactions and log file analyses to enable large-scale A/B experiments for academic search.
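     At its core, a living-lab setup routes a share of real user sessions to an experimental ranker while the rest see the production system. A minimal hash-based assignment sketch; the session IDs, system names, and 10% treatment share are invented for illustration:

```python
import hashlib

SYSTEMS = ["production", "experimental"]  # hypothetical candidate rankers

def assign(session_id: str, treatment_share: float = 0.1) -> str:
    """Deterministically route a session to a system for an A/B experiment."""
    digest = hashlib.sha256(session_id.encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform in [0, 1]
    return SYSTEMS[1] if bucket < treatment_share else SYSTEMS[0]

assert assign("session-42") == assign("session-42")  # stable per session
```

     Hashing rather than random sampling keeps each session on the same system across requests, which is what makes per-session log analysis meaningful.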
  6. Lewandowski, D.: Suchmaschinen verstehen : 3. vollständig überarbeitete und erweiterte Aufl. (2021) 0.03
    0.03343443 = product of:
      0.10030328 = sum of:
        0.065025695 = weight(_text_:wide in 4016) [ClassicSimilarity], result of:
          0.065025695 = score(doc=4016,freq=4.0), product of:
            0.18785246 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.042397358 = queryNorm
            0.34615302 = fieldWeight in 4016, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4016)
        0.035277586 = weight(_text_:web in 4016) [ClassicSimilarity], result of:
          0.035277586 = score(doc=4016,freq=4.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.25496176 = fieldWeight in 4016, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4016)
      0.33333334 = coord(2/6)
    
    RSWK
    World Wide Web Recherche
    Subject
    World Wide Web Recherche
  7. Lewandowski, D.: Suchmaschinen (2023) 0.03
    0.032503076 = product of:
      0.09750923 = sum of:
        0.055176124 = weight(_text_:wide in 793) [ClassicSimilarity], result of:
          0.055176124 = score(doc=793,freq=2.0), product of:
            0.18785246 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.042397358 = queryNorm
            0.29372054 = fieldWeight in 793, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=793)
        0.042333104 = weight(_text_:web in 793) [ClassicSimilarity], result of:
          0.042333104 = score(doc=793,freq=4.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.3059541 = fieldWeight in 793, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=793)
      0.33333334 = coord(2/6)
    
    Abstract
     A search engine (also: web search engine, universal search engine) is a computer system that captures content from the World Wide Web (WWW) by crawling and makes it searchable through a user interface, presenting the results in an order determined by system-side assumptions about relevance. This means that, unlike other information systems, search engines are not built on a clearly delimited body of data but assemble it from the documents scattered across the WWW. This body of data is made accessible through a user interface designed so that laypeople can use the search engine without difficulty. The hits returned for a query are sorted so that the documents most relevant from the system's perspective are shown to users first. This involves complex scoring procedures that rest on numerous assumptions about the relevance of documents with respect to queries.
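     The crawl-then-rank pipeline described in this abstract can be illustrated in a few lines. The following toy crawler and term-frequency ranker is a didactic sketch only, assuming the requests and beautifulsoup4 packages and a placeholder seed URL; production engines add politeness rules, deduplication, an inverted index, and far richer ranking signals:

```python
from collections import Counter
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

def crawl(seed: str, limit: int = 10) -> dict[str, str]:
    """Fetch pages reachable from a seed URL, breadth-first."""
    seen, queue, pages = set(), [seed], {}
    while queue and len(pages) < limit:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = requests.get(url, timeout=5).text
        except requests.RequestException:
            continue
        pages[url] = html
        soup = BeautifulSoup(html, "html.parser")
        queue += [urljoin(url, a["href"]) for a in soup.find_all("a", href=True)]
    return pages

def rank(pages: dict[str, str], query: str) -> list[tuple[str, int]]:
    """Naive relevance: count query-term occurrences per page."""
    terms = query.lower().split()
    scores = {url: sum(Counter(html.lower().split())[t] for t in terms)
              for url, html in pages.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# results = rank(crawl("https://example.org"), "semantic web")  # placeholder seed
```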
  8. Soshnikov, D.: ROMEO: an ontology-based multi-agent architecture for online information retrieval (2021) 0.03
    0.0294682 = product of:
      0.088404596 = sum of:
        0.03991203 = weight(_text_:web in 249) [ClassicSimilarity], result of:
          0.03991203 = score(doc=249,freq=2.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.2884563 = fieldWeight in 249, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=249)
        0.048492566 = weight(_text_:retrieval in 249) [ClassicSimilarity], result of:
          0.048492566 = score(doc=249,freq=4.0), product of:
            0.12824841 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042397358 = queryNorm
            0.37811437 = fieldWeight in 249, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=249)
      0.33333334 = coord(2/6)
    
    Abstract
     This paper describes an approach to path-finding in intelligent graphs whose vertices are intelligent agents. A possible implementation of this approach, based on logical inference in a distributed frame hierarchy, is described. The presented approach can be used to implement distributed intelligent information systems that include automatic navigation and path generation in hypertext, useful for example in distance education, as well as to organize intelligent web catalogues with flexible ontology-based information retrieval.
  9. Araújo, P.C. de; Gutierres Castanha, R.C.; Hjoerland, B.: Citation indexing and indexes (2021) 0.03
    0.028958794 = product of:
      0.08687638 = sum of:
        0.042333104 = weight(_text_:web in 444) [ClassicSimilarity], result of:
          0.042333104 = score(doc=444,freq=4.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.3059541 = fieldWeight in 444, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=444)
        0.04454327 = weight(_text_:retrieval in 444) [ClassicSimilarity], result of:
          0.04454327 = score(doc=444,freq=6.0), product of:
            0.12824841 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042397358 = queryNorm
            0.34732026 = fieldWeight in 444, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=444)
      0.33333334 = coord(2/6)
    
    Abstract
     A citation index is a bibliographic database that provides citation links between documents. The first modern citation index was proposed by the researcher Eugene Garfield in 1955 and created by him in 1964, and it represents an important innovation in knowledge organization and information retrieval. This article describes citation indexes in general, considering the modern citation indexes, including Web of Science, Scopus, Google Scholar, Microsoft Academic, Crossref, and Dimensions, as well as some special citation indexes and predecessors of the modern citation index, such as Shepard's Citations. We present comparative studies of the major ones and survey theoretical problems related to the role of citation indexes as subject access points (SAP), recognizing the implications for knowledge organization and information retrieval. Finally, studies on citation behavior are presented, and the influence of citation indexes on knowledge organization, information retrieval, and the scientific information ecosystem is recognized.
    Object
    Web of Science
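     A citation index is, at bottom, a directed graph from citing to cited documents. A small sketch using networkx, with invented document IDs:

```python
import networkx as nx

g = nx.DiGraph()
# Edge A -> B means "A cites B"; the IDs are invented.
g.add_edges_from([("paper1", "garfield1955"), ("paper2", "garfield1955"),
                  ("paper2", "paper1")])

print(g.in_degree("garfield1955"))           # times cited: 2
print(list(g.predecessors("garfield1955")))  # citing documents
print(list(g.successors("paper2")))          # reference list of paper2
```

     Citation counts, co-citation, and bibliographic coupling all fall out of simple traversals of this graph.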
  10. Fernanda de Jesus, A.; Ferreira de Castro, F.: Proposal for the publication of linked open bibliographic data (2024) 0.03
    0.028370049 = product of:
      0.08511014 = sum of:
        0.055176124 = weight(_text_:wide in 1161) [ClassicSimilarity], result of:
          0.055176124 = score(doc=1161,freq=2.0), product of:
            0.18785246 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.042397358 = queryNorm
            0.29372054 = fieldWeight in 1161, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=1161)
        0.029934023 = weight(_text_:web in 1161) [ClassicSimilarity], result of:
          0.029934023 = score(doc=1161,freq=2.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.21634221 = fieldWeight in 1161, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1161)
      0.33333334 = coord(2/6)
    
    Abstract
     Linked Open Data (LOD) refers to a set of principles for publishing structured, connected data available for reuse under an open license. The objective of this paper is to analyze the publication of bibliographic data as LOD, with the product being theoretical-methodological recommendations for publishing such data, in an approach based on the World Wide Web Consortium's ten best practices for publishing LOD. The starting point was a Systematic Review of Literature, in which initiatives to publish bibliographic data as LOD were identified. An empirical study of these institutions was also conducted. As a result, theoretical-methodological recommendations were obtained for the process of publishing bibliographic data as LOD.
  11. Ostani, M.M.; Sohrabi, M.C.; Taheri, S.M.; Asemi, A.: Localization of Schema.org for manuscript description in the Iranian-Islamic information context (2021) 0.03
    0.027511153 = product of:
      0.08253346 = sum of:
        0.06110257 = weight(_text_:web in 585) [ClassicSimilarity], result of:
          0.06110257 = score(doc=585,freq=12.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.4416067 = fieldWeight in 585, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=585)
        0.02143089 = weight(_text_:retrieval in 585) [ClassicSimilarity], result of:
          0.02143089 = score(doc=585,freq=2.0), product of:
            0.12824841 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042397358 = queryNorm
            0.16710453 = fieldWeight in 585, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=585)
      0.33333334 = coord(2/6)
    
    Abstract
     This study aims to assess the localization of Schema.org for manuscript description in the Iranian-Islamic information context, using documentary and qualitative content analysis. Schema.org introduces schemas for different Web content objects so as to generate structured data. Given that the structure of Schema.org is ontological, the inheritance of the manuscript types from the properties of their parent types, as well as the localization and description of the properties specific to manuscripts in the Iranian-Islamic information context, were investigated in order to improve their indexability and semantic visibility in Web search engines. The proposed properties specific to the manuscript type, and the six properties proposed for addition to the "CreativeWork" type, are found to be consistent with other schema properties. In turn, these properties localize the existing schema so that the manuscript type is compatible with the Iranian-Islamic information context. This schema is also applicable to centers that publish records on the Web; if those records are marked up with these properties, their indexability and semantic visibility in Web search engines increase accordingly. Generating structured data in the Web environment through this schema promotes the concept of the Semantic Web and makes data and knowledge retrieval easier.
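     Structured data of the kind discussed here is typically emitted as JSON-LD embedded in a page. A hedged sketch building a schema.org record in Python; the two x_* keys stand in for locally proposed manuscript properties and are invented placeholders, not the vocabulary defined in the article:

```python
import json

# A schema.org CreativeWork description for a manuscript. The x_* keys
# are invented placeholders for local extension properties.
record = {
    "@context": "https://schema.org",
    "@type": "CreativeWork",
    "name": "Example manuscript title",
    "inLanguage": "fa",
    "dateCreated": "1650",
    "x_scribe": "Unknown copyist",   # placeholder local extension
    "x_scriptStyle": "nastaliq",     # placeholder local extension
}
print(json.dumps(record, ensure_ascii=False, indent=2))
```

     Embedding such a record in a page inside a script tag of type application/ld+json is what makes it harvestable by Web search engines.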
  12. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.03
    0.027020078 = product of:
      0.08106023 = sum of:
        0.056115206 = product of:
          0.16834562 = sum of:
            0.16834562 = weight(_text_:3a in 1000) [ClassicSimilarity], result of:
              0.16834562 = score(doc=1000,freq=2.0), product of:
                0.35944527 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.042397358 = queryNorm
                0.46834838 = fieldWeight in 1000, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1000)
          0.33333334 = coord(1/3)
        0.02494502 = weight(_text_:web in 1000) [ClassicSimilarity], result of:
          0.02494502 = score(doc=1000,freq=2.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.18028519 = fieldWeight in 1000, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1000)
      0.33333334 = coord(2/6)
    
    Content
     Master's thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. Cf.: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. See also the presentation at: https://www.google.com/url?sa=i&rct=j&q=&esrc=s&source=web&cd=&ved=0CAIQw7AJahcKEwjwoZzzytz_AhUAAAAAHQAAAAAQAg&url=https%3A%2F%2Fwiki.dnb.de%2Fdownload%2Fattachments%2F252121510%2FDA3%2520Workshop-Gabler.pdf%3Fversion%3D1%26modificationDate%3D1671093170000%26api%3Dv2&psig=AOvVaw0szwENK1or3HevgvIDOfjx&ust=1687719410889597&opi=89978449.
  13. Bergman, O.; Israeli, T.; Whittaker, S.: Factors hindering shared files retrieval (2020) 0.03
    0.025781393 = product of:
      0.07734418 = sum of:
        0.06777042 = weight(_text_:retrieval in 5843) [ClassicSimilarity], result of:
          0.06777042 = score(doc=5843,freq=20.0), product of:
            0.12824841 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042397358 = queryNorm
            0.5284309 = fieldWeight in 5843, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5843)
        0.009573761 = product of:
          0.028721282 = sum of:
            0.028721282 = weight(_text_:22 in 5843) [ClassicSimilarity], result of:
              0.028721282 = score(doc=5843,freq=2.0), product of:
                0.14846832 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042397358 = queryNorm
                0.19345059 = fieldWeight in 5843, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5843)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
     Purpose
     Personal information management (PIM) is an activity in which people store information items in order to retrieve them later. The purpose of this paper is to test and quantify the effect of factors related to collection size, file properties and workload on file retrieval success and efficiency.
     Design/methodology/approach
     In the study, 289 participants retrieved 1,557 of their shared files in a naturalistic setting. The study used specially developed software designed to collect shared files' names and present them as targets for the retrieval task. The dependent variables were retrieval success, retrieval time and misstep/s.
     Findings
     Various factors compromise shared files retrieval including: collection size (large number of files), file properties (multiple versions, size of team sharing the file, time since most recent retrieval and folder depth) and workload (daily e-mails sent and received). The authors discuss theoretical reasons for these negative effects and suggest possible ways to overcome them.
     Originality/value
     Retrieval is the main reason people manage personal information. It is essential for retrieval to be successful and efficient, as information cannot be used unless it can be re-accessed. Prior PIM research has assumed that factors related to collection size, file properties and workload affect file retrieval. However, this is the first study to systematically quantify the negative effects of these factors. As each of these factors is expected to be exacerbated in the future, this study is a necessary first step toward addressing these problems.
    Date
    20. 1.2015 18:30:22
  14. Petras, V.; Womser-Hacker, C.: Evaluation im Information Retrieval (2023) 0.03
    0.025363928 = product of:
      0.07609178 = sum of:
        0.057505112 = weight(_text_:retrieval in 808) [ClassicSimilarity], result of:
          0.057505112 = score(doc=808,freq=10.0), product of:
            0.12824841 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042397358 = queryNorm
            0.44838852 = fieldWeight in 808, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=808)
        0.018586671 = product of:
          0.05576001 = sum of:
            0.05576001 = weight(_text_:system in 808) [ClassicSimilarity], result of:
              0.05576001 = score(doc=808,freq=8.0), product of:
                0.13353272 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.042397358 = queryNorm
                0.41757566 = fieldWeight in 808, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.046875 = fieldNorm(doc=808)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
     The goal of an evaluation is to verify whether, and to what extent, an information system meets the requirements placed on it. Information systems can be evaluated from various perspectives. For a holistic evaluation that considers different quality aspects (e.g., how well a system ranks relevant documents, how quickly it executes a search, how the result presentation is designed, or how the system guides searchers) and verifies that multiple requirements are met, it is advisable to triangulate both perspectives and methods (i.e., to combine several approaches to quality assessment). In Information Retrieval (IR), evaluation focuses on assessing the quality of the search function of an Information Retrieval System (IRS), often distinguishing between system-centered and user-centered evaluation. This chapter focuses on system-centered evaluation, while other chapters of this handbook discuss other evaluation approaches (see chapters C 4 Interactive Information Retrieval, C 7 Cross-Language Information Retrieval, and D 1 Information Behavior).
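     System-centered evaluation scores a ranked result list against relevance judgments. A minimal sketch of two standard measures, Precision@k and nDCG, assuming binary relevance and invented document IDs:

```python
import math

def precision_at_k(ranked: list[str], relevant: set[str], k: int) -> float:
    """Fraction of the top-k results judged relevant."""
    return sum(d in relevant for d in ranked[:k]) / k

def ndcg_at_k(ranked: list[str], relevant: set[str], k: int) -> float:
    """Normalized discounted cumulative gain with binary gains."""
    dcg = sum(1 / math.log2(i + 2)
              for i, d in enumerate(ranked[:k]) if d in relevant)
    ideal = sum(1 / math.log2(i + 2) for i in range(min(k, len(relevant))))
    return dcg / ideal if ideal else 0.0

ranked = ["d3", "d1", "d7", "d2"]  # system output (invented IDs)
relevant = {"d1", "d2"}            # relevance judgments
print(precision_at_k(ranked, relevant, 4))  # 0.5
print(ndcg_at_k(ranked, relevant, 4))       # ~0.65
```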
  15. Alipour, O.; Soheili, F.; Khasseh, A.A.: ¬A co-word analysis of global research on knowledge organization: 1900-2019 (2022) 0.02
    0.02452757 = product of:
      0.07358271 = sum of:
        0.02822207 = weight(_text_:web in 1106) [ClassicSimilarity], result of:
          0.02822207 = score(doc=1106,freq=4.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.2039694 = fieldWeight in 1106, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=1106)
        0.045360643 = weight(_text_:retrieval in 1106) [ClassicSimilarity], result of:
          0.045360643 = score(doc=1106,freq=14.0), product of:
            0.12824841 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042397358 = queryNorm
            0.3536936 = fieldWeight in 1106, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=1106)
      0.33333334 = coord(2/6)
    
    Abstract
     The study's objective is to analyze the structure of knowledge organization studies conducted worldwide. This applied research has been conducted with a scientometrics approach using co-word analysis. The research records consisted of all articles published in the journals Knowledge Organization and Cataloging & Classification Quarterly, and keywords related to the field of knowledge organization indexed in Web of Science from 1900 to 2019; 17,950 records were analyzed in full, in plain-text format. The total number of keywords was 25,480, which was reduced to 12,478 keywords after modifications and removal of duplicates. Then, 115 keywords with a frequency of at least 18 were included in the final analysis, and finally the co-word network was drawn. BibExcel, UCINET, VOSviewer, and SPSS software were used to draw matrices, analyze co-word networks, and draw dendrograms. Furthermore, strategic diagrams were drawn using Excel. The keywords "information retrieval," "classification," and "ontology" are among the most frequently used keywords in knowledge organization articles. Findings revealed that "Ontology*Semantic Web", "Digital Library*Information Retrieval" and "Indexing*Information Retrieval" are highly frequent co-word pairs. The results of hierarchical clustering indicated that global research on knowledge organization consists of eight main thematic clusters; the largest is devoted to "classification, indexing, and information retrieval," while the smallest clusters deal with "data processing" and "theoretical concepts of information and knowledge organization." Cluster 1 (cataloging standards and knowledge organization) has the highest density, while Cluster 5 (classification, indexing, and information retrieval) has the highest centrality. According to the findings of this research, the keyword "information retrieval" has played a significant role in knowledge organization studies, both as a keyword and in co-word pairs. In the co-word section, there is a kind of related- or general-topic relationship between co-word pairs. The results indicate that information retrieval is one of the main topics in knowledge organization, while the theoretical concepts of knowledge organization have been neglected. In general, the co-word structure of knowledge organization research points to the multiplicity of concepts and topics studied in this field globally.
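     Co-word analysis counts how often two keywords co-occur on the same record; those counts then feed the network drawing and clustering steps described above. A minimal sketch with invented keyword lists:

```python
from collections import Counter
from itertools import combinations

# Each inner list is the keyword set of one record (invented examples).
records = [
    ["ontology", "semantic web"],
    ["digital library", "information retrieval"],
    ["indexing", "information retrieval"],
    ["ontology", "semantic web", "information retrieval"],
]

cooc = Counter()
for kws in records:
    for a, b in combinations(sorted(set(kws)), 2):
        cooc[(a, b)] += 1  # one co-occurrence per record, per pair

for pair, n in cooc.most_common(3):
    print(pair, n)  # e.g. ('ontology', 'semantic web') 2
```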
  16. Hasanain, M.; Elsayed, T.: Studying effectiveness of Web search for fact checking (2022) 0.02
    0.024504632 = product of:
      0.073513895 = sum of:
        0.043206044 = weight(_text_:web in 558) [ClassicSimilarity], result of:
          0.043206044 = score(doc=558,freq=6.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.3122631 = fieldWeight in 558, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=558)
        0.030307854 = weight(_text_:retrieval in 558) [ClassicSimilarity], result of:
          0.030307854 = score(doc=558,freq=4.0), product of:
            0.12824841 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042397358 = queryNorm
            0.23632148 = fieldWeight in 558, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=558)
      0.33333334 = coord(2/6)
    
    Abstract
    Web search is commonly used by fact checking systems as a source of evidence for claim verification. In this work, we demonstrate that the task of retrieving pages useful for fact checking, called evidential pages, is indeed different from the task of retrieving topically relevant pages that are typically optimized by search engines; thus, it should be handled differently. We conduct a comprehensive study on the performance of retrieving evidential pages over a test collection we developed for the task of re-ranking Web pages by usefulness for fact-checking. Results show that pages (retrieved by a commercial search engine) that are topically relevant to a claim are not always useful for verifying it, and that the engine's performance in retrieving evidential pages is weakly correlated with retrieval of topically relevant pages. Additionally, we identify types of evidence in evidential pages and some linguistic cues that can help predict page usefulness. Moreover, preliminary experiments show that a retrieval model leveraging those cues has a higher performance compared to the search engine. Finally, we show that existing systems have a long way to go to support effective fact checking. To that end, our work provides insights to guide design of better future systems for the task.
  17. Lee, H.S.; Arnott Smith, C.: ¬A comparative mixed methods study on health information seeking among US-born/US-dwelling, Korean-born/US-dwelling, and Korean-born/Korean-dwelling mothers (2022) 0.02
    0.023641711 = product of:
      0.07092513 = sum of:
        0.045980107 = weight(_text_:wide in 614) [ClassicSimilarity], result of:
          0.045980107 = score(doc=614,freq=2.0), product of:
            0.18785246 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.042397358 = queryNorm
            0.24476713 = fieldWeight in 614, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=614)
        0.02494502 = weight(_text_:web in 614) [ClassicSimilarity], result of:
          0.02494502 = score(doc=614,freq=2.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.18028519 = fieldWeight in 614, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=614)
      0.33333334 = coord(2/6)
    
    Abstract
    More knowledge and a better understanding of health information seeking are necessary, especially in these unprecedented times due to the COVID-19 pandemic. Using Sonnenwald's theoretical concept of information horizons, this study aimed to uncover patterns in mothers' source preferences related to their children's health. Online surveys were completed by 851 mothers (255 US-born/US-dwelling, 300 Korean-born/US-dwelling, and 296 Korean-born/Korean-dwelling), and supplementary in-depth interviews with 24 mothers were conducted and analyzed. Results indicate that there were remarkable differences between the mothers' information source preference and their actual source use. Moreover, there were many similarities between the two Korean-born groups concerning health information-seeking behavior. For instance, those two groups sought health information more frequently than US-born/US-dwelling mothers. Their sources frequently included blogs or online forums as well as friends with children, whereas US-born/US-dwelling mothers frequently used doctors or nurses as information sources. Mothers in the two Korean-born samples preferred the World Wide Web most as their health information source, while the US-born/US-dwelling mothers preferred doctors the most. Based on these findings, information professionals should guide mothers of specific ethnicities and nationalities to trustworthy sources considering both their usage and preferences.
  18. Asubiaro, T.V.; Onaolapo, S.: ¬A comparative study of the coverage of African journals in Web of Science, Scopus, and CrossRef (2023) 0.02
    0.023558777 = product of:
      0.07067633 = sum of:
        0.06110257 = weight(_text_:web in 992) [ClassicSimilarity], result of:
          0.06110257 = score(doc=992,freq=12.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.4416067 = fieldWeight in 992, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=992)
        0.009573761 = product of:
          0.028721282 = sum of:
            0.028721282 = weight(_text_:22 in 992) [ClassicSimilarity], result of:
              0.028721282 = score(doc=992,freq=2.0), product of:
                0.14846832 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042397358 = queryNorm
                0.19345059 = fieldWeight in 992, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=992)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
     This is the first study to evaluate the coverage of journals from Africa in Web of Science, Scopus, and CrossRef. A list of active journals published in each of the 55 African countries was compiled from Ulrich's periodicals directory and the African Journals Online (AJOL) website. The journal master lists of Web of Science, Scopus, and CrossRef were searched for the African journals. A total of 2,229 unique active African journals were identified from Ulrich (N = 2,117, 95.0%) and AJOL (N = 243, 10.9%) after removing duplicates. The volume of African journals in the Web of Science and Scopus databases is 7.4% (N = 166) and 7.8% (N = 174), respectively, compared to the 45.6% (N = 1,017) covered in CrossRef. While making up only 17% of all the African journals, South African journals had the best coverage in the two most authoritative databases, accounting for 73.5% and 62.1% of all the African journals in Web of Science and Scopus, respectively. In contrast, Nigeria published 44.5% of all the African journals. The distribution of the African journals is biased in favor of Medical, Life and Health Sciences and Humanities and the Arts in the three databases. The low representation of African journals in CrossRef, a free indexing infrastructure that could be harnessed for building an African-centric research indexing database, is concerning.
    Date
    22. 6.2023 14:09:06
    Object
    Web of Science
  19. Dietz, K.: en.wikipedia.org > 6 Mio. Artikel (2020) 0.02
    0.021925291 = product of:
      0.06577587 = sum of:
        0.056115206 = product of:
          0.16834562 = sum of:
            0.16834562 = weight(_text_:3a in 5669) [ClassicSimilarity], result of:
              0.16834562 = score(doc=5669,freq=2.0), product of:
                0.35944527 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.042397358 = queryNorm
                0.46834838 = fieldWeight in 5669, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5669)
          0.33333334 = coord(1/3)
        0.009660665 = product of:
          0.028981995 = sum of:
            0.028981995 = weight(_text_:29 in 5669) [ClassicSimilarity], result of:
              0.028981995 = score(doc=5669,freq=2.0), product of:
                0.14914064 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.042397358 = queryNorm
                0.19432661 = fieldWeight in 5669, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5669)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Content
    "Die Englischsprachige Wikipedia verfügt jetzt über mehr als 6 Millionen Artikel. An zweiter Stelle kommt die deutschsprachige Wikipedia mit 2.3 Millionen Artikeln, an dritter Stelle steht die französischsprachige Wikipedia mit 2.1 Millionen Artikeln (via Researchbuzz: Firehose <https://rbfirehose.com/2020/01/24/techcrunch-wikipedia-now-has-more-than-6-million-articles-in-english/> und Techcrunch <https://techcrunch.com/2020/01/23/wikipedia-english-six-million-articles/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&guccounter=1&guce_referrer=aHR0cHM6Ly9yYmZpcmVob3NlLmNvbS8yMDIwLzAxLzI0L3RlY2hjcnVuY2gtd2lraXBlZGlhLW5vdy1oYXMtbW9yZS10aGFuLTYtbWlsbGlvbi1hcnRpY2xlcy1pbi1lbmdsaXNoLw&guce_referrer_sig=AQAAAK0zHfjdDZ_spFZBF_z-zDjtL5iWvuKDumFTzm4HvQzkUfE2pLXQzGS6FGB_y-VISdMEsUSvkNsg2U_NWQ4lwWSvOo3jvXo1I3GtgHpP8exukVxYAnn5mJspqX50VHIWFADHhs5AerkRn3hMRtf_R3F1qmEbo8EROZXp328HMC-o>). 250120 via digithek ch = #fineBlog s.a.: Angesichts der Veröffentlichung des 6-millionsten Artikels vergangene Woche in der englischsprachigen Wikipedia hat die Community-Zeitungsseite "Wikipedia Signpost" ein Moratorium bei der Veröffentlichung von Unternehmensartikeln gefordert. Das sei kein Vorwurf gegen die Wikimedia Foundation, aber die derzeitigen Maßnahmen, um die Enzyklopädie gegen missbräuchliches undeklariertes Paid Editing zu schützen, funktionierten ganz klar nicht. *"Da die ehrenamtlichen Autoren derzeit von Werbung in Gestalt von Wikipedia-Artikeln überwältigt werden, und da die WMF nicht in der Lage zu sein scheint, dem irgendetwas entgegenzusetzen, wäre der einzige gangbare Weg für die Autoren, fürs erste die Neuanlage von Artikeln über Unternehmen zu untersagen"*, schreibt der Benutzer Smallbones in seinem Editorial <https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2020-01-27/From_the_editor> zur heutigen Ausgabe."
  20. Bedford, D.: Knowledge architectures : structures and semantics (2021) 0.02
    0.021448843 = product of:
      0.06434653 = sum of:
        0.024246283 = weight(_text_:retrieval in 566) [ClassicSimilarity], result of:
          0.024246283 = score(doc=566,freq=4.0), product of:
            0.12824841 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042397358 = queryNorm
            0.18905719 = fieldWeight in 566, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=566)
        0.040100247 = product of:
          0.06015037 = sum of:
            0.037173342 = weight(_text_:system in 566) [ClassicSimilarity], result of:
              0.037173342 = score(doc=566,freq=8.0), product of:
                0.13353272 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.042397358 = queryNorm
                0.27838376 = fieldWeight in 566, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.03125 = fieldNorm(doc=566)
            0.022977026 = weight(_text_:22 in 566) [ClassicSimilarity], result of:
              0.022977026 = score(doc=566,freq=2.0), product of:
                0.14846832 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042397358 = queryNorm
                0.15476047 = fieldWeight in 566, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=566)
          0.6666667 = coord(2/3)
      0.33333334 = coord(2/6)
    
    Abstract
    Knowledge Architectures reviews traditional approaches to managing information and explains why they need to adapt to support 21st-century information management and discovery. Exploring the rapidly changing environment in which information is being managed and accessed, the book considers how to use knowledge architectures, the basic structures and designs that underlie all of the parts of an effective information system, to best advantage. Drawing on 40 years of work with a variety of organizations, Bedford explains that failure to understand the structure behind any given system can be the difference between an effective solution and a significant and costly failure. Demonstrating that the information user environment has shifted significantly in the past 20 years, the book explains that end users now expect designs and behaviors that are much closer to the way they think, work, and act. Acknowledging how important it is that those responsible for developing an information or knowledge management system understand knowledge structures, the book goes beyond a traditional library science perspective and uses case studies to help translate the abstract and theoretical to the practical and concrete. Explaining the structures in a simple and intuitive way and providing examples that clearly illustrate the challenges faced by a range of different organizations, Knowledge Architectures is essential reading for those studying and working in library and information science, data science, systems development, database design, and search system architecture and engineering.
    Content
    Section 1 Context and purpose of knowledge architecture -- 1 Making the case for knowledge architecture -- 2 The landscape of knowledge assets -- 3 Knowledge architecture and design -- 4 Knowledge architecture reference model -- 5 Knowledge architecture segments -- Section 2 Designing for availability -- 6 Knowledge object modeling -- 7 Knowledge structures for encoding, formatting, and packaging -- 8 Functional architecture for identification and distinction -- 9 Functional architectures for knowledge asset disposition and destruction -- 10 Functional architecture designs for knowledge preservation and conservation -- Section 3 Designing for accessibility -- 11 Functional architectures for knowledge seeking and discovery -- 12 Functional architecture for knowledge search -- 13 Functional architecture for knowledge categorization -- 14 Functional architectures for indexing and keywording -- 15 Functional architecture for knowledge semantics -- 16 Functional architecture for knowledge abstraction and surrogation -- Section 4 Functional architectures to support knowledge consumption -- 17 Functional architecture for knowledge augmentation, derivation, and synthesis -- 18 Functional architecture to manage risk and harm -- 19 Functional architectures for knowledge authentication and provenance -- 20 Functional architectures for securing knowledge assets -- 21 Functional architectures for authorization and asset management -- Section 5 Pulling it all together - the big picture knowledge architecture -- 22 Functional architecture for knowledge metadata and metainformation -- 23 The whole knowledge architecture - pulling it all together
    LCSH
    Information storage and retrieval systems / Management
    Subject
    Information storage and retrieval systems / Management

Languages

  • e 320
  • d 104
  • pt 4

Types

  • a 391
  • el 76
  • m 13
  • p 9
  • x 3
  • s 2
  • A 1
  • EL 1