Search (298 results, page 2 of 15)

  • Filter: type_ss:"a"
  • Filter: type_ss:"el"
  1. Bastos Vieira, S.; DeBrito, M.; Mustafa El Hadi, W.; Zumer, M.: Developing imaged KOS with the FRSAD Model : a conceptual methodology (2016) 0.02
    0.017972346 = product of:
      0.08087556 = sum of:
        0.02300799 = weight(_text_:data in 3109) [ClassicSimilarity], result of:
          0.02300799 = score(doc=3109,freq=4.0), product of:
            0.11642061 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.036818076 = queryNorm
            0.19762816 = fieldWeight in 3109, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03125 = fieldNorm(doc=3109)
        0.057867568 = weight(_text_:germany in 3109) [ClassicSimilarity], result of:
          0.057867568 = score(doc=3109,freq=2.0), product of:
            0.21956629 = queryWeight, product of:
              5.963546 = idf(docFreq=308, maxDocs=44218)
              0.036818076 = queryNorm
            0.26355398 = fieldWeight in 3109, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.963546 = idf(docFreq=308, maxDocs=44218)
              0.03125 = fieldNorm(doc=3109)
      0.22222222 = coord(2/9)
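
    The indented block above is a Lucene "explain" tree for ClassicSimilarity (TF-IDF) scoring: each matching term contributes queryWeight x fieldWeight, with queryWeight = idf x queryNorm, fieldWeight = tf x idf x fieldNorm, tf = sqrt(termFreq), idf = 1 + ln(maxDocs/(docFreq+1)), and the sum of the term scores is scaled by the coordination factor coord(matching clauses / total clauses). A minimal Python sketch that recomputes the 0.017972346 score of this first hit from the numbers shown (the formulas are standard ClassicSimilarity; the helper names are ours):

      import math

      def idf(doc_freq, max_docs):
          # ClassicSimilarity: idf(t) = 1 + ln(maxDocs / (docFreq + 1))
          return 1.0 + math.log(max_docs / (doc_freq + 1))

      def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
          tf = math.sqrt(freq)                      # tf(t in d) = sqrt(termFreq)
          query_weight = idf(doc_freq, max_docs) * query_norm
          field_weight = tf * idf(doc_freq, max_docs) * field_norm
          return query_weight * field_weight

      # Values copied from the explain tree for doc 3109 above.
      data    = term_score(4.0, 5088, 44218, 0.036818076, 0.03125)
      germany = term_score(2.0,  308, 44218, 0.036818076, 0.03125)
      score   = (data + germany) * (2 / 9)          # coord(2/9): 2 of 9 clauses match
      print(data, germany, score)                   # ~0.023008, ~0.0578676, ~0.017972346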
    
    Abstract
    This proposal presents the methodology of indexing with images suggested by De Brito and Caribé (2015). The imagetic model is used as a mechanism compatible with FRSAD for global sharing and use of subject data, both within the library sector and beyond. The conceptual model of imagetic indexing shows how images are related to topics, and 'key-images' are interpreted as nomens to implement the FRSAD model. Indexing with images consists of using images instead of keywords or descriptors to represent and organize information. Implementing image-based navigation in OPACs offers multiple advantages that derive from rethinking the OPAC anew, since the aim is to share concepts within the subject authority data. Images, carrying linguistic objects, permeate inter-social and cultural concepts. In practice this includes translated metadata, symmetrical multilingual thesauri, or any traditional indexing tools. The iOPAC embodies efforts focused on conceptual levels, as expected from librarians. Imaged interfaces are more intuitive, since users do not need specific training for information retrieval; they offer easier comprehension of indexing codes, larger conceptual portability of descriptors (as images), and better interoperability between discourse codes and indexing competences, positively affecting social and cultural interoperability. The imagetic methodology opens R&D fields for more suitable interfaces that take into consideration users with specific needs such as deafness and illiteracy. The methodology raises questions about the paradigm of the primacy of orality in information systems and paves the way to legitimizing multiple perspectives in document indexing by suggesting a more universal communication system based on images. Interdisciplinary competencies in neuroscience, linguistics, and information science would be desirable for further investigations into the nature of cognitive processes in information organization and classification while developing assistive KOS for individuals with communication problems, such as autism and deafness.
    Source
    Proceedings of the 15th European Networked Knowledge Organization Systems Workshop (NKOS 2016) co-located with the 20th International Conference on Theory and Practice of Digital Libraries 2016 (TPDL 2016), Hannover, Germany, September 9, 2016. Ed. by Philipp Mayr et al. [http://ceur-ws.org/Vol-1676/=urn:nbn:de:0074-1676-5]
  2. Edmunds, J.: Roadmap to nowhere : BIBFLOW, BIBFRAME, and linked data for libraries (2017) 0.02
    0.017613193 = product of:
      0.079259366 = sum of:
        0.036990993 = weight(_text_:bibliographic in 3523) [ClassicSimilarity], result of:
          0.036990993 = score(doc=3523,freq=2.0), product of:
            0.14333439 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.036818076 = queryNorm
            0.2580748 = fieldWeight in 3523, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.046875 = fieldNorm(doc=3523)
        0.042268377 = weight(_text_:data in 3523) [ClassicSimilarity], result of:
          0.042268377 = score(doc=3523,freq=6.0), product of:
            0.11642061 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.036818076 = queryNorm
            0.3630661 = fieldWeight in 3523, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=3523)
      0.22222222 = coord(2/9)
    
    Abstract
    On December 12, 2016, Carl Stahmer and MacKenzie Smith presented at the CNI Members Fall Meeting about the BIBFLOW project, self-described on Twitter as "a two-year project of the UC Davis University Library and Zepheira investigating the future of library technical services." In her opening remarks, Ms. Smith, University Librarian at UC Davis, stated that one of the goals of the project was to devise a roadmap "to get from where we are today, which is kind of the 1970s with a little lipstick on it, to 2020, which is where we're going to be very soon." The notion that where libraries are today is somehow behind the times is one of the commonly heard rationales behind a move to linked data. Stated more precisely:
      - Libraries devote considerable time and resources to producing high-quality bibliographic metadata
      - This metadata is stored in unconnected silos
      - This metadata is in a format (MARC) that is incompatible with technologies of the emerging Semantic Web
      - The visibility of library metadata is diminished as a result of the two points above
    Are these assertions true? If yes, is linked data the solution?
  3. Edmunds, J.: Zombrary apocalypse!? : RDA, LRM, and the death of cataloging (2017) 0.02
    0.01703892 = product of:
      0.07667515 = sum of:
        0.060406037 = weight(_text_:bibliographic in 3818) [ClassicSimilarity], result of:
          0.060406037 = score(doc=3818,freq=12.0), product of:
            0.14333439 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.036818076 = queryNorm
            0.42143437 = fieldWeight in 3818, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.03125 = fieldNorm(doc=3818)
        0.016269106 = weight(_text_:data in 3818) [ClassicSimilarity], result of:
          0.016269106 = score(doc=3818,freq=2.0), product of:
            0.11642061 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.036818076 = queryNorm
            0.1397442 = fieldWeight in 3818, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03125 = fieldNorm(doc=3818)
      0.22222222 = coord(2/9)
    
    Abstract
    A brochure on RDA issued in 2010 includes the statements that "RDA goes beyond earlier cataloguing codes in that it provides guidelines on cataloguing digital resources and a stronger emphasis on helping users find, identify, select, and obtain the information they want. RDA also supports clustering of bibliographic records to show relationships between works and their creators. This important new feature makes users more aware of a work's different editions, translations, or physical formats - an exciting development." Setting aside the fact that the author(s) of these statements and I differ on the definition of exciting, their claims are, at best, dubious. There is no evidence, empirical or anecdotal, that bibliographic records created using RDA are any better than records created using AACR2 (or AACR, for that matter) in "helping users find, identify, select, and obtain the information they want." The claim is especially unfounded in the context of the current discovery ecosystem, in which users are perfectly capable of finding, identifying, selecting, and obtaining information with absolutely no assistance from libraries or the bibliographic data libraries create.
    Equally fallacious is the statement that support for the "clustering of bibliographic records to show relationships between works and their creators" is an "important new feature" of RDA. AACR2 bibliographic records and the systems housing them can, did, and do show such relationships. Finally, whether users want or care to be made "more aware of a work's different editions, translations, or physical formats" is debatable. As an aim, it sounds less like what a user wants and more like what a cataloging librarian thinks a user should want. As Amanda Cossham writes in her recently issued doctoral thesis: "The explicit focus on user needs in the FRBR model, the International Cataloguing Principles, and RDA: Resource Description and Access does not align well with the ways that users use, understand, and experience library catalogues nor with the ways that they understand and experience the wider information environment. User tasks, as constituted in the FRBR model and RDA, are insufficient to meet users' needs." (p. 11, emphasis in the original)
    The point of this paper is not to critique RDA (a futile task, since RDA is here to stay), but to make plain that its claim to be a solution to the challenge(s) of bibliographic description in the Internet Age is unfounded, and, secondarily, to explain why such wild claims continue to be advanced and go unchallenged by the rank and file of career catalogers.
  4. Godby, C.J.; Young, J.A.; Childress, E.: ¬A repository of metadata crosswalks (2004) 0.02
    0.016889969 = product of:
      0.15200971 = sum of:
        0.15200971 = weight(_text_:readable in 1155) [ClassicSimilarity], result of:
          0.15200971 = score(doc=1155,freq=4.0), product of:
            0.2262076 = queryWeight, product of:
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.036818076 = queryNorm
            0.67199206 = fieldWeight in 1155, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1155)
      0.11111111 = coord(1/9)
    
    Abstract
    This paper proposes a model for metadata crosswalks that associates three pieces of information: the crosswalk, the source metadata standard, and the target metadata standard, each of which may have a machine-readable encoding and human-readable description. The crosswalks are encoded as METS records that are made available to a repository for processing by search engines, OAI harvesters, and custom-designed Web services. The METS object brings together all of the information required to access and interpret crosswalks and represents a significant improvement over previously available formats. But it raises questions about how best to describe these complex objects and exposes gaps that must eventually be filled in by the digital library community.
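    The three-way association described above (a crosswalk plus its source and target standards, each carrying a machine-readable encoding and a human-readable description) can be pictured as a small data structure. A hypothetical Python sketch of that model, not of the METS encoding itself; all names and URLs are illustrative:

      from dataclasses import dataclass

      @dataclass
      class MetadataStandard:
          name: str              # e.g. "MARC21" or "Dublin Core"
          machine_readable: str  # URL of a schema (XSD, DTD, ...)
          human_readable: str    # URL of the published documentation

      @dataclass
      class Crosswalk:
          source: MetadataStandard
          target: MetadataStandard
          machine_readable: str  # e.g. an XSLT that executes the mapping
          human_readable: str    # prose description of the mapping rules

      marc = MetadataStandard("MARC21", "https://example.org/marc21.xsd",
                              "https://example.org/marc21.html")
      dc = MetadataStandard("Dublin Core", "https://example.org/dc.xsd",
                            "https://example.org/dc.html")
      marc_to_dc = Crosswalk(marc, dc, "https://example.org/marc2dc.xsl",
                             "Maps MARC bibliographic fields to the 15 DC elements.")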
  5. Tillett, B.B.: RDA, or, The long journey of the catalog to the digital age (2016) 0.02
    0.015917132 = product of:
      0.071627095 = sum of:
        0.04315616 = weight(_text_:bibliographic in 2945) [ClassicSimilarity], result of:
          0.04315616 = score(doc=2945,freq=2.0), product of:
            0.14333439 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.036818076 = queryNorm
            0.30108726 = fieldWeight in 2945, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2945)
        0.028470935 = weight(_text_:data in 2945) [ClassicSimilarity], result of:
          0.028470935 = score(doc=2945,freq=2.0), product of:
            0.11642061 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.036818076 = queryNorm
            0.24455236 = fieldWeight in 2945, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2945)
      0.22222222 = coord(2/9)
    
    Abstract
    RDA was created in response to complaints about the Anglo-American Cataloguing Rules, especially the call for a more international, principle-based content standard that takes the perspective of the conceptual models of FRBR (Functional Requirements for Bibliographic Records) and FRAD (Functional Requirements for Authority Data). The past and ongoing process of continuous improvement to RDA runs through the Joint Steering Committee for Development of RDA (known as the JSC, recently renamed the RDA Steering Committee, or RSC), with the aim of making RDA even more international and principle-based.
  6. Forero, D.; Peterson, N.; Hamilton, A.: Building an institutional author search tool (2019) 0.02
    0.015917132 = product of:
      0.071627095 = sum of:
        0.04315616 = weight(_text_:bibliographic in 5441) [ClassicSimilarity], result of:
          0.04315616 = score(doc=5441,freq=2.0), product of:
            0.14333439 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.036818076 = queryNorm
            0.30108726 = fieldWeight in 5441, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5441)
        0.028470935 = weight(_text_:data in 5441) [ClassicSimilarity], result of:
          0.028470935 = score(doc=5441,freq=2.0), product of:
            0.11642061 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.036818076 = queryNorm
            0.24455236 = fieldWeight in 5441, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5441)
      0.22222222 = coord(2/9)
    
    Abstract
    Ability to collect time-specific lists of faculty publications has become increasingly important for academic departments. At OHSU, publication lists had been retrieved manually by a librarian who conducted literature searches in bibliographic databases. These searches were complicated and time consuming, and the results were large and difficult to assess for accuracy. The OHSU library has built an open web page that allows novices to make very sophisticated institution-specific queries. The tool frees up library staff, provides users with an easy way of retrieving reliable local publication information from PubMed, and gives more sophisticated users an opportunity to modify the algorithm or dive into the data to better understand nuances from a strong jumping-off point.
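    The core of such a tool is an institution-specific query against PubMed's public E-utilities interface. A minimal Python sketch; the affiliation string and date range are illustrative, and the production algorithm described above is certainly more refined:

      import json
      import urllib.parse
      import urllib.request

      ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

      def pubmed_affiliation_search(affiliation, start_year, end_year, retmax=200):
          """Return (hit count, PubMed IDs) for an affiliation and date range."""
          term = (f"{affiliation}[Affiliation] AND "
                  f'("{start_year}"[PDAT] : "{end_year}"[PDAT])')
          query = urllib.parse.urlencode(
              {"db": "pubmed", "term": term, "retmode": "json", "retmax": retmax})
          with urllib.request.urlopen(f"{ESEARCH}?{query}") as resp:
              result = json.load(resp)["esearchresult"]
          return int(result["count"]), result["idlist"]

      count, pmids = pubmed_affiliation_search(
          "Oregon Health and Science University", 2018, 2019)
      print(count, pmids[:5])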
  7. Mitchell, J.S.; Zeng, M.L.; Zumer, M.: Modeling classification systems in multicultural and multilingual contexts (2012) 0.02
    0.015549123 = product of:
      0.069971055 = sum of:
        0.048807316 = weight(_text_:data in 1967) [ClassicSimilarity], result of:
          0.048807316 = score(doc=1967,freq=8.0), product of:
            0.11642061 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.036818076 = queryNorm
            0.4192326 = fieldWeight in 1967, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=1967)
        0.02116374 = product of:
          0.04232748 = sum of:
            0.04232748 = weight(_text_:22 in 1967) [ClassicSimilarity], result of:
              0.04232748 = score(doc=1967,freq=4.0), product of:
                0.12893063 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036818076 = queryNorm
                0.32829654 = fieldWeight in 1967, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1967)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Abstract
    This paper reports on the second part of an initiative of the authors on researching classification systems with the conceptual model defined by the Functional Requirements for Subject Authority Data (FRSAD) final report. In an earlier study, the authors explored whether the FRSAD conceptual model could be extended beyond subject authority data to model classification data. The focus of the current study is to determine if classification data modeled using FRSAD can be used to solve real-world discovery problems in multicultural and multilingual contexts. The paper discusses the relationships between entities (same type or different types) in the context of classification systems that involve multiple translations and/or multicultural implementations. Results of two case studies are presented in detail: (a) two instances of the DDC (DDC 22 in English, and the Swedish-English mixed translation of DDC 22), and (b) the Chinese Library Classification. The use cases of conceptual models in practice are also discussed.
  8. Dousa, T.M.: E. Wyndham Hulme's classification of the attributes of books : On an early model of a core bibliographical entity (2017) 0.01
    0.01457565 = product of:
      0.06559043 = sum of:
        0.049321324 = weight(_text_:bibliographic in 3859) [ClassicSimilarity], result of:
          0.049321324 = score(doc=3859,freq=8.0), product of:
            0.14333439 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.036818076 = queryNorm
            0.34409973 = fieldWeight in 3859, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.03125 = fieldNorm(doc=3859)
        0.016269106 = weight(_text_:data in 3859) [ClassicSimilarity], result of:
          0.016269106 = score(doc=3859,freq=2.0), product of:
            0.11642061 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.036818076 = queryNorm
            0.1397442 = fieldWeight in 3859, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03125 = fieldNorm(doc=3859)
      0.22222222 = coord(2/9)
    
    Abstract
    Modelling bibliographical entities is a prominent activity within knowledge organization today. Current models of bibliographic entities, such as Functional Requirements for Bibliographic Records (FRBR) and the Bibliographic Framework (BIBFRAME), take inspiration from data-modelling methods developed by computer scientists from the mid-1970s on. Thus, it would seem that the modelling of bibliographic entities is an activity of very recent vintage. However, it is possible to find examples of bibliographical models from earlier periods of knowledge organization. The purpose of this paper is to draw attention to one such model, outlined by the early 20th-century British classification theorist E. Wyndham Hulme in his essay on "Principles of Book Classification" (1911-1912). There, Hulme set forth a classification of various attributes by which books can conceivably be classified. These he first divided into accidental and inseparable attributes. Accidental attributes were subdivided into edition-level and copy-level attributes, and inseparable attributes into physical and non-physical attributes. Comparison of Hulme's classification of attributes with those of FRBR and BIBFRAME 2.0 reveals that the different classes of attributes in Hulme's classification correspond to groups of attributes associated with different bibliographical entities in those models. These later models assume the existence of different bibliographic entities in an abstraction hierarchy among which attributes are distributed, whereas Hulme posited only a single entity, the book, whose various aspects he clustered into different classes of attributes. Thus, Hulme's model offers an interesting alternative to current assumptions about how to conceptualize the relationship between attributes and entities in the bibliographical universe.
  9. Pitti, D.V.: Encoded Archival Description : an introduction and overview (1999) 0.01
    0.013643255 = product of:
      0.06139465 = sum of:
        0.036990993 = weight(_text_:bibliographic in 1152) [ClassicSimilarity], result of:
          0.036990993 = score(doc=1152,freq=2.0), product of:
            0.14333439 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.036818076 = queryNorm
            0.2580748 = fieldWeight in 1152, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.046875 = fieldNorm(doc=1152)
        0.024403658 = weight(_text_:data in 1152) [ClassicSimilarity], result of:
          0.024403658 = score(doc=1152,freq=2.0), product of:
            0.11642061 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.036818076 = queryNorm
            0.2096163 = fieldWeight in 1152, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=1152)
      0.22222222 = coord(2/9)
    
    Abstract
    Encoded Archival Description (EAD) is an emerging standard used internationally in an increasing number of archives and manuscripts libraries to encode data describing corporate records and personal papers. The individual descriptions are variously called finding aids, guides, handlists, or catalogs. While archival description shares many objectives with bibliographic description, it differs from it in several essential ways. From its inception, EAD was based on SGML, and, with the release of EAD version 1.0 in 1998, it is also compliant with XML. EAD was, and continues to be, developed by the archival community. While development was initiated in the United States, international interest and contribution are increasing. EAD is currently administered and maintained jointly by the Society of American Archivists and the United States Library of Congress. Developers are currently exploring ways to internationalize the administration and maintenance of EAD to reflect and represent the expanding base of users.
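    For orientation, a minimal (invented) finding-aid fragment and how a program might pull the core description out of it. The element names (ead, archdesc, did, unittitle, unitid, unitdate) are standard EAD; the record content is hypothetical:

      import xml.etree.ElementTree as ET

      FINDING_AID = """
      <ead>
        <eadheader><eadid>us-xx-0001</eadid></eadheader>
        <archdesc level="collection">
          <did>
            <unittitle>Jane Doe Papers</unittitle>
            <unitid>MSS 001</unitid>
            <unitdate>1902-1948</unitdate>
          </did>
        </archdesc>
      </ead>
      """

      root = ET.fromstring(FINDING_AID.strip())
      did = root.find("./archdesc/did")  # core identification of the materials
      print(did.findtext("unittitle"), "|", did.findtext("unitid"), "|",
            did.findtext("unitdate"))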
  10. Rockelle Strader, C.: Cataloging to support information literacy : the IFLA Library Reference Model's user tasks in the context of the Framework for Information Literacy for Higher Education (2021) 0.01
    0.013643255 = product of:
      0.06139465 = sum of:
        0.036990993 = weight(_text_:bibliographic in 713) [ClassicSimilarity], result of:
          0.036990993 = score(doc=713,freq=2.0), product of:
            0.14333439 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.036818076 = queryNorm
            0.2580748 = fieldWeight in 713, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.046875 = fieldNorm(doc=713)
        0.024403658 = weight(_text_:data in 713) [ClassicSimilarity], result of:
          0.024403658 = score(doc=713,freq=2.0), product of:
            0.11642061 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.036818076 = queryNorm
            0.2096163 = fieldWeight in 713, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=713)
      0.22222222 = coord(2/9)
    
    Abstract
    Cataloging practices, as exemplified by the five user tasks of the IFLA Library Reference Model, can support information literacy practices. The six frames of the Framework for Information Literacy for Higher Education are used as lenses to examine the user tasks. Two themes emerge from this examination: context matters, and catalogers must tailor bibliographic descriptions to meet users' expectations and information needs. Catalogers need to solicit feedback from various user communities to reform cataloging practices to remain current and viable. Such conversations will enrich the catalog and enhance (reclaim?) its position as a primary tool for research and learning. Supplemental data for this article is available online at https://doi.org/10.1080/01639374.2021.1939828.
  11. Behrens, R.; Aliverti, C.; Schaffner, V.: RDA in Germany, Austria and German-speaking Switzerland : a new standard not only for libraries (2016) 0.01
    0.013639516 = product of:
      0.12275565 = sum of:
        0.12275565 = weight(_text_:germany in 2954) [ClassicSimilarity], result of:
          0.12275565 = score(doc=2954,freq=4.0), product of:
            0.21956629 = queryWeight, product of:
              5.963546 = idf(docFreq=308, maxDocs=44218)
              0.036818076 = queryNorm
            0.5590824 = fieldWeight in 2954, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.963546 = idf(docFreq=308, maxDocs=44218)
              0.046875 = fieldNorm(doc=2954)
      0.11111111 = coord(1/9)
    
    Abstract
    The library community in Germany, Austria and German-speaking Switzerland achieved a common goal at the end of 2015. After more than two years of intensive preparation, the international standard RDA was implemented and the practical work has now started. The article describes the project in terms of the political and organizational situation in the three countries, and points out the objectives which have been achieved as well as the work which is still outstanding. An overview is given of the initial efforts to align special materials with RDA in the German-speaking countries, and the tasks associated with the specific requirements arising from the multilingual nature of Switzerland are described. Furthermore, the article reports on the current strategic developments in the international RDA committees like the RDA Steering Committee (RSC) and the European RDA Interest Group (EURIG).
  12. Kaiser, M.; Lieder, H.J.; Majcen, K.; Vallant, H.: New ways of sharing and using authority information : the LEAF project (2003) 0.01
    0.013089778 = product of:
      0.058904 = sum of:
        0.02273677 = weight(_text_:data in 1166) [ClassicSimilarity], result of:
          0.02273677 = score(doc=1166,freq=10.0), product of:
            0.11642061 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.036818076 = queryNorm
            0.19529848 = fieldWeight in 1166, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1166)
        0.03616723 = weight(_text_:germany in 1166) [ClassicSimilarity], result of:
          0.03616723 = score(doc=1166,freq=2.0), product of:
            0.21956629 = queryWeight, product of:
              5.963546 = idf(docFreq=308, maxDocs=44218)
              0.036818076 = queryNorm
            0.16472124 = fieldWeight in 1166, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.963546 = idf(docFreq=308, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1166)
      0.22222222 = coord(2/9)
    
    Abstract
    This article presents an overview of the LEAF project (Linking and Exploring Authority Files), which has set out to provide a framework for international, collaborative work in the sector of authority data with respect to authority control. Elaborating the virtues of authority control in today's Web environment is an almost futile exercise, since so much has been said and written about it in the last few years. The World Wide Web is generally understood to be poorly structured, both with regard to content and to locating required information. Highly structured databases might be viewed as small islands of precision within this chaotic environment. Though the Web in general or any particular structured database would greatly benefit from increased authority control, it should be noted that our following considerations only refer to authority control with regard to databases of "memory institutions" (i.e., libraries, archives, and museums). Moreover, when talking about authority records, we exclusively refer to personal name authority records that describe a specific person. Although different types of authority records could indeed be used in similar ways to the ones presented in this article, discussing those different types is outside the scope of both the LEAF project and this article. Personal name authority records, like all other "authorities", are maintained as separate records and linked to various kinds of descriptive records. Name authority records are usually either kept in independent databases or in separate tables in the database containing the descriptive records. This practice points to a crucial benefit: by linking any number of descriptive records to an authorized name record, the records related to this entity are collocated in the database. Variant forms of the authorized name are referenced in the authority records and thus ensure the consistency of the database while enabling search and retrieval operations that produce accurate results. On one hand, authority control may be viewed as a positive prerequisite of a consistent catalogue; on the other, the creation of new authority records is a very time-consuming and expensive undertaking. As a consequence, various models of providing access to existing authority records have emerged: the Library of Congress and the French National Library (Bibliothèque nationale de France), for example, make their authority records available to all via a web-based search service. In Germany, the Personal Name Authority File (PND, Personennamendatei) maintained by the German National Library (Die Deutsche Bibliothek, Frankfurt/Main) offers a different approach to shared access: within a closed network, participating institutions have online access to their pooled data. The number of recent projects and initiatives that have addressed the issue of authority control in one way or another is considerable. Two important current initiatives should be mentioned here: the Name Authority Cooperative (NACO) and the Virtual International Authority File (VIAF).
    NACO was established in 1976 and is hosted by the Library of Congress. At the beginning of 2003, nearly 400 institutions were involved in this undertaking, including 43 institutions from outside the United States. Despite the enormous success of NACO and the impressive annual growth of the initiative, there are requirements for participation that form an obstacle for many institutions: they have to follow the Anglo-American Cataloguing Rules (AACR2) and employ the MARC21 data format. Participating institutions also have to belong to either OCLC (Online Computer Library Center) or RLG (Research Libraries Group) in order to be able to contribute records, and they have to provide a specified minimum number of authority records per year. A recent proof-of-concept project of the Library of Congress, OCLC, and the German National Library, the Virtual International Authority File (VIAF), will, in its first phase, test automatic linking of the records of the Library of Congress Name Authority File (LCNAF) and the German Personal Name Authority File by using matching algorithms and software developed by OCLC. The results are expected to form the basis of a "Virtual International Authority File". The project will then test the maintenance of the virtual authority file by employing the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) to harvest the metadata for new, updated, and deleted records. When using the "Virtual International Authority File", a cataloguer will be able to check the system to see whether the authority record he wants to establish already exists. The final phase of the project will test possibilities for displaying records in the preferred language and script of the end user. Currently, there are still some clear limitations associated with the ways in which authority records are used by memory institutions. One of the main problems has to do with limited access: generally only large institutions or those that are part of a library network have unlimited online access to permanently updated authority records. Smaller institutions outside these networks usually have to fall back on less efficient ways of obtaining authority data, or have no access at all. Cross-domain sharing of authority data between libraries, archives, museums and other memory institutions simply does not happen at present. Public users are, by and large, not even aware that such things as name authority records exist and are excluded from access to these information resources.
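    The OAI-PMH harvesting step described above looks roughly the same against any OAI-PMH endpoint. A minimal Python sketch, without resumption-token paging or error handling; the base URL is left to the caller:

      import urllib.parse
      import urllib.request
      import xml.etree.ElementTree as ET

      OAI_NS = {"oai": "http://www.openarchives.org/OAI/2.0/"}

      def list_identifiers(base_url, metadata_prefix="oai_dc", from_date=None):
          """Yield (identifier, status) pairs announced by an OAI-PMH repository."""
          params = {"verb": "ListIdentifiers", "metadataPrefix": metadata_prefix}
          if from_date:
              params["from"] = from_date  # incremental harvest: changes since this date
          url = f"{base_url}?{urllib.parse.urlencode(params)}"
          with urllib.request.urlopen(url) as resp:
              tree = ET.parse(resp)
          for header in tree.iterfind(".//oai:header", OAI_NS):
              # deleted records carry status="deleted" on their header
              yield (header.findtext("oai:identifier", namespaces=OAI_NS),
                     header.get("status"))

      # Example against a public endpoint (illustrative):
      # for oai_id, status in list_identifiers("http://export.arxiv.org/oai2"):
      #     print(oai_id, status)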
  13. Crane, G.; Jones, A.: Text, information, knowledge and the evolving record of humanity (2006) 0.01
    0.013049919 = product of:
      0.058724634 = sum of:
        0.038388252 = weight(_text_:readable in 1182) [ClassicSimilarity], result of:
          0.038388252 = score(doc=1182,freq=2.0), product of:
            0.2262076 = queryWeight, product of:
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.036818076 = queryNorm
            0.16970363 = fieldWeight in 1182, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1182)
        0.020336384 = weight(_text_:data in 1182) [ClassicSimilarity], result of:
          0.020336384 = score(doc=1182,freq=8.0), product of:
            0.11642061 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.036818076 = queryNorm
            0.17468026 = fieldWeight in 1182, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1182)
      0.22222222 = coord(2/9)
    
    Abstract
    Consider a sentence such as "the current price of tea in China is 35 cents per pound." In a library with millions of books we might find many statements of the above form that we could capture today with relatively simple rules: rather than pursuing every variation of a statement, programs can wait, like predators at a water hole, for their informational prey to reappear in a standard linguistic pattern. We can make inferences from sentences such as "NAME1 born at NAME2 in DATE" that NAME1 more likely than not represents a person and NAME2 a place, and then convert the statement into a proposition about a person born at a given place and time. The changing price of tea in China, pedestrian birth and death dates, or other basic statements may not be truth and beauty in the Phaedrus, but a digital library that could plot the prices of various commodities in different markets over time, plot the various lifetimes of individuals, or extract and classify many events would be very useful. Services such as the Syllabus Finder and H-Bot (which Dan Cohen describes elsewhere in this issue of D-Lib) represent examples of information extraction already in use. H-Bot, in particular, builds on our evolving ability to extract information from very large corpora such as the billions of web pages available through the Google API. Aside from identifying higher order statements, however, users also want to search and browse named entities: they want to read about "C. P. E. Bach" rather than his father "Johann Sebastian", or about "Cambridge, Maryland" without hearing about "Cambridge, Massachusetts", Cambridge in the UK, or any of the other Cambridges scattered around the world. Named entity identification is a well-established area with an ongoing literature. The Natural Language Processing Research Group at the University of Sheffield has developed its open source General Architecture for Text Engineering (GATE) for years, while IBM's Unstructured Information Management Architecture (UIMA) is "available as open source software to provide a common foundation for industry and academia." Powerful tools are thus freely available, and more demanding users can draw upon published literature to develop their own systems. Major search engines such as Google and Yahoo also integrate increasingly sophisticated tools to categorize and identify places. The software resources are rich and expanding. The reference works on which these systems depend, however, are ill-suited for historical analysis. First, simple gazetteers and similar authority lists quickly grow too big for useful information extraction. They provide us with potential entities against which to match textual references, but existing electronic reference works assume that human readers can use their knowledge of geography and of the immediate context to pick the right Boston from the Bostons in the Getty Thesaurus of Geographic Names (TGN); but, with the crucial exception of geographic location, the TGN records do not provide any machine-readable clues: we cannot tell which Bostons are large or small. If we are analyzing a document published in 1818, we cannot filter out those places that did not yet exist or that had different names: "Jefferson Davis" is not the name of a parish in Louisiana (tgn,2000880) or a county in Mississippi (tgn,2001118) until after the Civil War.
    Although the Alexandria Digital Library provides far richer data than the TGN (5.9 vs. 1.3 million names), its added size lowers, rather than increases, the accuracy of most geographic name identification systems for historical documents: most of the extra 4.6 million names cover low frequency entities that rarely occur in any particular corpus. The TGN is sufficiently comprehensive to provide quite enough noise: we find place names that are used over and over (there are almost one hundred Washingtons) and semantically ambiguous (e.g., is Washington a person or a place?). Comprehensive knowledge sources emphasize recall but lower precision. We need data with which to determine which "Tribune" or "John Brown" a particular passage denotes. Secondly and paradoxically, our reference works may not be comprehensive enough. Human actors come and go over time. Organizations appear and vanish. Even places can change their names or vanish. The TGN does associate the obsolete name Siam with the nation of Thailand (tgn,1000142), but also with towns named Siam in Iowa (tgn,2035651), Tennessee (tgn,2101519), and Ohio (tgn,2662003). Prussia appears but as a general region (tgn,7016786), with no indication when or if it was a sovereign nation. And if places do point to the same object over time, that object may have very different significance over time: in the foundational works of Western historiography, Herodotus reminds us that the great cities of the past may be small today, and the small cities of today great tomorrow (Hdt. 1.5), while Thucydides stresses that we cannot estimate the past significance of a place by its appearance today (Thuc. 1.10). In other words, we need to know the population figures for the various Washingtons in 1870 if we are analyzing documents from 1870. The foundations have been laid for reference works that provide machine actionable information about entities at particular times in history. The Alexandria Digital Library Gazetteer Content Standard represents a sophisticated framework with which to create such resources: places can be associated with temporal information about their foundation (e.g., Washington, DC, founded on 16 July 1790), changes in names for the same location (e.g., Saint Petersburg to Leningrad and back again), population figures at various times and similar historically contingent data. But if we have the software and the data structures, we do not yet have substantial amounts of historical content such as plentiful digital gazetteers, encyclopedias, lexica, grammars and other reference works to illustrate many periods and, even if we do, those resources may not be in a useful form: raw OCR output of a complex lexicon or gazetteer may have so many errors and have captured so little of the underlying structure that the digital resource is useless as a knowledge base. Put another way, human beings are still much better at reading and interpreting the contents of page images than machines. While people, places, and dates are probably the most important core entities, we will find a growing set of objects that we need to identify and track across collections, and each of these categories of objects will require its own knowledge sources. The following section enumerates and briefly describes some existing categories of documents that we need to mine for knowledge. This brief survey focuses on the format of print sources (e.g., highly structured textual "database" vs. unstructured text) to illustrate some of the challenges involved in converting our published knowledge into semantically annotated, machine actionable form.
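    A toy version of the "predators at a water hole" idea from the opening of this abstract: wait for sentences of the shape "NAME1 born at NAME2 in DATE" and convert matches into propositions. The pattern and the example sentences are ours, not the authors':

      import re

      BORN_AT = re.compile(
          r"(?P<person>[A-Z][\w. ]+?) born at (?P<place>[A-Z][\w ]+?) in (?P<date>\d{4})")

      TEXT = ("Johann Sebastian Bach born at Eisenach in 1685. "
              "C. P. E. Bach born at Weimar in 1714.")

      for m in BORN_AT.finditer(TEXT):
          # NAME1 -> person, NAME2 -> place, DATE -> year
          print({"person": m.group("person").strip(),
                 "place": m.group("place").strip(),
                 "date": int(m.group("date"))})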
  14. Heflin, J.; Hendler, J.: Semantic interoperability on the Web (2000) 0.01
    0.012827373 = product of:
      0.05772318 = sum of:
        0.040263984 = weight(_text_:data in 759) [ClassicSimilarity], result of:
          0.040263984 = score(doc=759,freq=4.0), product of:
            0.11642061 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.036818076 = queryNorm
            0.34584928 = fieldWeight in 759, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=759)
        0.017459193 = product of:
          0.034918386 = sum of:
            0.034918386 = weight(_text_:22 in 759) [ClassicSimilarity], result of:
              0.034918386 = score(doc=759,freq=2.0), product of:
                0.12893063 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036818076 = queryNorm
                0.2708308 = fieldWeight in 759, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=759)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Abstract
    XML will have a profound impact on the way data is exchanged on the Internet. An important feature of this language is the separation of content from presentation, which makes it easier to select and/or reformat the data. However, due to the likelihood of numerous industry and domain specific DTDs, those who wish to integrate information will still be faced with the problem of semantic interoperability. In this paper we discuss why this problem is not solved by XML, and then discuss why the Resource Description Framework is only a partial solution. We then present the SHOE language, which we feel has many of the features necessary to enable a semantic web, and describe an existing set of tools that make it easy to use the language.
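    The interoperability gap is easy to reproduce: two DTDs can encode the same fact under different element names, and nothing in XML itself tells a program that the two tags are synonymous. A minimal Python illustration with invented tags:

      import xml.etree.ElementTree as ET

      doc_a = ET.fromstring("<book><author>Heflin, J.</author></book>")
      doc_b = ET.fromstring("<publication><creator>Heflin, J.</creator></publication>")

      print(doc_a.findtext("author"))  # "Heflin, J." - found
      print(doc_b.findtext("author"))  # None - tag-level matching fails across DTDs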
    Date
    11. 5.2013 19:22:18
  15. Miller, E.: ¬An introduction to the Resource Description Framework (1998) 0.01
    0.011943011 = product of:
      0.1074871 = sum of:
        0.1074871 = weight(_text_:readable in 1231) [ClassicSimilarity], result of:
          0.1074871 = score(doc=1231,freq=2.0), product of:
            0.2262076 = queryWeight, product of:
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.036818076 = queryNorm
            0.47517014 = fieldWeight in 1231, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1231)
      0.11111111 = coord(1/9)
    
    Abstract
    The Resource Description Framework (RDF) is an infrastructure that enables the encoding, exchange and reuse of structured metadata. RDF is an application of XML that imposes needed structural constraints to provide unambiguous methods of expressing semantics. RDF additionally provides a means for publishing both human-readable and machine-processable vocabularies designed to encourage the reuse and extension of metadata semantics among disparate information communities. The structural constraints RDF imposes to support the consistent encoding and exchange of standardized metadata provide for the interchangeability of separate packages of metadata defined by different resource description communities.
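    A minimal sketch of such a resource description using the third-party rdflib package; the resource URI is illustrative:

      from rdflib import Graph, Literal, URIRef
      from rdflib.namespace import DC

      # Describe one web resource with Dublin Core properties.
      g = Graph()
      doc = URIRef("http://www.example.org/intro-to-rdf")
      g.add((doc, DC.title,
             Literal("An introduction to the Resource Description Framework")))
      g.add((doc, DC.creator, Literal("Miller, E.")))
      print(g.serialize(format="turtle"))  # human-readable Turtle serialization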
  16. Hauff-Hartig, S.: Automatische Transkription von Videos : Fernsehen 3.0: Automatisierte Sentimentanalyse und Zusammenstellung von Kurzvideos mit hohem Aufregungslevel KI-generierte Metadaten: Von der Technologiebeobachtung bis zum produktiven Einsatz (2021) 0.01
    0.011664795 = product of:
      0.052491575 = sum of:
        0.032538213 = weight(_text_:data in 251) [ClassicSimilarity], result of:
          0.032538213 = score(doc=251,freq=2.0), product of:
            0.11642061 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.036818076 = queryNorm
            0.2794884 = fieldWeight in 251, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0625 = fieldNorm(doc=251)
        0.019953365 = product of:
          0.03990673 = sum of:
            0.03990673 = weight(_text_:22 in 251) [ClassicSimilarity], result of:
              0.03990673 = score(doc=251,freq=2.0), product of:
                0.12893063 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036818076 = queryNorm
                0.30952093 = fieldWeight in 251, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=251)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Date
    22. 5.2021 12:43:05
    Source
    Open Password. 2021, Nr.947 vom 14.07.2021 [https://www.password-online.de/?mailpoet_router&endpoint=view_in_browser&action=view&data=WzMxOCwiNjczMmIwMzRlMDdmIiwwLDAsMjg4LDFd]
  17. Hauff-Hartig, S.: Wissensrepräsentation durch RDF: Drei angewandte Forschungsbeispiele : Bitte recht vielfältig: Wie Wissensgraphen, Disco und FaBiO Struktur in Mangas und die Humanities bringen (2021) 0.01
    0.011664795 = product of:
      0.052491575 = sum of:
        0.032538213 = weight(_text_:data in 318) [ClassicSimilarity], result of:
          0.032538213 = score(doc=318,freq=2.0), product of:
            0.11642061 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.036818076 = queryNorm
            0.2794884 = fieldWeight in 318, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0625 = fieldNorm(doc=318)
        0.019953365 = product of:
          0.03990673 = sum of:
            0.03990673 = weight(_text_:22 in 318) [ClassicSimilarity], result of:
              0.03990673 = score(doc=318,freq=2.0), product of:
                0.12893063 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036818076 = queryNorm
                0.30952093 = fieldWeight in 318, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=318)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Date
    22. 5.2021 12:43:05
    Source
    Open Password. 2021, Nr.925 vom 21.05.2021 [https://www.password-online.de/?mailpoet_router&endpoint=view_in_browser&action=view&data=WzI5NSwiZDdlZGY4MTk0NWJhIiwwLDAsMjY1LDFd]
  18. Bearman, D.; Miller, E.; Rust, G.; Trant, J.; Weibel, S.: ¬A common model to support interoperable metadata : progress report on reconciling metadata requirements from the Dublin Core and INDECS/DOI communities (1999) 0.01
    0.011369381 = product of:
      0.051162213 = sum of:
        0.03082583 = weight(_text_:bibliographic in 1249) [ClassicSimilarity], result of:
          0.03082583 = score(doc=1249,freq=2.0), product of:
            0.14333439 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.036818076 = queryNorm
            0.21506234 = fieldWeight in 1249, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1249)
        0.020336384 = weight(_text_:data in 1249) [ClassicSimilarity], result of:
          0.020336384 = score(doc=1249,freq=2.0), product of:
            0.11642061 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.036818076 = queryNorm
            0.17468026 = fieldWeight in 1249, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1249)
      0.22222222 = coord(2/9)
    
    Abstract
    The Dublin Core metadata community and the INDECS/DOI community of authors, rights holders, and publishers are seeking common ground in the expression of metadata for information resources. Recent meetings at the 6th Dublin Core Workshop in Washington DC sketched out common models for semantics (informed by the requirements articulated in the IFLA Functional Requirements for the Bibliographic Record) and conventions for knowledge representation (based on the Resource Description Framework under development by the W3C). Further development of detailed requirements is planned by both communities in the coming months with the aim of fully representing the metadata needs of each. An open "Schema Harmonization" working group has been established to identify a common framework to support interoperability among these communities. The present document represents a starting point identifying historical developments and common requirements of these perspectives on metadata and charts a path for harmonizing their respective conceptual models. It is hoped that collaboration over the coming year will result in agreed semantic and syntactic conventions that will support a high degree of interoperability among these communities, ideally expressed in a single data model and using common, standard tools.
  19. Delsey, T.: ¬The Making of RDA (2016) 0.01
    0.010994892 = product of:
      0.04947701 = sum of:
        0.034511987 = weight(_text_:data in 2946) [ClassicSimilarity], result of:
          0.034511987 = score(doc=2946,freq=4.0), product of:
            0.11642061 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.036818076 = queryNorm
            0.29644224 = fieldWeight in 2946, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=2946)
        0.014965023 = product of:
          0.029930046 = sum of:
            0.029930046 = weight(_text_:22 in 2946) [ClassicSimilarity], result of:
              0.029930046 = score(doc=2946,freq=2.0), product of:
                0.12893063 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036818076 = queryNorm
                0.23214069 = fieldWeight in 2946, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2946)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Abstract
    The author revisits the development of RDA from its inception in 2005 through to its initial release in 2010. The development effort is set in the context of an evolving digital environment that was transforming both the production and dissemination of information resources and the technologies used to create, store, and access data describing those resources. The author examines the interplay between strategic commitments to align RDA with new conceptual models, emerging database structures, and metadata developments in allied communities, on the one hand, and compatibility with AACR2 legacy databases on the other. Aspects of the development effort examined include the structuring of RDA as a resource description language, organizing the new standard as a working tool, and refining guidelines and instructions for recording RDA data.
    Date
    17. 5.2016 19:22:40
  20. Jörs, B.: ¬Ein kleines Fach zwischen "Daten" und "Wissen" II : Anmerkungen zum (virtuellen) "16th International Symposium of Information Science" (ISI 2021", Regensburg) (2021) 0.01
    0.010598778 = product of:
      0.0476945 = sum of:
        0.035223648 = weight(_text_:data in 330) [ClassicSimilarity], result of:
          0.035223648 = score(doc=330,freq=6.0), product of:
            0.11642061 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.036818076 = queryNorm
            0.30255508 = fieldWeight in 330, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=330)
        0.012470853 = product of:
          0.024941705 = sum of:
            0.024941705 = weight(_text_:22 in 330) [ClassicSimilarity], result of:
              0.024941705 = score(doc=330,freq=2.0), product of:
                0.12893063 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036818076 = queryNorm
                0.19345059 = fieldWeight in 330, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=330)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Abstract
    Only information ethics, information literacy, and information assessment left? Yet precisely this sealing-off from other disciplines reinforces the isolation of the "small field" of information science within the scientific community. What remain as its last "independent" peripheral research areas are only those that Wolf Rauch, as keynote speaker, already named in his introductory, historical-genetic talk on the state of information science at ISI 2021: "If academic information science (at least in Europe) hardly stands a chance of pushing back to the forefront of development in the area of systems and applications, there remain fields in which its contribution will be urgently needed in the coming phase of development: information ethics, information literacy, information assessment" (Wolf Rauch: Was aus der Informationswissenschaft geworden ist; in: Thomas Schmidt; Christian Wolff (Eds.): Information between Data and Knowledge. Schriften zur Informationswissenschaft 74, Regensburg, 2021, pp. 20-22; see also the reception of Rauch's contribution by Johannes Elia Panskus, Was aus der Informationswissenschaft geworden ist. Sie ist in der Realität angekommen, in: Open Password, 17 March 2021). Is that all? Sobering.
    Content
    See also Part I: Open Password. 2021, Nr.946 vom 12. Juli 2021 [https://www.password-online.de/?mailpoet_router&endpoint=view_in_browser&action=view&data=WzMxNSwiM2MwMDJhZWIwZDQ0IiwwLDAsMjg1LDFd].
    Source
    Open Password. 2021, Nr.949 vom 19. Juli 2021 [https://www.password-online.de/?mailpoet_router&endpoint=view_in_browser&action=view&data=WzMxNywiMTFhMWNiMzkzYzUyIiwwLDAsMjg3LDFd]

Languages

  • e 202
  • d 89
  • i 3
  • a 1
  • f 1