Search (192 results, page 1 of 10)

  • Filter: theme_ss:"Information"
  1. Gödert, W.; Lepsky, K.: Informationelle Kompetenz : ein humanistischer Entwurf (2019) 0.21
    0.21083303 = product of:
      0.35138837 = sum of:
        0.08349173 = product of:
          0.25047517 = sum of:
            0.25047517 = weight(_text_:3a in 5955) [ClassicSimilarity], result of:
              0.25047517 = score(doc=5955,freq=2.0), product of:
                0.38200375 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04505818 = queryNorm
                0.65568775 = fieldWeight in 5955, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5955)
          0.33333334 = coord(1/3)
        0.25047517 = weight(_text_:2f in 5955) [ClassicSimilarity], result of:
          0.25047517 = score(doc=5955,freq=2.0), product of:
            0.38200375 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04505818 = queryNorm
            0.65568775 = fieldWeight in 5955, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5955)
        0.01742145 = product of:
          0.0348429 = sum of:
            0.0348429 = weight(_text_:data in 5955) [ClassicSimilarity], result of:
              0.0348429 = score(doc=5955,freq=2.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.24455236 = fieldWeight in 5955, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5955)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Footnote
    Review in: Philosophisch-ethische Rezensionen, 09.11.2019 (Jürgen Czogalla), at: https://philosophisch-ethische-rezensionen.de/rezension/Goedert1.html. In: B.I.T. online 23(2020) H.3, S.345-347 (W. Sühl-Strohmenger) [at: https://www.b-i-t-online.de/heft/2020-03-rezensionen.pdf]. In: Open Password Nr. 805, 14.08.2020 (H.-C. Hobohm) [at: https://www.password-online.de/?mailpoet_router&endpoint=view_in_browser&action=view&data=WzE0MywiOGI3NjZkZmNkZjQ1IiwwLDAsMTMxLDFd].
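The relevance breakdown shown for result 1 is Lucene ClassicSimilarity "explain" output. As a rough cross-check (a sketch assuming Lucene's classic TF-IDF formulas: tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm), the 0.25047517 term weight for `_text_:3a` can be reproduced from the numbers above:

```python
import math

# Values read from the explain output for doc 5955 above.
freq, doc_freq, max_docs = 2.0, 24, 44218
query_norm = 0.04505818
field_norm = 0.0546875

tf = math.sqrt(freq)                           # 1.4142135 = tf(freq=2.0)
idf = 1 + math.log(max_docs / (doc_freq + 1))  # 8.478011 = idf(docFreq=24, maxDocs=44218)
query_weight = idf * query_norm                # 0.38200375 = queryWeight
field_weight = tf * idf * field_norm           # 0.65568775 = fieldWeight
score = query_weight * field_weight            # 0.25047517 = weight(_text_:3a in 5955)

print(f"{score:.8f}")
```

The same arithmetic, with the per-document fieldNorm swapped in, reproduces every leaf weight in the trees on this page.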
  2. Donsbach, W.: Wahrheit in den Medien : über den Sinn eines methodischen Objektivitätsbegriffes (2001) 0.10
    
    Source
    Politische Meinung. 381(2001) Nr.1, S.65-74 [https://www.dgfe.de/fileadmin/OrdnerRedakteure/Sektionen/Sek02_AEW/KWF/Publikationen_Reihe_1989-2003/Band_17/Bd_17_1994_355-406_A.pdf]
  3. Malsburg, C. von der: ¬The correlation theory of brain function (1981) 0.10
    
    Source
    http://cogprints.org/1380/1/vdM_correlation.pdf
  4. Yu, L.; Fan, Z.; Li, A.: ¬A hierarchical typology of scholarly information units : based on a deduction-verification study (2020) 0.03
    
    Abstract
    Purpose: The purpose of this paper is to lay a theoretical foundation for identifying operational information units for library and information professional activities in the context of scholarly communication.
    Design/methodology/approach: The study adopts a deduction-verification approach to formulate a typology of units for scholarly information. It first deduces possible units from an existing conceptualization of information, which defines information as the combined product of data and meaning, and then tests the usefulness of these units via two empirical investigations, one with a group of scholarly papers and the other with a sample of scholarly information users.
    Findings: The results show that, on defining an information unit as a piece of information that is complete in both data and meaning, to such an extent that it remains meaningful to its target audience when retrieved and displayed independently in a database, it is then possible to formulate a hierarchical typology of units for scholarly information. The typology proposed in this study consists of three levels, which, in turn, consist of 1, 5 and 44 units, respectively.
    Research limitations/implications: The result of this study has theoretical implications on both the philosophical and conceptual levels: on the philosophical level, it hinges on, and reinforces, the objective view of information; on the conceptual level, it challenges the conceptualization of work by IFLA's Functional Requirements for Bibliographic Records and Library Reference Model but endorses that by the Library of Congress's BIBFRAME 2.0 model.
    Practical implications: It calls for reconsideration of existing operational units in a variety of library and information activities.
    Originality/value: The study strengthens the conceptual foundation of operational information units and brings to light the primacy of "one work" as an information unit and the possibility for it to be supplemented by smaller units.
    Date
    14. 1.2020 11:15:22
  5. Houston, R.D.; Harmon, E.G.: Re-envisioning the information concept : systematic definitions (2002) 0.02
    
    Abstract
    This paper suggests a framework and systematic definitions for 6 words commonly used in the field of information science: data, information, knowledge, wisdom, inspiration, and intelligence. We intend these definitions to lead to a quantification of information science, a quantification that will enable their measurement, manipulation, and prediction.
    Date
    22. 2.2007 18:56:23
    22. 2.2007 19:22:13
  6. Badia, A.: Data, information, knowledge : an information science analysis (2014) 0.02
    
    Abstract
    I analyze the text of an article that appeared in this journal in 2007 that published the results of a questionnaire in which a number of experts were asked to define the concepts of data, information, and knowledge. I apply standard information retrieval techniques to build a list of the most frequent terms in each set of definitions. I then apply information extraction techniques to analyze how the top terms are used in the definitions. As a result, I draw data-driven conclusions about the aggregate opinion of the experts. I contrast this with the original analysis of the data to provide readers with an alternative viewpoint on what the data tell us.
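The "most frequent terms" step described in this abstract can be sketched with the standard library. A hedged illustration: the sample definitions below are invented stand-ins for the questionnaire responses, not quotes from the article, and the stopword list is ad hoc:

```python
from collections import Counter
import re

# Hypothetical sample "definitions" standing in for the expert responses.
definitions = [
    "Data are raw symbols recorded from observation.",
    "Information is data endowed with meaning and context.",
    "Knowledge is information integrated into a framework for action.",
]

STOPWORDS = {"are", "is", "the", "a", "an", "and", "with", "for", "from", "into"}

def top_terms(texts, n=5):
    """Return the n most frequent non-stopword terms across texts."""
    counts = Counter(
        token
        for text in texts
        for token in re.findall(r"[a-z]+", text.lower())
        if token not in STOPWORDS
    )
    return counts.most_common(n)

print(top_terms(definitions))
```

On this toy sample, "data" and "information" surface at the top, mirroring the kind of aggregate term list the article builds per concept.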
    Date
    16. 6.2014 19:22:57
  7. Crane, G.; Jones, A.: Text, information, knowledge and the evolving record of humanity (2006) 0.02
    
    Abstract
    Consider a sentence such as "the current price of tea in China is 35 cents per pound." In a library with millions of books we might find many statements of the above form that we could capture today with relatively simple rules: rather than pursuing every variation of a statement, programs can wait, like predators at a water hole, for their informational prey to reappear in a standard linguistic pattern. We can make inferences from sentences such as "NAME1 born at NAME2 in DATE" that NAME1 more likely than not represents a person and NAME2 a place and then convert the statement into a proposition about a person born at a given place and time. The changing price of tea in China, pedestrian birth and death dates, or other basic statements may not be truth and beauty in the Phaedrus, but a digital library that could plot the prices of various commodities in different markets over time, plot the various lifetimes of individuals, or extract and classify many events would be very useful. Services such as the Syllabus Finder and H-Bot (which Dan Cohen describes elsewhere in this issue of D-Lib) represent examples of information extraction already in use. H-Bot, in particular, builds on our evolving ability to extract information from very large corpora such as the billions of web pages available through the Google API. Aside from identifying higher order statements, however, users also want to search and browse named entities: they want to read about "C. P. E. Bach" rather than his father "Johann Sebastian" or about "Cambridge, Maryland", without hearing about "Cambridge, Massachusetts", Cambridge in the UK or any of the other Cambridges scattered around the world. Named entity identification is a well-established area with an ongoing literature.
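The "simple rules" idea in this passage (wait for a statement to recur in a standard linguistic pattern, then convert it into a structured proposition) can be sketched with a regular expression. The pattern and the sample sentence are illustrative, not taken from the article:

```python
import re

# One pattern standing in for the "NAME1 born at NAME2 in DATE" rule family.
BORN_PATTERN = re.compile(
    r"(?P<person>[A-Z][\w. ]+?) born at (?P<place>[A-Z][\w ]+?) in (?P<date>\d{4})"
)

def extract_births(text):
    """Convert 'NAME1 born at NAME2 in DATE' sentences into propositions."""
    return [
        {"person": m["person"], "place": m["place"], "date": m["date"]}
        for m in BORN_PATTERN.finditer(text)
    ]

sample = "C. P. E. Bach born at Weimar in 1714."
print(extract_births(sample))
```

Real systems layer many such patterns and then face exactly the disambiguation problems the rest of the abstract describes.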
The Natural Language Processing Research Group at the University of Sheffield has developed its open source General Architecture for Text Engineering (GATE) for years, while IBM's Unstructured Information Management Architecture (UIMA) is "available as open source software to provide a common foundation for industry and academia." Powerful tools are thus freely available and more demanding users can draw upon published literature to develop their own systems. Major search engines such as Google and Yahoo also integrate increasingly sophisticated tools to categorize and identify places. The software resources are rich and expanding. The reference works on which these systems depend, however, are ill-suited for historical analysis. First, simple gazetteers and similar authority lists quickly grow too big for useful information extraction. They provide us with potential entities against which to match textual references, but existing electronic reference works assume that human readers can use their knowledge of geography and of the immediate context to pick the right Boston from the Bostons in the Getty Thesaurus of Geographic Names (TGN), but, with the crucial exception of geographic location, the TGN records do not provide any machine readable clues: we cannot tell which Bostons are large or small. If we are analyzing a document published in 1818, we cannot filter out those places that did not yet exist or that had different names: "Jefferson Davis" is not the name of a parish in Louisiana (tgn,2000880) or a county in Mississippi (tgn,2001118) until after the Civil War.
    Although the Alexandria Digital Library provides far richer data than the TGN (5.9 vs. 1.3 million names), its added size lowers, rather than increases, the accuracy of most geographic name identification systems for historical documents: most of the extra 4.6 million names cover low frequency entities that rarely occur in any particular corpus. The TGN is sufficiently comprehensive to provide quite enough noise: we find place names that are used over and over (there are almost one hundred Washingtons) and semantically ambiguous (e.g., is Washington a person or a place?). Comprehensive knowledge sources emphasize recall but lower precision. We need data with which to determine which "Tribune" or "John Brown" a particular passage denotes. Secondly and paradoxically, our reference works may not be comprehensive enough. Human actors come and go over time. Organizations appear and vanish. Even places can change their names or vanish. The TGN does associate the obsolete name Siam with the nation of Thailand (tgn,1000142) - but also with towns named Siam in Iowa (tgn,2035651), Tennessee (tgn,2101519), and Ohio (tgn,2662003). Prussia appears but as a general region (tgn,7016786), with no indication when or if it was a sovereign nation. And if places do point to the same object over time, that object may have very different significance over time: in the foundational works of Western historiography, Herodotus reminds us that the great cities of the past may be small today, and the small cities of today great tomorrow (Hdt. 1.5), while Thucydides stresses that we cannot estimate the past significance of a place by its appearance today (Thuc. 1.10). In other words, we need to know the population figures for the various Washingtons in 1870 if we are analyzing documents from 1870. The foundations have been laid for reference works that provide machine actionable information about entities at particular times in history. 
The Alexandria Digital Library Gazetteer Content Standard represents a sophisticated framework with which to create such resources: places can be associated with temporal information about their foundation (e.g., Washington, DC, founded on 16 July 1790), changes in names for the same location (e.g., Saint Petersburg to Leningrad and back again), population figures at various times and similar historically contingent data. But if we have the software and the data structures, we do not yet have substantial amounts of historical content such as plentiful digital gazetteers, encyclopedias, lexica, grammars and other reference works to illustrate many periods and, even if we do, those resources may not be in a useful form: raw OCR output of a complex lexicon or gazetteer may have so many errors and have captured so little of the underlying structure that the digital resource is useless as a knowledge base. Put another way, human beings are still much better at reading and interpreting the contents of page images than machines. While people, places, and dates are probably the most important core entities, we will find a growing set of objects that we need to identify and track across collections, and each of these categories of objects will require its own knowledge sources. The following section enumerates and briefly describes some existing categories of documents that we need to mine for knowledge. This brief survey focuses on the format of print sources (e.g., highly structured textual "database" vs. unstructured text) to illustrate some of the challenges involved in converting our published knowledge into semantically annotated, machine actionable form.
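The date-aware gazetteer the authors call for can be sketched as a filter over temporally attested records. A toy illustration in the spirit of the ADL Gazetteer Content Standard: all records are invented (except Washington, DC's 1790 founding date, which the passage itself gives), and real TGN/ADL records carry far richer structure:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Place:
    name: str
    region: str
    founded: Optional[int]  # year the place (or name) is first attested

# Illustrative records only; the UK founding year is a placeholder.
GAZETTEER = [
    Place("Washington", "DC, USA", 1790),
    Place("Washington", "Tyne and Wear, UK", 1100),
    Place("Saint Petersburg", "Russia", 1703),
]

def candidates(name, doc_year):
    """Return gazetteer entries for `name` that already existed in doc_year."""
    return [
        p for p in GAZETTEER
        if p.name == name and (p.founded is None or p.founded <= doc_year)
    ]

# A document written in 1750 cannot refer to Washington, DC (founded 1790),
# so the temporal filter drops that candidate before disambiguation.
print([p.region for p in candidates("Washington", 1750)])
```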
  8. Infield, N.: Capitalising on knowledge : if knowledge is power, why don't librarians rule the world? (1997) 0.02
    
    Abstract
    While knowledge management is seen to be the biggest thing to hit the information profession since the Internet, the concept is surrounded by confusion. Traces the progress of knowledge on the information continuum which extends from data to informed decision. The reason knowledge management has suddenly become influential is that its principal proponents now are not information professionals but management consultants seeking to retain their intellectual capital. Explains the reasons for this, the practical meaning of knowledge management and what information professionals should be doing to take advantage of the vogue
    Source
    Information world review. 1997, no.130, S.22
  9. Taylor, A.G.: ¬The information universe : will we have chaos or control? (1994) 0.02
    
    Abstract
    Presents evidence to suggest that the online world needs the bibliographic skills of librarians but that the term bibliographic control is likely to be associated specifically with libraries and liable to misinterpretation. Suggests that it may be time to start talking about information organization which may be described as having the following 4 aspects: making new information bearing entities known; acquiring such entities at certain points of accumulation; providing name, title and subject access to the entities; and providing for the physical location of copies. Urges librarians rapidly to adapt their skills to this increasing need for information organization
  10. Meadows, J.: Understanding information (2001) 0.02
    
    Abstract
    Modern society suffers from sensory overload through television, the Internet, and periodicals of every kind. Jack Meadows, professor of library and information science, engages with definitions of terms such as 'data', 'information', 'communication' and 'knowledge' that have become everyday notions for us. How do we process the stream of important and unimportant information that pours in on us daily? Which 'data' are worth storing, and which do we forget after a short time? When does information turn into knowledge, or even wisdom? The book is a foundational introduction to the broad topic of information and knowledge management
    Date
    15. 6.2002 19:22:01
  11. Westbrook, L.: Information myths and intimate partner violence : sources, contexts, and consequences (2009) 0.02
    
    Abstract
    Survivors of intimate partner violence face more than information gaps; many face powerful barriers in the form of information myths. Triangulating data from in-depth interviews and community bulletin board postings, this study incorporates insights from survivors, police, and shelter staff to begin mapping the information landscape through which survivors move. An unanticipated feature of that landscape is a set of 28 compelling information myths that prevent some survivors from making effective use of the social, legal, economic, and support resources available to them. This analysis of the sources, contexts, and consequences of these information myths is the first step in devising strategies to counter their ill effects.
    Date
    22. 3.2009 19:16:44
  12. Cooke, N.J.: Varieties of knowledge elicitation techniques (1994) 0.02
    
    Abstract
    Information on knowledge elicitation methods is widely scattered across the fields of psychology, business management, education, counselling, cognitive science, linguistics, philosophy, knowledge engineering and anthropology. Identifies knowledge elicitation techniques and the associated bibliographic information. Organizes the techniques into categories on the basis of methodological similarity. Summarizes for each category of techniques strengths, weaknesses and recommends applications
  13. fwt: Wie das Gehirn Bilder 'liest' (1999) 0.01
    
    Date
    22. 7.2000 19:01:22
  14. Allen, B.L.: Visualization and cognitive abilities (1998) 0.01
    
    Date
    22. 9.1997 19:16:05
    Source
    Visualizing subject access for 21st century information resources: Papers presented at the 1997 Clinic on Library Applications of Data Processing, 2-4 Mar 1997, Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign. Ed.: P.A. Cochrane et al
  15. Malsburg, C. von der: Concerning the neuronal code (2018) 0.01
    
    Abstract
    The central problem in understanding brain and mind is the neural code issue: understanding the matter of our brain as the basis for the phenomena of our mind. The richness with which our mind represents our environment, the parsimony of genetic data, the tremendous efficiency with which the brain learns from scant sensory input and the creativity with which our mind constructs mental worlds all speak in favor of mind as an emergent phenomenon. This raises the further issue of how the neural code supports these processes of organization. The central point of this communication is that the neural code has the form of structured net fragments that are formed by network self-organization, activate and de-activate on the functional time scale, and spontaneously combine to form larger nets with the same basic structure.
    Date
    27.12.2020 16:56:22
  16. Gödert, W.: Information as a cognitive construction : a communication-theoretic model and consequences for information systems (1996) 0.01
    
    Abstract
    This paper presents a model for understanding the concept of information and for understanding how human beings externalize and perceive information. The model differs from the standard information-theoretic model: it combines an understanding of cognitive information processing as an act of generating information from sense impressions with communication-theoretic considerations. This approach can be of value for any system regarded as a knowledge system with a built-in ordering structure. As an application, some consequences are drawn for the design of information systems that claim to handle information itself (e.g. multimedia information systems) instead of giving references to bibliographic entities
  17. dpa: Struktur des Denkorgans wird bald entschlüsselt sein (2000) 0.01
    
    Date
    17. 7.1996 9:33:22
    22. 7.2000 19:05:41
  18. Stock, W.G.: Wissenschaftsinformatik : Fundierung, Gegenstand und Methoden (1980) 0.01
    
    Source
    Ratio. 22(1980), S.155-164
  19. Fallis, D.: Social epistemology and information science (2006) 0.01
    
    Date
    13. 7.2008 19:22:28
  20. Black, A.; Schiller, D.: Systems of information : the long view (2014) 0.01
    
    Abstract
    In response to the perceived (by some) onset of an information society, historians have begun to study its roots and antecedents. The past is replete with the rise, fall, and transformation of systems of information, which are not to be confused with the narrower computer-mediated world of information systems. The history of systems of information, which for digestibility can be labeled information history, lacks neither scale nor scope. Systems of information have played a critical role in the transition to, and subsequent development of, capitalism; the growth of the state, especially the modern nation-state; the rise of modernity, science, and the public sphere; imperialism; and geopolitics. In the context of these epochal shifts and episodes in human thinking and social organization, this essay presents a critical bibliographic survey of histories (outside the well-trodden paths of library and information-science history) that have foregrounded, or made reference to, a wide variety of systems of information.
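
The relevance figure printed after each result title is a Lucene ClassicSimilarity (TF-IDF) score; the search engine's per-result explain output breaks it into tf, idf, fieldNorm, queryNorm and coord factors. A minimal sketch of the per-term arithmetic (function and parameter names are my own), reproducing the score of entry 18, where the query term "22" occurs twice in a field with fieldNorm 0.125:

```python
import math

def classic_similarity(freq, doc_freq, max_docs, query_norm, field_norm):
    # Lucene ClassicSimilarity, per query term:
    #   tf          = sqrt(freq)
    #   idf         = 1 + ln(maxDocs / (docFreq + 1))
    #   queryWeight = idf * queryNorm
    #   fieldWeight = tf * idf * fieldNorm
    #   term score  = queryWeight * fieldWeight
    tf = math.sqrt(freq)
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))
    return (idf * query_norm) * (tf * idf * field_norm)

# Entry 18: term "22" with freq=2, docFreq=3622, maxDocs=44218,
# queryNorm=0.04505818, fieldNorm=0.125
term_score = classic_similarity(2.0, 3622, 44218, 0.04505818, 0.125)

# Two coordination factors from the explain tree: coord(1/2) and coord(1/5)
final_score = term_score * 0.5 * 0.2
```

With these inputs, term_score comes out at about 0.0977 and final_score at about 0.0098, which rounds to the 0.01 shown beside the entry.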

Languages

  • e 120
  • d 68
  • de 1
  • f 1
  • fi 1

Types

  • a 157
  • m 31
  • el 11
  • s 9
  • r 1
