Search (30956 results, page 1548 of 1548)

  1. Baker, T.: Languages for Dublin Core (1998) 0.00
    0.001122909 = product of:
      0.005614545 = sum of:
        0.005614545 = weight(_text_:information in 1257) [ClassicSimilarity], result of:
          0.005614545 = score(doc=1257,freq=2.0), product of:
            0.08270773 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047114085 = queryNorm
            0.06788416 = fieldWeight in 1257, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1257)
      0.2 = coord(1/5)
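    A note on these relevance scores: the explain trees on this page follow Lucene's ClassicSimilarity (TF-IDF) formula, in which tf is the square root of the term frequency and idf = 1 + ln(maxDocs / (docFreq + 1)). A minimal sketch reproducing this entry's score from the constants above (variable names are illustrative, not Lucene's API):

      import math

      # Constants copied from the explain tree above (doc 1257, term "information").
      freq = 2.0
      idf = 1 + math.log(44218 / (20772 + 1))   # ~1.7554779
      query_norm = 0.047114085
      field_norm = 0.02734375
      coord = 1 / 5                             # 1 of 5 query terms matched

      tf = math.sqrt(freq)                      # 1.4142135
      query_weight = idf * query_norm           # ~0.08270773
      field_weight = tf * idf * field_norm      # ~0.06788416
      print(query_weight * field_weight * coord)  # ~0.001122909

    The same formula accounts for every other score on this page; only freq, fieldNorm, and the document id change from record to record.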
    
    Abstract
    Over the past three years, the Dublin Core Metadata Initiative has achieved a broad international consensus on the semantics of a simple element set for describing electronic resources. Since the first workshop in March 1995, which was reported in the very first issue of D-Lib Magazine, Dublin Core has been the topic of perhaps a dozen articles here. Originally intended to be simple and intuitive enough for authors to tag Web pages without special training, Dublin Core is being adapted now for more specialized uses, from government information and legal deposit to museum informatics and electronic commerce. To meet such specialized requirements, Dublin Core can be customized with additional elements or qualifiers. However, these refinements can compromise interoperability across applications. There are tradeoffs between using specific terms that precisely meet local needs versus general terms that are understood more widely. We can better understand this inevitable tension between simplicity and complexity if we recognize that metadata is a form of human language. With Dublin Core, as with a natural language, people are inclined to stretch definitions, make general terms more specific, specific terms more general, misunderstand intended meanings, and coin new terms. One goal of this paper, therefore, will be to examine the experience of some related ways to seek semantic interoperability through simplicity: planned languages, interlingua constructs, and pidgins. The problem of semantic interoperability is compounded when we consider Dublin Core in translation. All of the workshops, documents, mailing lists, user guides, and working group outputs of the Dublin Core Initiative have been in English. But in many countries and for many applications, people need a metadata standard in their own language. In principle, the broad elements of Dublin Core can be defined equally well in Bulgarian or Hindi. Since Dublin Core is a controlled standard, however, any parallel definitions need to be kept in sync as the standard evolves. Another goal of the paper, then, will be to define the conceptual and organizational problem of maintaining a metadata standard in multiple languages. In addition to a name and definition, which are meant for human consumption, each Dublin Core element has a label, or indexing token, meant for harvesting by search engines. For practical reasons, these machine-readable tokens are English-looking strings such as Creator and Subject (just as HTML tags are called HEAD, BODY, or TITLE). These tokens, which are shared by Dublin Cores in every language, ensure that metadata fields created in any particular language are indexed together across repositories. As symbols of underlying universal semantics, these tokens form the basis of semantic interoperability among the multiple Dublin Cores. As long as we limit ourselves to sharing these indexing tokens among exact translations of a simple set of fifteen broad elements, the definitions of which fit easily onto two pages, the problem of Dublin Core in multiple languages is straightforward. But nothing having to do with human language is ever so simple. Just as speakers of various languages must learn the language of Dublin Core in their own tongues, we must find the right words to talk about a metadata language that is expressible in many discipline-specific jargons and natural languages and that inevitably will evolve and change over time.
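    The shared indexing tokens described above are easy to picture as a lookup table from localized element names to the canonical machine-readable strings. A minimal sketch, with invented German and French labels for illustration (the tokens Creator and Subject are the real Dublin Core ones):

      # Hypothetical localized labels; the shared tokens "Creator" and
      # "Subject" are the real Dublin Core indexing tokens.
      LOCALIZED_TO_TOKEN = {
          ("de", "Urheber"): "Creator",
          ("fr", "Créateur"): "Creator",
          ("de", "Thema"): "Subject",
          ("fr", "Sujet"): "Subject",
      }

      def to_token(lang: str, label: str) -> str:
          """Normalize a language-specific element name to its shared token."""
          return LOCALIZED_TO_TOKEN[(lang, label)]

      # Records tagged in German and French index under the same token:
      assert to_token("de", "Urheber") == to_token("fr", "Créateur") == "Creator"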
  2. Nagy T., I.: Detecting multiword expressions and named entities in natural language texts (2014) 0.00
    0.001122909 = product of:
      0.005614545 = sum of:
        0.005614545 = weight(_text_:information in 1536) [ClassicSimilarity], result of:
          0.005614545 = score(doc=1536,freq=2.0), product of:
            0.08270773 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047114085 = queryNorm
            0.06788416 = fieldWeight in 1536, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1536)
      0.2 = coord(1/5)
    
    Abstract
    Multiword expressions (MWEs) are lexical items that can be decomposed into single words and display lexical, syntactic, semantic, pragmatic and/or statistical idiosyncrasy (Sag et al., 2002; Kim, 2008; Calzolari et al., 2002). The proper treatment of multiword expressions such as rock 'n' roll and make a decision is essential for many natural language processing (NLP) applications like information extraction and retrieval, terminology extraction and machine translation, and it is important to identify multiword expressions in context. For example, in machine translation we must know that MWEs form one semantic unit, hence their parts should not be translated separately. For this, multiword expressions should be identified first in the text to be translated. The chief aim of this thesis is to develop machine learning-based approaches for the automatic detection of different types of multiword expressions in English and Hungarian natural language texts. In our investigations, we pay attention to the characteristics of different types of multiword expressions such as nominal compounds, multiword named entities and light verb constructions, and we apply novel methods to identify MWEs in raw texts. In the thesis it will be demonstrated that nominal compounds and multiword named entities may require a similar approach for their automatic detection as they behave in the same way from a linguistic point of view. Furthermore, it will be shown that the automatic detection of light verb constructions can be carried out using two effective machine learning-based approaches.
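    One classic signal for the "statistical idiosyncrasy" mentioned above is pointwise mutual information (PMI): a bigram such as rock 'n' roll co-occurs far more often than its parts would suggest by chance. A toy sketch of that measure on a made-up corpus (an illustrative baseline only, not the machine learning approach the thesis develops):

      import math
      from collections import Counter

      tokens = ("make a decision to make a decision and rock n roll "
                "while rock n roll plays make a plan").split()

      unigrams = Counter(tokens)
      bigrams = Counter(zip(tokens, tokens[1:]))
      n = len(tokens)

      def pmi(w1: str, w2: str) -> float:
          """log p(w1,w2) / (p(w1) p(w2)): higher means more idiosyncratic."""
          p12 = bigrams[(w1, w2)] / (n - 1)
          return math.log(p12 / ((unigrams[w1] / n) * (unigrams[w2] / n)))

      # "rock n" co-occurs whenever either word appears -> high PMI;
      # "a decision" is diluted by other uses of "a" -> lower PMI.
      print(pmi("rock", "n"), pmi("a", "decision"))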
  3. Chen, H.; Baptista Nunes, J.M.; Ragsdell, G.; An, X.: Somatic and cultural knowledge : drivers of a habitus-driven model of tacit knowledge acquisition (2019) 0.00
    0.001122909 = product of:
      0.005614545 = sum of:
        0.005614545 = weight(_text_:information in 5460) [ClassicSimilarity], result of:
          0.005614545 = score(doc=5460,freq=2.0), product of:
            0.08270773 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047114085 = queryNorm
            0.06788416 = fieldWeight in 5460, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02734375 = fieldNorm(doc=5460)
      0.2 = coord(1/5)
    
    Abstract
    Findings
    The findings of this research suggest that individual learning and development are deemed to be the fundamental feature for professional success and survival in the continuously changing environment of the software (SW) industry today. However, individual learning was described by the participants as much more than a mere individual process. It involves a collective and participatory effort within the organization and the sector as a whole, and a knowledge-sharing (KS) process that transcends organizational, cultural and national borders. Individuals in particular are mostly motivated by the pressing need to face and adapt to the dynamic and changeable environments of today's digital society that is led by the sector. Software practitioners are continuously in need of learning, refreshing and accumulating tacit knowledge, partly because it is required by their companies, but also due to a sound awareness of continuous technical and technological changes that seem only to increase with the advances of information technology. This led to a clear theoretical understanding that the continuous change facing the sector has led to individual acquisition of culture and somatic knowledge that in turn lays the foundation not only for the awareness of the need for continuous individual professional development but also for the creation of habitus related to KS and continuous learning.
    Originality/value
    The study reported in this paper shows that there is a theoretical link between the existence of conducive organizational and sector-wide somatic and cultural knowledge, and the success of KS practices that lead to individual learning and development. Therefore, the theory proposed suggests that somatic and cultural knowledge are crucial drivers for the creation of habitus of individual tacit knowledge acquisition. The paper further proposes a habitus-driven individual development (HDID) Theoretical Model that can be of use to both academics and practitioners interested in fostering and developing processes of KS and individual development in knowledge-intensive organizations.
  4. The library's guide to graphic novels (2020) 0.00
    0.001122909 = product of:
      0.005614545 = sum of:
        0.005614545 = weight(_text_:information in 717) [ClassicSimilarity], result of:
          0.005614545 = score(doc=717,freq=2.0), product of:
            0.08270773 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047114085 = queryNorm
            0.06788416 = fieldWeight in 717, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02734375 = fieldNorm(doc=717)
      0.2 = coord(1/5)
    
    Abstract
    The circ stats say it all: graphic novels' popularity among library users keeps growing, with more being published (and acquired by libraries) each year. The unique challenges of developing and managing a graphic novels collection have led the Association for Library Collections and Technical Services (ALCTS) to craft this guide, presented under the expert supervision of editor Ballestro, who has worked with comics for more than 35 years. Examining the ever-changing ways that graphic novels are created, packaged, marketed, and released, this resource gathers a range of voices from the field to explore such topics as: a cultural history of comics and graphic novels from their World War II origins to today, providing a solid grounding for newbies and fresh insights for all; catching up on the Big Two's reboots: Marvel's 10 and DC's 4; five questions to ask when evaluating nonfiction graphic novels and 30 picks for a core collection; key publishers and cartoonists to consider when adding international titles; developing a collection that supports curriculum and faculty outreach to ensure wide usage, with catalogers' tips for organizing your collection and improving discovery; real-world examples of how libraries treat graphic novels, such as an in-depth profile of the development of Penn Library's Manga collection; how to integrate the emerging field of graphic medicine into the collection; and specialized resources like The Cartoonists of Color and Queer Cartoonists databases, the open access scholarly journal Comic Grid, and the No Flying, No Tights website. Packed with expert guidance and useful information, this guide will assist technical services staff, catalogers, and acquisition and collection management librarians.
  5. Waesche, N.M.: Internet entrepreneurship in Europe : venture failure and the timing of telecommunications reform (2003) 0.00
    0.0011113918 = product of:
      0.0055569587 = sum of:
        0.0055569587 = weight(_text_:information in 3566) [ClassicSimilarity], result of:
          0.0055569587 = score(doc=3566,freq=6.0), product of:
            0.08270773 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047114085 = queryNorm
            0.0671879 = fieldWeight in 3566, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.015625 = fieldNorm(doc=3566)
      0.2 = coord(1/5)
    
    Footnote
    Rev. in: JASIST 55(2004) no.2, p.181-182 (J. Scholl): "The book is based on a doctoral thesis titled "Global opportunity and national political economy: The development of internet ventures in Germany," which was supervised by Razeen Sally and accepted at the International Relations Department of the London School of Economics & Political Science, UK, in 2002. Its primary audience, although it is certainly of interest to policy makers, trade press journalists, and industry practitioners, is the academic community, and, in particular, (international) policy, business, business history, information technology, and information science scholars. The book's self-stated purpose is to explain "why Europe, despite initiating a tremendous amount of change ... failed to produce independent internet ventures of note" (p. 1) in contrast to the United States, where Internet start-ups such as Amazon.com, eBay, E*trade, and Yahoo managed to survive the notorious dot.com shakeout of 2001-2002. A few pages down, the objective is restated as "to explore the hypothesis of a global opportunity for technology innovation delivered via the internet and to explain Europe's entrepreneurial response" (p. 4). As a proxy case for Europe, the study provides a broad account of the changing legal and socioeconomic setting during the phase of early Internet adoption and development in Germany throughout the 1990s. The author highlights and details various facets of the entrepreneurial opportunity and compares the German case in some detail to corresponding developments in Sweden. Waesche concludes that starting an Internet business in Germany during that particular period of time was a "wrong country, wrong time" (p. 186) proposition.
    Waesche sparsely sketches out a theoretical framework for his study, combining "network thinking," which he claims stands in the Schumpeterian research tradition, with classical institutional theory à la Max Weber. It is not clear, though, how this theory has guided his empirical research. No detailed hypotheses are presented that would further clarify what was studied. Beyond the rudimentary framework, the author presents a concept of "refraction" denoting the "distorting effect national institutions have on a global innovation opportunity" (p. 17). Again, no hypotheses or measures for this concept are developed. No indication is given about which specific academic contribution was intended to be made and which particular gap of knowledge was attempted to be filled. Waesche's book would have greatly benefited from a more sharply posed and more detailed set of research questions. Instead we learn many details about the German situation in general and about the perceptions of individual players, particularly managerial personnel, in entrepreneurial Internet businesses in a specific situation within a relatively short period of time. While many of those details are interesting in their own right, the reader is left wondering what the study's novelty is, what it specifically uncovered, what the frame of reference was, and what was finally learned. Contrary to its claim, and unlike a Chandlerian treatment of business history, the study does not explain; it rather just describes a particular historical situation. Consequently, the author refrains from presenting any new theory or prescriptive framework in his concluding remarks, but rather briefly revisits and summarizes the preceding chapters. The study's empirical basis consists of two surveys with sample sizes of 123 and 30, as well as a total of 68 interviews. The surveys and interviews were mostly completed between July of 1997 and November of 1999. Although descriptive statistics and detailed demographic information are provided in the appendix, the questionnaires and interview protocols are not included, making it difficult to follow the research undertaking. In summary, while undeniably a number of interesting and illustrative details regarding early Internet entrepreneurship in Germany are accounted for in Waesche's book, it would have provided a much stronger academic contribution had it developed a sound theory upfront and then empirically tested that theory. Alternatively, the author could have singled out certain gaps in existing theory, and then attempted to fill those gaps by providing empirical evidence. In either case, he would have almost inevitably arrived at new insights pointing to further study."
  6. Fairthorne, R.A.: Temporal structure in bibliographic classification (1985) 0.00
    9.624934E-4 = product of:
      0.004812467 = sum of:
        0.004812467 = weight(_text_:information in 3651) [ClassicSimilarity], result of:
          0.004812467 = score(doc=3651,freq=2.0), product of:
            0.08270773 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047114085 = queryNorm
            0.058186423 = fieldWeight in 3651, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3651)
      0.2 = coord(1/5)
    
    Abstract
    The fan of past documents may be seen across time as a philosophical "wake," translated documents as a sideways relationship and future documents as another fan spreading forward from a given document (p. 365). The "overlap of reading histories can be used to detect common interests among readers," (p. 365) and readers may be classified accordingly. Finally, Fairthorne rejects the notion of a "general" classification, which he regards as a mirage, to be replaced by a citation-type network to identify classes. An interesting feature of his work lies in his linkage between old and new documents via a bibliographic method - citations, authors' names, imprints, style, and vocabulary - rather than topical (subject) terms. This is an indirect method of creating classes. The subject (aboutness) is conceived as a finite, common sharing of knowledge over time (past, present, and future) as opposed to the more common hierarchy of topics in an infinite schema assumed to be universally useful. Fairthorne, a mathematician by training, is a prolific writer on the foundations of classification and information. His professional career includes work with the Royal Engineers Chemical Warfare Section and the Royal Aircraft Establishment (RAE). He was the founder of the Computing Unit which became the RAE Mathematics Department.
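    Fairthorne's suggestion that the "overlap of reading histories can be used to detect common interests among readers" is straightforward to operationalize. A minimal sketch using Jaccard overlap as the similarity measure (the measure and the data are illustrative choices, not Fairthorne's own):

      def jaccard(a: set, b: set) -> float:
          """Overlap of two reading histories (sets of document ids)."""
          return len(a & b) / len(a | b)

      readers = {
          "r1": {"doc1", "doc2", "doc3"},
          "r2": {"doc2", "doc3", "doc4"},
          "r3": {"doc7", "doc8"},
      }

      # r1 and r2 share interests (overlap 0.5); r3 falls in another class.
      print(jaccard(readers["r1"], readers["r2"]))  # 0.5
      print(jaccard(readers["r1"], readers["r3"]))  # 0.0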
  7. Brand, A.: CrossRef turns one (2001) 0.00
    9.624934E-4 = product of:
      0.004812467 = sum of:
        0.004812467 = weight(_text_:information in 1222) [ClassicSimilarity], result of:
          0.004812467 = score(doc=1222,freq=2.0), product of:
            0.08270773 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047114085 = queryNorm
            0.058186423 = fieldWeight in 1222, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1222)
      0.2 = coord(1/5)
    
    Abstract
    CrossRef, the only full-blown application of the Digital Object Identifier (DOI®) System to date, is now a little over a year old. What started as a cooperative effort among publishers and technologists to prototype DOI-based linking of citations in e-journals evolved into an independent, non-profit enterprise in early 2000. We have made considerable headway during our first year, but there is still much to be done. When CrossRef went live with its collaborative linking service last June, it had enabled reference links in roughly 1,100 journals from a member base of 33 publishers, using a functional prototype system. The DOI-X prototype was described in an article published in D-Lib Magazine in February of 2000. On the occasion of CrossRef's first birthday as a live service, this article provides a non-technical overview of our progress to date and the major hurdles ahead. The electronic medium enriches the research literature arena for all players -- researchers, librarians, and publishers -- in numerous ways. Information has been made easier to discover, to share, and to sell. To take a simple example, the aggregation of book metadata by electronic booksellers was a huge boon to scholars seeking out obscure backlist titles, or discovering books they would never otherwise have known to exist. It was equally a boon for the publishers of those books, who saw an unprecedented surge in sales of backlist titles with the advent of centralized electronic bookselling. In the serials sphere, even in spite of price increases and the turmoil surrounding site licenses for some prime electronic content, libraries overall are now able to offer more content to more of their patrons. Yet undoubtedly, the key enrichment for academics and others navigating a scholarly corpus is linking, and in particular the linking that takes the reader out of one document and into another in the matter of a click or two. Since references are how authors make explicit the links between their work and precedent scholarship, what could be more fundamental to the reader than making those links immediately actionable? That said, automated linking is only really useful from a research perspective if it works across publications and across publishers. Not only do academics think about their own writings and those of their colleagues in terms of "author, title, rough date" -- the name of the journal itself is usually not high on the list of crucial identifying features -- but they are oblivious as to the identity of the publishers of all but their very favorite books and journals.
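    The linking CrossRef enables rests on DOI resolution: the identifier is persistent, and the resolver redirects to wherever the document currently lives. A present-day sketch against the public doi.org resolver (network access assumed; the example DOI is the DOI Handbook's own, not one taken from this article):

      import urllib.request

      def resolve_doi(doi: str) -> str:
          """Follow the doi.org redirect to the document's current URL."""
          req = urllib.request.Request(f"https://doi.org/{doi}", method="HEAD")
          with urllib.request.urlopen(req) as resp:
              return resp.url  # final URL after redirects

      # 10.1000/182 is the registered DOI of the DOI Handbook.
      print(resolve_doi("10.1000/182"))

    The point of the indirection is exactly the one the article makes: citations encoded as DOIs stay actionable even when the cited document moves.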
  8. OWLED 2009; OWL: Experiences and Directions, Sixth International Workshop, Chantilly, Virginia, USA, 23-24 October 2009, Co-located with ISWC 2009. (2009) 0.00
    9.624934E-4 = product of:
      0.004812467 = sum of:
        0.004812467 = weight(_text_:information in 3391) [ClassicSimilarity], result of:
          0.004812467 = score(doc=3391,freq=2.0), product of:
            0.08270773 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047114085 = queryNorm
            0.058186423 = fieldWeight in 3391, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3391)
      0.2 = coord(1/5)
    
    Content
    Short Papers
    * A Database Backend for OWL, Jörg Henss, Joachim Kleb and Stephan Grimm.
    * Unifying SysML and OWL, Henson Graves.
    * The OWLlink Protocol, Thorsten Liebig, Marko Luther and Olaf Noppens.
    * A Reasoning Broker Framework for OWL, Juergen Bock, Tuvshintur Tserendorj, Yongchun Xu, Jens Wissmann and Stephan Grimm.
    * Change Representation For OWL 2 Ontologies, Raul Palma, Peter Haase, Oscar Corcho and Asunción Gómez-Pérez.
    * Practical Aspects of Query Rewriting for OWL 2, Héctor Pérez-Urbina, Ian Horrocks and Boris Motik.
    * CSage: Use of a Configurable Semantically Attributed Graph Editor as Framework for Editing and Visualization, Lawrence Levin.
    * A Conformance Test Suite for the OWL 2 RL/RDF Rules Language and the OWL 2 RDF-Based Semantics, Michael Schneider and Kai Mainzer.
    * Improving the Data Quality of Relational Databases using OBDA and OWL 2 QL, Olivier Cure.
    * Temporal Classes and OWL, Natalya Keberle.
    * Using Ontologies for Medical Image Retrieval - An Experiment, Jasmin Opitz, Bijan Parsia and Ulrike Sattler.
    * Task Representation and Retrieval in an Ontology-Guided Modelling System, Yuan Ren, Jens Lemcke, Andreas Friesen, Tirdad Rahmani, Srdjan Zivkovic, Boris Gregorcic, Andreas Bartho, Yuting Zhao and Jeff Z. Pan.
    * A platform for reasoning with OWL-EL knowledge bases in a Peer-to-Peer environment, Alexander De Leon and Michel Dumontier.
    * Axiomé: a Tool for the Elicitation and Management of SWRL Rules, Saeed Hassanpour, Martin O'Connor and Amar Das.
    * SQWRL: A Query Language for OWL, Martin O'Connor and Amar Das.
    * Classifying ELH Ontologies In SQL Databases, Vincent Delaitre and Yevgeny Kazakov.
    * A Semantic Web Approach to Represent and Retrieve Information in a Corporate Memory, Ana B. Rios-Alvarado, R. Carolina Medina-Ramirez and Ricardo Marcelin-Jimenez.
    * Towards a Graphical Notation for OWL 2, Elisa Kendall, Roy Bell, Roger Burkhart, Mark Dutra and Evan Wallace.
  9. Evens, M.W.: Natural language interface for an expert system (2002) 0.00
    9.624934E-4 = product of:
      0.004812467 = sum of:
        0.004812467 = weight(_text_:information in 3719) [ClassicSimilarity], result of:
          0.004812467 = score(doc=3719,freq=2.0), product of:
            0.08270773 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047114085 = queryNorm
            0.058186423 = fieldWeight in 3719, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3719)
      0.2 = coord(1/5)
    
    Source
    Encyclopedia of library and information science. Vol.71, [=Suppl.34]
  10. Boeuf, P. le: Functional Requirements for Bibliographic Records (FRBR) : hype or cure-all (2005) 0.00
    9.074475E-4 = product of:
      0.0045372373 = sum of:
        0.0045372373 = weight(_text_:information in 175) [ClassicSimilarity], result of:
          0.0045372373 = score(doc=175,freq=4.0), product of:
            0.08270773 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047114085 = queryNorm
            0.054858685 = fieldWeight in 175, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.015625 = fieldNorm(doc=175)
      0.2 = coord(1/5)
    
    Footnote
    What is, after all, the FRBR model? The question is asked in the subtitle itself: is it "hype or cure-all?" It certainly is the talk of the day in libraries and similar institutions, a very popular topic for professional meetings, a challenging task for system vendors and food for thought for scholars both in terminology and in content. As for the solutions it offers, they enable simplified and more structured catalogues of large collections and perhaps easier ways of cataloguing resources of many different types. Once implemented in catalogues, the benefits will be both on the librarian's side and on the end user's side. According to Patrick LeBoeuf the model is a beginning, and there are two directions for its development, as far as the authors of the articles imply: the first, oriented to the configuration of FRANAR or FRAR; the second, oriented to what has already been established and defined as FRSAR (Functional Requirements for Subject Authority Records). The latter is meant to build a conceptual model for Group 3 entities within the FRBR framework related to the aboutness of the work and assist in an assessment of the potential for international sharing and use of subject authority data both within the library sector and beyond. A third direction, not present in the work considered, yet mentioned by the editor, is oriented towards the development of "the CIDOC CRM semantic model for cultural heritage information in museums and assimilated institutions" (p. 6). By merging the FRBR working group with the CIDOC CRM Special Interest Group, a FRBR/CRM Harmonization Group has been created, its scope being the "translation" of FRBR into object-oriented formalism. The work under review is the expected and welcome completion of the FRBR Final Report of 1998, addressing librarians, library science teaching staff, students, and library system vendors, a comprehensive source of information on theoretical aspects and practical application of the FRBR conceptual model. A good companion clarifying many FRBR issues, the collection is remarkably well structured and offers a step-by-step insight into the model. An additional feature of the work is the very helpful index at the back of the book, providing easy access to the main topics discussed."
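    For readers new to the model under discussion: FRBR's Group 1 separates the abstract work from its expressions, manifestations, and items. A minimal sketch of that entity chain (the attributes chosen are illustrative, not the full FRBR attribute set):

      from dataclasses import dataclass, field

      @dataclass
      class Item:                # one physical exemplar, e.g. a copy on a shelf
          barcode: str

      @dataclass
      class Manifestation:       # a publication embodying an expression
          identifier: str        # e.g. an ISBN
          items: list = field(default_factory=list)

      @dataclass
      class Expression:          # a realization of the work, e.g. a translation
          language: str
          manifestations: list = field(default_factory=list)

      @dataclass
      class Work:                # the abstract intellectual creation
          title: str
          expressions: list = field(default_factory=list)

      w = Work("Romeo and Juliet", [
          Expression("en", [Manifestation("ISBN-here", [Item("B0001")])]),
      ])
      print(w.expressions[0].manifestations[0].items[0].barcode)  # B0001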
  11. Koch, C.: Consciousness : confessions of a romantic reductionist (2012) 0.00
    9.074475E-4 = product of:
      0.0045372373 = sum of:
        0.0045372373 = weight(_text_:information in 4561) [ClassicSimilarity], result of:
          0.0045372373 = score(doc=4561,freq=4.0), product of:
            0.08270773 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047114085 = queryNorm
            0.054858685 = fieldWeight in 4561, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.015625 = fieldNorm(doc=4561)
      0.2 = coord(1/5)
    
    Content
    In which I introduce the ancient mind-body problem, explain why I am on a quest to use reason and empirical inquiry to solve it, acquaint you with Francis Crick, explain how he relates to this quest, make a confession, and end on a sad note -- In which I write about the wellsprings of my inner conflict between religion and reason, why I grew up wanting to be a scientist, why I wear a lapel pin of Professor Calculus, and how I acquired a second mentor late in life -- In which I explain why consciousness challenges the scientific view of the world, how consciousness can be investigated empirically with both feet firmly planted on the ground, why animals share consciousness with humans, and why self-consciousness is not as important as many people think it is -- In which you hear tales of scientist-magicians that make you look but not see, how they track the footprints of consciousness by peering into your skull, why you don't see with your eyes, and why attention and consciousness are not the same -- In which you learn from neurologists and neurosurgeons that some neurons care a great deal about celebrities, that cutting the cerebral cortex in two does not reduce consciousness by half, that color is leached from the world by the loss of a small cortical region, and that the destruction of a sugar cube-sized chunk of brain stem or thalamic tissue leaves you undead -- In which I defend two propositions that my younger self found nonsense--you are unaware of most of the things that go on in your head, and zombie agents control much of your life, even though you confidently believe that you are in charge -- In which I throw caution to the wind, bring up free will, Der Ring des Nibelungen, and what physics says about determinism, explain the impoverished ability of your mind to choose, show that your will lags behind your brain's decision, and that freedom is just another word for feeling -- In which I argue that consciousness is a fundamental property of complex things, rhapsodize about integrated information theory, how it explains many puzzling facts about consciousness and provides a blueprint for building sentient machines -- In which I outline an electromagnetic gadget to measure consciousness, describe efforts to harness the power of genetic engineering to track consciousness in mice, and find myself building cortical observatories -- In which I muse about final matters considered off-limits to polite scientific discourse: to wit, the relationship between science and religion, the existence of God, whether this God can intervene in the universe, the death of my mentor, and my recent tribulations.
    Footnote
    Now it might seem that this is a fairly well-defined scientific task: just figure out how the brain does it. In the end I think that is the right attitude to have. But our peculiar history makes it difficult to have exactly that attitude - to take consciousness as a biological phenomenon like digestion or photosynthesis, and figure out how exactly it works as a biological phenomenon. Two philosophical obstacles cast a shadow over the whole subject. The first is the tradition of God, the soul, and immortality. Consciousness is not a part of the ordinary biological world of digestion and photosynthesis: it is part of a spiritual world. It is sometimes thought to be a property of the soul and the soul is definitely not a part of the physical world. The other tradition, almost as misleading, is a certain conception of Science with a capital "S." Science is said to be "reductionist" and "materialist," and so construed there is no room for consciousness in Science. If it really exists, consciousness must really be something else. It must be reducible to something else, such as neuron firings, computer programs running in the brain, or dispositions to behavior. There are also a number of purely technical difficulties to neurobiological research. The brain is an extremely complicated mechanism with about a hundred billion neurons in ... (remainder not freely available)." [https://www.nybooks.com/articles/2013/01/10/can-information-theory-explain-consciousness/]
  12. Dodge, M.: What does the Internet look like, Jellyfish perhaps? : Exploring a visualization of the Internet by Young Hyun of CAIDA (2001) 0.00
    8.020778E-4 = product of:
      0.004010389 = sum of:
        0.004010389 = weight(_text_:information in 1554) [ClassicSimilarity], result of:
          0.004010389 = score(doc=1554,freq=2.0), product of:
            0.08270773 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047114085 = queryNorm
            0.048488684 = fieldWeight in 1554, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1554)
      0.2 = coord(1/5)
    
    Content
    "The Internet is often likened to an organic entity and this analogy seems particularly appropriate in the light of some striking new visualizations of the complex mesh of Internet pathways. The images are results of a new graph visualization tool, code-named Walrus, being developed by researcher, Young Hyun, at the Cooperative Association for Internet Data Analysis (CAIDA) [1]. Although Walrus is still in early days of development, I think these preliminary results are some of the most intriguing and evocative images of the Internet's structure that we have seen in last year or two. A few years back I spent an enjoyable afternoon at the Monterey Bay Aquarium and I particularly remember a stunning exhibit of jellyfish, which were illuminated with UV light to show their incredibly delicate organic structures, gently pulsing in tanks of inky black water. Jellyfish are some of the strangest, alien, and yet most beautiful, living creatures [2]. Having looked at the Walrus images I began to wonder, perhaps the backbone networks of the Internet look like jellyfish? The image above is a screengrab of a Walrus visualization of a huge graph. The graph data in this particular example depicts Internet topology, as measured by CAIDA's skitter monitor [3] based in London, showing 535,000-odd Internet nodes and over 600,000 links. The nodes, represented by the yellow dots, are a large sample of computers from across the whole range of Internet addresses. Walrus is an interactive visualization tool that allows the analyst to view massive graphs from any position. The graph is projected inside a 3D sphere using a special kind of space based hyperbolic geometry. This is a non-Euclidean space, which has useful distorting properties of making elements at the center of the display much larger than those on the periphery. You interact with the graph in Walrus by selecting a node of interest, which is smoothly moved into the center of the display, and that region of the graph becomes greatly enlarged, enabling you to focus on the fine detail. Yet the rest of the graph remains visible, providing valuable context of the overall structure. (There are some animations available on the website showing Walrus graphs being moved, which give some sense of what this is like.) Hyperbolic space projection is commonly know as "focus+context" in the field of information visualization and has been used to display all kinds of data that can be represented as large graphs in either two and three dimensions [4]. It can be thought of as a moveable fish-eye lens. The Walrus visualization tool draws much from the hyperbolic research by Tamara Munzner [5] as part of her PhD at Stanford. (Map of the Month examined some of Munzner's work from 1996 in an earlier article, Internet Arcs Around The Globe.) Walrus is being developed as a general-purpose visualization tool able to cope with massive directed graphs, in the order of a million nodes. Providing useful and interactively useable visualization of such large volumes of graph data is a tough challenge and is particularly apposite to the task of mapping of Internet backbone infrastructures. In a recent email Map of the Month asked Walrus developer Young Hyun what had been the hardest part of the project thus far. "The greatest difficulty was in determining precisely what Walrus should be about," said Hyun. Crucially "... we had to face the question of what it means to visualize a large graph. 
    It would defeat the aim of a visualization to overload a user with the large volume of data that is likely to be associated with a large graph." I think the preliminary results available show that Walrus is heading in the right direction in tackling these challenges.
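    The "focus+context" fish-eye behaviour described above can be sketched in two dimensions with the classic graphical fisheye function g(r) = (d+1)r / (dr+1), which magnifies distances near the focus and compresses the periphery while keeping it visible. Walrus itself works in 3D hyperbolic space, so this is only an analogy:

      import math

      def fisheye(x: float, y: float, fx: float, fy: float, d: float = 3.0):
          """Distort (x, y) around focus (fx, fy): r -> (d+1)r / (dr+1).

          Distances are assumed normalized to [0, 1]; d sets the distortion.
          """
          dx, dy = x - fx, y - fy
          r = math.hypot(dx, dy)
          if r == 0:
              return x, y
          r_new = (d + 1) * r / (d * r + 1)   # Sarkar-Brown fisheye function
          return fx + dx * r_new / r, fy + dy * r_new / r

      # A point near the focus is pushed outward (magnified region),
      # while a far point barely moves (context preserved).
      print(fisheye(0.1, 0.0, 0.0, 0.0))   # ~(0.308, 0.0)
      print(fisheye(0.9, 0.0, 0.0, 0.0))   # ~(0.973, 0.0)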
  13. Slavic, A.: Mapping intricacies : UDC to DDC (2010) 0.00
    8.020778E-4 = product of:
      0.004010389 = sum of:
        0.004010389 = weight(_text_:information in 3370) [ClassicSimilarity], result of:
          0.004010389 = score(doc=3370,freq=2.0), product of:
            0.08270773 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047114085 = queryNorm
            0.048488684 = fieldWeight in 3370, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.01953125 = fieldNorm(doc=3370)
      0.2 = coord(1/5)
    
    Content
    Another challenge appears when, e.g., mapping Dewey class 890 Literatures of other specific languages and language families, which does not make sense in UDC, in which all languages and literatures have equal status. Standard UDC schedules do not single out preferred literatures as against other literatures. In principle, UDC does not allow classes entitled 'others' which do not have defined semantic content. If entities are subdivided and there is no provision for an item outside the listed subclasses, then this item is subsumed under a top class or a broader class where all unspecified or general members of that class may be expected. If specification is needed, this can be devised by adding an alphabetical extension to the broader class. Here we have to find and list in the UDC Summary all literatures that are 'unpreferred', i.e. lumped in the 890 classes, and map them again as a many-to-one, specific-to-broader match. The example below illustrates another interesting case. Classes Dewey 061 and UDC 06 cover roughly the same semantic field, but in the subdivision the Dewey Summaries list a combination of subject and place and, as an enumerative classification, provide ready-made numbers for combinations of place that are most common in an average (American?) library. This is a frequent approach in schemes created with the physical book arrangement, i.e. library shelves, in mind. UDC, designed as an indexing language for information retrieval, keeps subject and place in separate tables and allows any concept of place, such as, e.g., (7) North America, to be used in combination with any subject, as these may coincide in documents. Thus combinations such as Newspapers in North America, or Organizations in North America, would not be offered as ready-made combinations. There is no selection of 'preferred' or 'most needed' countries, languages or cultures in the standard UDC edition: [table not reproduced]
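    The match types discussed above are easiest to see as a small mapping table. A sketch with illustrative rows (the classes and match judgements here are examples made up for this note, not the actual UDC Summary mapping data):

      # Each DDC class maps to a UDC class with an explicit match type;
      # "broader" records the many-to-one, specific-to-broader case the
      # text describes for the lumped DDC 890 literatures.
      DDC_TO_UDC = [
          ("891.7", "821.161.1", "exact"),    # Russian literature (illustrative)
          ("894",   "821",       "broader"),  # an "other" literature -> broad class
          ("061",   "06",        "exact"),    # organizations; place kept separate in UDC
      ]

      def map_ddc(ddc: str):
          return [(udc, kind) for d, udc, kind in DDC_TO_UDC if ddc.startswith(d)]

      print(map_ddc("894"))  # [('821', 'broader')]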
  14. Dahlberg, I.: How to improve ISKO's standing : ten desiderata for knowledge organization (2011) 0.00
    8.020778E-4 = product of:
      0.004010389 = sum of:
        0.004010389 = weight(_text_:information in 4300) [ClassicSimilarity], result of:
          0.004010389 = score(doc=4300,freq=2.0), product of:
            0.08270773 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047114085 = queryNorm
            0.048488684 = fieldWeight in 4300, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.01953125 = fieldNorm(doc=4300)
      0.2 = coord(1/5)
    
    Content
    6. Establishment of national Knowledge Organization Institutes should be scheduled by national chapters, planned energetically and submitted to the corresponding administrative authorities for support. They could be attached to research institutions, e.g., the Max-Planck or Fraunhofer Institutes in Germany, or to universities. Their scope and research areas relate to the elaboration of knowledge systems of subject-related concepts, according to Desideratum 1, and may be connected to training activities and KO-subject-related research work. 7. ISKO experts should not let themselves be overawed by the Internet and computer science, but should demonstrate their expertise more actively in public. They should take a leading part in the ISKO Secretariats and the KO Institutes, and act as consultants and informants, as well as editors of statistics and other publications. 8. All colleagues trained in the field of classification/indexing and thesaurus construction and active in different countries should be identified and approached for membership in ISKO. This would have to be accomplished by the General Secretariat with the collaboration of the experts in the different secretariats of the countries, as soon as they start to work. The more members ISKO has, the greater its reputation and influence will be. But it will also prove its professionalism by the quality of its products, especially its innovative conceptual order systems to come. 9. ISKO should - especially in view of global expansion - intensify the promotion of knowledge about its own subject area through the publications mentioned here and in further publications as deemed necessary. It should be made clear that, especially in ISKO's own publications, professional subject indexes are a sine qua non. 10. 1) Knowledge Organization, having arisen from librarianship and documentation, the contents of which have many points of contact with numerous application fields, should - although still linked with its areas of descent - be recognized in the long run as an independent, autonomous discipline located under the science of science, since only thereby can it fully play its role as an equal partner in all application fields; and, 2) an "at-a-first-glance knowledge order" could be implemented through the Information Coding Classification (ICC), as this system is based on an entirely new approach, namely one based on general object areas, thus deviating from the discipline-oriented main classes of the current universal classification systems. It can therefore restore, by simple on-screen display, the hitherto lost overview of all knowledge areas and fields. At a glance, one perceives 9 object areas subdivided by 9 aspects into 81 subject areas with their 729 subject fields, including further special fields. The synthesis and ordering of all knowledge thus become evident at a glance to everybody. Nobody would any longer be irritated by the abundance of seemingly unrelated knowledge fields or become hesitant in his/her understanding of the world.
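    The ICC structure claimed in point 10 is purely combinatorial: 9 object areas crossed with 9 aspects give 81 subject areas, and one further ninefold division gives 729 subject fields. A sketch of the notation grid (the digit codes are schematic; the real ICC assigns named classes):

      # 9 object areas x 9 aspects -> 81 subject areas -> x9 -> 729 subject fields.
      object_areas = range(1, 10)
      aspects = range(1, 10)

      subject_areas = [f"{o}{a}" for o in object_areas for a in aspects]
      subject_fields = [f"{sa}{s}" for sa in subject_areas for s in range(1, 10)]

      print(len(subject_areas), len(subject_fields))  # 81 729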
  15. Exploring artificial intelligence in the new millennium (2003) 0.00
    6.4166234E-4 = product of:
      0.0032083115 = sum of:
        0.0032083115 = weight(_text_:information in 2099) [ClassicSimilarity], result of:
          0.0032083115 = score(doc=2099,freq=2.0), product of:
            0.08270773 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047114085 = queryNorm
            0.03879095 = fieldWeight in 2099, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.015625 = fieldNorm(doc=2099)
      0.2 = coord(1/5)
    
    Footnote
    The book does achieve its aim of being a starting point for someone interested in the state of some areas of AI research at the beginning of the new millennium. The book's most irritating feature is the differing writing styles of the authors. The book is organized as a collection of papers similar to a typical graduate survey course packet, and as a result it does not possess a narrative flow. The book also contains a number of other major weaknesses, such as the lack of an introductory or concluding chapter. The book could greatly benefit from an introductory chapter that would introduce readers to the areas of AI, explain why such a book is needed, and explain why each author's research is important. The manner in which the book currently handles these issues is a preface that touches on some of them only superficially. Such an introductory chapter could also be used to expound on what level of AI, mathematical, and statistical knowledge is expected of readers in order to gain maximum benefit from this book. A concluding chapter would be useful to readers interested in the other areas of AI not covered by the book, as well as in open issues common to all of the research presented. In addition, most of the contributors come exclusively from the computer science field, which heavily slants the work toward the computer science community. A great deal of the research presented is being used by a number of research communities outside of computer science, such as biotechnology and information technology. A wider audience for this book could have been achieved by including a more diverse range of authors, showing the interdisciplinary nature of many of these fields. Also, the book's editors state, "The reader is expected to have basic knowledge of AI at the level of an introductory course to the field" (p vii), which is not the case for this book. Readers need at least a strong familiarity with many of the core concepts within AI, because a number of the chapters are shallow and terse in their historical overviews. Overall, this book would be a useful tool for a professor putting together a survey course on AI research. Most importantly, the book would be useful for eager graduate students in need of a starting point for their thesis research. This book is best suited as a reference guide to be used by individuals with a strong familiarity with AI."
  16. Gonzalez, L.: What is FRBR? (2005) 0.00
    6.4166234E-4 = product of:
      0.0032083115 = sum of:
        0.0032083115 = weight(_text_:information in 3401) [ClassicSimilarity], result of:
          0.0032083115 = score(doc=3401,freq=2.0), product of:
            0.08270773 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047114085 = queryNorm
            0.03879095 = fieldWeight in 3401, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.015625 = fieldNorm(doc=3401)
      0.2 = coord(1/5)
    
    Content
    National FRBR experiments
    The larger the bibliographic database, the greater the effect of "FRBR-like" design in reducing the appearance of duplicate records. LC, RLG, and OCLC, all influenced by FRBR, are experimenting with the redesign of their databases. LC's Network Development and MARC Standards Office has posted at its web site the results of some of its investigations into FRBR and MARC, including possible display options for bibliographic information. The design of RLG's public catalog, RedLightGreen, has been described as "FRBR-ish" by Merrilee Proffitt, RLG's program officer. If you try a search for a prolific author or much-published title in RedLightGreen, you'll probably find that the display of search results is much different than what you would expect. OCLC Research has developed a prototype "frbrized" database for fiction, OCLC FictionFinder. Try a title search for a classic title like Romeo and Juliet and observe that OCLC includes, in the initial display of results (described as "works"), a graphic indicator (stars, ranging from one to five). These show in rough terms how many libraries own the work - Romeo and Juliet clearly gets a five. Indicators like this are something resource sharing staff can consider an "ILL quality rating." If you're intrigued by FRBR's possibilities and what they could mean to resource sharing workflow, start talking. Now is the time to connect with colleagues, your local and/or consortial system vendor, RLG, OCLC, and your professional organizations. Have input into how systems develop in the FRBR world."
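    The "FRBR-like" collocation described above amounts to grouping manifestation-level records under a work-level key and summing holdings for an indicator. A minimal sketch (the normalization and the star scale are invented for illustration, not OCLC's actual algorithm):

      from collections import defaultdict

      # Manifestation-level records: (title, author, library holdings count).
      records = [
          ("romeo and juliet", "shakespeare, william", 4021),
          ("romeo and juliet", "shakespeare, william", 1530),
          ("romeo & juliet",   "shakespeare, william", 233),
      ]

      def work_key(title: str, author: str) -> tuple:
          """Crude work-level key: normalize '&' and collapse whitespace."""
          return (" ".join(title.replace("&", "and").split()), author)

      works = defaultdict(int)
      for title, author, holdings in records:
          works[work_key(title, author)] += holdings

      for key, holdings in works.items():
          stars = min(5, 1 + holdings // 1200)   # illustrative 1-5 star scale
          print(key, holdings, "*" * stars)      # one work record, five stars

    In practice the work-level key would come from authority-controlled headings rather than ad hoc string cleanup, which is precisely what the experiments mentioned above explore.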
