-
Linde, F.; Stock, W.G.: Information markets : a strategic guideline for the i-commerce (2011)
0.00 = 0.002269176 (ClassicSimilarity weight of term "a" in doc 3283: tf=10.0, idf=1.153047 [docFreq=37942, maxDocs=44218], fieldNorm=0.046875, queryNorm=0.046056706, coord=1/2 × 1/2)
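The relevance figures attached to each entry are Lucene "explain" output for the query term "a", scored with ClassicSimilarity (TF-IDF). As a minimal sketch of the arithmetic shown above - a reconstruction of the displayed figures, not the search engine's actual code - the following Python reproduces the first entry's score from the listed statistics:

```python
import math

def classic_similarity_score(tf, idf, field_norm, query_norm, coord=1.0):
    """Recompute a score from the statistics in a ClassicSimilarity explain tree:
    queryWeight = idf * queryNorm; fieldWeight = sqrt(tf) * idf * fieldNorm;
    score = coord * queryWeight * fieldWeight."""
    query_weight = idf * query_norm
    field_weight = math.sqrt(tf) * idf * field_norm
    return coord * query_weight * field_weight

# idf follows Lucene's classic formula 1 + ln(maxDocs / (docFreq + 1)):
idf = 1 + math.log(44218 / (37942 + 1))   # ~1.153047, as listed above

# Figures from the Linde/Stock entry (term "a" in doc 3283):
score = classic_similarity_score(
    tf=10.0,                  # termFreq of "a" in the field
    idf=idf,
    field_norm=0.046875,      # fieldNorm(doc=3283)
    query_norm=0.046056706,   # queryNorm
    coord=0.25,               # the two nested coord(1/2) factors
)
print(round(score, 9))        # ~0.002269176, matching the exact value above
```

The remaining entries follow the same formula; only tf, fieldNorm, and the document id change, while idf and queryNorm are constant across the result list.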
- Abstract
- Information Markets is a compendium of i-commerce - the commerce in digital information, content, and software - and a comprehensive overview of the state of the art of economic and information science endeavors on the markets for digital information. It provides a strategic guideline showing information providers how to analyse their market environment and how to develop possible strategic actions. It is a book for information professionals: for students in LIS (Library and Information Science), CIS (Computer and Information Science), and Information Management curricula, as well as for practitioners and managers in these fields.
-
Abbas, J.: Structures for organizing knowledge : exploring taxonomies, ontologies, and other schemas (2010)
0.00 = 0.0022374375 (ClassicSimilarity weight of term "a" in doc 480: tf=14.0, idf=1.153047 [docFreq=37942, maxDocs=44218], fieldNorm=0.0390625, queryNorm=0.046056706, coord=1/2 × 1/2)
- Abstract
- LIS professionals use structures for organizing knowledge when they catalog and classify objects in the collection, when they develop databases, when they design customized taxonomies, or when they search online. Structures for Organizing Knowledge: Exploring Taxonomies, Ontologies, and Other Schemas explores and explains this basic function by looking at three questions: 1) How do we organize objects so that they make sense and are useful? 2) What role do categories, classifications, taxonomies, and other structures play in the process of organizing? 3) What do information professionals need to know about organizing behaviors in order to design useful structures for organizing knowledge? Taking a broad yet specialized approach that is a first in the field, this book answers those questions by examining three threads: traditional structures for organizing knowledge, personal structures for organizing knowledge, and socially constructed structures for organizing knowledge. Through these threads, it offers avenues for expanding thinking on classification and classification schemes, taxonomy and ontology development, and structures. Both a history of the development of taxonomies and an analysis of current research, theories, and applications, this volume explores a wide array of topics, including the new digital, social aspect of taxonomy development. Subjects covered include formal and informal structures; applications of knowledge structures; classification schemes; early taxonomists and their contributions; social networking, bookmarking, and cataloging sites; cataloging codes; standards and best practices; tags, tagging, and folksonomies; descriptive cataloging; and metadata schema standards. Thought exercises, references, and a list of helpful websites augment each section. A final chapter, "Thinking Ahead: Are We at a Crossroads?", uses "envisioning exercises" to help LIS professionals look into the future.
-
Witten, I.H.; Bainbridge, M.; Nichols, D.M.: How to build a digital library (2010)
0.00 = 0.0020296127 (ClassicSimilarity weight of term "a" in doc 4027: tf=18.0, idf=1.153047 [docFreq=37942, maxDocs=44218], fieldNorm=0.03125, queryNorm=0.046056706, coord=1/2 × 1/2)
- Abstract
- "How to Build a Digital Library" is the only book that offers all the knowledge and tools needed to construct and maintain a digital library, regardless of the size or purpose. It is the perfectly self-contained resource for individuals, agencies, and institutions wishing to put this powerful tool to work in their burgeoning information treasuries. The second edition reflects new developments in the field as well as in the Greenstone Digital Library open source software. In Part I, the authors have added an entire new chapter on user groups, user support, collaborative browsing, user contributions, and so on. There is also new material on content-based queries, map-based queries, cross-media queries. There is an increased emphasis placed on multimedia by adding a 'digitizing' section to each major media type. A new chapter has also been added on 'internationalization', which will address Unicode standards, multi-language interfaces and collections, and issues with non-European languages (Chinese, Hindi, etc.). Part II, the software tools section, has been completely rewritten to reflect the new developments in Greenstone Digital Library Software, an internationally popular open source software tool with a comprehensive graphical facility for creating and maintaining digital libraries. As with the First Edition, a web site, implemented as a digital library, will accompany the book and provide access to color versions of all figures, two online appendices, a full-text sentence-level index, and an automatically generated glossary of acronyms and their definitions. In addition, demonstration digital library collections will be included to demonstrate particular points in the book. To access the online content please visit our associated website. This title outlines the history of libraries - both traditional and digital - and their impact on present practices and future directions. It is written for both technical and non-technical audiences and covers the entire spectrum of media, including text, images, audio, video, and related XML standards. It is web-enhanced with software documentation, color illustrations, full-text index, source code, and more.
-
Cultural frames of knowledge (2012)
0.00 = 0.001674345 (ClassicSimilarity weight of term "a" in doc 2109: tf=4.0, idf=1.153047 [docFreq=37942, maxDocs=44218], fieldNorm=0.0546875, queryNorm=0.046056706, coord=1/2 × 1/2)
- Content
- Ch. 1. Introduction: theory, knowledge organization, epistemology, culture -- Ch. 3. Praxes of knowledge organization in the first Chinese library catalog, the Seven epitomes -- Ch. 4. Feminist epistemologies and knowledge organization -- Ch. 5. Problems and characteristics of Foucauldian discourse analysis as a research method -- Ch. 6. Epistemology of domain analysis -- Ch. 8. Rethinking genre in knowledge organization through a functional unit taxonomy -- Conclusions: Toward multicultural domain plurality in knowledge organization
-
Weinberger, D.: Too big to know : rethinking knowledge now that the facts aren't the facts, experts are everywhere, and the smartest person in the room is the room (2011)
0.00 = 0.0015662063 (ClassicSimilarity weight of term "a" in doc 334: tf=14.0, idf=1.153047 [docFreq=37942, maxDocs=44218], fieldNorm=0.02734375, queryNorm=0.046056706, coord=1/2 × 1/2)
- Abstract
- In this title, a leading philosopher of the internet explains how knowledge and expertise can still work - and even grow stronger - in an age when the internet has made topics simply Too Big to Know. Knowing used to be so straightforward. If we wanted to know something, we looked it up, asked an expert, gathered the facts, weighed the possibilities, and homed in on the best answer ourselves. But, ironically, with the advent of the internet and the limitless information it contains, we're less sure about what we know, who knows what, or even what it means to know at all. Knowledge, it would appear, is in crisis. And yet, while its very foundations seem to be collapsing, human knowledge has grown in previously unimaginable ways, and in inconceivable directions, in the Internet age. We fact-check the news media more closely and publicly than ever before. Science is advancing at an unheard-of pace thanks to new collaborative techniques and new ways to find patterns in vast amounts of data. Businesses are finding expertise in every corner of their organization, and across the broad swath of their stakeholders. We are in a crisis of knowledge at the same time that we are in an epochal exaltation of knowledge. In "Too Big to Know", Internet philosopher David Weinberger explains that, rather than a systemic collapse, the Internet era represents a fundamental change in the methods we have for understanding the world around us. Weinberger argues that our notions of expertise - what it is, how it works, and how it is cultivated - are out of date, rooted in our pre-networked culture and assumptions. For thousands of years, we've relied upon a reductionist process of filtering, winnowing, and otherwise reducing the complex world to something more manageable in order to understand it. Back then, an expert was someone who had mastered a particular, well-defined domain. Now, we live in an age when topics are blown apart and stitched together by momentary interests, diverse points of view, and connections ranging from the insightful to the perverse. Weinberger shows that, while the limits of our own paper-based tools have historically prevented us from achieving our full capacity of knowledge, we can now be as smart as our new medium allows - but we will be smart differently. For the new medium is a network, and that network changes our oldest, most basic strategy of knowing. Rather than knowing-by-reducing, we are now knowing-by-including. Indeed, knowledge now is best thought of not as the content of books or even of minds, but as the way the network works. Knowledge will never be the same - not for science, not for business, not for education, not for government, not for any of us. As Weinberger makes clear, to make sense of this new system of knowledge, we need - and smart companies are developing - networks that are themselves experts. Full of rich and sometimes surprising examples from history, politics, business, philosophy, and science, "Too Big to Know" describes how the very foundations of knowledge have been overturned, and what this revolution means for our future.
-
Marchionini, G.: Information concepts : from books to cyberspace identities (2010)
0.00 = 0.0015127839 (ClassicSimilarity weight of term "a" in doc 2: tf=10.0, idf=1.153047 [docFreq=37942, maxDocs=44218], fieldNorm=0.03125, queryNorm=0.046056706, coord=1/2 × 1/2)
- Abstract
- Information is essential to all human activity, and information in electronic form both amplifies and augments human information interactions. This lecture surveys some of the different classical meanings of information, focuses on the ways that electronic technologies are affecting how we think about these senses of information, and introduces an emerging sense of information that has implications for how we work, play, and interact with others. The evolution of computers and electronic networks, together with people's uses and adaptations of these tools, has manifested a dynamic space called cyberspace. Our traces of activity in cyberspace give rise to a new sense of information as instantaneous identity states that I term proflection of self. Proflections of self influence how others act toward us. Four classical senses of information are described as context for this new form of information. The four senses selected for inclusion here are the following: thought and memory, communication process, artifact, and energy. Human mental activity and state (thought and memory) have neurological, cognitive, and affective facets. The act of informing (communication process) is considered from the perspective of human intentionality and technical developments that have dramatically amplified human communication capabilities. Information artifacts comprise a common sense of information that gives rise to a variety of information industries. Energy is the most general sense of information and is considered from the point of view of physical, mental, and social state change. This sense includes information theory as a measurable reduction in uncertainty. This lecture emphasizes how electronic representations have blurred media boundaries and added computational behaviors that yield new forms of information interaction, which, in turn, are stored, aggregated, and mined to create profiles that represent our cyber identities.
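The "energy" sense above includes "information theory as a measurable reduction in uncertainty." As a small, hypothetical illustration of what that measurement looks like - an invented scenario, not an example from the lecture - the following computes Shannon entropy before and after an observation:

```python
import math

def entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2 p)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical scenario: guessing which of 8 equally likely shelves holds a book.
prior = [1 / 8] * 8                        # H = 3 bits of uncertainty
posterior = [0.5, 0.5, 0, 0, 0, 0, 0, 0]   # a hint narrows it to 2 shelves -> 1 bit

information_gained = entropy(prior) - entropy(posterior)
print(information_gained)  # 2.0 bits: the hint reduced our uncertainty by 2 bits
```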
-
Colomb, R.M.: Information spaces : the architecture of cyberspace (2002)
0.00 = 0.0011959607 (ClassicSimilarity weight of term "a" in doc 262: tf=4.0, idf=1.153047 [docFreq=37942, maxDocs=44218], fieldNorm=0.0390625, queryNorm=0.046056706, coord=1/2 × 1/2)
- Abstract
- The Architecture of Cyberspace is aimed at students taking information management as a minor in their course, as well as those who manage document collections but who are not professional librarians. The first part of this book looks at how users find documents and the problems they have; the second part discusses how to manage the information space using various tools such as classification and controlled vocabularies. It also explores the general issues of publishing, including legal considerations, as well as the main issues of creating and managing archives. Supported by exercises and discussion questions at the end of each chapter, the book includes some sample assignments suitable for use with students of this subject. A glossary is also provided to help readers understand the specialised vocabulary and the key concepts in the design and assessment of information spaces.
-
Grigonyte, G.: Building and evaluating domain ontologies : NLP contributions (2010)
0.00 = 0.0011839407 (ClassicSimilarity weight of term "a" in doc 481: tf=2.0, idf=1.153047 [docFreq=37942, maxDocs=44218], fieldNorm=0.0546875, queryNorm=0.046056706, coord=1/2 × 1/2)
- Abstract
- An ontology is a knowledge representation structure made up of concepts and their interrelations. It represents shared understanding delineated by some domain. The building of an ontology can be addressed from the perspective of natural language processing. This thesis discusses the validity and theoretical background of knowledge acquisition from natural language. It also presents the theoretical and experimental framework for NLP-driven ontology building and evaluation tasks.
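The abstract does not spell out the thesis's NLP pipeline, so the following is only a generic, hypothetical sketch of one common first step in NLP-driven ontology building - extracting candidate domain terms by contrasting corpus frequencies with a reference corpus - and not the author's actual method:

```python
from collections import Counter

def candidate_terms(domain_tokens, reference_tokens, min_ratio=3.0, min_count=5):
    """Rough domain-term extraction: keep words that are markedly more frequent
    in the domain corpus than in a general reference corpus.
    (Hypothetical illustration only; not the method described in the thesis.)"""
    domain = Counter(domain_tokens)
    reference = Counter(reference_tokens)
    d_total = sum(domain.values()) or 1
    r_total = sum(reference.values()) or 1
    terms = []
    for word, count in domain.items():
        if count < min_count:
            continue
        domain_rate = count / d_total
        reference_rate = (reference[word] + 1) / r_total  # add-one smoothing
        if domain_rate / reference_rate >= min_ratio:
            terms.append(word)
    return sorted(terms)
```

Such candidate terms would then be filtered, related to one another, and evaluated before being accepted as ontology concepts.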
-
Spitta, T.: Informationswirtschaft : eine Einführung (2006)
0.00 = 9.4548997E-4 (ClassicSimilarity weight of term "a" in doc 636: tf=10.0, idf=1.153047 [docFreq=37942, maxDocs=44218], fieldNorm=0.01953125, queryNorm=0.046056706, coord=1/2 × 1/2)
- Footnote
- Instead, as a first approximation he prefers A. Endres' definition, which is also accepted by many business informatics scholars - and by library and information professionals as well: "Information", according to Spitta, "is interpretable news - that is, messages linked with meaning and usually new - that a recipient judges to be useful for pursuing his goals." (p. 44, emphasis in the original) Spitta cannot stop there, however, since he must establish the connection to computer science and, above all, to business informatics and information management. A. Endres defines information as a triple I = (A*, S, K), "where A* (...) denotes a set of words (...) over an alphabet A, S a set of symbols, and K a context" (p. 45). "Information", according to Spitta, would then be "a message over a defined alphabet and other symbols that is new and relevant to the recipient and whose context the recipient knows." (p. 45) In business practice, however, several actors are involved, so it makes sense to introduce "data stores as buffers for messages" (p. 46). What is still missing is a definition of knowledge, so that business processes can be modelled with computer support. "Because of its implicit components, knowledge is just as individualized as information. Explicit knowledge is based on data. Implicit knowledge can be made explicit if a generally understandable formalism for describing it is available. The difference from information lies in the competence to act and in the multiple usability of the underlying data, which in general can be (individual) information only once." (p. 50, emphasis in the original) Chapter 5 focuses on the "contents of business data", dealing among other things with master data (such as fixed assets), transaction data (such as transactions subject to recording obligations), and derived data (such as management information). Chapter 6, "The structure of business data", covers various data models, such as the relational model, the graphical object model, and the procedural model for data modelling. Chapter 7 deals with "application systems", i.e. business information systems and the ways in which they can be structured.
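Endres' triple I = (A*, S, K) quoted above lends itself to a direct data-structure reading. A minimal sketch of that reading follows; the example values are assumptions for illustration, not content from the book:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Information:
    """Endres' triple I = (A*, S, K): words over an alphabet, symbols, context."""
    words: tuple[str, ...]    # A*: the message, built from words over some alphabet A
    symbols: frozenset[str]   # S : additional symbols used in the message
    context: str              # K : the context the recipient must already know

# Hypothetical example of a business message in this model:
invoice_note = Information(
    words=("invoice", "4711", "paid"),
    symbols=frozenset({"€", "%"}),
    context="accounts-receivable ledger of the receiving firm",
)
```

The "data stores as buffers for messages" mentioned in the review would then hold such triples until a recipient with the matching context retrieves them.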
-
Widén-Wulff, G.: ¬The challenges of knowledge sharing in practice : a social approach (2007)
0.00 = 9.4548997E-4 (ClassicSimilarity weight of term "a" in doc 727: tf=10.0, idf=1.153047 [docFreq=37942, maxDocs=44218], fieldNorm=0.01953125, queryNorm=0.046056706, coord=1/2 × 1/2)
- Abstract
- This book looks at the key skills that are required in organizations in the information-intensive society; it also examines the power of information behaviour in the construction of different kinds of shared knowledge and social identity in a group. The book introduces the different dimensions of social capital, structural and cognitive, and looks at the relational aspects of information behaviour in organizations. It analyses experiences from two case studies - in the financial and biotechnology industries - in order to gain additional insights into how the internal organizational environment should be designed to support the development of the organization's intellectual capital. Key features: (1) introduces social capital dimensions to the knowledge management framework; (2) provides empirical work on the new combination of social capital and organizational information behaviour; (3) presents two different information sharing practices, a claims handling unit (routine-based work) and a biotechnology firm (expert work); (4) develops social capital measures into qualitative information research; (5) illustrates the importance of social aspects in ma ... The author has worked as a visiting researcher at Napier University, Edinburgh, 2004-05; her teaching and research concern information seeking, information management in business organizations, and aspects of social capital and knowledge sharing in groups and organizations, and she has published several articles and papers in these areas. Readership: the book is aimed at academics and students at all levels in library and information science, as well as information management and knowledge management practitioners and managers interested in managing information and knowledge effectively. Contents: Part I: Theories of information sharing (Information sharing in context; Patterns of sharing - enablers and barriers; Social navigation). Part II: Two practices in information sharing (Introducing the two cases; Claims handlers; Expert organisation). Part III: Insights into information, knowledge sharing and social capital (Dimensions of social capital in the two cases; Social capital and sharing - building structures for knowledge sharing and its management; Importance of the awareness of social capital in connection with information and knowledge sharing in today's companies).
-
Weller, K.: Knowledge representation in the Social Semantic Web (2010)
0.00 = 8.371725E-4 (ClassicSimilarity weight of term "a" in doc 4515: tf=4.0, idf=1.153047 [docFreq=37942, maxDocs=44218], fieldNorm=0.02734375, queryNorm=0.046056706, coord=1/2 × 1/2)
- Abstract
- The main purpose of this book is to sum up the vital and highly topical research issue of knowledge representation on the Web and to discuss novel solutions that combine the benefits of folksonomies and Web 2.0 approaches with ontologies and semantic technologies. The book contains an overview of knowledge representation approaches past, present, and future; an introduction to ontologies; a treatment of Web indexing; and, above all, novel approaches to developing ontologies. It combines aspects of knowledge representation for both the Semantic Web (ontologies) and the Web 2.0 (folksonomies); currently there is no monographic book that provides a combined overview of these topics with a focus on using knowledge representation methods for document indexing purposes. To that end, considerations from classical librarians' interests in knowledge representation (thesauri, classification schemes, etc.) are included, which are not part of most other books, whose background lies more strongly in computer science.
-
Manning, C.D.; Raghavan, P.; Schütze, H.: Introduction to information retrieval (2008)
0.00 = 6.765375E-4 (ClassicSimilarity weight of term "a" in doc 4041: tf=2.0, idf=1.153047 [docFreq=37942, maxDocs=44218], fieldNorm=0.03125, queryNorm=0.046056706, coord=1/2 × 1/2)
- Content
- Contents: Boolean retrieval - The term vocabulary & postings lists - Dictionaries and tolerant retrieval - Index construction - Index compression - Scoring, term weighting & the vector space model - Computing scores in a complete search system - Evaluation in information retrieval - Relevance feedback & query expansion - XML retrieval - Probabilistic information retrieval - Language models for information retrieval - Text classification & Naive Bayes - Vector space classification - Support vector machines & machine learning on documents - Flat clustering - Hierarchical clustering - Matrix decompositions & latent semantic indexing - Web search basics - Web crawling and indexes - Link analysis. See the digital version at: http://nlp.stanford.edu/IR-book/pdf/irbookprint.pdf.
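One of the chapters listed, "Scoring, term weighting & the vector space model", covers the same machinery behind the ClassicSimilarity figures shown with the entries above. The following is a brief, generic sketch of tf-idf weighting and cosine ranking - an illustration of the topic, not code from the book, and with an invented toy corpus:

```python
import math
from collections import Counter

def tfidf(doc, df, n_docs):
    """Log-scaled tf times smoothed idf weights for one bag of words."""
    tf = Counter(doc)
    return {t: (1 + math.log(c)) * math.log(1 + n_docs / df[t])
            for t, c in tf.items() if t in df}

def cosine(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Toy corpus (hypothetical), one bag of words per document:
docs = [
    ["boolean", "retrieval", "postings", "index"],
    ["vector", "space", "model", "term", "weighting"],
    ["language", "models", "for", "information", "retrieval"],
]
df = Counter(t for d in docs for t in set(d))           # document frequencies
doc_vecs = [tfidf(d, df, len(docs)) for d in docs]
query_vec = tfidf(["vector", "space", "retrieval"], df, len(docs))

ranking = sorted(range(len(docs)),
                 key=lambda i: cosine(query_vec, doc_vecs[i]), reverse=True)
print(ranking)  # the vector-space document ranks first for this query
```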
-
Web 2.0 in der Unternehmenspraxis : Grundlagen, Fallstudien und Trends zum Einsatz von Social-Software (2009)
0.00 = 4.2283593E-4 (ClassicSimilarity weight of term "a" in doc 2917: tf=2.0, idf=1.153047 [docFreq=37942, maxDocs=44218], fieldNorm=0.01953125, queryNorm=0.046056706, coord=1/2 × 1/2)
- Editor
- Back, A. et al.