Search (34 results, page 2 of 2)

  • classification_ss:"06.74 Informationssysteme"
  • type_ss:"m"
  1. Rosenfeld, L.; Morville, P.: Information architecture for the World Wide Web : designing large-scale Web sites (1998) 0.01
    0.007442072 = product of:
      0.055815537 = sum of:
        0.051586136 = weight(_text_:allgemeines in 493) [ClassicSimilarity], result of:
          0.051586136 = score(doc=493,freq=4.0), product of:
            0.16533206 = queryWeight, product of:
              5.705423 = idf(docFreq=399, maxDocs=44218)
              0.028978055 = queryNorm
            0.31201532 = fieldWeight in 493, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.705423 = idf(docFreq=399, maxDocs=44218)
              0.02734375 = fieldNorm(doc=493)
        0.0042293994 = product of:
          0.008458799 = sum of:
            0.008458799 = weight(_text_:information in 493) [ClassicSimilarity], result of:
              0.008458799 = score(doc=493,freq=12.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.16628155 = fieldWeight in 493, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=493)
          0.5 = coord(1/2)
      0.13333334 = coord(2/15)
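     The score shown for each hit comes from Lucene's ClassicSimilarity (TF-IDF), and the indented tree above is its explain output. As a worked check, the following minimal Python sketch reproduces the "allgemeines" clause of that tree from the constants it lists; the formulas are the standard ClassicSimilarity definitions, and the variable names are illustrative rather than taken from the catalogue software itself.

       import math

       # Minimal sketch of Lucene ClassicSimilarity term scoring, reproducing
       # the "allgemeines" clause of the explain tree above. All constants
       # (docFreq, maxDocs, queryNorm, fieldNorm, freq) are copied from it.

       def idf(doc_freq, max_docs):
           # idf(docFreq=399, maxDocs=44218) = 1 + ln(maxDocs / (docFreq + 1))
           return 1.0 + math.log(max_docs / (doc_freq + 1))

       def tf(freq):
           # tf(freq) = sqrt(freq), so tf(4.0) = 2.0
           return math.sqrt(freq)

       query_norm = 0.028978055              # queryNorm
       field_norm = 0.02734375               # fieldNorm(doc=493)
       term_idf = idf(399, 44218)            # ~5.705423

       query_weight = term_idf * query_norm            # ~0.16533206
       field_weight = tf(4.0) * term_idf * field_norm  # ~0.31201532
       term_score = query_weight * field_weight        # ~0.051586136

       # coord(2/15): only 2 of 15 query clauses matched document 493, so the
       # summed clause scores are scaled by 2/15 to give the final ~0.00744.
       print(term_idf, query_weight, field_weight, term_score)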
    
    Abstract
    Some web sites "work" and some don't. Good web site consultants know that you can't just jump in and start writing HTML, the same way you can't build a house by just pouring a foundation and putting up some walls. You need to know who will be using the site, and what they'll be using it for. You need some idea of what you'd like to draw their attention to during their visit. Overall, you need a strong, cohesive vision for the site that makes it both distinctive and usable. Information Architecture for the World Wide Web is about applying the principles of architecture and library science to web site design. Each web site is like a public building, available for tourists and regulars alike to breeze through at their leisure. The job of the architect is to set up the framework for the site to make it comfortable and inviting for people to visit, relax in, and perhaps even return to someday. Most books on web development concentrate either on the aesthetics or the mechanics of the site. This book is about the framework that holds the two together. With this book, you learn how to design web sites and intranets that support growth, management, and ease of use. Special attention is given to: * The process behind architecting a large, complex site * Web site hierarchy design and organization Information Architecture for the World Wide Web is for webmasters, designers, and anyone else involved in building a web site. It's for novice web designers who, from the start, want to avoid the traps that result in poorly designed sites. It's for experienced web designers who have already created sites but realize that something "is missing" from their sites and want to improve them. It's for programmers and administrators who are comfortable with HTML, CGI, and Java but want to understand how to organize their web pages into a cohesive site. The authors are two of the principals of Argus Associates, a web consulting firm. At Argus, they have created information architectures for web sites and intranets of some of the largest companies in the United States, including Chrysler Corporation, Barron's, and Dow Chemical.
    Classification
    ST 200 Informatik / Monographien / Vernetzung, verteilte Systeme / Allgemeines, Netzmanagement
    LCSH
    Information storage and retrieval systems / Architecture
    RVK
    ST 200 Informatik / Monographien / Vernetzung, verteilte Systeme / Allgemeines, Netzmanagement
    Subject
    Information storage and retrieval systems / Architecture
  2. Lohmann, H.: KASCADE: Dokumentanreicherung und automatische Inhaltserschließung : Projektbericht und Ergebnisse des Retrievaltests (2000) 0.00
    0.0027598047 = product of:
      0.020698534 = sum of:
        0.01825669 = weight(_text_:und in 494) [ClassicSimilarity], result of:
          0.01825669 = score(doc=494,freq=22.0), product of:
            0.06422601 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.028978055 = queryNorm
            0.28425696 = fieldWeight in 494, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02734375 = fieldNorm(doc=494)
        0.0024418447 = product of:
          0.0048836893 = sum of:
            0.0048836893 = weight(_text_:information in 494) [ClassicSimilarity], result of:
              0.0048836893 = score(doc=494,freq=4.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.0960027 = fieldWeight in 494, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=494)
          0.5 = coord(1/2)
      0.13333334 = coord(2/15)
    
    Abstract
     The test showed that enriching library bibliographic records with additional content-related data leads to an impressive improvement in search results. Document enrichment should therefore be pursued further as a goal of library efforts to improve OPAC retrieval. The route taken in the project, scanning the tables of contents, nevertheless proved to be of little practical value. The scanning procedure did produce good results, and the text-recognition software worked very reliably. Scanning also makes it possible to link the image file created in the process to the title record in the OPAC and thus give users an aid for judging result sets. The work on building the test database, however, led to the realization that enrichment by means of scanning is technically extremely problematic and demands an effort that cannot be foreseen and, in the end, cannot be justified. This method of enrichment can therefore not be recommended for practical use.
     Apart from these considerations, further prerequisites would have to be met before the KASCADE developments could be put to practical use. First, the processing workflows themselves would need to be optimized and streamlined. The subprograms under KasKoll should be integrated into a single compact program. The sorting steps could be simplified by transferring the descriptors into a relational database. Ultimately, however, these points mainly affect the length of the machine runs, which plays only a subordinate role in the question of implementation costs. The interface for controlling the procedure should also be improved: some of the programs already run under a menu-driven Windows interface (Kasadew), and this should be achieved for all parts of the procedure. Finally, it remains to be clarified under which conditions the weighting procedure can run in production.
     Because the values of all descriptors that have already been weighted can change with every document added to the collection being weighted, the frequency distribution of each base form would, in principle, have to be recalculated after every change in the document collection. Online updating of the collection therefore makes little sense. In practice, recalculation could be carried out at certain intervals on a dump of the OPAC holdings, independently of the actual operation of the OPAC; this would also be sufficient because the underlying measures are based on relative frequencies. Only a small delay in the availability of the current weights would result, and the time factor would play a subordinate role, since an offline weighting run would merely have to be completed by the next update. In addition, for the period between two OPAC updates, standard weights could be used for the terms contained in new acquisitions, as far as those terms already occur in the collection. With appropriate optimization and streamlining of the SELIX workflows, use of the weights on the retrieval side for ranking the documents returned, and integration of the THEAS component, the procedure can be developed into an effective instrument for improving retrieval effectiveness.
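     The offline, interval-based recomputation described above can be pictured with a small sketch. The following Python fragment only illustrates that batch idea on a dump of the holdings, using plain relative frequencies as stand-in weights; it is not the SELIX/KasKoll procedure, and all names and data in it are hypothetical.

       from collections import Counter

       # Illustrative batch weighting run over a dump of the OPAC holdings:
       # the stand-in weight of a descriptor is its relative frequency across
       # the whole collection, so adding any document can change every weight,
       # which is why recomputation happens offline at intervals.

       def weighting_run(documents):
           freq = Counter(term for doc in documents for term in doc)
           total = sum(freq.values())
           return {term: count / total for term, count in freq.items()}

       # Offline run on yesterday's dump (hypothetical toy data).
       dump = [["online-katalog", "indexierung"], ["indexierung", "scanning"]]
       weights = weighting_run(dump)

       # Between two runs, new acquisitions reuse the weights already computed
       # for terms that occur in the existing collection ("standard weights").
       new_doc = ["indexierung", "retrieval"]
       standard_weights = {t: weights[t] for t in new_doc if t in weights}
       print(weights, standard_weights)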
    Footnote
    Zugl.: Köln, Fachhochsch., Fachbereich Bibliotheks- und Informationswesen, Hausarbeit
    Imprint
    Düsseldorf : Universitäts- und Landesbibliothek
    RSWK
    Online-Katalog / Automatische Indexierung / Inhaltsverzeichnis / Scanning / Information Retrieval / Projekt
    Series
    Schriften der Universitäts- und Landesbibliothek Düsseldorf; 31
    Subject
    Online-Katalog / Automatische Indexierung / Inhaltsverzeichnis / Scanning / Information Retrieval / Projekt
  3. Gugerli, D.: Suchmaschinen : die Welt als Datenbank (2009) 0.00
    0.0027445643 = product of:
      0.020584231 = sum of:
        0.017793551 = weight(_text_:und in 1160) [ClassicSimilarity], result of:
          0.017793551 = score(doc=1160,freq=16.0), product of:
            0.06422601 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.028978055 = queryNorm
            0.27704588 = fieldWeight in 1160, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.03125 = fieldNorm(doc=1160)
        0.0027906797 = product of:
          0.0055813594 = sum of:
            0.0055813594 = weight(_text_:information in 1160) [ClassicSimilarity], result of:
              0.0055813594 = score(doc=1160,freq=4.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.10971737 = fieldWeight in 1160, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1160)
          0.5 = coord(1/2)
      0.13333334 = coord(2/15)
    
    Abstract
     Suddenly, this overview: billions of pages, searched in fractions of a second, neatly displayed as a list of hits, sorted by rank and name as a matter of course. Google shapes the routines of everyday life and yet is not the search engine as such. Outside the World Wide Web, too, there are countless technically sophisticated search procedures. The present self-evidence of the one search engine makes it easy to overlook that search engines create a conflict of interest between those who want to deploy them and those on whom they are deployed. Their precarious status in the field of tension between overview and surveillance is suppressed. In four case studies, David Gugerli traces the development of the search engine, from the early television quiz shows, from Robert Lembke's entertainment show "Was bin ich?", via Eduard Zimmermann's manhunt program "Aktenzeichen XY" and Horst Herold's "Kybernetik der Polizei" (cybernetics of the police), to the development of the relational database that began with Ted Codd. While Lembke was oriented towards establishing normality, Zimmermann looked for deviance, Herold for patterns, and Codd for the universally valid search and query language for knowledge holdings brought into form, which since the mid-1960s have been called databases. "The history of the search engine is an eminently political one. Search engines can be linked to hopes of fundamental democratization and informational emancipation just as easily as to horror visions of an Orwellian surveillance state that commands a technocratic monopoly on knowledge."
    LCSH
    Information society
    Subject
    Information society
  4. Conner-Sax, K.; Krol, E.: The whole Internet : the next generation (1999) 0.00
    0.0015026834 = product of:
      0.022540249 = sum of:
        0.022540249 = sum of:
          0.006835742 = weight(_text_:information in 1448) [ClassicSimilarity], result of:
            0.006835742 = score(doc=1448,freq=6.0), product of:
              0.050870337 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.028978055 = queryNorm
              0.1343758 = fieldWeight in 1448, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.03125 = fieldNorm(doc=1448)
          0.015704507 = weight(_text_:22 in 1448) [ClassicSimilarity], result of:
            0.015704507 = score(doc=1448,freq=2.0), product of:
              0.101476215 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.028978055 = queryNorm
              0.15476047 = fieldWeight in 1448, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=1448)
      0.06666667 = coord(1/15)
    
    Abstract
    For a snapshot of something that is mutating as quickly as the Internet, The Whole Internet: The Next Generation exhibits remarkable comprehensiveness and accuracy. It's a good panoramic shot of Web sites, Usenet newsgroups, e-mail, mailing lists, chat software, electronic commerce, and the communities that have begun to emerge around all of these. This is the book to buy if you have a handle on certain aspects of the Internet experience--e-mail and Web surfing, for example--but want to learn what else the global network has to offer--say, Web banking or mailing-list management. The authors clearly have seen a thing or two online and are able to share their experiences entertainingly and with clarity. However, they commit the mistake of misidentifying an Amazon.com book review as a publisher's synopsis of a book. Aside from that transgression, The Whole Internet presents detailed information on much of the Internet. In most cases, coverage explains what something (online stock trading, free homepage sites, whatever) is all about and then provides you with enough how-to information to let you start exploring on your own. Coverage ranges from the super-basic (how to surf) to the fairly complex (sharing an Internet connection among several home computers on a network). Along the way, readers get insight into buying, selling, meeting, relating, and doing most everything else on the Internet. While other books explain the first steps into the Internet community with more graphics, this one will remain useful to the newcomer long after he or she has become comfortable using the Internet.
    Content
    Topics covered: Basic Internet connectivity, Internet software, mailing lists, newsgroups, netiquette, personal information security, shopping, auctions, games, basic Web publishing with HTML, and advanced home connectivity with local area networking.
    Footnote
    Rez. in: Internet Professionell. 2000, H.2, S.22
  5. Hildebrand, J.: Internet: Ratgeber für Lehrer (2000) 0.00
    0.001482796 = product of:
      0.022241939 = sum of:
        0.022241939 = weight(_text_:und in 4839) [ClassicSimilarity], result of:
          0.022241939 = score(doc=4839,freq=4.0), product of:
            0.06422601 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.028978055 = queryNorm
            0.34630734 = fieldWeight in 4839, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.078125 = fieldNorm(doc=4839)
      0.06666667 = coord(1/15)
    
    BK
    81.68 Computereinsatz in Unterricht und Ausbildung
    Classification
    81.68 Computereinsatz in Unterricht und Ausbildung
  6. Cole, C.: Information need : a theory connecting information search to knowledge formation (2012) 0.00
    0.001116272 = product of:
      0.016744079 = sum of:
        0.016744079 = product of:
          0.033488158 = sum of:
            0.033488158 = weight(_text_:information in 4985) [ClassicSimilarity], result of:
              0.033488158 = score(doc=4985,freq=64.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.6583042 = fieldWeight in 4985, product of:
                  8.0 = tf(freq=64.0), with freq of:
                    64.0 = termFreq=64.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4985)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Content
     Contents: The importance of information need -- The history of information need -- The framework for our discussion -- Modeling the user in information search -- Information seeking's conceptualization of information need during information search -- Information use -- Adaptation : internal information flows and knowledge generation -- A theory of information need -- How information need works -- The user's situation in the pre-focus search -- The situation of user's information need in pre-focus information search -- The selection concept -- A review of the user's pre-focus information search -- How information need works in a focusing search -- Circles 1 to 5 : how information need works -- Corroborating research -- Applying information need -- The astrolabe : an information system for stage 3 information exploration -- Conclusion.
    LCSH
    Information behavior
    Information retrieval
    Information storage and retrieval systems
    Human information processing
    Information theory
    RSWK
    Informationsverhalten / Information Retrieval / Informationstheorie
    Subject
    Informationsverhalten / Information Retrieval / Informationstheorie
    Information behavior
    Information retrieval
    Information storage and retrieval systems
    Human information processing
    Information theory
  7. Rogers, R.: Digital methods (2013) 0.00
    8.387961E-4 = product of:
      0.012581941 = sum of:
        0.012581941 = weight(_text_:und in 2354) [ClassicSimilarity], result of:
          0.012581941 = score(doc=2354,freq=8.0), product of:
            0.06422601 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.028978055 = queryNorm
            0.19590102 = fieldWeight in 2354, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.03125 = fieldNorm(doc=2354)
      0.06666667 = coord(1/15)
    
    BK
    54.08 Informatik in Beziehung zu Mensch und Gesellschaft
    Classification
    54.08 Informatik in Beziehung zu Mensch und Gesellschaft
    RSWK
    Informations- und Dokumentationswissenschaft / Internet / Methodologie
    Subject
    Informations- und Dokumentationswissenschaft / Internet / Methodologie
  8. Golub, K.: Subject access to information : an interdisciplinary approach (2015) 0.00
    5.9199263E-4 = product of:
      0.008879889 = sum of:
        0.008879889 = product of:
          0.017759778 = sum of:
            0.017759778 = weight(_text_:information in 134) [ClassicSimilarity], result of:
              0.017759778 = score(doc=134,freq=18.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.34911853 = fieldWeight in 134, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=134)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Abstract
     Drawing on the research of experts from the fields of computing and library science, this ground-breaking work will show you how to combine two very different approaches to classification to create more effective, user-friendly information-retrieval systems.
     * Provides an interdisciplinary overview of current and potential approaches to organizing information by subject
     * Covers both pure computer science and pure library science topics in easy-to-understand language accessible to audiences from both disciplines
     * Reviews technological standards for representation, storage, and retrieval of varied knowledge-organization systems and their constituent elements
     * Suggests a collaborative approach that will reduce duplicate efforts and make it easier to find solutions to practical problems.
    Content
     Organizing information by subject -- Knowledge organization systems (KOSs) -- Technological standards -- Automated tools for subject information organization : selected topics -- Perspectives for the future.
    LCSH
    Information organization
    Information storage and retrieval systems
    Subject
    Information organization
    Information storage and retrieval systems
  9. Rijsbergen, K. van: The geometry of information retrieval (2004) 0.00
    5.88327E-4 = product of:
      0.008824904 = sum of:
        0.008824904 = product of:
          0.017649809 = sum of:
            0.017649809 = weight(_text_:information in 5459) [ClassicSimilarity], result of:
              0.017649809 = score(doc=5459,freq=10.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.3469568 = fieldWeight in 5459, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5459)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    LCSH
    Information storage and retrieval systems / Mathematics
    RSWK
    Information Retrieval / Mengenlehre / Hilbert-Raum / Vektorraum / Aussagenlogik
    Subject
    Information Retrieval / Mengenlehre / Hilbert-Raum / Vektorraum / Aussagenlogik
    Information storage and retrieval systems / Mathematics
  10. Towards the Semantic Web : ontology-driven knowledge management (2004) 0.00
    3.2723622E-4 = product of:
      0.004908543 = sum of:
        0.004908543 = product of:
          0.009817086 = sum of:
            0.009817086 = weight(_text_:information in 4401) [ClassicSimilarity], result of:
              0.009817086 = score(doc=4401,freq=22.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.19298252 = fieldWeight in 4401, product of:
                  4.690416 = tf(freq=22.0), with freq of:
                    22.0 = termFreq=22.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=4401)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Abstract
     With the current changes driven by the expansion of the World Wide Web, this book uses a different approach from other books on the market: it applies ontologies to electronically available information to improve the quality of knowledge management in large and distributed organizations. Ontologies are formal theories supporting knowledge sharing and reuse. They can be used to explicitly represent the semantics of semi-structured information, enabling sophisticated automatic support for acquiring, maintaining and accessing information. Methodology and tools are developed for intelligent access to large volumes of semi-structured and textual information sources in intranet-, extranet- and internet-based environments, employing the full power of ontologies to support knowledge management from the perspectives of both the information client and the information provider. The book aims to support efficient and effective knowledge management and focuses on weakly structured online information sources. It is aimed primarily at researchers in the area of knowledge management and information retrieval and will also be a useful reference for students in computer science at the postgraduate level and for business managers who aim to improve their corporations' information infrastructure. The Semantic Web is a very important initiative affecting the future of the WWW that is currently generating huge interest. The book covers several highly significant contributions to the Semantic Web research effort, including a new language for defining ontologies, several novel software tools and a coherent methodology for the application of the tools for business advantage. It also provides three case studies which give examples of the real benefits to be derived from the adoption of Semantic-Web-based ontologies in "real world" situations. As such, the book is an excellent mixture of theory, tools and applications in an important area of WWW research.
     * Provides guidelines for introducing knowledge management concepts and tools into enterprises, to help knowledge providers present their knowledge efficiently and effectively.
     * Introduces an intelligent search tool that supports users in accessing information and a tool environment for maintenance, conversion and acquisition of information sources.
     * Discusses three large case studies which will help to develop the technology according to the actual needs of large and/or virtual organisations and will provide a testbed for evaluating tools and methods.
     The book is aimed at people with at least a good understanding of existing WWW technology and some level of technical understanding of the underpinning technologies (XML/RDF). It will be of interest to graduate students, academic and industrial researchers in the field, and the many industrial personnel who are tracking WWW technology developments in order to understand the business implications. It could also be used to support undergraduate courses in the area but is not itself an introductory text.
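     As a concrete, hedged illustration of the abstract's central idea (explicitly representing the semantics of semi-structured information with an ontology), the following Python sketch uses the rdflib library to declare one class, one property, and one annotated resource in RDF/RDFS. The namespace and all names are invented for the example; none of the book's own languages or tools are reproduced here.

       from rdflib import RDF, RDFS, Graph, Literal, Namespace, URIRef

       # Tiny ontology-style sketch: a Document class, a hasTopic property,
       # and one semi-structured resource annotated against them. The "ex"
       # namespace and every name in it are hypothetical.
       EX = Namespace("http://example.org/km#")
       g = Graph()
       g.bind("ex", EX)

       # Ontology part: vocabulary that makes the semantics explicit.
       g.add((EX.Document, RDF.type, RDFS.Class))
       g.add((EX.hasTopic, RDF.type, RDF.Property))
       g.add((EX.hasTopic, RDFS.domain, EX.Document))

       # Instance part: one weakly structured item described with that vocabulary.
       doc = URIRef("http://example.org/km#report42")
       g.add((doc, RDF.type, EX.Document))
       g.add((doc, EX.hasTopic, Literal("knowledge management")))

       print(g.serialize(format="turtle"))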
  11. Intner, S.S.; Lazinger, S.S.; Weihs, J.: Metadata and its impact on libraries (2005) 0.00
    2.79068E-4 = product of:
      0.0041860198 = sum of:
        0.0041860198 = product of:
          0.0083720395 = sum of:
            0.0083720395 = weight(_text_:information in 339) [ClassicSimilarity], result of:
              0.0083720395 = score(doc=339,freq=36.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.16457605 = fieldWeight in 339, product of:
                  6.0 = tf(freq=36.0), with freq of:
                    36.0 = termFreq=36.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.015625 = fieldNorm(doc=339)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Content
    What is metadata? - Metadata schemas & their relationships to particular communities - Library and information-related metadata schemas - Creating library metadata for monographic materials - Creating library metadata for continuing materials - Integrating library metadata into local cataloging and bibliographic - databases - Digital collections/digital libraries - Archiving & preserving digital materials - Impact of digital resources on library services - Future possibilities
    Footnote
    Rez. in: JASIST. 58(2007) no.6., S.909-910 (A.D. Petrou): "A division in metadata definitions for physical objects vs. those for digital resources offered in Chapter 1 is punctuated by the use of broader, more inclusive metadata definitions, such as data about data as well as with the inclusion of more specific metadata definitions intended for networked resources. Intertwined with the book's subject matter, which is to "distinguish traditional cataloguing from metadata activity" (5), the authors' chosen metadata definition is also detailed on page 5 as follows: Thus while granting the validity of the inclusive definition, we concentrate primarily on metadata as it is most commonly thought of both inside and outside of the library community, as "structured information used to find, access, use and manage information resources primarily in a digital environment." (International Encyclopedia of Information and Library Science, 2003) Metadata principles discussed by the authors include modularity, extensibility, refinement and multilingualism. The latter set is followed by seven misconceptions about metadata. Two types of metadata discussed are automatically generated indexes and manually created records. In terms of categories of metadata, the authors present three sets of them as follows: descriptive, structural, and administrative metadata. Chapter 2 focuses on metadata for communities of practice, and is a prelude to content in Chapter 3 where metadata applications, use, and development are presented from the perspective of libraries. Chapter 2 discusses the emergence and impact of metadata on organization and access of online resources from the perspective of communities for which such standards exist and for the need for mapping one standard to another. Discussion focuses on metalanguages, such as Standard Generalized Markup Language (SGML) and eXtensible Markup Language (XML), "capable of embedding descriptive elements within the document markup itself' (25). This discussion falls under syntactic interoperability. For semantic interoperability, HTML and other mark-up languages, such as Text Encoding Initiative (TEI) and Computer Interchange of Museum Information (CIMI), are covered. For structural interoperability, Dublin Core's 15 metadata elements are grouped into three areas: content (title, subject, description, type, source, relation, and coverage), intellectual property (creator, publisher, contributor and rights), and instantiation (date, format, identifier, and language) for discussion.
    Other selected specialized metadata element sets or schemas, such as Government Information Locator Service (GILS), are presented. Attention is brought to the different sets of elements and the need for linking up these elements across metadata schemes from a semantic point of view. It is no surprise, then, that after the presentation of additional specialized sets of metadata from the educational community and the arts sector, attention is turned to the discussion of Crosswalks between metadata element sets or the mapping of one metadata standard to another. Finally, the five appendices detailing elements found in Dublin Core, GILS, ARIADNE versions 3 and 3. 1, and Categories for the Description of Works of Art are an excellent addition to this chapter's focus on metadata and communities of practice. Chapters 3-6 provide an up-to-date account of the use of metadata standards in Libraries from the point of view of a community of practice. Some of the content standards included in these four chapters are AACR2, Dewey Decimal Classification (DDC), and Library of Congress Subject Classification. In addition, uses of MARC along with planned implementations of the archival community's encoding scheme, EAD, are covered in detail. In a way, content in these chapters can be considered as a refresher course on the history, current state, importance, and usefulness of the above-mentioned standards in Libraries. Application of the standards is offered for various types of materials, such as monographic materials, continuing resources, and integrating library metadata into local catalogs and databases. A review of current digital library projects takes place in Chapter 7. While details about these projects tend to become out of date fast, the sections on issues and problems encountered in digital projects and successes and failures deserve any reader's close inspection. A suggested model is important enough to merit a specific mention below, in a short list format, as it encapsulates lessons learned from issues, problems, successes, and failures in digital projects. Before detailing the model, however, the various projects included in Chapter 7 should be mentioned. The projects are: Colorado Digitization Project, Cooperative Online Resource Catalog (an Office of Research project by OCLC, Inc.), California Digital Library, JSTOR, LC's National Digital Library Program and VARIATIONS.
    Chapter 8 discusses issues of archiving and preserving digital materials. The chapter reiterates, "What is the point of all of this if the resources identified and catalogued are not preserved?" (Gorman, 2003, p. 16). Discussion about preservation and related issues is organized in five sections that successively ask why, what, who, how, and how much of the plethora of digital materials should be archived and preserved. These are not easy questions because of media instability and technological obsolescence. Stakeholders in communities with diverse interests compete in terms of which community or representative of a community has an authoritative say in what and how much get archived and preserved. In discussing the above-mentioned questions, the authors once again provide valuable information and lessons from a number of initiatives in Europe, Australia, and from other global initiatives. The Draft Charter on the Preservation of the Digital Heritage and the Guidelines for the Preservation of Digital Heritage, both published by UNESCO, are discussed and some of the preservation principles from the Guidelines are listed. The existing diversity in administrative arrangements for these new projects and resources notwithstanding, the impact on content produced for online reserves through work done in digital projects and from the use of metadata and the impact on levels of reference services and the ensuing need for different models to train users and staff is undeniable. In terms of education and training, formal coursework, continuing education, and informal and on-the-job training are just some of the available options. The intensity in resources required for cataloguing digital materials, the questions over the quality of digital resources, and the threat of the new digital environment to the survival of the traditional library are all issues quoted by critics and others, however, who are concerned about a balance for planning and resources allocated for traditional or print-based resources and newer digital resources. A number of questions are asked as part of the book's conclusions in Chapter 10. Of these questions, one that touches on all of the rest and upon much of the book's content is the question: What does the future hold for metadata in libraries? Metadata standards are alive and well in many communities of practice, as Chapters 2-6 have demonstrated. The usefulness of metadata continues to be high and innovation in various elements should keep information professionals engaged for decades to come. There is no doubt that metadata have had a tremendous impact in how we organize information for access and in terms of who, how, when, and where contact is made with library services and collections online. Planning and commitment to a diversity of metadata to serve the plethora of needs in communities of practice are paramount for the continued success of many digital projects and for online preservation of our digital heritage."
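     The review above reproduces Dublin Core's grouping of its 15 elements into content, intellectual property and instantiation, and it discusses crosswalks that map one metadata element set onto another. The short Python sketch below encodes that grouping and a toy crosswalk as plain dictionaries; the local field names being mapped are invented examples, not elements of any real scheme.

       # Dublin Core's 15 elements in the three groups named in the review.
       DC_GROUPS = {
           "content": ["title", "subject", "description", "type",
                       "source", "relation", "coverage"],
           "intellectual_property": ["creator", "publisher", "contributor", "rights"],
           "instantiation": ["date", "format", "identifier", "language"],
       }

       # Toy crosswalk from a hypothetical local element set to Dublin Core.
       # Real crosswalks (e.g. MARC to DC) are far more involved; this only
       # illustrates the mapping idea the review describes.
       CROSSWALK = {"main_heading": "title", "author": "creator", "issued": "date"}

       def to_dublin_core(record):
           return {CROSSWALK[k]: v for k, v in record.items() if k in CROSSWALK}

       local_record = {"main_heading": "Metadata and its impact on libraries",
                       "author": "Intner, S.S.", "issued": "2005"}
       print(to_dublin_core(local_record))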
    LCSH
    Information organization
    Cataloging of electronic information resources
    Information storage and retrieval systems
    Electronic information resources / Management
    Series
    Library and information science text series
    Subject
    Information organization
    Cataloging of electronic information resources
    Information storage and retrieval systems
    Electronic information resources / Management
  12. Design and usability of digital libraries : case studies in the Asia-Pacific (2005) 0.00
    2.08005E-4 = product of:
      0.003120075 = sum of:
        0.003120075 = product of:
          0.00624015 = sum of:
            0.00624015 = weight(_text_:information in 93) [ClassicSimilarity], result of:
              0.00624015 = score(doc=93,freq=20.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.12266775 = fieldWeight in 93, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.015625 = fieldNorm(doc=93)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Footnote
    Even though each chapter is short, the entire book covers a vast amount of information. This book is meant to provide an introductory sampling of issues discovered through various case studies, not provide an in-depth report on each of them. The references included at the end of each chapter are particularly helpful because they lead to more information about issues that the particular case study raises. By including a list of references at the end of each chapter, the authors want to encourage interested readers to pursue more about the topics presented. This book clearly offers many opportunities to explore issues on the same topics further. The appendix at the end of the book also contains additional useful information that readers might want to consult if they are interested in finding out more about digital libraries. Selected resources are provided in the form of a list that includes such topics as journal special issues, digital library conference proceedings, and online databases. A key issue that this book brings up is how to include different cultural materials in digital libraries. For example, in chapter 16, the concerns and issues surrounding Maori heritage materials are introduced. The terms and concepts used when classifying Maori resources are so delicate that the meaning behind them can completely change with even a slight variation. Preserving other cultures correctly is important, and researchers need to consider the consequences of any errors made during digitization of resources. Another example illustrating the importance of including information about different cultures is presented in chapter 9. The authors talk about the various different languages used in the world and suggest ways to integrate them into information retrieval systems. As all digital library researchers know, the ideal system would allow all users to retrieve results in their own languages. The authors go on to discuss a few approaches that can be taken to assist with overcoming this challenge.
    Ultimately, the book emphasizes that universal access to a worldwide digital library is the common goal among all digital library designers. Being able to view the same information, no matter what format the material is in, is one of the next steps toward reaching this goal. This book also addresses various additional problems with designing and using digital libraries, such as pricing and costs, and the range of media types that currently exist. The writing styles differ from chapter to chapter because each is written by a different set of authors. In addition, the material in the chapters is presented quite diversely. For example, in chapter 5, the methodology section of the case study is explained in the form of mathematical equations, algorithms, and charts, and chapter 13 contains complex figures and diagrams, whereas on the other hand, chapter 16 is completely written in text. Although the different ways that the case studies are presented could be considered confusing to some, the entire book remains consistent and truly comes together as a whole because the chapters are organized so sensibly. Many figures, graphs, and tables are also provided throughout the chapters to guide readers visually. Particularly helpful are the sample screen shots of digital libraries in chapter 11. Here, readers can see exactly what would be seen when viewing a digital library catalog. In general, the language and style of the book are easy to understand, and any uncommon words and phrases are always clearly defined and explained. The authors mention that the book is primarily written for academics, college students, and practitioners who may want to learn more about the design and development of digital libraries. The authors do seem to target this audience because the language and writing style seem to be geared toward members of academia, although they may represent a wide variety of disciplines. As well, computer scientists and software developers who are interested in and have been researching digital libraries will find this book useful and applicable to their current research. In conclusion, this book provides a wide variation of case studies that prove to be informative to researchers interested in the development and future progress of digital libraries. In the information world that we live in today, digital libraries are becoming more and more prominent, and the case studies presented demonstrate that the vision for the future of digital libraries is to be able to include all types of materials, cultures, and languages within a single system. All in all, this book instills value to society and all members of the academic world can learn from it."
    Imprint
    Hershey, Pa. : Information Science Publ.
    LCSH
    Information storage and retrieval systems / Case studies
    Subject
    Information storage and retrieval systems / Case studies
  13. Linked data and user interaction : the road ahead (2015) 0.00
    1.6444239E-4 = product of:
      0.0024666358 = sum of:
        0.0024666358 = product of:
          0.0049332716 = sum of:
            0.0049332716 = weight(_text_:information in 2552) [ClassicSimilarity], result of:
              0.0049332716 = score(doc=2552,freq=2.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.09697737 = fieldWeight in 2552, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2552)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Abstract
     This collection of research papers provides extensive information on deploying services, concepts, and approaches for using open linked data from libraries and other cultural heritage institutions, with a special emphasis on how these institutions can create effective end-user interfaces using open linked data or other datasets. These papers are essential reading for anyone interested in user interface design or the semantic web.
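     Since the volume is about building end-user interfaces on top of open linked data, a minimal sketch of the consuming side may help. The Python fragment below queries a public SPARQL endpoint with the SPARQLWrapper library and prints a handful of labels; Wikidata is used only because it is a freely accessible endpoint, and the query is an assumption for illustration, not an example taken from the papers.

       from SPARQLWrapper import SPARQLWrapper, JSON

       # Minimal linked-data consumer for a user interface: fetch a few labels
       # from a public SPARQL endpoint; a library's own linked-data service
       # would be queried the same way.
       endpoint = SPARQLWrapper("https://query.wikidata.org/sparql")
       endpoint.setQuery("""
           SELECT ?item ?itemLabel WHERE {
             ?item wdt:P31 wd:Q7075 .      # instances of "library"
             SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
           } LIMIT 5
       """)
       endpoint.setReturnFormat(JSON)

       results = endpoint.query().convert()
       for row in results["results"]["bindings"]:
           print(row["itemLabel"]["value"])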
  14. Kantardzic, M.: Data mining : concepts, models, methods, and algorithms (2003) 0.00
    1.3155391E-4 = product of:
      0.0019733086 = sum of:
        0.0019733086 = product of:
          0.0039466172 = sum of:
            0.0039466172 = weight(_text_:information in 2291) [ClassicSimilarity], result of:
              0.0039466172 = score(doc=2291,freq=2.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.0775819 = fieldWeight in 2291, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2291)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Abstract
     This book offers a comprehensive introduction to the exploding field of data mining. We are surrounded by data, numerical and otherwise, that must be analyzed and processed to be converted into information that informs, instructs, answers, or otherwise aids understanding and decision-making. Due to the ever-increasing complexity and size of today's data sets, a new term, data mining, was created to describe the indirect, automatic data analysis techniques that utilize more complex and sophisticated tools than those which analysts used in the past to do mere data analysis. "Data Mining: Concepts, Models, Methods, and Algorithms" discusses data mining principles and then describes representative state-of-the-art methods and algorithms originating from different disciplines such as statistics, machine learning, neural networks, fuzzy logic, and evolutionary computation. Detailed algorithms are provided with necessary explanations and illustrative examples. This text offers guidance on how and when to use a particular software tool (and its companion data sets) from among the hundreds available when faced with a data set to mine. This allows analysts to create and perform their own data mining experiments using their knowledge of the methodologies and techniques provided. The book emphasizes the selection of appropriate methodologies and data analysis software, as well as parameter tuning. These critically important, qualitative decisions can only be made with the deeper understanding of parameter meaning and its role in the technique, which this book provides. Data mining is an exploding field, and this book offers much-needed guidance on selecting among the numerous analysis programs that are available.
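     The abstract emphasizes the selection of appropriate methodologies and parameter tuning. As a small, generic illustration of that workflow, and not an example taken from the book, the following Python sketch tunes a decision tree's parameters by cross-validated grid search with scikit-learn on synthetic data.

       from sklearn.datasets import make_classification
       from sklearn.model_selection import GridSearchCV
       from sklearn.tree import DecisionTreeClassifier

       # Generic parameter-tuning sketch: cross-validated grid search over a
       # decision tree's depth and split size on synthetic data. The point is
       # the tuning workflow, not this particular model or grid.
       X, y = make_classification(n_samples=400, n_features=10, random_state=0)

       search = GridSearchCV(
           DecisionTreeClassifier(random_state=0),
           param_grid={"max_depth": [3, 5, None], "min_samples_split": [2, 10]},
           cv=5,
       )
       search.fit(X, y)
       print(search.best_params_, round(search.best_score_, 3))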

Languages

  • e 20
  • d 16
