Search (7 results, page 1 of 1)

  • language_ss:"e"
  • theme_ss:"Internet"
  • theme_ss:"Metadaten"
  • year_i:[2000 TO 2010}
  1. Schroeder, K.: Persistent Identifiers im Kontext der Langzeitarchivierung : EPICUR auf dem 2. Bibliothekskongress in Leipzig (2004) 0.00
    0.0014528375 = product of:
      0.02179256 = sum of:
        0.02179256 = weight(_text_:und in 2787) [ClassicSimilarity], result of:
          0.02179256 = score(doc=2787,freq=24.0), product of:
            0.06422601 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.028978055 = queryNorm
            0.33931053 = fieldWeight in 2787, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.03125 = fieldNorm(doc=2787)
      0.06666667 = coord(1/15)
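    The tree above is Lucene's ClassicSimilarity (TF-IDF) explain output. As a sketch, the final score for this result can be reproduced from the constants it reports; the formulas (tf = sqrt(termFreq), idf = 1 + ln(maxDocs/(docFreq+1)), fieldWeight = tf · idf · fieldNorm, score = queryWeight · fieldWeight · coord) are Lucene's ClassicSimilarity, while the helper names are our own:

    ```python
    import math

    def idf(doc_freq, max_docs):
        # Lucene ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
        return 1.0 + math.log(max_docs / (doc_freq + 1))

    def field_weight(freq, doc_freq, max_docs, field_norm):
        # fieldWeight = tf * idf * fieldNorm, with tf = sqrt(termFreq)
        return math.sqrt(freq) * idf(doc_freq, max_docs) * field_norm

    # Constants taken from the explain tree for the term "und" in doc 2787:
    query_norm = 0.028978055
    qw = idf(13101, 44218) * query_norm                 # queryWeight ~ 0.06422601
    fw = field_weight(24.0, 13101, 44218, 0.03125)      # fieldWeight ~ 0.33931053
    score = qw * fw * (1.0 / 15.0)                      # coord(1/15) ~ 0.0014528375
    ```

    The tiny differences against the printed values come from Lucene computing in single-precision floats.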
    
    Abstract
    Electronic publishing is commonly associated with the attributes "fast, inexpensive, worldwide". But from the perspective of users and authors, is that enough to make an online publication permanently usable and reliably citable? Volatile Uniform Resource Locators (URLs) provide no mechanism by which net-based publications can be uniquely identified and remain findable at any time. Persistent Identifiers (PIs), such as Uniform Resource Names (URNs), offer a solution. For a persistent addressing scheme such as URNs to work in the long term, an infrastructure with institutional backing must be created. A key aspect in this context is the long-term preservation of the digital objects. Presenting and explaining the interfaces between long-term preservation and PIs, together with the associated activities and results of the EPICUR project, was the subject of Kathrin Schroeder's talk at this year's 2. Bibliothekskongress in Leipzig, in the workshop "Technische Aspekte der Langzeitarchivierung". The areas of PIs (location-independent, unique identifiers for digital objects) and long-term preservation (measures that serve to preserve digital objects permanently for posterity) are closely connected: PIs serve as a stable access mechanism for digital objects archived in a repository system. A repository system is an "(...)
    archive for digital objects in which people and systems cooperate as an 'organisation' with the task of preserving information and making it available to a defined user community." In a broader sense this also includes the infrastructure in front of the repository's ingest interface, which transfers digital objects from producers into the archive, and the infrastructure of the end-user environments behind the repository's delivery interface, in which the digital objects are to be used. In this setting, PIs are applied in the following areas: metadata; data exchange formats; automated delivery of objects into an archival system; the repository system itself; and, most importantly for end users (researchers and authors), the use of PIs as a stable access mechanism for an object. The following discusses, for each of these areas, the results of the EPICUR project and the activities of Die Deutsche Bibliothek.
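    The core mechanism the abstract describes can be sketched in a few lines: a URN stays stable in citations while the URL it maps to may change, and only the resolver is updated when an object moves. The URN and URLs below are made-up examples, not real registrations:

    ```python
    # Toy resolver table: persistent name -> current location.
    resolver = {
        "urn:nbn:de:example-12345": "https://archive.example.org/objects/12345",
    }

    def resolve(urn: str) -> str:
        """Map a persistent identifier to the object's current URL."""
        try:
            return resolver[urn]
        except KeyError:
            raise LookupError(f"unregistered URN: {urn}")

    # When the archive reorganizes, only the resolver entry changes;
    # every citation of the URN keeps working.
    resolver["urn:nbn:de:example-12345"] = "https://archive.example.org/v2/12345"
    ```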
  2. Özel, S.A.; Altingövde, I.S.; Ulusoy, Ö.; Özsoyoglu, G.; Özsoyoglu, Z.M.: Metadata-Based Modeling of Information Resources on the Web (2004) 0.00
    5.200125E-4 = product of:
      0.0078001874 = sum of:
        0.0078001874 = product of:
          0.015600375 = sum of:
            0.015600375 = weight(_text_:information in 2093) [ClassicSimilarity], result of:
              0.015600375 = score(doc=2093,freq=20.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.30666938 = fieldWeight in 2093, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2093)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Abstract
    This paper deals with the problem of modeling Web information resources using expert knowledge and personalized user information for improved Web searching capabilities. We propose a "Web information space" model, which is composed of Web-based information resources (HTML/XML [Hypertext Markup Language/Extensible Markup Language] documents on the Web), expert advice repositories (domain-expert-specified metadata for information resources), and personalized information about users (captured as user profiles that indicate users' preferences about experts as well as users' knowledge about topics). Expert advice, the heart of the Web information space model, is specified using topics and relationships among topics (called metalinks), along the lines of the recently proposed topic maps. Topics and metalinks constitute metadata that describe the contents of the underlying HTML/XML Web resources. The metadata specification process is semiautomated, and it exploits XML DTDs (Document Type Definition) to allow domain-expert guided mapping of DTD elements to topics and metalinks. The expert advice is stored in an object-relational database management system (DBMS). To demonstrate the practicality and usability of the proposed Web information space model, we created a prototype expert advice repository of more than one million topics/metalinks for the DBLP (Database and Logic Programming) Bibliography data set. We also present a query interface that provides sophisticated querying facilities for DBLP Bibliography resources using the expert advice repository.
    Source
    Journal of the American Society for Information Science and Technology. 55(2004) no.2, S.97-110
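    The topic/metalink metadata the abstract describes can be sketched as plain tuples rather than the paper's object-relational store; the topic names and relation labels below are illustrative, not taken from the DBLP repository:

    ```python
    # Topics and typed relationships ("metalinks") between them.
    topics = {"database systems", "query optimization", "XML"}

    # metalink: (source_topic, relation, target_topic)
    metalinks = [
        ("query optimization", "subtopic-of", "database systems"),
        ("XML", "related-to", "database systems"),
    ]

    def related(topic):
        """Return topics linked to `topic` by any metalink, in either direction."""
        out = set()
        for src, _, dst in metalinks:
            if src == topic:
                out.add(dst)
            if dst == topic:
                out.add(src)
        return out
    ```

    A query interface like the paper's would traverse such links to expand or constrain searches over the underlying bibliography records.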
  3. Crowston, K.; Kwasnik, B.H.: Can document-genre metadata improve information access to large digital collections? (2004) 0.00
    4.3507366E-4 = product of:
      0.0065261046 = sum of:
        0.0065261046 = product of:
          0.013052209 = sum of:
            0.013052209 = weight(_text_:information in 824) [ClassicSimilarity], result of:
              0.013052209 = score(doc=824,freq=14.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.256578 = fieldWeight in 824, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=824)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Abstract
    We discuss the issues of resolving the information-retrieval problem in large digital collections through the identification and use of document genres. Explicit identification of genre seems particularly important for such collections because any search usually retrieves documents with a diversity of genres that are undifferentiated by obvious clues as to their identity. Also, because most genres are characterized by both form and purpose, identifying the genre of a document provides information as to the document's purpose and its fit to the user's situation, which can be otherwise difficult to assess. We begin by outlining the possible role of genre identification in the information-retrieval process. Our assumption is that genre identification would enhance searching, first because we know that topic alone is not enough to define an information problem and, second, because search results containing genre information would be more easily understandable. Next, we discuss how information professionals have traditionally tackled the issues of representing genre in settings where topical representation is the norm. Finally, we address the issues of studying the efficacy of identifying genre in large digital collections. Because genre is often an implicit notion, studying it in a systematic way presents many problems. We outline a research protocol that would provide guidance for identifying Web document genres, for observing how genre is used in searching and evaluating search results, and finally for representing and visualizing genres.
  4. Howarth, L.C.: Metadata schemes for subject gateways (2003) 0.00
    3.9466174E-4 = product of:
      0.005919926 = sum of:
        0.005919926 = product of:
          0.011839852 = sum of:
            0.011839852 = weight(_text_:information in 1747) [ClassicSimilarity], result of:
              0.011839852 = score(doc=1747,freq=2.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.23274569 = fieldWeight in 1747, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1747)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Theme
    Information Gateway
  5. Hickey, T.R.: CORC : a system for gateway creation (2000) 0.00
    3.255793E-4 = product of:
      0.0048836893 = sum of:
        0.0048836893 = product of:
          0.009767379 = sum of:
            0.009767379 = weight(_text_:information in 4870) [ClassicSimilarity], result of:
              0.009767379 = score(doc=4870,freq=4.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.1920054 = fieldWeight in 4870, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4870)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Source
    Online information review. 24(2000) no.1, S.49-53
    Theme
    Information Gateway
  6. Aldana, J.F.; Gómez, A.C.; Moreno, N.; Nebro, A.J.; Roldán, M.M.: Metadata functionality for semantic Web integration (2003) 0.00
    2.941635E-4 = product of:
      0.004412452 = sum of:
        0.004412452 = product of:
          0.008824904 = sum of:
            0.008824904 = weight(_text_:information in 2731) [ClassicSimilarity], result of:
              0.008824904 = score(doc=2731,freq=10.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.1734784 = fieldWeight in 2731, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2731)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Abstract
    We propose an extension of a mediator architecture. This extension is oriented to ontology-driven data integration. In our architecture ontologies are not managed by an external component or service, but are integrated in the mediation layer. This approach implies rethinking the mediator design, but at the same time provides advantages from a database perspective. Some of these advantages include the application of optimization and evaluation techniques that use and combine information from all abstraction levels (physical schema, logical schema and semantic information defined by ontology). 1. Introduction Although the Web is probably the richest information repository in human history, users cannot specify what they want from it. Two major problems that arise in current search engines (Heflin, 2001) are: a) polysemy, when the same word is used with different meanings; b) synonymy, when two different words have the same meaning. Polysemy causes irrelevant information to be retrieved; synonymy causes useful documents to be lost. The lack of a capability to understand the context of the words and the relationships among required terms explains many of the lost and false results produced by search engines. The Semantic Web will bring structure to the meaningful content of Web pages, giving semantic relationships among terms and possibly avoiding the previous problems. Various proposals have appeared for meta-data representation and communication standards, and other services and tools that may eventually merge into the global Semantic Web (Berners-Lee, 2001). Hopefully, in the next few years we will see the universal adoption of open standards for representation and sharing of meta-information. In this environment, software agents roaming from page to page can readily carry out sophisticated tasks for users (Berners-Lee, 2001).
    In this context, ontologies can be seen as metadata that represent the semantics of data, providing a standard vocabulary for a knowledge domain, much as DTDs and XML Schemas do. If its pages were so structured, the Web could be seen as a heterogeneous collection of autonomous databases. This suggests that techniques developed in the Database area could be useful. Database research mainly deals with efficient storage and retrieval and with powerful query languages.
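    The synonymy problem the abstract names can be illustrated with a minimal query-expansion sketch: a vocabulary maps a term to its synonym set, so documents that use a different word for the same concept are still found. The vocabulary and documents are a toy example, not from the paper:

    ```python
    # Toy synonym ring standing in for an ontology's vocabulary.
    synonyms = {
        "car": {"car", "automobile"},
        "automobile": {"car", "automobile"},
    }

    documents = {
        1: "automobile repair manual",
        2: "model car kits",
        3: "jaguar habitats in south america",  # polysemy is NOT resolved here
    }

    def search(term):
        """Match documents containing the term or any of its synonyms."""
        terms = synonyms.get(term, {term})
        return sorted(doc_id for doc_id, text in documents.items()
                      if any(t in text.split() for t in terms))
    ```

    Note that expansion alone fixes only synonymy; resolving polysemy (the "jaguar" document) needs the contextual, relationship-aware layer the authors argue for.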
  7. Hunter, J.L.: ¬A survey of metadata research for organizing the Web (2004) 0.00
    2.6310782E-4 = product of:
      0.0039466172 = sum of:
        0.0039466172 = product of:
          0.0078932345 = sum of:
            0.0078932345 = weight(_text_:information in 2117) [ClassicSimilarity], result of:
              0.0078932345 = score(doc=2117,freq=2.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.1551638 = fieldWeight in 2117, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2117)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Abstract
    This article attempts to provide an overview of the key metadata research issues and the current projects and initiatives that are investigating methods and developing technologies aimed at improving our ability to discover, access, retrieve, and assimilate information on the Internet through the use of metadata.