Search (5831 results, page 292 of 292)

  • language_ss:"d"
  • type_ss:"a"
  1. Öttl, S.; Streiff, D.; Stettler, N.; Studer, M.: Aufbau einer Testumgebung zur Ermittlung signifikanter Parameter bei der Ontologieabfrage (2010) 0.00
    0.0028656456 = product of:
      0.008596936 = sum of:
        0.008596936 = product of:
          0.025790809 = sum of:
            0.025790809 = weight(_text_:retrieval in 4257) [ClassicSimilarity], result of:
              0.025790809 = score(doc=4257,freq=2.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.16710453 = fieldWeight in 4257, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4257)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
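The explanation tree above is Lucene's ClassicSimilarity (tf-idf) breakdown: fieldWeight = sqrt(tf) · idf · fieldNorm, queryWeight = idf · queryNorm, and the two coord(1/3) factors scale the result down because only one of three query clauses matched. A minimal Python sketch reproducing the arithmetic with the values shown above:

```python
import math

def classic_similarity_score(tf, idf, query_norm, field_norm, coord_factors):
    """Reproduce Lucene ClassicSimilarity: coord * (fieldWeight * queryWeight)."""
    field_weight = math.sqrt(tf) * idf * field_norm   # tf(freq) * idf * fieldNorm
    query_weight = idf * query_norm                   # idf * queryNorm
    score = field_weight * query_weight
    for c in coord_factors:                           # coord(1/3) is applied twice here
        score *= c
    return score

# Values copied from the explanation tree of entry 1 (doc 4257, term "retrieval")
score = classic_similarity_score(
    tf=2.0, idf=3.024915,
    query_norm=0.051022716, field_norm=0.0390625,
    coord_factors=[1/3, 1/3],
)
print(score)  # ≈ 0.0028656456, the displayed score
```

Every other score explanation on this page follows the same pattern; only tf, idf, fieldNorm and the matched term differ per entry.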
    
    Abstract
    The use of semantic technologies is by now an established means of optimizing information retrieval systems. Although the use of ontologies is actively researched for a variety of application areas, for example query expansion (Bhogal et al. 2007), the structuring of user interfaces and dialogue design (e.g. Garcia & Sicilia 2003; Liu et al. 2005; Lopez et al. 2006; Paulheim 2009; Paulheim & Probst 2010), and recommender systems (e.g. Taehee et al. 2006; Cantador et al. 2008; Middleton et al. 2001; Middleton et al. 2009), there have so far been few efforts to examine the individual query methods for ontologies systematically. Querying an ontology is primarily about determining relationships between concepts, by retrieving hierarchical (classes and individuals), semantic (object properties) and supplementary (datatype properties) relations, or by deriving logical connections. So-called reasoners are used for the derivations, and SPARQL (more rarely XPath) serves as the query language. A further, less frequently used but promising approach is found in Hoser et al. (2006) and Weng & Chang (2008), who apply techniques of social network analysis to the evaluation of ontologies (semantic network analysis). In order to study ontology querying, and combinations of the different query options, systematically, a corresponding test environment was developed at the SII; this paper presents it in detail.
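The semantic network analysis approach mentioned above (Hoser et al. 2006; Weng & Chang 2008) treats an ontology as a graph and applies network metrics to its nodes. A minimal illustrative sketch in Python; the tiny ontology and its relation names are invented for illustration, not taken from the test environment described in the paper:

```python
from collections import defaultdict

# Hypothetical mini-ontology as (subject, relation, object) triples.
triples = [
    ("Dog", "subClassOf", "Mammal"),
    ("Cat", "subClassOf", "Mammal"),
    ("Mammal", "subClassOf", "Animal"),
    ("Dog", "eats", "DogFood"),
    ("Cat", "eats", "CatFood"),
]

def degree_centrality(triples):
    """Count, for every node, how many edges touch it (undirected degree)."""
    degree = defaultdict(int)
    for subject, _, obj in triples:
        degree[subject] += 1
        degree[obj] += 1
    return dict(degree)

centrality = degree_centrality(triples)
print(centrality["Mammal"])  # 3: Mammal takes part in three edges
```

Nodes with high degree are central concepts of the ontology; this is the simplest of the network metrics such an analysis can compute.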
  2. Stempfhuber, M.; Zapilko, M.B.: Ein Ebenenmodell für die semantische Integration von Primärdaten und Publikationen in Digitalen Bibliotheken (2013) 0.00
    0.0028656456 = product of:
      0.008596936 = sum of:
        0.008596936 = product of:
          0.025790809 = sum of:
            0.025790809 = weight(_text_:retrieval in 917) [ClassicSimilarity], result of:
              0.025790809 = score(doc=917,freq=2.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.16710453 = fieldWeight in 917, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=917)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    Digital libraries currently face the challenge of meeting the changed information needs of their scientific users: they must offer integrated access to different types of information (e.g. publications, primary data, researcher and organization profiles, research project information) that are increasingly available in digital form, and make these available in virtual research environments. The resulting challenges of structural and semantic heterogeneity are compounded by a wide range of metadata standards, subject indexing methods and indexing approaches for the different types of information. So far, however, no generally accepted, integrating model for the organization and retrieval of knowledge in digital libraries exists. This paper reviews current research developments and activities addressing semantic interoperability in digital libraries and presents a model for integrated search across textual data (e.g. publications) and factual data (e.g. primary data) that takes up various approaches from current research and relates them to one another. Embedded in the research cycle, traditional subject indexing methods for publications meet newer ontology-based approaches, which seem better suited to representing more complex information and relationships (e.g. in social science survey data). The advantages of the model are (1) the easy reuse of existing knowledge organization systems and (2) the low effort required for concept modelling with ontologies.
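The integrated search over textual and factual data described above can be pictured, in a deliberately simplified sketch (not the authors' layer model; the index contents and record ids are invented), as querying several term indexes with the same term and merging the hits:

```python
# Hypothetical in-memory "indexes": search term -> list of record ids.
publication_index = {"armut": ["pub:17", "pub:42"], "bildung": ["pub:42"]}
primary_data_index = {"armut": ["study:ALLBUS-2008"], "einkommen": ["study:SOEP"]}

def integrated_search(term, *indexes):
    """Query every index with the same term and merge the hit lists."""
    hits = []
    for index in indexes:
        hits.extend(index.get(term, []))
    return hits

print(integrated_search("armut", publication_index, primary_data_index))
# publications and primary data appear in one result list
```

The real model additionally has to bridge semantic heterogeneity between the vocabularies of the two sources, which is where the knowledge organization systems and ontologies mentioned in the abstract come in.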
  3. Publishers go head-to-head over search tools : Elsevier's Scopus (2004) 0.00
    0.0028556061 = product of:
      0.008566818 = sum of:
        0.008566818 = product of:
          0.025700454 = sum of:
            0.025700454 = weight(_text_:online in 2496) [ClassicSimilarity], result of:
              0.025700454 = score(doc=2496,freq=4.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.16597117 = fieldWeight in 2496, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=2496)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Content
    "Will there ever be a science equivalent of Google? Two of the world's biggest science publishing and information firms seem to think that there will. They are about to compete head-to-head to create the most popular tool for searching the scientific literature. Elsevier, the Amsterdam-based publisher of more than 1,800 journals, has announced that this autumn it will launch Scopus, an online search engine covering abstracts and references from 14,000 scientific journals. Scopus will arrive as a direct competitor for the established Web of Science, owned by Thomson ISI of Philadelphia, the scientific information specialist. "Scopus will definitely be a threat to ISI," says one science publishing expert, who asked not to be named. "But ISI will not just let this happen. There will be some kind of arms race in terms of adding new features." Many researchers are already wedded to subject-specific databases of scientific information, such as PubMed for biomedical research. But Web of Science is currently the only service to cover the full spectrum of scientific disciplines and publications. It can also generate the citation statistics that are sometimes used to measure the quality of journals and individual papers. ISI, which is widely used by libraries worldwide, may be hard to displace. It covers fewer than 9,000 journals, but it has been available in its present form since 1997 and includes a 60-year archive of papers. Thomson ISI says it will extend this to 105 years by the end of 2005. The company also owns the only extensive database of patent abstracts.
    Source
    Online Mitteilungen. 2004, Nr.79, S.17-18[=Mitteilungen VÖB 57(2004) H.2]
  4. Schürmann, H.: Software scannt Radio- und Fernsehsendungen : Recherche in Nachrichtenarchiven erleichtert (2001) 0.00
    0.00268834 = product of:
      0.00806502 = sum of:
        0.00806502 = product of:
          0.024195058 = sum of:
            0.024195058 = weight(_text_:22 in 5759) [ClassicSimilarity], result of:
              0.024195058 = score(doc=5759,freq=2.0), product of:
                0.17867287 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051022716 = queryNorm
                0.1354154 = fieldWeight in 5759, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=5759)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Source
    Handelsblatt. Nr.79 vom 24.4.2001, S.22
  5. Feigenbaum, L.; Herman, I.; Hongsermeier, T.; Neumann, E.; Stephens, S.: The Semantic Web in action (2007) 0.00
    0.002307678 = product of:
      0.006923034 = sum of:
        0.006923034 = product of:
          0.0207691 = sum of:
            0.0207691 = weight(_text_:online in 3000) [ClassicSimilarity], result of:
              0.0207691 = score(doc=3000,freq=2.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.13412495 = fieldWeight in 3000, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3000)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    Six years ago in this magazine, Tim Berners-Lee, James Hendler and Ora Lassila unveiled a nascent vision of the Semantic Web: a highly interconnected network of data that could be easily accessed and understood by any desktop or handheld machine. They painted a future of intelligent software agents that would head out on the World Wide Web and automatically book flights and hotels for our trips, update our medical records and give us a single, customized answer to a particular question without our having to search for information or pore through results. They also presented the young technologies that would make this vision come true: a common language for representing data that could be understood by all kinds of software agents; ontologies--sets of statements--that translate information from disparate databases into common terms; and rules that allow software agents to reason about the information described in those terms. The data format, ontologies and reasoning software would operate like one big application on the World Wide Web, analyzing all the raw data stored in online databases as well as all the data about the text, images, video and communications the Web contained. Like the Web itself, the Semantic Web would grow in a grassroots fashion, only this time aided by working groups within the World Wide Web Consortium, which helps to advance the global medium. Since then skeptics have said the Semantic Web would be too difficult for people to understand or exploit. Not so. The enabling technologies have come of age. A vibrant community of early adopters has agreed on standards that have steadily made the Semantic Web practical to use. Large companies have major projects under way that will greatly improve the efficiencies of in-house operations and of scientific research. Other firms are using the Semantic Web to enhance business-to-business interactions and to build the hidden data-processing structures, or back ends, behind new consumer services. And like an iceberg, the tip of this large body of work is emerging in direct consumer applications, too.
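The rule-based reasoning over ontology statements described above can be approximated by simple forward chaining over triples. A minimal sketch of the RDFS-style subclass rule — (x type C) and (C subClassOf D) imply (x type D) — with invented facts:

```python
def infer_types(triples):
    """Forward-chain the rule: (x type C) and (C subClassOf D) => (x type D)."""
    facts = set(triples)
    while True:
        new = set()
        for (x, p, c) in facts:
            if p != "type":
                continue
            for (c2, p2, d) in facts:
                if p2 == "subClassOf" and c2 == c and (x, "type", d) not in facts:
                    new.add((x, "type", d))
        if not new:            # fixpoint reached: no rule fires any more
            return facts
        facts |= new

facts = infer_types({
    ("Alice", "type", "Physician"),
    ("Physician", "subClassOf", "Person"),
    ("Person", "subClassOf", "Agent"),
})
print(("Alice", "type", "Agent") in facts)  # inferred across two subclass steps
```

Production reasoners implement far richer rule sets and smarter indexing, but the fixpoint iteration shown here is the core idea.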
  6. Summann, F.; Wolf, S.: Suchmaschinentechnologie und wissenschaftliche Suchumgebung : Warum braucht man eine wissenschaftliche Suchmaschine? (2006) 0.00
    0.002307678 = product of:
      0.006923034 = sum of:
        0.006923034 = product of:
          0.0207691 = sum of:
            0.0207691 = weight(_text_:online in 5958) [ClassicSimilarity], result of:
              0.0207691 = score(doc=5958,freq=2.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.13412495 = fieldWeight in 5958, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5958)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Source
    Online Mitteilungen. 2006, Nr.86, S.3-18 [=Mitteilungen VÖB 59(2006) H.2]
  7. Sandner, M.: Entwicklung der SWD-Arbeit in Österreich (2008) 0.00
    0.002307678 = product of:
      0.006923034 = sum of:
        0.006923034 = product of:
          0.0207691 = sum of:
            0.0207691 = weight(_text_:online in 2188) [ClassicSimilarity], result of:
              0.0207691 = score(doc=2188,freq=2.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.13412495 = fieldWeight in 2188, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2188)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    This article focuses on the use of the German-language subject headings authority file SWD (Schlagwortnormdatei) in Austria and outlines how Austrian academic libraries' employment of the SWD developed in active cooperation with their SWD partners. The Austrian subject indexing practice turned to the SWD terminology, based on the newly published German subject indexing rules RSWK (Regeln für den Schlagwortkatalog), in the late 1980s. An electronic workflow was developed. Soon it became necessary to provide a data pool for new terms originally created by Austrian member libraries and to connect these data with the SWD source data (ÖSWD, 1991). Internal cooperation structures developed when local SWD editorial departments came into being. As of 1994 a central editor was nominated to serve as the direct link between active Austrian SWD users, the SWD partners and the German National Library (DNB). Unfortunately the first active SWD period was followed by a long-term vacancy due to the first central editor's early retirement. Nearly all functional and information structures ceased to operate while local data increased on a daily basis... In 2004 a new central ÖSWD editor was nominated, whose first task was to rebuild structures, to motivate local editors as well as terminology experts in Austria, to create a communication network for exchanging information and to cooperate efficiently with the DNB and Austria's SWD partners. The large amount of old data and term duplicates, and the special role of personal names as subject authority data in the Austrian library system, meant that newly created and older or reused terms had to be marked in a special way to allow for better segmentation and revision. Now, in 2008, the future of Austrian SWD use looks bright. Problems will continue to be overcome as the forthcoming new online editing process for authority files provides new challenges.
  8. Luetzow, G.: Jeder googelt jeden : Analyse (2004) 0.00
    0.0023042914 = product of:
      0.0069128736 = sum of:
        0.0069128736 = product of:
          0.02073862 = sum of:
            0.02073862 = weight(_text_:22 in 2599) [ClassicSimilarity], result of:
              0.02073862 = score(doc=2599,freq=2.0), product of:
                0.17867287 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051022716 = queryNorm
                0.116070345 = fieldWeight in 2599, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=2599)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Date
    17. 7.1996 9:33:22
  9. Lenzen, M.: Vor der Quadratwurzel steht die Quadratzahl (2015) 0.00
    0.0020192184 = product of:
      0.006057655 = sum of:
        0.006057655 = product of:
          0.018172964 = sum of:
            0.018172964 = weight(_text_:online in 1647) [ClassicSimilarity], result of:
              0.018172964 = score(doc=1647,freq=2.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.11735933 = fieldWeight in 1647, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1647)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Source
    http://www.faz.net/aktuell/feuilleton/forschung-und-lehre/uni-mainz-stellt-publikationen-von-hirnforschern-online-13379697.html
  10. Joint INIS/ETDE Thesaurus (Rev. 2) April 2007 (2007) 0.00
    0.002005952 = product of:
      0.0060178554 = sum of:
        0.0060178554 = product of:
          0.018053565 = sum of:
            0.018053565 = weight(_text_:retrieval in 644) [ClassicSimilarity], result of:
              0.018053565 = score(doc=644,freq=2.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.11697317 = fieldWeight in 644, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=644)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Footnote
    The latest version of the Joint INIS/ETDE Thesaurus has been published, English on paper, multilingual (Arabic-Chinese-English-French-German-Russian-Spanish) on CD: Joint Thesaurus Part I (A-L) and Part II (M-Z), ETDE/INIS Joint Reference Series No. 1 (Rev. 2). The ETDE/INIS Joint Thesaurus (Rev. 2) contains the controlled terminology for indexing all information within the subject scope of INIS and the Energy Technology Data Exchange (ETDE). The terminology is used in subject descriptions for input to, or retrieval of, information in these systems. The Joint Thesaurus is the result of continued editing in parallel to the processing of the INIS and ETDE databases. With updates to September 2006, Rev. 2 includes 21147 valid descriptors and 9114 forbidden terms. IAEA-ETDE/INIS-1 (Rev. 2), 1221 p., 2007, ISBN 92-0-102207-7, English. 120.00 Euro. Date of Issue: 10 May 2007.
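The distinction between valid descriptors and forbidden terms in the Joint Thesaurus is the classic thesaurus USE reference: a forbidden (non-preferred) entry term redirects the indexer to the controlled descriptor. A minimal sketch with invented example terms (not actual INIS/ETDE vocabulary):

```python
# Hypothetical USE references: forbidden entry term -> valid descriptor.
use_references = {"atomic energy": "nuclear energy", "fission reactors": "reactors"}
descriptors = {"nuclear energy", "reactors", "isotopes"}

def normalize_term(term):
    """Map a candidate indexing term to its controlled descriptor, if any."""
    term = term.lower()
    if term in descriptors:            # already a valid descriptor
        return term
    return use_references.get(term)    # follow the USE reference, or None
```

A lookup like `normalize_term("Atomic energy")` would thus return the descriptor `"nuclear energy"`, keeping both indexing and retrieval within the controlled vocabulary.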
  11. Böttger, D.: Mit den eigenen Fotos Geld verdienen : Hobbyfotografen können ihre Bilder über Microstock-Agenturen verkaufen (2011) 0.00
    0.0017307586 = product of:
      0.0051922756 = sum of:
        0.0051922756 = product of:
          0.015576826 = sum of:
            0.015576826 = weight(_text_:online in 4178) [ClassicSimilarity], result of:
              0.015576826 = score(doc=4178,freq=2.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.100593716 = fieldWeight in 4178, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=4178)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Content
    "Real treasures lie dormant on many a hard drive, for the odd holiday photo can be turned into hard cash with the help of so-called microstock agencies. It does not have to be a palm beach: shots from the last walk in the woods can be sold too. The real bestsellers are symbolic images that can be used as universally as possible, such as a cup of steaming coffee, fingers on a computer keyboard, or photographs of people in all situations of life. Many such motifs already sit somewhere on a storage medium, just waiting to be sold. Judging one's own pictures critically: this is where the microstock agencies come in. In contrast to a classic picture agency, microstock agencies offer their image files at low prices through online platforms. Images at small web resolution are available from as little as one euro; the larger the ordered image, the higher the price. These pricing models rely on volume rather than exclusivity, which also means that anyone interested can offer their pictures to a microstock agency. As a rule, three to five digital files are required at first to show that the technical aspects, and thus the image quality, are up to standard. The files should offer at least four megapixels of resolution and ideally be shot with a high-quality compact camera, or better still a digital SLR. The more megapixels the file offers, the better, because the image can then also be sold in larger formats at higher prices. Artificially upscaling small files with image-editing software brings no advantage, since image quality suffers considerably.
