Search (16 results, page 1 of 1)

  • × theme_ss:"Automatisches Indexieren"
  • × type_ss:"el"
  1. Search Engines and Beyond : Developing efficient knowledge management systems, April 19-20, 1999, Boston, Mass. (1999) 0.02
    0.020617396 = product of:
      0.072160885 = sum of:
        0.025709987 = weight(_text_:wide in 2596) [ClassicSimilarity], result of:
          0.025709987 = score(doc=2596,freq=2.0), product of:
            0.1312982 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029633347 = queryNorm
            0.1958137 = fieldWeight in 2596, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=2596)
        0.013948122 = weight(_text_:web in 2596) [ClassicSimilarity], result of:
          0.013948122 = score(doc=2596,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.14422815 = fieldWeight in 2596, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=2596)
        0.005707573 = weight(_text_:information in 2596) [ClassicSimilarity], result of:
          0.005707573 = score(doc=2596,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.10971737 = fieldWeight in 2596, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=2596)
        0.026795205 = weight(_text_:retrieval in 2596) [ClassicSimilarity], result of:
          0.026795205 = score(doc=2596,freq=10.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.29892567 = fieldWeight in 2596, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=2596)
      0.2857143 = coord(4/14)
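    The relevance figures attached to each hit are Lucene ClassicSimilarity (TF-IDF) explain outputs: for every matching query term, fieldWeight = sqrt(termFreq) * idf * fieldNorm and queryWeight = idf * queryNorm, the product of the two is that term's contribution, the contributions are summed, and the sum is scaled by the coordination factor coord(matching terms / query terms). The following minimal sketch recomputes the score of this first hit from the factors listed above (an illustration of the arithmetic only, not the engine's actual code):
      from math import sqrt

      # Factors copied from the explain output for doc 2596: (idf, termFreq) per matching term.
      query_norm = 0.029633347
      field_norm = 0.03125
      terms = {
          "wide":        (4.4307585, 2.0),
          "web":         (3.2635105, 2.0),
          "information": (1.7554779, 4.0),
          "retrieval":   (3.024915, 10.0),
      }

      total = 0.0
      for idf, freq in terms.values():
          field_weight = sqrt(freq) * idf * field_norm   # e.g. 0.29892567 for "retrieval"
          query_weight = idf * query_norm                # e.g. 0.08963835 for "retrieval"
          total += query_weight * field_weight           # per-term contribution

      score = total * (4 / 14)   # coord(4/14): 4 of the 14 query terms match this document
      print(score)               # ~0.0206174, matching the 0.02 shown for this hit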
    
    Content
    Ramana Rao (Inxight, Palo Alto, CA): 7 ± 2 Insights on achieving Effective Information Access
    Session One: Updates and a twelve month perspective
      • Danny Sullivan (Search Engine Watch, US / England): Portalization and other search trends
      • Carol Tenopir (University of Tennessee): Search realities faced by end users and professional searchers
    Session Two: Today's search engines and beyond
      • Daniel Hoogterp (Retrieval Technologies, McLean, VA): Effective presentation and utilization of search techniques
      • Rick Kenny (Fulcrum Technologies, Ontario, Canada): Beyond document clustering: The knowledge impact statement
      • Gary Stock (Ingenius, Kalamazoo, MI): Automated change monitoring
      • Gary Culliss (Direct Hit, Wellesley Hills, MA): User popularity ranked search engines
      • Byron Dom (IBM, CA): Automatically finding the best pages on the World Wide Web (CLEVER)
      • Peter Tomassi (LookSmart, San Francisco, CA): Adding human intellect to search technology
    Session Three: Panel discussion: Human v automated categorization and editing
      • Ev Brenner (New York, NY), Chairman
      • James Callan (University of Massachusetts, MA)
      • Marc Krellenstein (Northern Light Technology, Cambridge, MA)
      • Dan Miller (Ask Jeeves, Berkeley, CA)
    Session Four: Updates and a twelve month perspective
      • Steve Arnold (AIT, Harrods Creek, KY): Review: The leading edge in search and retrieval software
      • Ellen Voorhees (NIST, Gaithersburg, MD): TREC update
    Session Five: Search engines now and beyond
      • Intelligent agents - John Snyder (Muscat, Cambridge, England): Practical issues behind intelligent agents
      • Text summarization - Therese Firmin (Dept of Defense, Ft George G. Meade, MD): The TIPSTER/SUMMAC evaluation of automatic text summarization systems
      • Cross language searching - Elizabeth Liddy (TextWise, Syracuse, NY): A conceptual interlingua approach to cross-language retrieval
      • Video search and retrieval - Armon Amir (IBM, Almaden, CA): CueVideo: Modular system for automatic indexing and browsing of video/audio
      • Speech recognition - Michael Witbrock (Lycos, Waltham, MA): Retrieval of spoken documents
      • Visualization - James A. Wise (Integral Visuals, Richland, WA): Information visualization in the new millennium: Emerging science or passing fashion?
      • Text mining - David Evans (Claritech, Pittsburgh, PA): Text mining - towards decision support
  2. Gábor, K.; Zargayouna, H.; Tellier, I.; Buscaldi, D.; Charnois, T.: A typology of semantic relations dedicated to scientific literature analysis (2016) 0.02
    0.019365482 = product of:
      0.09037225 = sum of:
        0.044992477 = weight(_text_:wide in 2933) [ClassicSimilarity], result of:
          0.044992477 = score(doc=2933,freq=2.0), product of:
            0.1312982 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029633347 = queryNorm
            0.342674 = fieldWeight in 2933, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2933)
        0.024409214 = weight(_text_:web in 2933) [ClassicSimilarity], result of:
          0.024409214 = score(doc=2933,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.25239927 = fieldWeight in 2933, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2933)
        0.020970564 = weight(_text_:retrieval in 2933) [ClassicSimilarity], result of:
          0.020970564 = score(doc=2933,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.23394634 = fieldWeight in 2933, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2933)
      0.21428572 = coord(3/14)
    
    Content
    Vortrag, "Semantics, Analytics, Visualisation: Enhancing Scholarly Data Workshop co-located with the 25th International World Wide Web Conference April 11, 2016 - Montreal, Canada", Montreal 2016.
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
  3. Daudaravicius, V.: A framework for keyphrase extraction from scientific journals (2016) 0.01
    0.009914528 = product of:
      0.06940169 = sum of:
        0.044992477 = weight(_text_:wide in 2930) [ClassicSimilarity], result of:
          0.044992477 = score(doc=2930,freq=2.0), product of:
            0.1312982 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029633347 = queryNorm
            0.342674 = fieldWeight in 2930, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2930)
        0.024409214 = weight(_text_:web in 2930) [ClassicSimilarity], result of:
          0.024409214 = score(doc=2930,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.25239927 = fieldWeight in 2930, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2930)
      0.14285715 = coord(2/14)
    
    Content
    Vortrag, "Semantics, Analytics, Visualisation: Enhancing Scholarly Data Workshop co-located with the 25th International World Wide Web Conference April 11, 2016 - Montreal, Canada", Montreal 2016.
  4. Gödert, W.: Detecting multiword phrases in mathematical text corpora (2012) 0.00
    0.0045768693 = product of:
      0.032038085 = sum of:
        0.008071727 = weight(_text_:information in 466) [ClassicSimilarity], result of:
          0.008071727 = score(doc=466,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.1551638 = fieldWeight in 466, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=466)
        0.023966359 = weight(_text_:retrieval in 466) [ClassicSimilarity], result of:
          0.023966359 = score(doc=466,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.26736724 = fieldWeight in 466, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=466)
      0.14285715 = coord(2/14)
    
    Abstract
    We present an approach for detecting multiword phrases in mathematical text corpora. The method is based on characteristic features of mathematical terminology. It makes use of a software tool named Lingo, which identifies words by means of previously defined dictionaries for specific word classes such as adjectives, personal names, or nouns. The detection of multiword groups is then done algorithmically. Possible advantages of the method for indexing and information retrieval, and conclusions for applying dictionary-based methods of automatic indexing instead of stemming procedures, are discussed.
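    The abstract describes a two-step, dictionary-based procedure: words are first assigned to word classes via predefined dictionaries, and adjacent words whose class sequence matches a terminological pattern are then grouped into a multiword phrase. A minimal sketch of that idea, with hypothetical mini-dictionaries and patterns (not the actual Lingo implementation):
      # Hypothetical word-class dictionary; Lingo relies on much larger curated dictionaries.
      WORD_CLASSES = {
          "linear": "ADJ", "partial": "ADJ", "differential": "ADJ",
          "equation": "NOUN", "operator": "NOUN", "space": "NOUN",
          "hilbert": "NAME", "banach": "NAME",
      }
      # Class sequences accepted as multiword terms (illustrative patterns, not Lingo's).
      PATTERNS = {("ADJ", "NOUN"), ("ADJ", "ADJ", "NOUN"), ("NAME", "NOUN")}

      def detect_multiwords(text):
          """Group adjacent words whose class sequence matches a terminological pattern."""
          tokens = text.lower().split()
          classes = [WORD_CLASSES.get(t) for t in tokens]
          phrases, i = [], 0
          while i < len(tokens):
              for length in (3, 2):                                  # prefer the longest match
                  seq = tuple(classes[i:i + length])
                  if len(seq) == length and None not in seq and seq in PATTERNS:
                      phrases.append(" ".join(tokens[i:i + length]))
                      i += length - 1                                # skip past the matched group
                      break
              i += 1
          return phrases

      print(detect_multiwords("We study a partial differential equation on a Hilbert space"))
      # -> ['partial differential equation', 'hilbert space']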
  5. Mongin, L.; Fu, Y.Y.; Mostafa, J.: Open Archives data Service prototype and automated subject indexing using D-Lib archive content as a testbed (2003) 0.00
    0.004501639 = product of:
      0.03151147 = sum of:
        0.013536699 = weight(_text_:information in 1167) [ClassicSimilarity], result of:
          0.013536699 = score(doc=1167,freq=10.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.2602176 = fieldWeight in 1167, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=1167)
        0.01797477 = weight(_text_:retrieval in 1167) [ClassicSimilarity], result of:
          0.01797477 = score(doc=1167,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.20052543 = fieldWeight in 1167, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=1167)
      0.14285715 = coord(2/14)
    
    Abstract
    The Indiana University School of Library and Information Science opened a new research laboratory in January 2003: the Indiana University School of Library and Information Science Information Processing Laboratory [IU IP Lab]. The purpose of the new laboratory is to facilitate collaboration between scientists in the department in the areas of information retrieval (IR) and information visualization (IV) research. The lab has several areas of focus, including grid and cluster computing and a standard Java-based software platform that supports plug-and-play research datasets, a selection of standard IR modules, and standard IV algorithms. Future development includes software that enables researchers to contribute datasets, IR algorithms, and visualization algorithms to the standard environment. We decided early on to use OAI-PMH as a resource discovery tool because it is consistent with our mission.
  6. Toepfer, M.; Seifert, C.: Content-based quality estimation for automatic subject indexing of short texts under precision and recall constraints 0.00
    0.0028605436 = product of:
      0.020023804 = sum of:
        0.0050448296 = weight(_text_:information in 4309) [ClassicSimilarity], result of:
          0.0050448296 = score(doc=4309,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.09697737 = fieldWeight in 4309, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4309)
        0.014978974 = weight(_text_:retrieval in 4309) [ClassicSimilarity], result of:
          0.014978974 = score(doc=4309,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.16710453 = fieldWeight in 4309, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4309)
      0.14285715 = coord(2/14)
    
    Abstract
    Semantic annotations have to satisfy quality constraints to be useful for digital libraries, which is particularly challenging on large and diverse datasets. Confidence scores of multi-label classification methods typically refer only to the relevance of particular subjects, disregarding indicators of insufficient content representation at the document-level. Therefore, we propose a novel approach that detects documents rather than concepts where quality criteria are met. Our approach uses a deep, multi-layered regression architecture, which comprises a variety of content-based indicators. We evaluated multiple configurations using text collections from law and economics, where the available content is restricted to very short texts. Notably, we demonstrate that the proposed quality estimation technique can determine subsets of the previously unseen data where considerable gains in document-level recall can be achieved, while upholding precision at the same time. Hence, the approach effectively performs a filtering that ensures high data quality standards in operative information retrieval systems.
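    The key idea above can be pictured as a document-level filter: a regression model assigns each document a quality estimate, and only documents above an operating threshold are indexed automatically, so that precision is upheld while recall on the accepted subset improves. A tiny sketch with hypothetical scores, subjects, and threshold (the paper's actual model is a deep, multi-layered regression over content-based indicators):
      # Hypothetical per-document quality estimates, predicted subjects, and gold subjects.
      docs = [
          {"id": "d1", "quality": 0.91, "predicted": {"law", "tax"}, "gold": {"law", "tax"}},
          {"id": "d2", "quality": 0.35, "predicted": {"trade"},      "gold": {"law"}},
          {"id": "d3", "quality": 0.78, "predicted": {"economics"},  "gold": {"economics", "trade"}},
      ]

      def precision_recall(subset):
          tp = sum(len(d["predicted"] & d["gold"]) for d in subset)
          n_pred = sum(len(d["predicted"]) for d in subset)
          n_gold = sum(len(d["gold"]) for d in subset)
          return tp / n_pred if n_pred else 0.0, tp / n_gold if n_gold else 0.0

      THRESHOLD = 0.5   # assumed operating point chosen to satisfy the precision constraint
      accepted = [d for d in docs if d["quality"] >= THRESHOLD]

      print("all documents:   P=%.2f R=%.2f" % precision_recall(docs))      # P=0.75 R=0.60
      print("accepted subset: P=%.2f R=%.2f" % precision_recall(accepted))  # P=1.00 R=0.75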
  7. Franke-Maier, M.; Beck, C.; Kasprzik, A.; Maas, J.F.; Pielmeier, S.; Wiesenmüller, H.: Ein Feuerwerk an Algorithmen und der Startschuss zur Bildung eines Kompetenznetzwerks für maschinelle Erschließung : Bericht zur Fachtagung Netzwerk maschinelle Erschließung an der Deutschen Nationalbibliothek am 10. und 11. Oktober 2019 (2020) 0.00
    0.002365089 = product of:
      0.033111244 = sum of:
        0.033111244 = weight(_text_:bibliothek in 5851) [ClassicSimilarity], result of:
          0.033111244 = score(doc=5851,freq=2.0), product of:
            0.121660605 = queryWeight, product of:
              4.1055303 = idf(docFreq=1980, maxDocs=44218)
              0.029633347 = queryNorm
            0.27216077 = fieldWeight in 5851, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1055303 = idf(docFreq=1980, maxDocs=44218)
              0.046875 = fieldNorm(doc=5851)
      0.071428575 = coord(1/14)
    
    Abstract
    On 10 and 11 October 2019, around 100 representatives from libraries, research, and industry met at the Deutsche Nationalbibliothek (DNB) in Frankfurt am Main for a conference on the current trend topic of automated subject indexing ("maschinelle Erschließung"). The aim of the event was to "examine different areas of application of machine-based text analysis" and to initiate a dialogue on technologies for machine-based text analysis, on tasks and experiences, and on the challenges that machine-based methods entail. The background is the Standardisierungsausschuss's mandate to the DNB to hold relevant conferences on a regular basis, from which "in the long term a competence network for automated indexing [will] emerge".
  8. Mao, J.; Xu, W.; Yang, Y.; Wang, J.; Yuille, A.L.: Explain images with multimodal recurrent neural networks (2014) 0.00
    0.001815726 = product of:
      0.025420163 = sum of:
        0.025420163 = weight(_text_:retrieval in 1557) [ClassicSimilarity], result of:
          0.025420163 = score(doc=1557,freq=4.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.2835858 = fieldWeight in 1557, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=1557)
      0.071428575 = coord(1/14)
    
    Abstract
    In this paper, we present a multimodal Recurrent Neural Network (m-RNN) model for generating novel sentence descriptions to explain the content of images. It directly models the probability distribution of generating a word given previous words and the image. Image descriptions are generated by sampling from this distribution. The model consists of two sub-networks: a deep recurrent neural network for sentences and a deep convolutional network for images. These two sub-networks interact with each other in a multimodal layer to form the whole m-RNN model. The effectiveness of our model is validated on three benchmark datasets (IAPR TC-12 [8], Flickr 8K [28], and Flickr 30K [13]). Our model outperforms the state-of-the-art generative method. In addition, the m-RNN model can be applied to retrieval tasks for retrieving images or sentences, and achieves significant performance improvement over the state-of-the-art methods which directly optimize the ranking objective function for retrieval.
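    As a rough illustration of the architecture sketched in the abstract - word embeddings, a recurrent layer over the sentence, and a CNN image feature fused in a multimodal layer that scores the next word - here is a toy PyTorch module (assumed dimensions and a GRU stand-in; not the paper's exact m-RNN):
      import torch
      import torch.nn as nn

      class MiniMRNN(nn.Module):
          """Toy multimodal RNN: word embeddings, a recurrent sentence model, and an
          image feature are fused in a multimodal layer that scores the next word."""
          def __init__(self, vocab=1000, emb=64, hid=128, img=512, mm=256):
              super().__init__()
              self.embed = nn.Embedding(vocab, emb)
              self.rnn = nn.GRU(emb, hid, batch_first=True)   # recurrent sub-network for sentences
              self.img_proj = nn.Linear(img, mm)              # stands in for the deep CNN sub-network
              self.word_proj = nn.Linear(emb, mm)
              self.rnn_proj = nn.Linear(hid, mm)
              self.out = nn.Linear(mm, vocab)                 # distribution over the next word

          def forward(self, words, img_feat):
              e = self.embed(words)                           # (batch, seq, emb)
              h, _ = self.rnn(e)                              # (batch, seq, hid)
              img = self.img_proj(img_feat).unsqueeze(1)      # broadcast the image over time steps
              m = torch.tanh(self.word_proj(e) + self.rnn_proj(h) + img)   # multimodal layer
              return self.out(m)                              # next-word logits at every position

      logits = MiniMRNN()(torch.randint(0, 1000, (2, 7)), torch.randn(2, 512))
      print(logits.shape)   # torch.Size([2, 7, 1000])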
  9. Junger, U.: Can indexing be automated? : the example of the Deutsche Nationalbibliothek (2012) 0.00
    0.0017435154 = product of:
      0.024409214 = sum of:
        0.024409214 = weight(_text_:web in 1717) [ClassicSimilarity], result of:
          0.024409214 = score(doc=1717,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.25239927 = fieldWeight in 1717, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1717)
      0.071428575 = coord(1/14)
    
    Content
    Paper for the conference: Beyond libraries - subject metadata in the digital environment and semantic web. IFLA Satellite Post-Conference, 17-18 August 2012, Tallinn. Cf.: http://www.nlib.ee/index.php?id=17763.
  10. Karpathy, A.; Fei-Fei, L.: Deep visual-semantic alignments for generating image descriptions (2015) 0.00
    0.0012839122 = product of:
      0.01797477 = sum of:
        0.01797477 = weight(_text_:retrieval in 1868) [ClassicSimilarity], result of:
          0.01797477 = score(doc=1868,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.20052543 = fieldWeight in 1868, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=1868)
      0.071428575 = coord(1/14)
    
    Abstract
    We present a model that generates free-form natural language descriptions of image regions. Our model leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between text and visual data. Our approach is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate the effectiveness of our alignment model with ranking experiments on Flickr8K, Flickr30K and COCO datasets, where we substantially improve on the state of the art. We then show that the sentences created by our generative model outperform retrieval baselines on the three aforementioned datasets and a new dataset of region-level annotations.
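    The alignment idea in the abstract - image regions and sentence words embedded into one multimodal space, with each word scored against its best-matching region - can be written down in a few lines (random stand-in embeddings; the paper obtains them from a region CNN and a bidirectional RNN):
      import torch

      def image_sentence_score(region_embs, word_embs):
          """Sum, over the words of a sentence, of each word's best dot product with any image region."""
          sims = word_embs @ region_embs.T            # (num_words, num_regions) similarities
          return sims.max(dim=1).values.sum()         # align each word with its best region

      regions = torch.randn(19, 128)   # e.g. 19 region embeddings for one image
      caption = torch.randn(8, 128)    # e.g. 8 word embeddings for one sentence
      print(image_sentence_score(regions, caption))   # higher score = better image-sentence match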
  11. Donahue, J.; Hendricks, L.A.; Guadarrama, S.; Rohrbach, M.; Venugopalan, S.; Saenko, K.; Darrell, T.: Long-term recurrent convolutional networks for visual recognition and description (2014) 0.00
    0.0010699268 = product of:
      0.014978974 = sum of:
        0.014978974 = weight(_text_:retrieval in 1873) [ClassicSimilarity], result of:
          0.014978974 = score(doc=1873,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.16710453 = fieldWeight in 1873, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1873)
      0.071428575 = coord(1/14)
    
    Abstract
    Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent, or "temporally deep", are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large-scale visual learning which is end-to-end trainable, and demonstrate the value of these models on benchmark video recognition tasks, image description and retrieval problems, and video narration challenges. In contrast to current models which assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are "doubly deep" in that they can be compositional in spatial and temporal "layers". Such models may have advantages when target concepts are complex and/or training data are limited. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Long-term RNN models are appealing in that they directly can map variable-length inputs (e.g., video frames) to variable length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent long-term models are directly connected to modern visual convnet models and can be jointly trained to simultaneously learn temporal dynamics and convolutional perceptual representations. Our results show such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined and/or optimized.
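    A minimal reading of the "doubly deep" recipe described above - a convolutional network encodes each frame, a recurrent layer integrates the frame features over time, and a final layer emits a label - in toy PyTorch form (assumed sizes and a deliberately small CNN; not the published LRCN):
      import torch
      import torch.nn as nn

      class MiniLRCN(nn.Module):
          """Toy long-term recurrent convolutional network: a small CNN encodes each video
          frame, an LSTM integrates the frame features over time, and a linear layer emits
          an activity label."""
          def __init__(self, num_classes=10, feat_dim=64, hidden_dim=128):
              super().__init__()
              self.cnn = nn.Sequential(                       # stands in for a deep convnet
                  nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                  nn.AdaptiveAvgPool2d(4), nn.Flatten(),
                  nn.Linear(16 * 4 * 4, feat_dim),
              )
              self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
              self.classifier = nn.Linear(hidden_dim, num_classes)

          def forward(self, frames):                          # frames: (batch, time, 3, H, W)
              b, t = frames.shape[:2]
              feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)  # per-frame CNN features
              out, _ = self.lstm(feats)                       # temporal integration
              return self.classifier(out[:, -1])              # label from the last time step

      model = MiniLRCN()
      print(model(torch.randn(2, 16, 3, 32, 32)).shape)       # torch.Size([2, 10])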
  12. Mielke, B.: Wider einige gängige Ansichten zur juristischen Informationserschließung (2002) 0.00
    6.115257E-4 = product of:
      0.00856136 = sum of:
        0.00856136 = weight(_text_:information in 2145) [ClassicSimilarity], result of:
          0.00856136 = score(doc=2145,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.16457605 = fieldWeight in 2145, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=2145)
      0.071428575 = coord(1/14)
    
    Source
    Information und Mobilität: Optimierung und Vermeidung von Mobilität durch Information. Proceedings des 8. Internationalen Symposiums für Informationswissenschaft (ISI 2002), 7.-10.10.2002, Regensburg. Hrsg.: Rainer Hammwöhner, Christian Wolff, Christa Womser-Hacker
  13. Junger, U.; Schwens, U.: Die inhaltliche Erschließung des schriftlichen kulturellen Erbes auf dem Weg in die Zukunft : Automatische Vergabe von Schlagwörtern in der Deutschen Nationalbibliothek (2017) 0.00
    4.7796548E-4 = product of:
      0.0066915164 = sum of:
        0.0066915164 = product of:
          0.020074548 = sum of:
            0.020074548 = weight(_text_:22 in 3780) [ClassicSimilarity], result of:
              0.020074548 = score(doc=3780,freq=2.0), product of:
                0.103770934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029633347 = queryNorm
                0.19345059 = fieldWeight in 3780, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3780)
          0.33333334 = coord(1/3)
      0.071428575 = coord(1/14)
    
    Date
    19. 8.2017 9:24:22
  14. Pielmeier, S.; Voß, V.; Carstensen, H.; Kahl, B.: Online-Workshop "Computerunterstützte Inhaltserschließung" 2020 (2021) 0.00
    4.32414E-4 = product of:
      0.0060537956 = sum of:
        0.0060537956 = weight(_text_:information in 4409) [ClassicSimilarity], result of:
          0.0060537956 = score(doc=4409,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.116372846 = fieldWeight in 4409, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=4409)
      0.071428575 = coord(1/14)
    
    Abstract
    For the first time in digital form and with 230 participants, the fourth workshop "Computerunterstützte Inhaltserschließung" took place on 11 and 12 November 2020, organized by the Deutsche Nationalbibliothek (DNB), the company Eurospider Information Technology, the Staatsbibliothek zu Berlin - Preußischer Kulturbesitz (SBB), the UB Stuttgart, and the Bibliotheksservice-Zentrum Baden-Württemberg (BSZ). The focus was the "Digitaler Assistent DA-3": eleven talks presented application scenarios and experiences with the system, which is intended to support libraries and other research and cultural institutions in subject indexing. Frank Scholze (Director General of the DNB) opened and introduced the two workshop days; he sees the DA-3 as a building block for interlocking intellectual and machine-based indexing.
  15. Beckmann, R.; Hinrichs, I.; Janßen, M.; Milmeister, G.; Schäuble, P.: Der Digitale Assistent DA-3 : Eine Plattform für die Inhaltserschließung (2019) 0.00
    4.32414E-4 = product of:
      0.0060537956 = sum of:
        0.0060537956 = weight(_text_:information in 5408) [ClassicSimilarity], result of:
          0.0060537956 = score(doc=5408,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.116372846 = fieldWeight in 5408, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=5408)
      0.071428575 = coord(1/14)
    
    Abstract
    Der "Digitale Assistent" DA-3 ist ein webbasiertes Tool zur maschinellen Unterstützung der intellektuellen verbalen und klassifikatorischen Inhaltserschließung. Im Frühjahr 2016 wurde einer breiteren Fachöffentlichkeit die zunächst für den Einsatz im IBS|BW-Konsortium konzipierte Vorgängerversion DA-2 vorgestellt. Die Community nahm die Entwicklung vor dem Hintergrund der strategischen Diskussionen um zukunftsfähige Verfahren der Inhaltserschließung mit großem Interesse auf. Inzwischen wird das Tool in einem auf drei Jahre angelegten Kooperationsprojekt zwischen der Firma Eurospider Information Technology, dem IBS|BW-Konsortium, der Staatsbibliothek zu Berlin und den beiden Verbundzentralen VZG und BSZ zu einem zentralen und leistungsstarken Service weiterentwickelt. Die ersten Anwenderbibliotheken in SWB und GBV setzen den DA-3 während dieser Projektphase bereits erfolgreich ein, am Ende ist die Überführung in den Routinebetrieb vorgesehen. Der Beitrag beschreibt den derzeitigen Stand und Nutzen des Projekts im Kontext der aktuellen Rahmenbedingungen, stellt ausführlich die Funktionalitäten des DA-3 vor, gibt einen kleinen Einblick hinter die Kulissen der Projektpartner und einen Ausblick auf kommende Entwicklungsschritte.
  16. Giesselbach, S.; Estler-Ziegler, T.: Dokumente schneller analysieren mit Künstlicher Intelligenz (2021) 0.00
    3.6034497E-4 = product of:
      0.0050448296 = sum of:
        0.0050448296 = weight(_text_:information in 128) [ClassicSimilarity], result of:
          0.0050448296 = score(doc=128,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.09697737 = fieldWeight in 128, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=128)
      0.071428575 = coord(1/14)
    
    Footnote
    Talk given in the context of the Berliner Arbeitskreis Information (BAK) on 25.02.2021.