Search (301 results, page 1 of 16)

  • Active filter: type_ss:"el"
  1. Schönherr, M.: Bestechend brillant : die Schönheit der Algorithmen (2016) 0.08
    Score 0.0800 = coord(1/2) × (0.0900 + 0.0701), from two matching terms in doc 2762:
      "software": 0.0900 = queryWeight(0.2053) × fieldWeight(0.4383); tf=√2≈1.414 (freq=2), idf=3.9672 (docFreq=2274, maxDocs=44218), fieldNorm=0.078125, queryNorm=0.0517
      "22": 0.0701 = queryWeight(0.1812) × fieldWeight(0.3869); tf=√2≈1.414 (freq=2), idf=3.5018 (docFreq=3622, maxDocs=44218), fieldNorm=0.078125, queryNorm=0.0517
    
    Abstract
    Algorithms are recipes from which computer programs are made. There is no software without algorithms. The more elegant an algorithm, the less computing power it needs to reach its goal. Algorithms can be expensive, however, and demand for solutions to new and old problems is enormous.
    Date
    10. 2.2016 17:22:23
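    The score breakdowns shown for each hit follow Lucene's ClassicSimilarity (TF-IDF) explain output. As a minimal sketch, one `weight(_text_:term)` leaf can be recomputed from the quantities the tree reports, using ClassicSimilarity's documented formulas (tf = √freq, idf = 1 + ln(maxDocs/(docFreq+1))); the inputs below are copied from result 1:

    ```python
    import math

    def term_score(freq, doc_freq, max_docs, field_norm, query_norm):
        """Recompute one weight(_text_:term) leaf of a ClassicSimilarity
        explain tree: score = queryWeight * fieldWeight."""
        tf = math.sqrt(freq)                               # tf(freq) = sqrt(freq)
        idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))  # idf(docFreq, maxDocs)
        query_weight = idf * query_norm                    # queryWeight
        field_weight = tf * idf * field_norm               # fieldWeight
        return query_weight * field_weight

    # Term "software" in doc 2762 (result 1), values from the explain tree:
    software = term_score(freq=2.0, doc_freq=2274, max_docs=44218,
                          field_norm=0.078125, query_norm=0.051742528)
    # Term "22" in the same document:
    term_22 = term_score(freq=2.0, doc_freq=3622, max_docs=44218,
                         field_norm=0.078125, query_norm=0.051742528)
    # Final score: coord(1/2) because one of two top-level query clauses matched.
    total = 0.5 * (software + term_22)
    print(total)  # close to the listed 0.080038294
    ```

    The same arithmetic reproduces every leaf in the list; only freq, docFreq, and fieldNorm vary per document and field.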
  2. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.07
    Score 0.0685 = coord(1/2) × coord(1/3) × 0.4109 ("3a" in doc 1826: idf=8.478, docFreq=24, freq=2, fieldNorm=0.078125)
    
    Source
    http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107
  3. Bittner, T.; Donnelly, M.; Winter, S.: Ontology and semantic interoperability (2006) 0.07
    Score 0.0678 = coord(1/2) × (0.0935 "software" [freq=6] + 0.0421 "22") in doc 4820, fieldNorm=0.046875
    
    Abstract
    One of the major problems facing systems for Computer Aided Design (CAD), Architecture Engineering and Construction (AEC) and Geographic Information Systems (GIS) applications today is the lack of interoperability among the various systems. When integrating software applications, substantial difficulties can arise in translating information from one application to the other. In this paper, we focus on semantic difficulties that arise in software integration. Applications may use different terminologies to describe the same domain. Even when applications use the same terminology, they often associate different semantics with the terms. This obstructs information exchange among applications. To circumvent this obstacle, we need some way of explicitly specifying the semantics for each terminology in an unambiguous fashion. Ontologies can provide such specification. It will be the task of this paper to explain what ontologies are and how they can be used to facilitate interoperability between software systems used in computer aided design, architecture engineering and construction, and geographic information processing.
    Date
    3.12.2016 18:39:22
  4. Lezius, W.: Morphy - Morphologie und Tagging für das Deutsche (2013) 0.06
    Score 0.0640 = coord(1/2) × (0.0720 "software" + 0.0561 "22") in doc 1490, fieldNorm=0.0625
    
    Abstract
    Morphy is a freely available software package for morphological analysis and synthesis and for context-sensitive part-of-speech tagging of German. Use of the software is subject to no restrictions. Since development has been discontinued, use Morphy as is, i.e. at your own risk, without any liability or warranty and, above all, without support. Morphy is available only for the Windows platform and runs only on standalone PCs.
    Date
    22. 3.2015 9:30:24
  5. Knorz, G.; Rein, B.: Semantische Suche in einer Hochschulontologie : Ontologie-basiertes Information-Filtering und -Retrieval mit relationalen Datenbanken (2005) 0.06
    Score 0.0560 = coord(1/2) × (0.0630 "software" + 0.0491 "22") in doc 4324, fieldNorm=0.0546875
    
    Abstract
    Ontologies are deployed so that semantic grounding provides a fundamentally better basis, especially for document retrieval, than the current state of the art offers. This paper presents an ontology developed and deployed at the FH Darmstadt that is intended both to cover the subject area of higher education broadly and, at the same time, to describe it in differentiated semantic detail. The problem with semantic search is that it must be as simple for information seekers to use as popular search engines while delivering high-quality results on the basis of the elaborate information model. The paper describes the capabilities provided by the software K-Infinity and the concept by which these capabilities are used for a semantic search for documents and other information units (people, events, projects, etc.).
    Date
    11. 2.2011 18:22:25
  6. Mühlbauer, P.: Upload in Computer klappt. (2018) 0.06
    Score 0.0560 = coord(1/2) × (0.0630 "software" + 0.0491 "22") in doc 4113, fieldNorm=0.0546875
    
    Abstract
    The three computer scientists Mathias Lechner, Radu Grosu and Ramin Hasani, researchers at the Vienna University of Technology, have succeeded in transferring the nervous system of the nematode Caenorhabditis elegans (C. elegans) into a computer as software and in demonstrating that the "uploaded" virtual worm responds to stimuli exactly as a real nematode responds to real stimuli in the physical world. To show this, they had it master a task that, according to Hasani, resembles balancing a pole.
    Date
    12. 2.2018 15:22:19
  7. Popper, K.R.: Three worlds : the Tanner lecture on human values. Delivered at the University of Michigan, April 7, 1978 (1978) 0.05
    Score 0.0548 = coord(1/2) × coord(1/3) × 0.3287 ("3a" in doc 230: idf=8.478, docFreq=24, freq=2, fieldNorm=0.0625)
    
    Source
    https://tannerlectures.utah.edu/_documents/a-to-z/p/popper80.pdf
  8. Palm, F.: QVIZ : Query and context based visualization of time-spatial cultural dynamics (2007) 0.05
    Score 0.0480 = coord(1/2) × (0.0540 "software" + 0.0421 "22") in doc 1289, fieldNorm=0.046875
    
    Abstract
    QVIZ will research and create a framework for visualizing and querying archival resources by a time-space interface based on maps and emergent knowledge structures. The framework will also integrate social software, such as wikis, in order to utilize knowledge in existing and new communities of practice. QVIZ will lead to improved information sharing and knowledge creation, easier access to information in a user-adapted context and innovative ways of exploring and visualizing materials over time, between countries and other administrative units. The common European framework for sharing and accessing archival information provided by the QVIZ project will open a considerably larger commercial market based on archival materials as well as a richer understanding of European history.
    Content
    Lecture delivered at the workshop "Extending the multilingual capacity of The European Library in the EDL project", Stockholm, Swedish National Library, 22-23 November 2007.
  9. Pro-Cite 2.0 for the IBM and Biblio-Link to USMARC communications format records (1993) 0.05
    Score 0.0468 = coord(1/2) × coord(1/2) × 0.1870 ("software" in doc 5618: freq=6, fieldNorm=0.09375)
    
    Imprint
    Ann Arbor, MI 48106 : Personal Bibliographic Software, P.O. box 4250
    Issue
    [Software]
    Theme
    Bibliographische Software
  10. ¬The Software Toolworks multimedia encyclopedia (1992) 0.04
    Score 0.0445 = coord(1/2) × coord(1/2) × 0.1781 ("software" in doc 3599: freq=4, fieldNorm=0.109375)
    
    Imprint
    Novato, CA : Software Toolworks
  11. Dell'Orso, F.: Bibliography management software : with a detailed analysis of some packages (2008) 0.04
    Score 0.0445 = coord(1/2) × coord(1/2) × 0.1781 ("software" in doc 2373: freq=4, fieldNorm=0.109375)
    
    Theme
    Bibliographische Software
  12. Si, L.E.; O'Brien, A.; Probets, S.: Integration of distributed terminology resources to facilitate subject cross-browsing for library portal systems (2009) 0.04
    Score 0.0400 = coord(1/2) × (0.0450 "software" + 0.0351 "22") in doc 3628, fieldNorm=0.0390625
    
    Abstract
    Purpose: To develop a prototype middleware framework between different terminology resources in order to provide a subject cross-browsing service for library portal systems. Design/methodology/approach: Nine terminology experts were interviewed to collect appropriate knowledge to support the development of a theoretical framework for the research. Based on this, a simplified software-based prototype system was constructed incorporating the knowledge acquired. The prototype involved mappings between the computer science schedule of the Dewey Decimal Classification (which acted as a spine) and two controlled vocabularies UKAT and ACM Computing Classification. Subsequently, six further experts in the field were invited to evaluate the prototype system and provide feedback to improve the framework. Findings: The major findings showed that given the large variety of terminology resources distributed on the web, the proposed middleware service is essential to integrate technically and semantically the different terminology resources in order to facilitate subject cross-browsing. A set of recommendations are also made outlining the important approaches and features that support such a cross browsing middleware service.
    Content
    This paper is a pre-print version presented at the ISKO UK 2009 conference, 22-23 June, prior to peer review and editing. For published proceedings see special issue of Aslib Proceedings journal.
  13. Shala, E.: ¬Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014) 0.03
    Score 0.0342 = coord(1/2) × coord(1/3) × 0.2055 ("3a" in doc 4388: idf=8.478, docFreq=24, freq=2, fieldNorm=0.0390625)
    
    Footnote
    See: https://www.researchgate.net/publication/271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls
  14. Dietz, K.: en.wikipedia.org > 6 Mio. Artikel (2020) 0.03
    Score 0.0342 = coord(1/2) × coord(1/3) × 0.2055 ("3a" in doc 5669: idf=8.478, docFreq=24, freq=2, fieldNorm=0.0390625)
    
    Content
    "Die Englischsprachige Wikipedia verfügt jetzt über mehr als 6 Millionen Artikel. An zweiter Stelle kommt die deutschsprachige Wikipedia mit 2.3 Millionen Artikeln, an dritter Stelle steht die französischsprachige Wikipedia mit 2.1 Millionen Artikeln (via Researchbuzz: Firehose <https://rbfirehose.com/2020/01/24/techcrunch-wikipedia-now-has-more-than-6-million-articles-in-english/> und Techcrunch <https://techcrunch.com/2020/01/23/wikipedia-english-six-million-articles/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&guccounter=1&guce_referrer=aHR0cHM6Ly9yYmZpcmVob3NlLmNvbS8yMDIwLzAxLzI0L3RlY2hjcnVuY2gtd2lraXBlZGlhLW5vdy1oYXMtbW9yZS10aGFuLTYtbWlsbGlvbi1hcnRpY2xlcy1pbi1lbmdsaXNoLw&guce_referrer_sig=AQAAAK0zHfjdDZ_spFZBF_z-zDjtL5iWvuKDumFTzm4HvQzkUfE2pLXQzGS6FGB_y-VISdMEsUSvkNsg2U_NWQ4lwWSvOo3jvXo1I3GtgHpP8exukVxYAnn5mJspqX50VHIWFADHhs5AerkRn3hMRtf_R3F1qmEbo8EROZXp328HMC-o>). 250120 via digithek ch = #fineBlog s.a.: Angesichts der Veröffentlichung des 6-millionsten Artikels vergangene Woche in der englischsprachigen Wikipedia hat die Community-Zeitungsseite "Wikipedia Signpost" ein Moratorium bei der Veröffentlichung von Unternehmensartikeln gefordert. Das sei kein Vorwurf gegen die Wikimedia Foundation, aber die derzeitigen Maßnahmen, um die Enzyklopädie gegen missbräuchliches undeklariertes Paid Editing zu schützen, funktionierten ganz klar nicht. *"Da die ehrenamtlichen Autoren derzeit von Werbung in Gestalt von Wikipedia-Artikeln überwältigt werden, und da die WMF nicht in der Lage zu sein scheint, dem irgendetwas entgegenzusetzen, wäre der einzige gangbare Weg für die Autoren, fürs erste die Neuanlage von Artikeln über Unternehmen zu untersagen"*, schreibt der Benutzer Smallbones in seinem Editorial <https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2020-01-27/From_the_editor> zur heutigen Ausgabe."
  15. Bradford, R.B.: Relationship discovery in large text collections using Latent Semantic Indexing (2006) 0.03
    Score 0.0320 = coord(1/2) × (0.0360 "software" + 0.0280 "22") in doc 1163, fieldNorm=0.03125
    
    Abstract
    This paper addresses the problem of information discovery in large collections of text. For users, one of the key problems in working with such collections is determining where to focus their attention. In selecting documents for examination, users must be able to formulate reasonably precise queries. Queries that are too broad will greatly reduce the efficiency of information discovery efforts by overwhelming the users with peripheral information. In order to formulate efficient queries, a mechanism is needed to automatically alert users regarding potentially interesting information contained within the collection. This paper presents the results of an experiment designed to test one approach to generation of such alerts. The technique of latent semantic indexing (LSI) is used to identify relationships among entities of interest. Entity extraction software is used to pre-process the text of the collection so that the LSI space contains representation vectors for named entities in addition to those for individual terms. In the LSI space, the cosine of the angle between the representation vectors for two entities captures important information regarding the degree of association of those two entities. For appropriate choices of entities, determining the entity pairs with the highest mutual cosine values yields valuable information regarding the contents of the text collection. The test database used for the experiment consists of 150,000 news articles. The proposed approach for alert generation is tested using a counterterrorism analysis example. The approach is shown to have significant potential for aiding users in rapidly focusing on information of potential importance in large text collections. The approach also has value in identifying possible use of aliases.
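    The entity-association measure described above is plain cosine similarity between LSI representation vectors. A minimal sketch, using hypothetical 3-dimensional vectors (real LSI spaces typically have a few hundred dimensions):

    ```python
    import math

    def cosine(u, v):
        """Cosine of the angle between two representation vectors; values
        near 1.0 indicate strongly associated entities in the LSI space."""
        dot = sum(x * y for x, y in zip(u, v))
        norm_u = math.sqrt(sum(x * x for x in u))
        norm_v = math.sqrt(sum(x * x for x in v))
        return dot / (norm_u * norm_v)

    # Hypothetical low-dimensional vectors for two named entities:
    entity_a = [0.8, 0.1, 0.3]
    entity_b = [0.7, 0.2, 0.4]
    print(cosine(entity_a, entity_b))  # high value -> candidate relationship
    ```

    Ranking entity pairs by this value, as the paper proposes, surfaces the most strongly associated pairs for analyst review.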
    Source
    Proceedings of the Fourth Workshop on Link Analysis, Counterterrorism, and Security, SIAM Data Mining Conference, Bethesda, MD, 20-22 April, 2006. [http://www.siam.org/meetings/sdm06/workproceed/Link%20Analysis/15.pdf]
  16. Gillitzer, B.: Yewno (2017) 0.03
    Score 0.0320 = coord(1/2) × (0.0360 "software" + 0.0280 "22") in doc 3447, fieldNorm=0.03125
    
    Abstract
    "Die Bayerische Staatsbibliothek testet den semantischen "Discovery Service" Yewno als zusätzliche thematische Suchmaschine für digitale Volltexte. Der Service ist unter folgendem Link erreichbar: https://www.bsb-muenchen.de/recherche-und-service/suchen-und-finden/yewno/. Das Identifizieren von Themen, um die es in einem Text geht, basiert bei Yewno alleine auf Methoden der künstlichen Intelligenz und des maschinellen Lernens. Dabei werden sie nicht - wie bei klassischen Katalogsystemen - einem Text als Ganzem zugeordnet, sondern der jeweiligen Textstelle. Die Eingabe eines Suchwortes bzw. Themas, bei Yewno "Konzept" genannt, führt umgehend zu einer grafischen Darstellung eines semantischen Netzwerks relevanter Konzepte und ihrer inhaltlichen Zusammenhänge. So ist ein Navigieren über thematische Beziehungen bis hin zu den Fundstellen im Text möglich, die dann in sogenannten Snippets angezeigt werden. In der Test-Anwendung der Bayerischen Staatsbibliothek durchsucht Yewno aktuell 40 Millionen englischsprachige Dokumente aus Publikationen namhafter Wissenschaftsverlage wie Cambridge University Press, Oxford University Press, Wiley, Sage und Springer, sowie Dokumente, die im Open Access verfügbar sind. Nach der dreimonatigen Testphase werden zunächst die Rückmeldungen der Nutzer ausgewertet. Ob und wann dann der Schritt von der klassischen Suchmaschine zum semantischen "Discovery Service" kommt und welche Bedeutung Anwendungen wie Yewno in diesem Zusammenhang einnehmen werden, ist heute noch nicht abzusehen. Die Software Yewno wurde vom gleichnamigen Startup in Zusammenarbeit mit der Stanford University entwickelt, mit der auch die Bayerische Staatsbibliothek eng kooperiert. [Inetbib-Posting vom 22.02.2017].
    Date
    22. 2.2017 10:16:49
  17. Scobel, G.: GPT: Eine Software, die die Welt verändert (2023) 0.03
    Score 0.0318 = coord(1/2) × coord(1/2) × 0.1272 ("software" in doc 839: freq=4, fieldNorm=0.078125)
    
    Abstract
    GPT-3 is one of those developments that gain influence and reach within just a few months. The software will have a massive impact on the economy and society.
  18. Pumuckl musiziert (1996) 0.03
    Score 0.0315 = coord(1/2) × coord(1/2) × 0.1260 ("software" in doc 5477: freq=2, fieldNorm=0.109375)
    
    Imprint
    ? : ESCAL Software
  19. Atzbach, R.: ¬Der Rechtschreibtrainer : Rechtschreibübungen und -spiele für die 5. bis 9. Klasse (1996) 0.03
    Score 0.0315 = coord(1/2) × coord(1/2) × 0.1260 ("software" in doc 5579: freq=2, fieldNorm=0.109375)
    
    Imprint
    Berlin : Cornelsen Software
  20. Koch, T.; Ardö, A.: Automatic classification of full-text HTML-documents from one specific subject area : DESIRE II D3.6a, Working Paper 2 (2000) 0.03
    Score 0.0312 = coord(1/2) × coord(1/2) × 0.1247 ("software" in doc 1667: freq=6, fieldNorm=0.0625)
    
    Content
    1 Introduction / 2 Method overview / 3 Ei thesaurus preprocessing / 4 Automatic classification process: 4.1 Matching -- 4.2 Weighting -- 4.3 Preparation for display / 5 Results of the classification process / 6 Evaluations / 7 Software / 8 Other applications / 9 Experiments with universal classification systems / References / Appendix A: Ei classification service: Software / Appendix B: Use of the classification software as subject filter in a WWW harvester.

Languages

  • e 150
  • d 143
  • el 2
  • a 1
  • i 1
  • nl 1

Types

  • a 126
  • i 17
  • m 7
  • r 6
  • x 5
  • b 4
  • n 3
  • s 3
  • p 1