Search (2199 results, page 3 of 110)

  • Filter: year_i:[2010 TO 2020}
  1. Belém, F.M.; Almeida, J.M.; Gonçalves, M.A.: ¬A survey on tag recommendation methods : a review (2017) 0.07
    0.07275806 = product of:
      0.21827418 = sum of:
        0.068615906 = weight(_text_:tagging in 3524) [ClassicSimilarity], result of:
          0.068615906 = score(doc=3524,freq=2.0), product of:
            0.21038401 = queryWeight, product of:
              5.9038734 = idf(docFreq=327, maxDocs=44218)
              0.035634913 = queryNorm
            0.326146 = fieldWeight in 3524, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.9038734 = idf(docFreq=327, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3524)
        0.029650755 = weight(_text_:web in 3524) [ClassicSimilarity], result of:
          0.029650755 = score(doc=3524,freq=4.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.25496176 = fieldWeight in 3524, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3524)
        0.029650755 = weight(_text_:web in 3524) [ClassicSimilarity], result of:
          0.029650755 = score(doc=3524,freq=4.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.25496176 = fieldWeight in 3524, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3524)
        0.09035677 = sum of:
          0.06621657 = weight(_text_:2.0 in 3524) [ClassicSimilarity], result of:
            0.06621657 = score(doc=3524,freq=2.0), product of:
              0.20667298 = queryWeight, product of:
                5.799733 = idf(docFreq=363, maxDocs=44218)
                0.035634913 = queryNorm
              0.320393 = fieldWeight in 3524, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.799733 = idf(docFreq=363, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3524)
          0.024140194 = weight(_text_:22 in 3524) [ClassicSimilarity], result of:
            0.024140194 = score(doc=3524,freq=2.0), product of:
              0.12478739 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.035634913 = queryNorm
              0.19345059 = fieldWeight in 3524, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3524)
      0.33333334 = coord(4/12)
    
    Abstract
    Tags (keywords freely assigned by users to describe web content) have become highly popular on Web 2.0 applications, because of the strong stimuli and the ease with which users can create and describe their own content. This increase in tag popularity has led to a vast literature on tag recommendation methods. These methods aim at assisting users in the tagging process, possibly increasing the quality of the generated tags and, consequently, improving the quality of the information retrieval (IR) services that rely on tags as data sources. Despite the numerous and diverse previous studies on tag recommendation, to our knowledge, no previous work has summarized and organized them into a single survey article. In this article, we propose a taxonomy for tag recommendation methods, classifying them according to the target of the recommendations, their objectives, exploited data sources, and underlying techniques. Moreover, we provide a critical overview of these methods, pointing out their advantages and disadvantages. Finally, we describe the main open challenges related to the field, such as tag ambiguity, cold start, and evaluation issues.
    Date
    16.11.2017 13:30:22
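    Note: the scoring breakdown above is Lucene's ClassicSimilarity (TF-IDF) explain output. Each matching term contributes queryWeight x fieldWeight, where queryWeight = idf x queryNorm and fieldWeight = sqrt(termFreq) x idf x fieldNorm; the record score is the sum of these contributions times the coordination factor coord(matching clauses / total clauses). "2.0" and "22" appear as literal query terms here. A minimal sketch that reproduces the listed 0.07275806 for this record from the values in the tree (the helper name is illustrative, not the Lucene API):

      import math

      def term_score(freq, idf, query_norm, field_norm):
          """ClassicSimilarity per-term contribution: queryWeight * fieldWeight."""
          query_weight = idf * query_norm                      # e.g. 5.9038734 * 0.035634913 = 0.21038401
          field_weight = math.sqrt(freq) * idf * field_norm    # tf(freq) = sqrt(freq)
          return query_weight * field_weight

      # Values copied from the explain output for doc 3524 above.
      tagging = term_score(2.0, 5.9038734, 0.035634913, 0.0390625)  # ~0.0686159
      web     = term_score(4.0, 3.2635105, 0.035634913, 0.0390625)  # ~0.0296508, counted twice (two clauses)
      t_20    = term_score(2.0, 5.799733,  0.035634913, 0.0390625)  # query term "2.0", ~0.0662166
      t_22    = term_score(2.0, 3.5018296, 0.035634913, 0.0390625)  # query term "22", ~0.0241402

      total = (tagging + web + web + t_20 + t_22) * (4 / 12)        # coord(4/12)
      print(round(total, 6))  # 0.072758, matching the listed 0.07275806 up to float rounding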
  2. Konkova, E.; Göker, A.; Butterworth, R.; MacFarlane, A.: Social tagging: exploring the image, the tags, and the game (2014) 0.07
    0.072252646 = product of:
      0.28901058 = sum of:
        0.21784876 = weight(_text_:tagging in 1370) [ClassicSimilarity], result of:
          0.21784876 = score(doc=1370,freq=14.0), product of:
            0.21038401 = queryWeight, product of:
              5.9038734 = idf(docFreq=327, maxDocs=44218)
              0.035634913 = queryNorm
            1.0354816 = fieldWeight in 1370, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              5.9038734 = idf(docFreq=327, maxDocs=44218)
              0.046875 = fieldNorm(doc=1370)
        0.035580907 = weight(_text_:web in 1370) [ClassicSimilarity], result of:
          0.035580907 = score(doc=1370,freq=4.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.3059541 = fieldWeight in 1370, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1370)
        0.035580907 = weight(_text_:web in 1370) [ClassicSimilarity], result of:
          0.035580907 = score(doc=1370,freq=4.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.3059541 = fieldWeight in 1370, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1370)
      0.25 = coord(3/12)
    
    Abstract
    Large image collections on the Web need to be organized for effective retrieval. Metadata has a key role in image retrieval, but relying on professionally assigned tags is not a viable option. Current content-based image retrieval systems have not demonstrated sufficient utility on large-scale image sources on the web, and are usually used as a supplement to existing text-based image retrieval systems. We present two social tagging alternatives in the form of photo-sharing networks and image labeling games. Here we analyze these applications to evaluate their usefulness from the semantic point of view, investigating the management of social tagging for indexing. The findings of the study show that social tagging can generate a sizeable number of tags that can be classified as interpretive for an image, and that tagging behaviour has a manageable and adjustable nature depending on tagging guidelines.
    Theme
    Social tagging
  3. Qualman, E.: Socialnomics : wie Social-Media Wirtschaft und Gesellschaft verändern (2010) 0.07
    0.07174511 = product of:
      0.17218828 = sum of:
        0.02905169 = weight(_text_:web in 3588) [ClassicSimilarity], result of:
          0.02905169 = score(doc=3588,freq=6.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.24981049 = fieldWeight in 3588, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=3588)
        0.032903954 = weight(_text_:world in 3588) [ClassicSimilarity], result of:
          0.032903954 = score(doc=3588,freq=4.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.24022943 = fieldWeight in 3588, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03125 = fieldNorm(doc=3588)
        0.043723192 = weight(_text_:wide in 3588) [ClassicSimilarity], result of:
          0.043723192 = score(doc=3588,freq=4.0), product of:
            0.1578897 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.035634913 = queryNorm
            0.2769224 = fieldWeight in 3588, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=3588)
        0.02905169 = weight(_text_:web in 3588) [ClassicSimilarity], result of:
          0.02905169 = score(doc=3588,freq=6.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.24981049 = fieldWeight in 3588, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=3588)
        0.037457753 = product of:
          0.074915506 = sum of:
            0.074915506 = weight(_text_:2.0 in 3588) [ClassicSimilarity], result of:
              0.074915506 = score(doc=3588,freq=4.0), product of:
                0.20667298 = queryWeight, product of:
                  5.799733 = idf(docFreq=363, maxDocs=44218)
                  0.035634913 = queryNorm
                0.36248332 = fieldWeight in 3588, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.799733 = idf(docFreq=363, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3588)
          0.5 = coord(1/2)
      0.41666666 = coord(5/12)
    
    Footnote
    Review in Mitt. VÖB 63(2010) H.1/2, S.148-149 (M. Buzinkay): "At last another book title that not only appealed to me immediately but also delivered what it promised: an in-depth read on a highly topical and very important subject that must concern individuals as well as organizations, companies, and associations alike. "How social media are changing business and society" is the subtitle of Erik Qualman's work. The author backs up his claims with well-chosen examples that underpin his argument. What is nice is that these examples can be used right away as hands-on tips for one's own online work. Given the sheer endless number of examples, however, one has to wonder whether one will ever be able to implement even a fraction of these useful suggestions in one's own organization. In short: you can take the book to bed and read it in one sitting. You will not get bored; just bring enough Post-its to mark and note everything important. The next morning, though, you should set the book aside, because the secret of Socialnomics lies in putting it into practice on the web. An urgent recommendation for everyone interested in marketing!"
    RSWK
    Unternehmen / World Wide Web 2.0 / Marketing (BVB)
    Subject
    Unternehmen / World Wide Web 2.0 / Marketing (BVB)
  4. Hänger, C.; Krätzsch, C.; Niemann, C.: Was vom Tagging übrig blieb : Erkenntnisse und Einsichten aus zwei Jahren Projektarbeit (2011) 0.07
    0.07132681 = product of:
      0.2139804 = sum of:
        0.16091856 = weight(_text_:tagging in 4519) [ClassicSimilarity], result of:
          0.16091856 = score(doc=4519,freq=44.0), product of:
            0.21038401 = queryWeight, product of:
              5.9038734 = idf(docFreq=327, maxDocs=44218)
              0.035634913 = queryNorm
            0.7648802 = fieldWeight in 4519, product of:
              6.6332498 = tf(freq=44.0), with freq of:
                44.0 = termFreq=44.0
              5.9038734 = idf(docFreq=327, maxDocs=44218)
              0.01953125 = fieldNorm(doc=4519)
        0.014825378 = weight(_text_:web in 4519) [ClassicSimilarity], result of:
          0.014825378 = score(doc=4519,freq=4.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.12748088 = fieldWeight in 4519, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.01953125 = fieldNorm(doc=4519)
        0.014825378 = weight(_text_:web in 4519) [ClassicSimilarity], result of:
          0.014825378 = score(doc=4519,freq=4.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.12748088 = fieldWeight in 4519, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.01953125 = fieldNorm(doc=4519)
        0.023411095 = product of:
          0.04682219 = sum of:
            0.04682219 = weight(_text_:2.0 in 4519) [ClassicSimilarity], result of:
              0.04682219 = score(doc=4519,freq=4.0), product of:
                0.20667298 = queryWeight, product of:
                  5.799733 = idf(docFreq=363, maxDocs=44218)
                  0.035634913 = queryNorm
                0.22655207 = fieldWeight in 4519, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.799733 = idf(docFreq=363, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=4519)
          0.5 = coord(1/2)
      0.33333334 = coord(4/12)
    
    Abstract
    The DFG project "Collaborative Tagging als neue Form der Sacherschließung": In October 2008, the DFG project "Collaborative Tagging als neue Form der Sacherschließung" (collaborative tagging as a new form of subject indexing) was launched at Mannheim University Library. Over two years, the project examined what contribution the Web 2.0 phenomenon of tagging can make to the subject indexing of documents that had not previously been indexed and were therefore barely accessible for use. The free assignment of keywords in databases by the users themselves had already proven extremely efficient on many platforms, especially for content that is not amenable to automatic indexing. Huge quantities of images (FlickR), films (YouTube), and music (LastFM) were thus made searchable and identifiable through tagging. The project's research question was accordingly whether, and at what quality, the same procedure could be used to index, for example, documents on full-text servers or in electronic journals. To answer this question, which could potentially have far-reaching consequences for subject indexing by subject specialists, a whole complex of sub-questions and sub-steps was identified and designed. At its core, however, every step of the investigation revolved around two central dimensions: the "acceptance" and the "quality" of tagging. The acceptance of tagging was first evaluated among the students and researchers of the University of Mannheim. For certain periods, tagging systems in different configurations were connected to the university library's search services. The acceptance of the individual system configurations could then be evaluated by analyzing log files and through database queries. For the quality of indexing, a mix of methods was used, which was repeatedly adapted over the course of the project to current developments and to the results of the preceding analyses. The tags were compared, with regard to their contribution to information retrieval, with methods of automatic full-text indexing as well as with indexing by subject specialists. The final goal was a well-founded recommendation on how documents that have not yet been indexed can best be indexed: automatically, with tags, or through a combination of both approaches.
    Content
    "Was vom Tagging übrig blieb: Empfehlungen und Fazit - Akzeptanz des Taggings Es kann von einer grundsätzlich hohen Bereitschaft der Nutzer ausgegangen werden, wissenschaftliche Quellen durch Tags zu organisieren und zu erschließen. Diese Bereitschaft hängt allerdings wesentlich davon ab, ob ein System durch entsprechende Datenbestände genügend Ergebnisse liefert, um für eine Recherche reizvoll zu erscheinen. Tagging-Systeme, die als "Insellösung" auf die Nutzer einer einzelnen Institution beschränkt sind, werden deshalb nicht ausreichend angenommen. Anbindungen an externe Dienste, deren Datenbestand sich aus vielen verschiedenen Quellen und Verknüpfungen speist, erfahren dagegen eine sehr gute Resonanz. Wissenschaftlichen Bibliotheken wird deshalb empfohlen, möglichst schnelle und einfache Verlinkungen zu erfolgreichen Tagging-Plattformen wie BibSonomy oder Citeulike anzubieten. Die Anzeige der dort verfügbaren Daten im eigenen Katalog ist ebenfalls wünschenswert und wird von den Nutzern befürwortet. - Verfahren zur Analyse von Tagging-Daten Für die Analyse der äußerst heterogenen Textdaten, die in Tagging-Systemen entstehen, wurden spezifische Verfahren entwickelt und angewendet, die je nach Datenausschnitt und Erkenntniszweck optimiert wurden. Nach erfolgreichen Testläufen wurde der Methodenmix jeweils für größere Datenmengen eingesetzt, um die aus den explorativen Studien gewonnen Hypothesen zu überprüfen. Dieses Vorgehen hat sich als äußerst fruchtbar herausgestellt. Alle durchgeführten Schritte und die daraus gewonnenen Erkenntnisse wurden in diversen Artikeln und Beiträgen veröffentlicht sowie auf zahlreichen nationalen und internationalen Konferenzen vorgestellt, um sie der Wissenschaftsgemeinschaft zur Verfügung zu stellen.
    - Struktur der Tags Der Vergleich von zwei großen Tagging-Systemen hat große Ähnlichkeiten in der grammatikalischen Struktur der Tagging-Daten ergeben. Es werden mehrheitlich Substantive bzw. Eigennamen zur Erschließung sowie auch Verben zur Organisation der Quellen eingesetzt. Systembedingt kann außerdem eine große Menge von Wortkombinationen und Wortneuschöpfungen konstatiert werden, die aus den unterschiedlichsten Beweggründen und für sehr unterschiedliche Zwecke gebildet werden. Nur ein geringer Teil der Tags entspricht den formalen Kriterien kontrollierter Vokabulare. Eine besondere Hierarchisierung der Tags innerhalb eines Tagging-Systems über den Indikator der Häufigkeit der Nutzung hinaus hat sich nicht ergeben. In inhaltlicher Hinsicht hat sich eine klare Dominanz informatiknaher bzw. naturwissenschaftlicher Disziplinen gezeigt, wobei es sich hierbei um systemspezifische Präferenzen handelt. Insgesamt ist eine klare Tendenz zu zunehmender inhaltlicher Diversifikation in den Tagging-Systemen zu erkennen, was mit hoher Wahrscheinlichkeit der wachsenden Akzeptanz durch breitere Nutzergruppen zuzuschreiben ist. - Qualität der Tags Bei der Evaluation der Qualität der Tags bestätigte sich die Einschätzung, dass sich die Verschlagwortung mittels Tagging von jener durch Fachreferenten grundsätzlich unterscheidet. Nur ein kleiner Teil der Konzepte wurde in den beiden Systemen semantisch identisch oder wenigstens analog vergeben. Grundsätzlich liegen für eine Ressource fast immer mehr Tags als Schlagwörter vor, die zudem wesentlich häufiger exklusiv im Tagging-System zu finden sind. Diese Tatsache berührt jedoch nicht den inhaltlichen Erschließungsgrad einer Quelle, der sich trotz einer geringeren Anzahl an SWD-Schlagwörtern pro Ressource in beiden Systemen als gleichwertig gezeigt hat. Dennoch ist das Ausmaß der semantischen Abdeckung des Taggings überraschend, da sie der allgemeinen Erwartungshaltung von einer deutlich höheren Qualität der Verschlagwortung durch die professionelle Inhaltserschließung teilweise widerspricht. Diese Erwartung ist zumindest bezüglich der inhaltlichen Dimension zu relativieren.
    - Fazit Der Beitrag des Taggings im Rahmen des bibliothekarischen Kontextes ist vor allem in der ergänzenden Erweiterung der Recherche- und Literaturverwaltungsfunktionalitäten der Online-Kataloge zu sehen. Durch Tagging können diese um eine nutzerorientierte Komponente ergänzt werden und signifkant an Attraktivität gewinnen. Systeme mit einem begrenzten Nutzerkreis sind allerdings zugunsten der Anbindung an etablierte Systeme zu vernachlässigen. Diese können einen parallel existierenden Zugang zu den vorhandenen Ressourcen liefern, der seine Stärken in einer explorativen, eher "unscharfen" Recherche entfaltet. Somit wird einem speziellen Bedürfnis der Nutzerinnen und Nutzer Rechnung getragen, dem durch die voraussetzungsreiche Verwendung von präzisen bibliothekarischen Schlagwörtern nicht immer entsprochen werden kann. Bezüglich der inhaltlichen Abdeckung einer Ressource erfüllt das Tagging jedenfalls die Anforderungen eines Recherchesystems, insofern eine ausreichende Mindestanzahl von Tags vorliegt. Natürlich ist es sehr wichtig, die Nutzerinnen und Nutzer ausreichend darüber zu informieren, dass Tagging - wie alle anderen Erschließungsmethoden auch - keine vollständige Abbildung der verfügbaren Ressourcen leistet. Es stellt lediglich einen von verschiedenen Zugangswegen mit spezifischen Besonderheiten und Ergebnissen zur Verfügung. Eine Kombination der Erschließungsverfahren "Fachreferenten", "Tagging" und "automatisch" ist hingegen nur für sehr spezielle Zielsetzungen und als Abfolge von Ergänzungs- und Aktualisierungsschritten sinnvoll. Eine gleichzeitige Integration der Verfahren würde aufgrund ihrer erheblichen Unterschiede eine deutliche Verschlechterung der Erschließungsqualität zur Folge haben. Sinnvoll ist daher eine gleichberechtigte Bereitstellung dieser Zugangswege bei sichtbarer Trennung für die Nutzer. Auf diese Weise können die Vorteile aller Verfahren genutzt werden, ohne sich ihre jeweiligen Nachteile zu eigen zu machen."
    Object
    Web 2.0
    Theme
    Social tagging
  5. Heuvel, C. van den: Multidimensional classifications : past and future conceptualizations and visualizations (2012) 0.07
    0.07101037 = product of:
      0.1704249 = sum of:
        0.02935275 = weight(_text_:web in 632) [ClassicSimilarity], result of:
          0.02935275 = score(doc=632,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.25239927 = fieldWeight in 632, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=632)
        0.040716566 = weight(_text_:world in 632) [ClassicSimilarity], result of:
          0.040716566 = score(doc=632,freq=2.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.29726875 = fieldWeight in 632, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0546875 = fieldNorm(doc=632)
        0.05410469 = weight(_text_:wide in 632) [ClassicSimilarity], result of:
          0.05410469 = score(doc=632,freq=2.0), product of:
            0.1578897 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.035634913 = queryNorm
            0.342674 = fieldWeight in 632, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=632)
        0.02935275 = weight(_text_:web in 632) [ClassicSimilarity], result of:
          0.02935275 = score(doc=632,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.25239927 = fieldWeight in 632, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=632)
        0.016898135 = product of:
          0.03379627 = sum of:
            0.03379627 = weight(_text_:22 in 632) [ClassicSimilarity], result of:
              0.03379627 = score(doc=632,freq=2.0), product of:
                0.12478739 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.035634913 = queryNorm
                0.2708308 = fieldWeight in 632, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=632)
          0.5 = coord(1/2)
      0.41666666 = coord(5/12)
    
    Abstract
    This paper maps the concepts "space" and "dimensionality" in classifications, in particular in visualizations thereof, from a historical perspective. After a historical excursion into the domain of classification theory of what in mathematics is known as dimensionality reduction in representations of a single universe of knowledge, its potential is explored for information retrieval and navigation in the multiverse of the World Wide Web.
    Date
    22. 2.2013 11:31:25
  6. Choi, Y.: ¬A complete assessment of tagging quality : a consolidated methodology (2015) 0.07
    0.07080228 = product of:
      0.28320912 = sum of:
        0.23289011 = weight(_text_:tagging in 1730) [ClassicSimilarity], result of:
          0.23289011 = score(doc=1730,freq=16.0), product of:
            0.21038401 = queryWeight, product of:
              5.9038734 = idf(docFreq=327, maxDocs=44218)
              0.035634913 = queryNorm
            1.1069763 = fieldWeight in 1730, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              5.9038734 = idf(docFreq=327, maxDocs=44218)
              0.046875 = fieldNorm(doc=1730)
        0.025159499 = weight(_text_:web in 1730) [ClassicSimilarity], result of:
          0.025159499 = score(doc=1730,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.21634221 = fieldWeight in 1730, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1730)
        0.025159499 = weight(_text_:web in 1730) [ClassicSimilarity], result of:
          0.025159499 = score(doc=1730,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.21634221 = fieldWeight in 1730, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1730)
      0.25 = coord(3/12)
    
    Abstract
    This paper presents a methodological discussion of a study of tagging quality in subject indexing. The data analysis in the study was divided into 3 phases: analysis of indexing consistency, analysis of tagging effectiveness, and analysis of the semantic values of tags. To analyze indexing consistency, this study employed the vector space model-based indexing consistency measures. An analysis of tagging effectiveness with tagging exhaustivity and tag specificity was conducted to ameliorate the drawbacks of consistency analysis based on only the quantitative measures of vocabulary matching. To further investigate the semantic values of tags at various levels of specificity, a latent semantic analysis (LSA) was conducted. To test statistical significance for the relation between tag specificity and semantic quality, correlation analysis was conducted. This research demonstrates the potential of tags for web document indexing with a complete assessment of tagging quality and provides a basis for further study of the strengths and limitations of tagging.
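    Note: the vector-space indexing consistency mentioned above is commonly operationalized as the cosine between two indexers' (or taggers') term vectors for the same document. A minimal sketch under that assumption follows; the exact measures and term weighting used in the study may differ, and all names and sample tags are illustrative:

      from collections import Counter
      import math

      def cosine_consistency(terms_a, terms_b):
          """Cosine similarity between two sets of index terms for the same document."""
          va, vb = Counter(terms_a), Counter(terms_b)
          vocab = set(va) | set(vb)
          dot = sum(va[t] * vb[t] for t in vocab)
          norm = math.sqrt(sum(v * v for v in va.values())) * math.sqrt(sum(v * v for v in vb.values()))
          return dot / norm if norm else 0.0

      # Illustrative: tags assigned by two taggers to the same document.
      tagger_a = ["tagging", "folksonomy", "indexing", "web"]
      tagger_b = ["tagging", "social tagging", "indexing"]
      print(round(cosine_consistency(tagger_a, tagger_b), 3))  # 0.577: two shared terms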
    Theme
    Social tagging
  7. Euzenat, J.; Shvaiko, P.: Ontology matching (2010) 0.07
    0.07018908 = product of:
      0.1684538 = sum of:
        0.04108529 = weight(_text_:web in 168) [ClassicSimilarity], result of:
          0.04108529 = score(doc=168,freq=12.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.35328537 = fieldWeight in 168, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=168)
        0.032903954 = weight(_text_:world in 168) [ClassicSimilarity], result of:
          0.032903954 = score(doc=168,freq=4.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.24022943 = fieldWeight in 168, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03125 = fieldNorm(doc=168)
        0.043723192 = weight(_text_:wide in 168) [ClassicSimilarity], result of:
          0.043723192 = score(doc=168,freq=4.0), product of:
            0.1578897 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.035634913 = queryNorm
            0.2769224 = fieldWeight in 168, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=168)
        0.04108529 = weight(_text_:web in 168) [ClassicSimilarity], result of:
          0.04108529 = score(doc=168,freq=12.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.35328537 = fieldWeight in 168, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=168)
        0.009656077 = product of:
          0.019312155 = sum of:
            0.019312155 = weight(_text_:22 in 168) [ClassicSimilarity], result of:
              0.019312155 = score(doc=168,freq=2.0), product of:
                0.12478739 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.035634913 = queryNorm
                0.15476047 = fieldWeight in 168, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=168)
          0.5 = coord(1/2)
      0.41666666 = coord(5/12)
    
    Abstract
    Ontologies are viewed as the silver bullet for many applications, but in open or evolving systems, different parties can adopt different ontologies. This increases heterogeneity problems rather than reducing heterogeneity. This book proposes ontology matching as a solution to the problem of semantic heterogeneity, offering researchers and practitioners a uniform framework of reference to currently available work. The techniques presented apply to database schema matching, catalog integration, XML schema matching and more. Ontologies tend to be found everywhere. They are viewed as the silver bullet for many applications, such as database integration, peer-to-peer systems, e-commerce, semantic web services, or social networks. However, in open or evolving systems, such as the semantic web, different parties would, in general, adopt different ontologies. Thus, merely using ontologies, like using XML, does not reduce heterogeneity: it just raises heterogeneity problems to a higher level. Euzenat and Shvaiko's book is devoted to ontology matching as a solution to the semantic heterogeneity problem faced by computer systems. Ontology matching aims at finding correspondences between semantically related entities of different ontologies. These correspondences may stand for equivalence as well as other relations, such as consequence, subsumption, or disjointness, between ontology entities. Many different matching solutions have been proposed so far from various viewpoints, e.g., databases, information systems, artificial intelligence. With Ontology Matching, researchers and practitioners will find a reference book which presents currently available work in a uniform framework. In particular, the work and the techniques presented in this book can equally be applied to database schema matching, catalog integration, XML schema matching and other related problems. The objectives of the book include presenting (i) the state of the art and (ii) the latest research results in ontology matching by providing a detailed account of matching techniques and matching systems in a systematic way from theoretical, practical and application perspectives.
    Date
    20. 6.2012 19:08:22
    LCSH
    World wide web
    RSWK
    Datenintegration / Informationssystem / Matching / Ontologie <Wissensverarbeitung> / Schema <Informatik> / Semantic Web
    Subject
    Datenintegration / Informationssystem / Matching / Ontologie <Wissensverarbeitung> / Schema <Informatik> / Semantic Web
    World wide web
  8. Oliveira Machado, L.M.; Souza, R.R.; Simões, M. da Graça: Semantic web or web of data? : a diachronic study (1999 to 2017) of the publications of Tim Berners-Lee and the World Wide Web Consortium (2019) 0.07
    0.06893462 = product of:
      0.20680386 = sum of:
        0.06953719 = weight(_text_:web in 5300) [ClassicSimilarity], result of:
          0.06953719 = score(doc=5300,freq=22.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.59793836 = fieldWeight in 5300, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5300)
        0.029083263 = weight(_text_:world in 5300) [ClassicSimilarity], result of:
          0.029083263 = score(doc=5300,freq=2.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.21233483 = fieldWeight in 5300, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5300)
        0.038646206 = weight(_text_:wide in 5300) [ClassicSimilarity], result of:
          0.038646206 = score(doc=5300,freq=2.0), product of:
            0.1578897 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.035634913 = queryNorm
            0.24476713 = fieldWeight in 5300, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5300)
        0.06953719 = weight(_text_:web in 5300) [ClassicSimilarity], result of:
          0.06953719 = score(doc=5300,freq=22.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.59793836 = fieldWeight in 5300, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5300)
      0.33333334 = coord(4/12)
    
    Abstract
    The web has been, in the last decades, the place where information retrieval achieved its maximum importance, given its ubiquity and the sheer volume of information. However, its exponential growth made the retrieval task increasingly hard, relying for its effectiveness on idiosyncratic and somewhat biased ranking algorithms. To deal with this problem, a "new" web, called the Semantic Web (SW), was proposed, bringing along concepts like "Web of Data" and "Linked Data," although the definitions and connections among these concepts are often unclear. Based on a qualitative approach built over a literature review, a definition of SW is presented, discussing the related concepts sometimes used as synonyms. It concludes that the SW is a comprehensive and ambitious construct that includes the great purpose of making the web a global database. It also follows the specifications developed and/or associated with its operationalization and the necessary procedures for the connection of data in an open format on the web. The goals of this comprehensive SW are the union of two outcomes still tenuously connected: the virtually unlimited possibility of connections between data (the web domain) with the potentiality of the automated inference of "intelligent" systems (the semantic component).
    Theme
    Semantic Web
  9. Vaidya, P.; Harinarayana, N.S.: ¬The comparative and analytical study of LibraryThing tags with Library of Congress Subject Headings (2016) 0.07
    0.06883134 = product of:
      0.206494 = sum of:
        0.11644506 = weight(_text_:tagging in 2492) [ClassicSimilarity], result of:
          0.11644506 = score(doc=2492,freq=4.0), product of:
            0.21038401 = queryWeight, product of:
              5.9038734 = idf(docFreq=327, maxDocs=44218)
              0.035634913 = queryNorm
            0.55348814 = fieldWeight in 2492, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.9038734 = idf(docFreq=327, maxDocs=44218)
              0.046875 = fieldNorm(doc=2492)
        0.025159499 = weight(_text_:web in 2492) [ClassicSimilarity], result of:
          0.025159499 = score(doc=2492,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.21634221 = fieldWeight in 2492, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2492)
        0.025159499 = weight(_text_:web in 2492) [ClassicSimilarity], result of:
          0.025159499 = score(doc=2492,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.21634221 = fieldWeight in 2492, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2492)
        0.039729945 = product of:
          0.07945989 = sum of:
            0.07945989 = weight(_text_:2.0 in 2492) [ClassicSimilarity], result of:
              0.07945989 = score(doc=2492,freq=2.0), product of:
                0.20667298 = queryWeight, product of:
                  5.799733 = idf(docFreq=363, maxDocs=44218)
                  0.035634913 = queryNorm
                0.3844716 = fieldWeight in 2492, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.799733 = idf(docFreq=363, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2492)
          0.5 = coord(1/2)
      0.33333334 = coord(4/12)
    
    Abstract
    The internet in its Web 2.0 version has given users the opportunity to participate and to enhance the existing system, making it dynamic and collaborative. The use of social tagging by researchers to organize digital resources is a topic of interest among information professionals. Organizing resources for future retrieval through these user-generated terms invites comparison with professionally created controlled vocabularies. In this study, an attempt has been made to compare Library of Congress Subject Headings (LCSH) terms with LibraryThing social tags. The results of this comparative analysis show that social tags can be used to enhance the metadata for information retrieval. Still, the uncontrolled nature of social tags is a concern and creates uncertainty among researchers.
    Theme
    Social tagging
  10. Haustein, S.; Sugimoto, C.; Larivière, V.: Social media in scholarly communication : Guest editorial (2015) 0.07
    0.06770995 = product of:
      0.1625039 = sum of:
        0.08155354 = weight(_text_:filter in 3809) [ClassicSimilarity], result of:
          0.08155354 = score(doc=3809,freq=4.0), product of:
            0.24899386 = queryWeight, product of:
              6.987357 = idf(docFreq=110, maxDocs=44218)
              0.035634913 = queryNorm
            0.32753235 = fieldWeight in 3809, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.987357 = idf(docFreq=110, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3809)
        0.028129177 = weight(_text_:web in 3809) [ClassicSimilarity], result of:
          0.028129177 = score(doc=3809,freq=10.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.24187797 = fieldWeight in 3809, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3809)
        0.017449958 = weight(_text_:world in 3809) [ClassicSimilarity], result of:
          0.017449958 = score(doc=3809,freq=2.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.12740089 = fieldWeight in 3809, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3809)
        0.028129177 = weight(_text_:web in 3809) [ClassicSimilarity], result of:
          0.028129177 = score(doc=3809,freq=10.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.24187797 = fieldWeight in 3809, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3809)
        0.007242058 = product of:
          0.014484116 = sum of:
            0.014484116 = weight(_text_:22 in 3809) [ClassicSimilarity], result of:
              0.014484116 = score(doc=3809,freq=2.0), product of:
                0.12478739 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.035634913 = queryNorm
                0.116070345 = fieldWeight in 3809, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=3809)
          0.5 = coord(1/2)
      0.41666666 = coord(5/12)
    
    Abstract
    One of the solutions to help scientists filter the most relevant publications and, thus, to stay current on developments in their fields during the transition from "little science" to "big science" was the introduction of citation indexing as a Wellsian "World Brain" (Garfield, 1964) of scientific information: It is too much to expect a research worker to spend an inordinate amount of time searching for the bibliographic descendants of antecedent papers. It would not be excessive to demand that the thorough scholar check all papers that have cited or criticized such papers, if they could be located quickly. The citation index makes this check practicable (Garfield, 1955, p. 108). In retrospect, citation indexing can be perceived as a pre-social web version of crowdsourcing, as it is based on the concept that the community of citing authors outperforms indexers in highlighting cognitive links between papers, particularly on the level of specific ideas and concepts (Garfield, 1983). Over the last 50 years, citation analysis and, more generally, bibliometric methods have developed from information retrieval tools to research evaluation metrics, where they are presumed to make scientific funding more efficient and effective (Moed, 2006). However, the dominance of bibliometric indicators in research evaluation has also led to significant goal displacement (Merton, 1957) and the oversimplification of notions of "research productivity" and "scientific quality", creating adverse effects such as salami publishing, honorary authorships, citation cartels, and misuse of indicators (Binswanger, 2015; Cronin and Sugimoto, 2014; Frey and Osterloh, 2006; Haustein and Larivière, 2015; Weingart, 2005).
    Furthermore, the rise of the web, and subsequently, the social web, has challenged the quasi-monopolistic status of the journal as the main form of scholarly communication and citation indices as the primary assessment mechanisms. Scientific communication is becoming more open, transparent, and diverse: publications are increasingly open access; manuscripts, presentations, code, and data are shared online; research ideas and results are discussed and criticized openly on blogs; and new peer review experiments, with open post publication assessment by anonymous or non-anonymous referees, are underway. The diversification of scholarly production and assessment, paired with the increasing speed of the communication process, leads to an increased information overload (Bawden and Robinson, 2008), demanding new filters. The concept of altmetrics, short for alternative (to citation) metrics, was created out of an attempt to provide a filter (Priem et al., 2010) and to steer against the oversimplification of the measurement of scientific success solely on the basis of number of journal articles published and citations received, by considering a wider range of research outputs and metrics (Piwowar, 2013). Although the term altmetrics was introduced in a tweet in 2010 (Priem, 2010), the idea of capturing traces - "polymorphous mentioning" (Cronin et al., 1998, p. 1320) - of scholars and their documents on the web to measure "impact" of science in a broader manner than citations was introduced years before, largely in the context of webometrics (Almind and Ingwersen, 1997; Thelwall et al., 2005):
    There will soon be a critical mass of web-based digital objects and usage statistics on which to model scholars' communication behaviors - publishing, posting, blogging, scanning, reading, downloading, glossing, linking, citing, recommending, acknowledging - and with which to track their scholarly influence and impact, broadly conceived and broadly felt (Cronin, 2005, p. 196). A decade after Cronin's prediction and five years after the coining of altmetrics, the time seems ripe to reflect upon the role of social media in scholarly communication. This Special Issue does so by providing an overview of current research on the indicators and metrics grouped under the umbrella term of altmetrics, on their relationships with traditional indicators of scientific activity, and on the uses that are made of the various social media platforms - on which these indicators are based - by scientists of various disciplines.
    Date
    20. 1.2015 18:30:22
  11. Das, A.; Jain, A.: Indexing the World Wide Web : the journey so far (2012) 0.07
    0.06736527 = product of:
      0.20209579 = sum of:
        0.043577533 = weight(_text_:web in 95) [ClassicSimilarity], result of:
          0.043577533 = score(doc=95,freq=6.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.37471575 = fieldWeight in 95, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=95)
        0.049355935 = weight(_text_:world in 95) [ClassicSimilarity], result of:
          0.049355935 = score(doc=95,freq=4.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.36034414 = fieldWeight in 95, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.046875 = fieldNorm(doc=95)
        0.06558479 = weight(_text_:wide in 95) [ClassicSimilarity], result of:
          0.06558479 = score(doc=95,freq=4.0), product of:
            0.1578897 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.035634913 = queryNorm
            0.4153836 = fieldWeight in 95, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=95)
        0.043577533 = weight(_text_:web in 95) [ClassicSimilarity], result of:
          0.043577533 = score(doc=95,freq=6.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.37471575 = fieldWeight in 95, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=95)
      0.33333334 = coord(4/12)
    
    Abstract
    In this chapter, the authors describe the key indexing components of today's web search engines. As the World Wide Web has grown, the systems and methods for indexing have changed significantly. The authors present the data structures used, the features extracted, the infrastructure needed, and the options available for designing a brand new search engine. They highlight techniques that improve the relevance of results, discuss trade-offs to best utilize machine resources, and cover distributed processing concepts in this context. In particular, the authors delve into the topics of indexing phrases instead of terms, storage in memory vs. on disk, and data partitioning. Some thoughts on information organization for the newly emerging data-forms conclude the chapter.
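    Note: one of the ideas the chapter covers, indexing phrases rather than single terms, can be illustrated with a toy positional inverted index and a naive phrase lookup. The sketch below is purely illustrative and is not the authors' implementation; all names and sample documents are made up:

      from collections import defaultdict

      def build_index(docs):
          """Positional inverted index: term -> {doc_id: [positions]}."""
          index = defaultdict(lambda: defaultdict(list))
          for doc_id, text in docs.items():
              for pos, term in enumerate(text.lower().split()):
                  index[term][doc_id].append(pos)
          return index

      def phrase_search(index, phrase):
          """Return doc_ids where the phrase terms occur at consecutive positions."""
          terms = phrase.lower().split()
          if not terms or terms[0] not in index:
              return set()
          hits = set()
          for doc_id, starts in index[terms[0]].items():
              for start in starts:
                  if all(doc_id in index.get(t, {}) and start + i in index[t][doc_id]
                         for i, t in enumerate(terms)):
                      hits.add(doc_id)
                      break
          return hits

      docs = {1: "indexing the world wide web", 2: "the web is world wide"}
      idx = build_index(docs)
      print(phrase_search(idx, "world wide web"))  # {1}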
  12. Shiri, A.: Powering search : the role of thesauri in new information environments (2012) 0.07
    0.06736527 = product of:
      0.20209579 = sum of:
        0.043577533 = weight(_text_:web in 1322) [ClassicSimilarity], result of:
          0.043577533 = score(doc=1322,freq=6.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.37471575 = fieldWeight in 1322, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1322)
        0.049355935 = weight(_text_:world in 1322) [ClassicSimilarity], result of:
          0.049355935 = score(doc=1322,freq=4.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.36034414 = fieldWeight in 1322, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.046875 = fieldNorm(doc=1322)
        0.06558479 = weight(_text_:wide in 1322) [ClassicSimilarity], result of:
          0.06558479 = score(doc=1322,freq=4.0), product of:
            0.1578897 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.035634913 = queryNorm
            0.4153836 = fieldWeight in 1322, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=1322)
        0.043577533 = weight(_text_:web in 1322) [ClassicSimilarity], result of:
          0.043577533 = score(doc=1322,freq=6.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.37471575 = fieldWeight in 1322, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1322)
      0.33333334 = coord(4/12)
    
    Content
    Thesauri : introduction and recent developments -- Thesauri in interactive information retrieval -- User-centered approach to the evaluation of thesauri : query formulation and expansion -- Thesauri in web-based search systems -- Thesaurus-based search and browsing functionalities in new thesaurus construction standards -- Design of search user interfaces for thesauri -- Design of user interfaces for multilingual and meta-thesauri -- User-centered evaluation of thesaurus-enhanced search user interfaces -- Guidelines for the design of thesaurus-enhanced search user interfaces -- Current trends and developments.
    LCSH
    World Wide Web
    Subject
    World Wide Web
  13. Niggemann, E.: Im weiten endlosen Meer des World Wide Web : vom Sammelauftrag der Gedächtnisorganisationen (2015) 0.07
    0.06736527 = product of:
      0.20209579 = sum of:
        0.043577533 = weight(_text_:web in 2529) [ClassicSimilarity], result of:
          0.043577533 = score(doc=2529,freq=6.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.37471575 = fieldWeight in 2529, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2529)
        0.049355935 = weight(_text_:world in 2529) [ClassicSimilarity], result of:
          0.049355935 = score(doc=2529,freq=4.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.36034414 = fieldWeight in 2529, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.046875 = fieldNorm(doc=2529)
        0.06558479 = weight(_text_:wide in 2529) [ClassicSimilarity], result of:
          0.06558479 = score(doc=2529,freq=4.0), product of:
            0.1578897 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.035634913 = queryNorm
            0.4153836 = fieldWeight in 2529, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=2529)
        0.043577533 = weight(_text_:web in 2529) [ClassicSimilarity], result of:
          0.043577533 = score(doc=2529,freq=6.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.37471575 = fieldWeight in 2529, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2529)
      0.33333334 = coord(4/12)
    
    Abstract
    Since 2006, the legal mandate of the Deutsche Nationalbibliothek has also included collecting media works that are made available to the public in non-physical form. This mandate leaves room for interpretation, and indeed not only the handling of these works but already the definition of collection criteria has been the subject of projects and deliberations. For collecting works that form part of the World Wide Web, boundaries have to be drawn - the web is too wide and seems endless. Criteria and definitions are also required for the necessary cooperation with, and demarcation from, other memory institutions. This contribution on web harvesting is intended as an invitation to an exchange of ideas on coordinating collections nationally and internationally, within the library community and across the cultural sector as a whole, from the perspective of the Deutsche Nationalbibliothek.
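    The relevance figures in the explain blocks above follow a fixed pattern: for each matching term, queryWeight = idf * queryNorm and fieldWeight = sqrt(freq) * idf * fieldNorm; the term score is their product, and the document score is the sum of the matching clause scores multiplied by the coordination factor coord(matching/total). The short sketch below reproduces the figures reported for entry 13 (doc 2529). It assumes Lucene's ClassicSimilarity as named in the output; the helper names are illustrative, not Lucene API calls.

      def query_weight(idf, query_norm):
          # queryWeight = idf * queryNorm
          return idf * query_norm

      def field_weight(freq, idf, field_norm):
          # fieldWeight = tf * idf * fieldNorm, with tf = sqrt(freq)
          return freq ** 0.5 * idf * field_norm

      def term_score(freq, idf, query_norm, field_norm):
          return query_weight(idf, query_norm) * field_weight(freq, idf, field_norm)

      QUERY_NORM = 0.035634913  # constant for the whole query, as shown above
      FIELD_NORM = 0.046875     # fieldNorm for doc 2529

      # Per-clause scores for doc 2529: web (freq 6), world (freq 4), wide (freq 4), web again
      clauses = [
          term_score(6.0, 3.2635105, QUERY_NORM, FIELD_NORM),  # ~0.043577533
          term_score(4.0, 3.8436708, QUERY_NORM, FIELD_NORM),  # ~0.049355935
          term_score(4.0, 4.4307585, QUERY_NORM, FIELD_NORM),  # ~0.06558479
          term_score(6.0, 3.2635105, QUERY_NORM, FIELD_NORM),  # ~0.043577533
      ]
      print(sum(clauses) * 4 / 12)  # ~0.06736527, the score shown for entry 13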
  14. Lanier, J.: Zehn Gründe, warum du deine Social Media Accounts sofort löschen musst (2018) 0.07
    0.06730254 = product of:
      0.1615261 = sum of:
        0.023720603 = weight(_text_:web in 4448) [ClassicSimilarity], result of:
          0.023720603 = score(doc=4448,freq=4.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.2039694 = fieldWeight in 4448, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=4448)
        0.032903954 = weight(_text_:world in 4448) [ClassicSimilarity], result of:
          0.032903954 = score(doc=4448,freq=4.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.24022943 = fieldWeight in 4448, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03125 = fieldNorm(doc=4448)
        0.043723192 = weight(_text_:wide in 4448) [ClassicSimilarity], result of:
          0.043723192 = score(doc=4448,freq=4.0), product of:
            0.1578897 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.035634913 = queryNorm
            0.2769224 = fieldWeight in 4448, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=4448)
        0.023720603 = weight(_text_:web in 4448) [ClassicSimilarity], result of:
          0.023720603 = score(doc=4448,freq=4.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.2039694 = fieldWeight in 4448, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=4448)
        0.037457753 = product of:
          0.074915506 = sum of:
            0.074915506 = weight(_text_:2.0 in 4448) [ClassicSimilarity], result of:
              0.074915506 = score(doc=4448,freq=4.0), product of:
                0.20667298 = queryWeight, product of:
                  5.799733 = idf(docFreq=363, maxDocs=44218)
                  0.035634913 = queryNorm
                0.36248332 = fieldWeight in 4448, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.799733 = idf(docFreq=363, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4448)
          0.5 = coord(1/2)
      0.41666666 = coord(5/12)
    
    RSWK
    Internetkritik / World Wide Web 2.0 / Soziokultureller Wandel / Social Media / Soziale Netzwerke (VÖB)
    Subject
    Internetkritik / World Wide Web 2.0 / Soziokultureller Wandel / Social Media / Soziale Netzwerke (VÖB)
  15. Huang, S.-L.; Lin, S.-C.; Chan, Y.-C.: Investigating effectiveness and user acceptance of semantic social tagging for knowledge sharing (2012) 0.07
    0.06704194 = product of:
      0.26816776 = sum of:
        0.21784876 = weight(_text_:tagging in 2732) [ClassicSimilarity], result of:
          0.21784876 = score(doc=2732,freq=14.0), product of:
            0.21038401 = queryWeight, product of:
              5.9038734 = idf(docFreq=327, maxDocs=44218)
              0.035634913 = queryNorm
            1.0354816 = fieldWeight in 2732, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              5.9038734 = idf(docFreq=327, maxDocs=44218)
              0.046875 = fieldNorm(doc=2732)
        0.025159499 = weight(_text_:web in 2732) [ClassicSimilarity], result of:
          0.025159499 = score(doc=2732,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.21634221 = fieldWeight in 2732, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2732)
        0.025159499 = weight(_text_:web in 2732) [ClassicSimilarity], result of:
          0.025159499 = score(doc=2732,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.21634221 = fieldWeight in 2732, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2732)
      0.25 = coord(3/12)
    
    Abstract
    Social tagging systems enable users to assign arbitrary tags to various digital resources. However, they face problems of vague meaning when users retrieve or present resources with keyword-based tags. In order to solve these problems, this study takes advantage of Semantic Web technology and the topological characteristics of knowledge maps to develop a system that comprises a semantic tagging mechanism and triple-pattern and visual searching mechanisms. A field experiment was conducted to evaluate the effectiveness and user acceptance of these mechanisms in a knowledge sharing context. The results show that the semantic social tagging system is more effective than a keyword-based system. The visualized knowledge map helps users capture an overview of the knowledge domain, reduce cognitive effort for the search, and obtain more enjoyment. Traditional keyword tagging with a keyword search still has the advantage of ease of use, and users had a higher intention to use it. This study also proposes directions for future development of semantic social tagging systems.
    Theme
    Social tagging
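    A rough way to picture the difference between keyword tags and triple-based semantic tags is sketched below. This is a generic illustration of triple-pattern matching, not the system evaluated in the study; the resource identifiers and concept names are invented for the example.

      # Keyword tags: a flat set of strings per resource (meaning can be ambiguous)
      keyword_tags = {"doc42": {"apple", "design"}}

      # Semantic tags: (resource, property, concept) triples tied to ontology concepts
      triples = [
          ("doc42", "about", "Apple_Inc"),
          ("doc42", "about", "Industrial_design"),
          ("doc7",  "about", "Apple_(fruit)"),
      ]

      def match(pattern, store):
          # Return triples matching a pattern; None acts as a wildcard position.
          return [t for t in store
                  if all(p is None or p == v for p, v in zip(pattern, t))]

      # Triple-pattern query: which resources are tagged with the company, not the fruit?
      print(match((None, "about", "Apple_Inc"), triples))  # [('doc42', 'about', 'Apple_Inc')]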
  16. Doran, D.; Gokhale, S.S.: ¬A classification framework for web robots (2012) 0.07
    0.06588599 = product of:
      0.26354396 = sum of:
        0.067092 = weight(_text_:web in 505) [ClassicSimilarity], result of:
          0.067092 = score(doc=505,freq=8.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.5769126 = fieldWeight in 505, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=505)
        0.12935995 = weight(_text_:log in 505) [ClassicSimilarity], result of:
          0.12935995 = score(doc=505,freq=2.0), product of:
            0.22837062 = queryWeight, product of:
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.035634913 = queryNorm
            0.5664474 = fieldWeight in 505, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.0625 = fieldNorm(doc=505)
        0.067092 = weight(_text_:web in 505) [ClassicSimilarity], result of:
          0.067092 = score(doc=505,freq=8.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.5769126 = fieldWeight in 505, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=505)
      0.25 = coord(3/12)
    
    Abstract
    The behavior of modern web robots varies widely when they crawl for different purposes. In this article, we present a framework to classify these web robots from two orthogonal perspectives, namely, their functionality and the types of resources they consume. Applying the classification framework to a year-long access log from the UConn SoE web server, we present trends that point to significant differences in their crawling behavior.
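    As a rough illustration of the resource-consumption perspective mentioned in the abstract, the sketch below groups a robot's requests from a web server access log by file type. It is a deliberately simplified, hypothetical example, not the authors' framework; the Apache combined log format and the user-agent check are assumptions.

      import re
      from collections import Counter

      # Hypothetical simplification: tally the file types a given robot requests,
      # assuming an Apache combined-format access log. Not the paper's framework.
      LINE = re.compile(r'"(?:GET|HEAD|POST) (?P<path>\S+) \S+" \d{3} \S+ "[^"]*" "(?P<agent>[^"]*)"')

      def resource_types(log_lines, robot_agent="Googlebot"):
          counts = Counter()
          for line in log_lines:
              m = LINE.search(line)
              if not m or robot_agent not in m.group("agent"):
                  continue
              path = m.group("path").split("?")[0]
              filename = path.rsplit("/", 1)[-1]
              ext = filename.rsplit(".", 1)[-1].lower() if "." in filename else "(none)"
              counts[ext] += 1
          return counts

      sample = [
          '66.249.66.1 - - [10/Jan/2012:00:01:02 +0000] "GET /index.html HTTP/1.1" 200 5120 "-" "Mozilla/5.0 (compatible; Googlebot/2.1)"',
          '66.249.66.1 - - [10/Jan/2012:00:01:03 +0000] "GET /robots.txt HTTP/1.1" 200 120 "-" "Mozilla/5.0 (compatible; Googlebot/2.1)"',
      ]
      print(resource_types(sample))  # e.g. Counter({'html': 1, 'txt': 1})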
  17. Liu, B.: Web data mining : exploring hyperlinks, contents, and usage data (2011) 0.07
    0.06585966 = product of:
      0.19757898 = sum of:
        0.060475912 = weight(_text_:web in 354) [ClassicSimilarity], result of:
          0.060475912 = score(doc=354,freq=26.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.520022 = fieldWeight in 354, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=354)
        0.032903954 = weight(_text_:world in 354) [ClassicSimilarity], result of:
          0.032903954 = score(doc=354,freq=4.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.24022943 = fieldWeight in 354, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03125 = fieldNorm(doc=354)
        0.043723192 = weight(_text_:wide in 354) [ClassicSimilarity], result of:
          0.043723192 = score(doc=354,freq=4.0), product of:
            0.1578897 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.035634913 = queryNorm
            0.2769224 = fieldWeight in 354, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=354)
        0.060475912 = weight(_text_:web in 354) [ClassicSimilarity], result of:
          0.060475912 = score(doc=354,freq=26.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.520022 = fieldWeight in 354, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=354)
      0.33333334 = coord(4/12)
    
    Abstract
    Web mining aims to discover useful information and knowledge from the Web hyperlink structure, page contents, and usage data. Although Web mining uses many conventional data mining techniques, it is not purely an application of traditional data mining due to the semistructured and unstructured nature of the Web data and its heterogeneity. It has also developed many of its own algorithms and techniques. Liu has written a comprehensive text on Web data mining. Key topics of structure mining, content mining, and usage mining are covered both in breadth and in depth. His book brings together all the essential concepts and algorithms from related areas such as data mining, machine learning, and text processing to form an authoritative and coherent text. The book offers a rich blend of theory and practice, addressing seminal research ideas, as well as examining the technology from a practical point of view. It is suitable for students, researchers and practitioners interested in Web mining both as a learning text and a reference book. Lecturers can readily use it for classes on data mining, Web mining, and Web search. Additional teaching materials such as lecture slides, datasets, and implemented algorithms are available online.
    Content
    Contents: 1. Introduction 2. Association Rules and Sequential Patterns 3. Supervised Learning 4. Unsupervised Learning 5. Partially Supervised Learning 6. Information Retrieval and Web Search 7. Social Network Analysis 8. Web Crawling 9. Structured Data Extraction: Wrapper Generation 10. Information Integration
    RSWK
    World Wide Web / Data Mining
    Subject
    World Wide Web / Data Mining
  18. Saabiyeh, N.: What is a good ontology semantic similarity measure that considers multiple inheritance cases of concepts? (2018) 0.07
    0.06550072 = product of:
      0.19650216 = sum of:
        0.050840456 = weight(_text_:web in 4530) [ClassicSimilarity], result of:
          0.050840456 = score(doc=4530,freq=6.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.43716836 = fieldWeight in 4530, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4530)
        0.040716566 = weight(_text_:world in 4530) [ClassicSimilarity], result of:
          0.040716566 = score(doc=4530,freq=2.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.29726875 = fieldWeight in 4530, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4530)
        0.05410469 = weight(_text_:wide in 4530) [ClassicSimilarity], result of:
          0.05410469 = score(doc=4530,freq=2.0), product of:
            0.1578897 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.035634913 = queryNorm
            0.342674 = fieldWeight in 4530, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4530)
        0.050840456 = weight(_text_:web in 4530) [ClassicSimilarity], result of:
          0.050840456 = score(doc=4530,freq=6.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.43716836 = fieldWeight in 4530, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4530)
      0.33333334 = coord(4/12)
    
    Abstract
    I need to measure semantic similarity between CSO ontology concepts, based on the ontology structure (concept path, depth, least common subsumer (LCS) ...). CSO (Computer Science Ontology) is a large-scale ontology of research areas. A concept in CSO may have multiple parents/super-concepts (i.e. a concept may be a child of many other concepts), e.g.: (world wide web) is a parent of (semantic web); (semantics) is a parent of (semantic web). I found some measures that meet my needs, but the papers proposing these measures are not cited, so I am hesitant to rely on them. I also found a measure that depends on weighted edges, but it does not consider multiple inheritance (multiple super-concepts).
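    One common family that matches the stated requirements is depth/LCS-based similarity in the style of Wu and Palmer, where multiple inheritance is handled by collecting all ancestors and taking the deepest common subsumer. The sketch below is a generic illustration over the CSO fragment quoted in the question, not a specific published measure or a CSO tool.

      # Generic Wu-Palmer-style similarity over a DAG with multiple inheritance.
      # The tiny taxonomy reuses the CSO fragment from the question; illustrative only.
      parents = {
          "semantic web": {"world wide web", "semantics"},
          "world wide web": {"computer science"},
          "semantics": {"computer science"},
          "computer science": set(),
      }

      def depth(concept):
          # Shortest distance from a root; roots have depth 1.
          ps = parents[concept]
          return 1 if not ps else 1 + min(depth(p) for p in ps)

      def ancestors(concept):
          result = {concept}
          for p in parents[concept]:
              result |= ancestors(p)
          return result

      def wup_similarity(a, b):
          common = ancestors(a) & ancestors(b)
          lcs_depth = max(depth(c) for c in common)  # deepest common subsumer
          return 2 * lcs_depth / (depth(a) + depth(b))

      print(wup_similarity("world wide web", "semantics"))     # 0.5  (LCS = computer science)
      print(wup_similarity("semantic web", "world wide web"))  # 0.8  (LCS = world wide web)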
  19. Tredinnick, L.: Each one of us was several : networks, rhizomes and Web organisms (2013) 0.06
    0.06459736 = product of:
      0.19379207 = sum of:
        0.056258354 = weight(_text_:web in 1364) [ClassicSimilarity], result of:
          0.056258354 = score(doc=1364,freq=10.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.48375595 = fieldWeight in 1364, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1364)
        0.034899916 = weight(_text_:world in 1364) [ClassicSimilarity], result of:
          0.034899916 = score(doc=1364,freq=2.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.25480178 = fieldWeight in 1364, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.046875 = fieldNorm(doc=1364)
        0.046375446 = weight(_text_:wide in 1364) [ClassicSimilarity], result of:
          0.046375446 = score(doc=1364,freq=2.0), product of:
            0.1578897 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.035634913 = queryNorm
            0.29372054 = fieldWeight in 1364, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=1364)
        0.056258354 = weight(_text_:web in 1364) [ClassicSimilarity], result of:
          0.056258354 = score(doc=1364,freq=10.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.48375595 = fieldWeight in 1364, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1364)
      0.33333334 = coord(4/12)
    
    Abstract
    This paper develops a conceptual analysis of hypertext and the World Wide Web by exploring the contrasting metaphors of the network and the rhizome. The idea of the network has influenced conceptual thinking about both the web and its wider socio-cultural influence. The paper then develops an alternative description of the structure of hypertext and the web in terms of interrupted and dissipated energy flows. It concludes that the web should be considered not as a particular set of protocols and technological standards, nor as an interlinked set of technologically mediated services, but as a dynamic reorganisation of the socio-cultural system itself, one that at its inception has become associated with particular forms of technology, but which has no determinate boundaries, and which should properly be constituted in the spaces between technologies and the spaces between persons.
  20. Next generation search engines : advanced models for information retrieval (2012) 0.06
    0.064427115 = product of:
      0.15462509 = sum of:
        0.034307953 = weight(_text_:tagging in 357) [ClassicSimilarity], result of:
          0.034307953 = score(doc=357,freq=2.0), product of:
            0.21038401 = queryWeight, product of:
              5.9038734 = idf(docFreq=327, maxDocs=44218)
              0.035634913 = queryNorm
            0.163073 = fieldWeight in 357, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.9038734 = idf(docFreq=327, maxDocs=44218)
              0.01953125 = fieldNorm(doc=357)
        0.03922426 = weight(_text_:web in 357) [ClassicSimilarity], result of:
          0.03922426 = score(doc=357,freq=28.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.3372827 = fieldWeight in 357, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.01953125 = fieldNorm(doc=357)
        0.014541632 = weight(_text_:world in 357) [ClassicSimilarity], result of:
          0.014541632 = score(doc=357,freq=2.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.10616741 = fieldWeight in 357, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.01953125 = fieldNorm(doc=357)
        0.027326997 = weight(_text_:wide in 357) [ClassicSimilarity], result of:
          0.027326997 = score(doc=357,freq=4.0), product of:
            0.1578897 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.035634913 = queryNorm
            0.17307651 = fieldWeight in 357, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.01953125 = fieldNorm(doc=357)
        0.03922426 = weight(_text_:web in 357) [ClassicSimilarity], result of:
          0.03922426 = score(doc=357,freq=28.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.3372827 = fieldWeight in 357, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.01953125 = fieldNorm(doc=357)
      0.41666666 = coord(5/12)
    
    Abstract
    The main goal of this book is to transfer new research results from the fields of advanced computer science and information science to the design of new search engines. Readers will gain a better idea of the new trends in applied research. Obtaining relevant, organized, sorted, and workable answers - to name but a few desirable qualities - from a search is becoming a daily need for enterprises and organizations and, to a greater extent, for anyone. It does not consist of getting access to structured information as in standard databases; nor does it consist of searching for information strictly by way of a combination of keywords. It goes far beyond that. Whatever its modality, the information sought should be identified by the topics it contains, that is to say by its textual, audio, video or graphical contents. This is not a new issue. However, recent technological advances have completely changed the techniques being used. New Web technologies, the emergence of intranet systems and the abundance of information on the Internet have created the need for efficient search and information access tools.
    Recent technological progress in computer science, Web technologies, and constantly evolving information available on the Internet has drastically changed the landscape of search and access to information. Web search has significantly evolved in recent years. In the beginning, web search engines such as Google and Yahoo! only provided search services over text documents. Aggregated search was one of the first steps to go beyond text search, and was the beginning of a new era for information seeking and retrieval. These days, new web search engines support aggregated search over a number of verticals, and blend different types of documents (e.g., images, videos) in their search results. New search engines employ advanced techniques involving machine learning, computational linguistics and psychology, user interaction and modeling, information visualization, Web engineering, artificial intelligence, distributed systems, social networks, statistical analysis, semantic analysis, and technologies over query sessions. Documents no longer exist on their own; they are connected to other documents, they are associated with users and their position in a social network, and they can be mapped onto a variety of ontologies. Similarly, retrieval tasks have become more interactive and are solidly embedded in a user's geospatial, social, and historical context. It is conjectured that new breakthroughs in information retrieval will not come from smarter algorithms that better exploit existing information sources, but from new retrieval algorithms that can intelligently use and combine new sources of contextual metadata.
    With the rapid growth of web-based applications, such as search engines, Facebook, and Twitter, the development of effective and personalized information retrieval techniques and of user interfaces is essential. The amount of shared information and the number of social networks have also grown considerably, requiring metadata for new sources of information, like Wikipedia and ODP. These metadata have to provide classification information for a wide range of topics, as well as for social networking sites like Twitter and Facebook, each of which provides additional preferences, tagging information and social contexts. Due to the explosion of social networks and other metadata sources, it is an opportune time to identify ways to exploit such metadata in IR tasks such as user modeling, query understanding, and personalization, to name a few. Although the use of traditional metadata such as HTML text, web page titles, and anchor text is fairly well understood, the use of category information, user behavior data, and geographical information is just beginning to be studied. This book is intended for scientists and decision-makers who wish to gain working knowledge about search engines in order to evaluate available solutions and to dialogue with software and data providers.
    Content
    Contains the contributions: Das, A., A. Jain: Indexing the World Wide Web: the journey so far. Ke, W.: Decentralized search and the clustering paradox in large scale information networks. Roux, M.: Metadata for search engines: what can be learned from e-Sciences? Fluhr, C.: Crosslingual access to photo databases. Djioua, B., J.-P. Desclés and M. Alrahabi: Searching and mining with semantic categories. Ghorbel, H., A. Bahri and R. Bouaziz: Fuzzy ontologies building platform for Semantic Web: FOB platform. Lassalle, E., E. Lassalle: Semantic models in information retrieval. Berry, M.W., R. Esau and B. Kiefer: The use of text mining techniques in electronic discovery for legal matters. Sleem-Amer, M., I. Bigorgne, S. Brizard et al.: Intelligent semantic search engines for opinion and sentiment mining. Hoeber, O.: Human-centred Web search.
    Vert, S.: Extensions of Web browsers useful to knowledge workers. Chen, L.-C.: Next generation search engine for the result clustering technology. Biskri, I., L. Rompré: Using association rules for query reformulation. Habernal, I., M. Konopík and O. Rohlík: Question answering. Grau, B.: Finding answers to questions, in text collections or Web, in open domain or specialty domains. Berri, J., R. Benlamri: Context-aware mobile search engine. Bouidghaghen, O., L. Tamine: Spatio-temporal based personalization for mobile search. Chaudiron, S., M. Ihadjadene: Studying Web search engines from a user perspective: key concepts and main approaches. Karaman, F.: Artificial intelligence enabled search engines (AIESE) and the implications. Lewandowski, D.: A framework for evaluating the retrieval effectiveness of search engines.

Languages

  • e 1762
  • d 413
  • f 3
  • i 3
  • a 1
  • hu 1
  • pt 1
  • sp 1

Types

  • a 1868
  • el 217
  • m 194
  • s 69
  • x 38
  • r 17
  • b 6
  • n 2
  • i 1
  • p 1
  • z 1
