Search (4415 results, page 1 of 221)

  • Active filter: language_ss:"e"
  • Active filter: year_i:[2010 TO 2020}
  1. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.25
    Score explanation (Lucene ClassicSimilarity): 0.2512416 = 0.8 × (0.0736 [_text_:3a] + 0.0147 [_text_:a] + 0.2209 [_text_:2f] + 0.0047 [_text_:information])
    
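    The per-term weights in the breakdown above follow Lucene's ClassicSimilarity (tf-idf) formula. As a worked example, the 0.2209 contribution of the `_text_:2f` term can be reconstructed from the factors the engine logged (freq = 2, idf = 8.478011, fieldNorm = 0.046875, queryNorm = 0.046368346):

```latex
% Lucene ClassicSimilarity, reconstructed from the logged factors
\begin{align*}
\mathrm{tf} &= \sqrt{\mathrm{freq}} = \sqrt{2} \approx 1.4142135,\\
\mathrm{queryWeight} &= \mathrm{idf}\cdot\mathrm{queryNorm} = 8.478011 \cdot 0.046368346 \approx 0.3931114,\\
\mathrm{fieldWeight} &= \mathrm{tf}\cdot\mathrm{idf}\cdot\mathrm{fieldNorm} = 1.4142135 \cdot 8.478011 \cdot 0.046875 \approx 0.5620180,\\
\mathrm{weight} &= \mathrm{queryWeight}\cdot\mathrm{fieldWeight} \approx 0.3931114 \cdot 0.5620180 \approx 0.2209357.
\end{align*}
```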
    Abstract
    In a scientific concept hierarchy, a parent concept may have several attributes, each of whose values forms a group of child concepts. We call these attributes facets: classification, for example, has facets such as application (e.g., face recognition), model (e.g., SVM, kNN), and metric (e.g., precision). In this work, we aim to build faceted concept hierarchies from scientific literature. Hierarchy construction methods rely heavily on hypernym detection; faceted relations, however, are direct parent-to-child links, whereas the hypernym relation is a multi-hop, ancestor-to-descendant link with the specific facet "type-of". We use information extraction techniques to find synonyms, sibling concepts, and ancestor-descendant relations in a data science corpus, and we propose a hierarchy growth algorithm that infers parent-child links from these three types of relationships. It resolves conflicts by maintaining the acyclic structure of the hierarchy.
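    To make the conflict-resolution step concrete, here is a minimal sketch (illustrative names, not the authors' code) of growing a hierarchy while preserving acyclicity: a proposed parent-child edge is rejected whenever the child can already reach the parent.

```python
def reaches(graph, start, target):
    # depth-first search: is `target` reachable from `start` via child links?
    stack, seen = [start], set()
    while stack:
        node = stack.pop()
        if node == target:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(graph.get(node, ()))
    return False

def add_parent_child(graph, parent, child):
    # accept a parent->child link only if it keeps the hierarchy acyclic
    if reaches(graph, child, parent):
        return False  # conflict: the edge would close a cycle
    graph.setdefault(parent, set()).add(child)
    return True

hierarchy = {}
print(add_parent_child(hierarchy, "classification", "svm"))  # True
print(add_parent_child(hierarchy, "svm", "classification"))  # False (cycle)
```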
    Content
    Cf.: https://aclanthology.org/D19-5317.pdf.
    Type
    a
  2. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.22
    Abstract
    The successes of information retrieval (IR) in recent decades were built upon bag-of-words representations. Effective as it is, bag-of-words is only a shallow text understanding; there is a limited amount of information for document ranking in the word space. This dissertation goes beyond words and builds knowledge-based text representations, which embed external and carefully curated information from knowledge bases and provide richer, structured evidence for more advanced information retrieval systems. This thesis research first builds query representations with entities associated with the query. Entities' descriptions are used by query expansion techniques that enrich the query with explanation terms. Then we present a general framework that represents a query with entities that appear in the query, are retrieved by the query, or frequently show up in the top retrieved documents. A latent space model is developed to jointly learn the connections from query to entities and the ranking of documents, modeling the external evidence from knowledge bases and internal ranking features cooperatively. To further improve the quality of relevant entities, a defining factor of our query representations, we introduce learning to rank to entity search and retrieve better entities from knowledge bases. In the document representation part, this thesis research also moves one step forward with a bag-of-entities model, in which documents are represented by their automatic entity annotations, and the ranking is performed in the entity space.
    This proposal includes plans to improve the quality of relevant entities with a co-learning framework that learns from both entity labels and document labels. We also plan to develop a hybrid ranking system that combines word-based and entity-based representations while accounting for their uncertainties. Finally, we plan to enrich the text representations with connections between entities. We propose several ways to infer entity graph representations for texts and to rank documents using their structure representations. This dissertation overcomes the limitations of word-based representations with external and carefully curated information from knowledge bases. We believe this thesis research is a solid start towards a new generation of intelligent, semantic, and structured information retrieval.
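    As an illustration of the bag-of-entities representation described above, here is a minimal sketch under assumed toy data (the entity IDs are hypothetical; the thesis uses automatic annotations from a knowledge base): documents and queries become frequency vectors over entities, and matching happens in the entity space rather than the word space.

```python
from collections import Counter

def bag_of_entities(annotations):
    # entity annotations (one ID per linked mention) -> frequency vector
    return Counter(annotations)

def entity_score(query_vec, doc_vec):
    # simple dot product in the entity space
    return sum(freq * doc_vec.get(ent, 0) for ent, freq in query_vec.items())

# hypothetical entity IDs for illustration
query = bag_of_entities(["E:information_retrieval"])
doc_a = bag_of_entities(["E:information_retrieval",
                         "E:information_retrieval", "E:ranking"])
doc_b = bag_of_entities(["E:ranking"])

assert entity_score(query, doc_a) > entity_score(query, doc_b)
print(entity_score(query, doc_a), entity_score(query, doc_b))  # 2 0
```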
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
  3. Farazi, M.: Faceted lightweight ontologies : a formalization and some experiments (2010) 0.21
    Abstract
    While classifications are heavily used to categorize web content, the evolution of the web foresees a more formal structure - the ontology - which can serve this purpose. Ontologies are core artifacts of the Semantic Web that enable machines to use inference rules to conduct automated reasoning on data. Lightweight ontologies bridge the gap between classifications and ontologies. A lightweight ontology (LO) is an ontology representing a backbone taxonomy in which the concept of the child node is more specific than the concept of the parent node. Formal lightweight ontologies can be generated from their informal counterparts. The key applications of formal lightweight ontologies are document classification, semantic search, and data integration. However, these applications suffer from the following problems: the disambiguation accuracy of the state-of-the-art NLP tools used in generating formal lightweight ontologies from informal ones; the lack of background knowledge needed for the formal lightweight ontologies; and the limitation of ontology reuse. In this dissertation, we propose a novel solution to these problems: the faceted lightweight ontology (FLO). An FLO is a lightweight ontology in which the terms present in each node label, and their concepts, are available in the background knowledge (BK), which is organized as a set of facets. A facet can be defined as a distinctive property of a group of concepts that can help differentiate one group from another. Background knowledge can be defined as a subset of a knowledge base, such as WordNet, and often represents a specific domain.
    Content
    PhD dissertation at the International Doctorate School in Information and Communication Technology. Cf.: https://core.ac.uk/download/pdf/150083013.pdf.
    Imprint
    Trento : University / Department of Information Engineering and Computer Science
  4. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.20
    Source
    http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107
  5. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.17
    Abstract
    In this thesis we propose three new word association measures for multi-word term extraction. We combine these association measures with the LocalMaxs algorithm in our extraction model and compare the results of different multi-word term extraction methods. Our approach is language- and domain-independent and requires no training data. It can be applied to tasks such as text summarization, information retrieval, and document classification. We further explore the potential of using multi-word terms as an effective representation for general web-page summarization. We extract multi-word terms from human-written summaries in a large collection of web pages and generate summaries by aligning document words with these multi-word terms. Our system applies machine translation technology to learn the alignment process from a training set, and focuses on selecting high-quality multi-word terms from human-written summaries to generate suitable results for web-page summarization.
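    A toy sketch of the LocalMaxs criterion the thesis builds on (our simplification, not the thesis code): an n-gram is kept as a multi-word term when its association "glue" is a local maximum relative to the (n-1)-grams it contains and the (n+1)-grams containing it. Symmetric conditional probability serves here as one possible glue measure; probabilities are approximated with a single normalizer for brevity.

```python
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def build_counts(tokens, max_n):
    counts = Counter()
    for n in range(1, max_n + 1):
        counts.update(ngrams(tokens, n))
    return counts

def scp(gram, counts, total):
    # symmetric conditional probability glue, averaged over split points
    p = lambda g: counts[g] / total
    denom = sum(p(gram[:i]) * p(gram[i:]) for i in range(1, len(gram)))
    return p(gram) ** 2 / (denom / (len(gram) - 1))

def local_maxs(tokens, max_n=3):
    counts = build_counts(tokens, max_n + 1)
    total = len(tokens)
    glue = lambda g: scp(g, counts, total)
    terms = []
    for n in range(2, max_n + 1):
        for g in set(ngrams(tokens, n)):
            subs = [g[:-1], g[1:]] if n > 2 else []
            supers = [s for s in set(ngrams(tokens, n + 1))
                      if s[:-1] == g or s[1:] == g]
            if all(glue(g) >= glue(x) for x in subs) and \
               all(glue(g) > glue(y) for y in supers):
                terms.append(" ".join(g))
    return terms

tokens = ("information retrieval systems rank documents "
          "information retrieval matters").split()
print(local_maxs(tokens))  # "information retrieval" should be among the output
```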
    Content
    A thesis presented to the University of Guelph in partial fulfilment of the requirements for the degree of Master of Science in Computer Science. Cf.: http://www.inf.ufrgs.br/~ceramisch/download_files/publications/2009/p01.pdf.
    Date
    10. 1.2013 19:22:47
  6. Koster, L.; Heesakkers, D.: ¬The mobile library catalogue (2013) 0.09
    Pages
    S.65-91
    Type
    a
  7. Fluhr, C.: Crosslingual access to photo databases (2012) 0.09
    Abstract
    This paper is about searching photos in the databases of agencies that sell photos over the Internet. The problem differs considerably from that of photo databases managed by librarians, and also from the corpora generally used for research purposes. The descriptions consist mainly of single words, which is well known to be a poor basis for effective search and which aggravates the problem of semantic ambiguity. Semantic ambiguity is crucial for cross-language querying. Users, moreover, are unaware of documentation techniques and generally use very simple queries, yet expect precise answers. The paper reports on experience gained over three years (2006-2008) of cross-language access to several of the main international commercial photo databases. The languages used were French, English, and German.
    Date
    17. 4.2012 14:25:22
    Pages
    S.78-91
    Source
    Next generation search engines: advanced models for information retrieval. Eds.: C. Jouis et al.
    Type
    a
  8. Jaskolla, L.; Rugel, M.: Smart questions : steps towards an ontology of questions and answers (2014) 0.08
    Abstract
    The present essay is based on research funded by the German Ministry of Economics and Technology and carried out by the Munich School of Philosophy (Prof. Godehard Brüntrup) in cooperation with the IT company Comelio GmbH. It is concerned with setting up the philosophical framework for a systematic, hierarchical, and categorical account of questions and answers, in order to use this framework as an ontology for software engineers who create a tool for intelligent questionnaire design. In recent years, there has been considerable interest in programming software that enables users to create and carry out their own surveys. Considering the vast range of application areas these software tools try to cover, it is surprising that most of the existing tools lack a systematic approach to what questions and answers really are and in what systematic hierarchical relations different types of questions stand to one another. The theoretical background to this essay is inspired by Barry Smith's theory of regional ontologies. The notion of ontology used in this essay can be defined by the following characteristics: (1) the basic notions of the ontology should be defined in a manner that excludes equivocations of any kind, and presented in a way that allows for an easy translation into a semi-formal language, in order to secure easy applicability for software engineers; (2) the hierarchical structure of the ontology should be that of an arbor porphyriana.
    Date
    9. 2.2017 19:22:59
    Pages
    S.91-97
    Source
    Philosophy, computing and information science. Eds.: R. Hagengruber u. U.V. Riss
    Type
    a
  9. Svensson, L.G.; Jahns, Y.: PDF, CSV, RSS and other Acronyms : redefining the bibliographic services in the German National Library (2010) 0.07
    Abstract
    In January 2010, the German National Library discontinued the print version of the national bibliography and replaced it with an online journal. This was the first step in a longer process of redefining the National Library's bibliographic services, leaving the field of traditional media - e.g. paper or CD-ROM databases - and focusing on publishing its data over the WWW. A new business model was set up: all web resources are now published in a separate bibliography series, and the bibliographic data are freely available. Step by step, the prices of the other bibliographic data will also be reduced. In the second stage of the project, the focus is on value-added services based on the National Library's catalogue. The main purpose is to introduce alerting services based on the user's search criteria, offering different access methods such as RSS feeds, integration with tools such as Zotero, or export of the bibliographic data as a CSV or PDF file. Current cataloguing standards remain a guideline for high-value end-user retrieval, but they will be supplemented by automated indexing procedures to find and browse the growing number of documents. The aim is a transparent cataloguing policy and well-arranged selection menus.
    Content
    Lecture given in Session 93 (Cataloguing) of the World Library and Information Congress: 76th IFLA General Conference and Assembly, 10-15 August 2010, Gothenburg, Sweden - 91. Bibliography.
    Source
    http://www.ifla.org/files/hq/papers/ifla76/91-svensson-en.pdf
  10. Hangel, N.; Schmidt-Pfister, D.: Why do you publish? : on the tensions between generating scientific knowledge and publication pressure (2017) 0.07
    Abstract
    Purpose: The purpose of this paper is to examine researchers' motivations to publish by comparing different career stages (PhD students; temporarily employed postdocs/new professors; scholars with permanent employment) with regard to epistemic, pragmatic, and personal motives.
    Design/methodology/approach: This qualitative analysis is mainly based on semi-structured narrative interviews with 91 researchers in the humanities, social, and natural sciences, based at six renowned (anonymous) universities in Germany, the UK, and the USA. These narratives contain answers to the direct question "why do you publish?" as well as remarks on motivations to publish in relation to other questions and themes. The interdisciplinary interpretation draws on both sociological science studies and philosophy of science in practice.
    Findings: At each career stage, epistemic, pragmatic, and personal motivations to publish are weighed differently. Confirming earlier studies, the authors find that PhD students and postdoctoral researchers in temporary positions mainly feel pressured to publish for career-related reasons. However, across status groups, researchers also want to publish in order to support collective knowledge generation.
    Research limitations/implications: The sample of interviewees may be biased toward those interested in reflecting on their day-to-day work.
    Social implications: Continuous and collective reflection is imperative for preventing uncritical internalization of pragmatic reasons to publish. Creating occasions for reflection is a task not only for researchers themselves, but also for administrators, funders, and other stakeholders.
    Originality/value: Most studies have illuminated how researchers publish while adapting to or growing into the contemporary publish-or-perish culture. This paper addresses the rarely asked question of why researchers publish at all.
    Date
    20. 1.2015 18:30:22
    Source
    Aslib journal of information management. 69(2017) no.5, S.529-544
    Type
    a
  11. Rüther, M.; Fock, J.; Schultz-Krutisch, T.; Bandholtz, T.: Classification and reference vocabulary in linked environment data (2011) 0.07
    Abstract
    The Federal Environment Agency (UBA), Germany, has a long tradition in knowledge organization, using a library along with many Web-based information systems. The backbone of this information space is a classification system enhanced by a reference vocabulary which consists of a thesaurus, a gazetteer, and a chronicle. Over the years, classification has increasingly been relegated to the background compared with reference vocabulary indexing and full-text search. Bibliographic items are no longer classified directly but tagged with thesaurus terms, with those terms being classified. Since 2010 we have been developing a linked data representation of this knowledge base. While we link bibliographic and observation data with the controlled vocabulary in a Resource Description Framework (RDF) representation, classification may be revisited, by inference, as a powerful organization system. This also raises questions about the quality and feasibility of an unambiguous classification of thesaurus terms.
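    A minimal sketch of the inference pattern described above, with an assumed toy data model (not UBA's actual vocabulary): items are tagged with thesaurus terms, the terms themselves are classified, and an item's classes then follow by inference rather than direct assignment.

```python
# classes assigned to thesaurus terms (toy data)
term_classes = {
    "air pollution":    {"Environment / Air"},
    "groundwater":      {"Environment / Water"},
    "emission trading": {"Environment / Air", "Economy"},
}

def infer_classes(item_terms):
    # an item's classes = union of the classes of its thesaurus terms
    classes = set()
    for term in item_terms:
        classes |= term_classes.get(term, set())
    return classes

print(infer_classes(["air pollution", "emission trading"]))
# {'Environment / Air', 'Economy'}
```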
    Pages
    S.91-107
    Source
    Classification and ontology: formal approaches and access to knowledge: proceedings of the International UDC Seminar, 19-20 September 2011, The Hague, The Netherlands. Eds.: A. Slavic and E. Civallero
    Type
    a
  12. Lacasta, J.; Falquet, G.; Nogueras Iso, J.N.; Zarazaga-Soria, J.: ¬A software processing chain for evaluating thesaurus quality (2017) 0.07
    Abstract
    Thesauri are knowledge models commonly used for information classification and retrieval whose structure is defined by standards that describe the main features the concepts and relations must have. However, following these standards requires deep knowledge of the field the thesaurus is going to cover and experience in thesaurus creation. To help in this task, this paper describes a software processing chain that provides different validation components for evaluating the quality of the main thesaurus features.
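    As a hedged illustration of what such validation components might check (the specific checks below are our assumptions, not the paper's component list), this sketch flags cycles in the broader-term relation and orphan concepts:

```python
def find_cycles(broader):
    # concepts that can reach themselves by following broader-term links
    def reaches_self(start):
        stack, seen = list(broader.get(start, ())), set()
        while stack:
            node = stack.pop()
            if node == start:
                return True
            if node not in seen:
                seen.add(node)
                stack.extend(broader.get(node, ()))
        return False
    return [c for c in broader if reaches_self(c)]

def find_orphans(concepts, broader):
    # concepts with no broader term that are never used as one themselves
    used = {b for parents in broader.values() for b in parents}
    return [c for c in concepts if not broader.get(c) and c not in used]

broader = {"sparrow": ["bird"], "bird": ["animal"], "animal": ["bird"]}  # faulty data
print(find_cycles(broader))                        # ['bird', 'animal']
print(find_orphans(["rock", *broader], broader))   # ['rock']
```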
    Pages
    S.91-99
    Series
    Information Systems and Applications, incl. Internet/Web, and HCI; 10151
    Source
    Semantic keyword-based search on structured data sources: COST Action IC1302. Second International KEYSTONE Conference, IKC 2016, Cluj-Napoca, Romania, September 8-9, 2016, Revised Selected Papers. Eds.: A. Calì et al.
    Type
    a
  13. Gnoli, C.: Animals belonging to the emperor : enabling viewpoint warrant in classification (2011) 0.07
    Pages
    S.91-100
    Type
    a
  14. López-Huertas, M.J.: Epistemological dynamics in scientific domains and their influence in knowledge organization (2010) 0.07
    Abstract
    Scientific specialties are influenced by socio-cultural contexts. This influence is not homogeneous; on the contrary, it varies depending on the specialty. The context can affect the theoretical and epistemological development of a scientific domain and, as a result, may condition not only a robust, consistent theoretical framework but also good practice. Knowledge organization (KO) should be concerned with this situation and should consider these parameters in knowledge organization system (KOS) design in order to create structures closer to reality. By doing so, it becomes possible to detect and avoid representing epistemological and theoretical biases in KOSs. As an example, this paper takes two domains: psychiatry and information science, with emphasis on KO.
    Pages
    S.91-97
    Type
    a
  15. Brychcín, T.; Konopík, M.: HPS: High precision stemmer (2015) 0.06
    Abstract
    Research into unsupervised ways of stemming has resulted, in the past few years, in the development of methods that are reliable and perform well. Our approach further shifts the boundaries of the state of the art by providing more accurate stemming results. The approach builds a stemmer in two stages. In the first stage, a stemming algorithm based on clustering, which exploits the lexical and semantic information of words, is used to prepare large-scale training data for the second-stage algorithm. The second-stage algorithm uses a maximum entropy classifier; stemming-specific features help the classifier decide when and how to stem a particular word. In our research, we have pursued the goal of creating a multi-purpose stemming tool. Its design opens up possibilities for solving non-traditional tasks such as approximating lemmas or improving language modeling. However, we still aim at very good results in the traditional task of information retrieval. The conducted tests reveal exceptional performance in all the above-mentioned tasks. Our stemming method is compared with three state-of-the-art statistical algorithms and one rule-based algorithm, using corpora in the Czech, Slovak, Polish, Hungarian, Spanish, and English languages. In the tests, our algorithm excels at stemming previously unseen words (words not present in the training set). Moreover, our approach demands very little text data for training compared with competing unsupervised algorithms.
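    A heavily simplified illustration of the two-stage split described above (assumed toy data; the paper's second stage is a maximum entropy classifier with richer features, approximated here by a most-frequent-rule lookup over learned suffix rewrites):

```python
from collections import Counter, defaultdict

def rewrite_rule(word, stem):
    # split at the longest common prefix: word = prefix+suffix, stem = prefix+repl
    i = 0
    while i < min(len(word), len(stem)) and word[i] == stem[i]:
        i += 1
    return word[i:], stem[i:]

def train(pairs):
    # stage one would supply (word, stem) pairs; here they are given directly
    rules = defaultdict(Counter)
    for word, target in pairs:
        suffix, repl = rewrite_rule(word, target)
        rules[suffix][repl] += 1
    return rules

def stem(word, rules):
    # apply the most frequent rewrite of the longest matching suffix
    for k in range(len(word), 0, -1):
        suffix = word[-k:]
        if suffix in rules:
            repl = rules[suffix].most_common(1)[0][0]
            return word[: len(word) - k] + repl
    return word

rules = train([("running", "run"), ("winning", "win"), ("cats", "cat")])
print(stem("spinning", rules))  # 'spin' via the learned 'ning' -> '' rewrite
```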
    Source
    Information processing and management. 51(2015) no.1, S.68-91
    Type
    a
  16. Prichard, J.; Spiranovic, C.; Watters, P.; Lueg, C.: Young people, child pornography, and subcultural norms on the Internet (2013) 0.06
    Abstract
    Literature to date has treated two issues as distinct: (a) the influence of pornography on young people and (b) the growth of Internet child pornography, also called child exploitation material (CEM). This article discusses how young people might interact with, and be affected by, CEM. The article first considers the effect of CEM on young victims abused to generate the material. It then explains the paucity of data regarding the prevalence with which young people view CEM online, inadvertently or deliberately. New analyses are presented from a 2010 study of search terms entered on an internationally popular peer-to-peer website, isoHunt. Over 91 days, 162 persistent search terms were recorded. Most of these related to file sharing of popular movies, music, and so forth. Thirty-six search terms were categorized as specific to a youth market and perhaps a child market. Additionally, four deviant and persistent search terms were found, three relating to CEM and the fourth to bestiality. The article discusses whether the existence of CEM on a mainstream website, combined with online subcultural influences, may normalize the material for some youth and increase the risk of onset (first deliberate viewing). Among other things, the article proposes that future research examine the relationship between onset and sex offending by youth.
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.5, S.992-1000
    Type
    a
  17. Kousha, K.; Thelwall, M.: ¬An automatic method for extracting citations from Google Books (2015) 0.05
    Abstract
    Recent studies have shown that counting citations from books can help scholarly impact assessment and that Google Books (GB) is a useful source of such citation counts, despite its lack of a public citation index. Searching GB for citations produces approximate matches, however, and so its raw results need time-consuming human filtering. In response, this article introduces a method to automatically remove false and irrelevant matches from GB citation searches, in addition to introducing refinements to a previous GB manual citation extraction method. The method was evaluated by manual checking of sampled GB results and by comparing citations to about 14,500 monographs in the Thomson Reuters Book Citation Index (BKCI) against automatically extracted citations from GB across 24 subject areas. GB citations were 103% to 137% as numerous as BKCI citations in the humanities, except for tourism (72%) and linguistics (91%), 46% to 85% in the social sciences, but only 8% to 53% in the sciences. In all cases, however, GB had substantially more citing books than did BKCI, with BKCI's results coming predominantly from journal articles. Moderate correlations between the GB and BKCI citation counts in the social sciences and humanities, with most BKCI results coming from journal articles rather than books, suggest that the two sources may measure different aspects of impact.
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.2, S.309-320
    Type
    a
  18. Martínez-Arellano, F.F.; Hernández-Pacheco, F.; Chávez-Hernández, E.: Classification and subject indexing issues at a Mexican library specializing in law research (2019) 0.05
    0.054174986 = product of:
      0.09029164 = sum of:
        0.0068111527 = weight(_text_:a in 5278) [ClassicSimilarity], result of:
          0.0068111527 = score(doc=5278,freq=8.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.12739488 = fieldWeight in 5278, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5278)
        0.079533584 = weight(_text_:91 in 5278) [ClassicSimilarity], result of:
          0.079533584 = score(doc=5278,freq=2.0), product of:
            0.25837386 = queryWeight, product of:
              5.5722036 = idf(docFreq=456, maxDocs=44218)
              0.046368346 = queryNorm
            0.30782366 = fieldWeight in 5278, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5722036 = idf(docFreq=456, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5278)
        0.003946911 = product of:
          0.007893822 = sum of:
            0.007893822 = weight(_text_:information in 5278) [ClassicSimilarity], result of:
              0.007893822 = score(doc=5278,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.09697737 = fieldWeight in 5278, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5278)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    Subject indexing and classification of law resources is a complex issue due to several factors: the specialized meanings of legal terms, differing meanings across branches of law, terms from the legal systems of diverse countries, and terms in different languages. These issues led to the development of a classification and subject indexing system to answer the major challenges of indexing and classifying law resources in the Research Institute Library at the National Autonomous University of Mexico. Adopting its own classification required interdisciplinary work between law and information organization specialists, constant updating by legal specialists and others beyond the Legal Research Institute, and the sharing of this classification system with other institutions. Today this classification system is used by important institutions specializing in law, such as the network of libraries of the Supreme Court of Justice of the Nation of Mexico. The purpose of this article is to show why and how this law classification and subject indexing system was developed and how it is continuously updated by librarians and law scholars to meet their specific needs.
    Source
    Cataloging and classification quarterly. 57(2019) no.2/3, S.91-105
    Type
    a
  19. Scientometrics pioneer Eugene Garfield dies : Eugene Garfield, founder of the Institute for Scientific Information and The Scientist, has passed away at age 91 (2017) 0.05
    0.052445572 = product of:
      0.08740928 = sum of:
        0.004767807 = weight(_text_:a in 3460) [ClassicSimilarity], result of:
          0.004767807 = score(doc=3460,freq=8.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.089176424 = fieldWeight in 3460, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3460)
        0.078734234 = weight(_text_:91 in 3460) [ClassicSimilarity], result of:
          0.078734234 = score(doc=3460,freq=4.0), product of:
            0.25837386 = queryWeight, product of:
              5.5722036 = idf(docFreq=456, maxDocs=44218)
              0.046368346 = queryNorm
            0.30472988 = fieldWeight in 3460, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.5722036 = idf(docFreq=456, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3460)
        0.003907243 = product of:
          0.007814486 = sum of:
            0.007814486 = weight(_text_:information in 3460) [ClassicSimilarity], result of:
              0.007814486 = score(doc=3460,freq=4.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.0960027 = fieldWeight in 3460, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=3460)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Content
    See also Open Password, no. 167, 1 March 2017: "Eugene Garfield, founder and pioneer of citation indexing and citation analysis, without whom information science would look different today, has died at the age of 91. He is survived by his wife, three sons, a daughter, a stepdaughter, two granddaughters, and two great-grandchildren. Garfield took his bachelor's degree in chemistry at Columbia University in New York City in 1949, added a degree in library science in 1954, and went on to earn a doctorate in structural linguistics in 1961. By his own account he was neither particularly good nor particularly happy as a chemistry student. His moment of revelation came at a meeting of the American Chemical Society, when he discovered that searching for literature might be a way to earn a living: "So I went to the Chairman of the meeting and said: 'How do you get a job in this racket?'" From 1955 Garfield worked initially as a consultant for pharmaceutical companies, specializing in scientific information by indexing the contents of the relevant journals. In 1955 he put forward his groundbreaking idea in Science: to record the citations of scientific publications systematically and to make the connections between citations visible. In 1960 Garfield founded the Institute for Scientific Information (ISI), whose CEO he remained until 1992. In 1964 he launched the Science Citation Index; further instruments followed, among them the Social Sciences Citation Index (from 1973), the Arts and Humanities Citation Index (from 1978), and the Journal Citation Reports. These indexes were combined in the Web of Science and made accessible electronically as a database, enabling researchers to find the literature relevant to them "at their fingertips" and to orient themselves within it. Beyond that, the rankings derived from Garfield's measures made it possible to gauge the relative scholarly importance of publications, authors, research institutions, regions, and countries.
    In connection with his measures, Garfield spoke out against "bibliographic negligence" and "citation amnesia". In 2002 he wrote: "There will never be a perfect solution to the problem of acknowledging intellectual debts. But a beginning can be made if journal editors will demand a signed pledge from authors that they have searched Medline, Science Citation Index, or other appropriate print and electronic databases." He also warned, however, against improper use of his measures and against exaggerated expectations placed on them in career decisions about scientists and survival decisions about research institutions. In 1992 the Thomson Corporation acquired ISI for 210 million dollars; its present-day successor organization, Clarivate Analytics, employs more than 4,000 people in over a hundred countries. Garfield also founded a newspaper for scientists, above all for life scientists, The Scientist, which still exists and can be obtained as a free push service. In his contributions to science policy he criticized, for example, President Reagan's science advisers in 1986 as "advocates of the administration's science policies, rather than as objective conduits for communication between the president and the science community." His article arguing for continued support of UNESCO research programs bore the title "Let's Stand Up for Global Science", a title that still fits in the Trump era, when the US government dismisses as meaningless the concept of truth on which science rests and focuses on nationalism and isolation instead of international communication, cooperation, and the joint pursuit of common interests."
  20. Marchionini, G.: Information concepts : from books to cyberspace identities (2010) 0.05
    0.05130396 = product of:
      0.085506596 = sum of:
        0.006092081 = weight(_text_:a in 2) [ClassicSimilarity], result of:
          0.006092081 = score(doc=2,freq=10.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.11394546 = fieldWeight in 2, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=2)
        0.06362687 = weight(_text_:91 in 2) [ClassicSimilarity], result of:
          0.06362687 = score(doc=2,freq=2.0), product of:
            0.25837386 = queryWeight, product of:
              5.5722036 = idf(docFreq=456, maxDocs=44218)
              0.046368346 = queryNorm
            0.24625893 = fieldWeight in 2, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5722036 = idf(docFreq=456, maxDocs=44218)
              0.03125 = fieldNorm(doc=2)
        0.015787644 = product of:
          0.03157529 = sum of:
            0.03157529 = weight(_text_:information in 2) [ClassicSimilarity], result of:
              0.03157529 = score(doc=2,freq=50.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.38790947 = fieldWeight in 2, product of:
                  7.071068 = tf(freq=50.0), with freq of:
                    50.0 = termFreq=50.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    Information is essential to all human activity, and information in electronic form both amplifies and augments human information interactions. This lecture surveys some of the different classical meanings of information, focuses on the ways that electronic technologies are affecting how we think about these senses of information, and introduces an emerging sense of information that has implications for how we work, play, and interact with others. The evolution of computers and electronic networks, together with people's uses and adaptations of these tools, has manifested a dynamic space called cyberspace. Our traces of activity in cyberspace give rise to a new sense of information as instantaneous identity states that I term proflection of self. Proflections of self influence how others act toward us. Four classical senses of information are described as context for this new form of information: thought and memory, communication process, artifact, and energy. Human mental activity and state (thought and memory) have neurological, cognitive, and affective facets. The act of informing (communication process) is considered from the perspective of human intentionality and technical developments that have dramatically amplified human communication capabilities. Information artifacts comprise a common sense of information that gives rise to a variety of information industries. Energy is the most general sense of information and is considered from the point of view of physical, mental, and social state change; this sense includes information theory as a measurable reduction in uncertainty. This lecture emphasizes how electronic representations have blurred media boundaries and added computational behaviors that yield new forms of information interaction, which, in turn, are stored, aggregated, and mined to create profiles that represent our cyber identities.
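    The closing sense of information as a measurable reduction in uncertainty can be made concrete with Shannon entropy. The short Python sketch below is an illustrative aside, not part of the lecture: it computes the information gained when an observation narrows eight equally likely alternatives down to two.

        import math

        def entropy(probs):
            """Shannon entropy in bits of a discrete probability distribution."""
            return -sum(p * math.log2(p) for p in probs if p > 0)

        prior = [1/8] * 8        # eight equally likely alternatives: 3 bits of uncertainty
        posterior = [1/2, 1/2]   # an observation rules out all but two: 1 bit remains

        information_gained = entropy(prior) - entropy(posterior)
        print(information_gained)  # 2.0 bits: the measurable reduction in uncertainty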
    Content
    Table of Contents: The Many Meanings of Information / Information as Thought and Memory / Information as Communication Process / Information as Artifact / Information as Energy / Information as Identity in Cyberspace: The Fifth Voice / Conclusion and Directions
    Pages
    IX, 91 S
    RSWK
    Information
    Series
    Synthesis lectures on information concepts, retrieval, and services ; 16
    Subject
    Information

Types

  • a 4068
  • el 293
  • m 221
  • s 76
  • x 19
  • n 10
  • r 8
  • b 7
  • i 4
  • ag 2
  • p 1