Search (4248 results, page 1 of 213)

  1. Vizine-Goetz, D.: Dewey research : new uses for the DDC (2001) 0.21
    0.20632385 = product of:
      0.4126477 = sum of:
        0.4126477 = sum of:
          0.33466226 = weight(_text_:categorized in 190) [ClassicSimilarity], result of:
            0.33466226 = score(doc=190,freq=2.0), product of:
              0.41755152 = queryWeight, product of:
                7.2542357 = idf(docFreq=84, maxDocs=44218)
                0.057559684 = queryNorm
              0.8014873 = fieldWeight in 190, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                7.2542357 = idf(docFreq=84, maxDocs=44218)
                0.078125 = fieldNorm(doc=190)
          0.07798543 = weight(_text_:22 in 190) [ClassicSimilarity], result of:
            0.07798543 = score(doc=190,freq=2.0), product of:
              0.20156421 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.057559684 = queryNorm
              0.38690117 = fieldWeight in 190, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=190)
      0.5 = coord(1/2)
    
    Abstract
Report on various research projects at OCLC in the context of the DDC: Renardus, DDC in the Collaborative Digital Reference Service (CDRS), OverView (information visualization using Dewey), and NetFirst results categorized by Dewey
    Date
    22. 6.2002 19:32:34
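The indented numeric trees shown with each result are Lucene `Explanation` output for its ClassicSimilarity (TF-IDF) scorer. As a minimal sketch, the displayed score of result 1 (doc 190) can be recomputed from the tree's own inputs; `queryNorm` and `fieldNorm` are copied verbatim from the tree, and the function names below are illustrative, not part of any Lucene API:

```python
import math

def idf(doc_freq, max_docs):
    # ClassicSimilarity inverse document frequency:
    # idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1.0))

def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    # One term's contribution: score = queryWeight * fieldWeight
    #   queryWeight = idf * queryNorm
    #   fieldWeight = sqrt(tf) * idf * fieldNorm
    term_idf = idf(doc_freq, max_docs)
    query_weight = term_idf * query_norm
    field_weight = math.sqrt(freq) * term_idf * field_norm
    return query_weight * field_weight

MAX_DOCS = 44218
QUERY_NORM = 0.057559684   # taken from the explanation tree
FIELD_NORM = 0.078125      # length norm, pre-encoded per field

s_categorized = term_score(2.0, 84, MAX_DOCS, QUERY_NORM, FIELD_NORM)
s_22 = term_score(2.0, 3622, MAX_DOCS, QUERY_NORM, FIELD_NORM)

# Only one of the two top-level query clauses matched: coord(1/2) = 0.5
score = 0.5 * (s_categorized + s_22)
print(score)  # ≈ 0.20632385, the score displayed for result 1
```

Each matching term contributes queryWeight × fieldWeight, and coord(1/2) halves the sum because only one of the two top-level query clauses matched; the same structure explains every tree in this listing.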
  2. Wolfekuhler, M.R.; Punch, W.F.: Finding salient features for personal Web page categories (1997) 0.14
    0.14442669 = product of:
      0.28885338 = sum of:
        0.28885338 = sum of:
          0.23426357 = weight(_text_:categorized in 2673) [ClassicSimilarity], result of:
            0.23426357 = score(doc=2673,freq=2.0), product of:
              0.41755152 = queryWeight, product of:
                7.2542357 = idf(docFreq=84, maxDocs=44218)
                0.057559684 = queryNorm
              0.5610411 = fieldWeight in 2673, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                7.2542357 = idf(docFreq=84, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2673)
          0.054589797 = weight(_text_:22 in 2673) [ClassicSimilarity], result of:
            0.054589797 = score(doc=2673,freq=2.0), product of:
              0.20156421 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.057559684 = queryNorm
              0.2708308 = fieldWeight in 2673, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2673)
      0.5 = coord(1/2)
    
    Abstract
Examines techniques that discover features in sets of pre-categorized documents, such that similar documents can be found on the WWW. Examines techniques which will classify training examples with high accuracy, then explains why this is not necessarily useful. Describes a method for extracting word clusters from the raw document features. Results show that the clustering technique is successful in discovering word groups in personal Web pages which can be used to find similar information on the WWW
    Date
    1. 8.1996 22:08:06
  3. Taniguchi, S.: Recording evidence in bibliographic records and descriptive metadata (2005) 0.12
    0.12379431 = product of:
      0.24758862 = sum of:
        0.24758862 = sum of:
          0.20079736 = weight(_text_:categorized in 3565) [ClassicSimilarity], result of:
            0.20079736 = score(doc=3565,freq=2.0), product of:
              0.41755152 = queryWeight, product of:
                7.2542357 = idf(docFreq=84, maxDocs=44218)
                0.057559684 = queryNorm
              0.48089242 = fieldWeight in 3565, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                7.2542357 = idf(docFreq=84, maxDocs=44218)
                0.046875 = fieldNorm(doc=3565)
          0.046791255 = weight(_text_:22 in 3565) [ClassicSimilarity], result of:
            0.046791255 = score(doc=3565,freq=2.0), product of:
              0.20156421 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.057559684 = queryNorm
              0.23214069 = fieldWeight in 3565, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=3565)
      0.5 = coord(1/2)
    
    Abstract
    In this article recording evidence for data values in addition to the values themselves in bibliographic records and descriptive metadata is proposed, with the aim of improving the expressiveness and reliability of those records and metadata. Recorded evidence indicates why and how data values are recorded for elements. Recording the history of changes in data values is also proposed, with the aim of reinforcing recorded evidence. First, evidence that can be recorded is categorized into classes: identifiers of rules or tasks, action descriptions of them, and input and output data of them. Dates of recording values and evidence are an additional class. Then, the relative usefulness of evidence classes and also levels (i.e., the record, data element, or data value level) to which an individual evidence class is applied, is examined. Second, examples that can be viewed as recorded evidence in existing bibliographic records and current cataloging rules are shown. Third, some examples of bibliographic records and descriptive metadata with notes of evidence are demonstrated. Fourth, ways of using recorded evidence are addressed.
    Date
    18. 6.2005 13:16:22
  4. Stapleton, M.; Adams, M.: Faceted categorisation for the corporate desktop : visualisation and interaction using metadata to enhance user experience (2007) 0.12
    0.12379431 = product of:
      0.24758862 = sum of:
        0.24758862 = sum of:
          0.20079736 = weight(_text_:categorized in 718) [ClassicSimilarity], result of:
            0.20079736 = score(doc=718,freq=2.0), product of:
              0.41755152 = queryWeight, product of:
                7.2542357 = idf(docFreq=84, maxDocs=44218)
                0.057559684 = queryNorm
              0.48089242 = fieldWeight in 718, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                7.2542357 = idf(docFreq=84, maxDocs=44218)
                0.046875 = fieldNorm(doc=718)
          0.046791255 = weight(_text_:22 in 718) [ClassicSimilarity], result of:
            0.046791255 = score(doc=718,freq=2.0), product of:
              0.20156421 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.057559684 = queryNorm
              0.23214069 = fieldWeight in 718, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=718)
      0.5 = coord(1/2)
    
    Abstract
    Mark Stapleton and Matt Adamson began their presentation by describing how Dow Jones' Factiva range of information services processed an average of 170,000 documents every day, drawn from over 10,000 sources in 22 languages. These documents are categorized within five facets: Company, Subject, Industry, Region and Language. The digital feeds received from information providers undergo a series of processing stages, initially to prepare them for automatic categorization and then to format them ready for distribution. The categorization stage is able to handle 98% of documents automatically, the remaining 2% requiring some form of human intervention. Depending on the source, categorization can involve any combination of 'Autocoding', 'Dictionary-based Categorizing', 'Rules-based Coding' or 'Manual Coding'
  5. Hudon, M.: KO and classification education in the light of Benjamin Bloom's Taxonomy of learning objectives (2014) 0.12
    0.12379431 = product of:
      0.24758862 = sum of:
        0.24758862 = sum of:
          0.20079736 = weight(_text_:categorized in 1468) [ClassicSimilarity], result of:
            0.20079736 = score(doc=1468,freq=2.0), product of:
              0.41755152 = queryWeight, product of:
                7.2542357 = idf(docFreq=84, maxDocs=44218)
                0.057559684 = queryNorm
              0.48089242 = fieldWeight in 1468, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                7.2542357 = idf(docFreq=84, maxDocs=44218)
                0.046875 = fieldNorm(doc=1468)
          0.046791255 = weight(_text_:22 in 1468) [ClassicSimilarity], result of:
            0.046791255 = score(doc=1468,freq=2.0), product of:
              0.20156421 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.057559684 = queryNorm
              0.23214069 = fieldWeight in 1468, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=1468)
      0.5 = coord(1/2)
    
    Abstract
    In a research project focusing on knowledge organization and classification education, 407 learning objectives proposed in courses entirely or partially dedicated to these subjects in North American Library and Information Science programs were categorized with the help of the Benjamin Bloom's Taxonomy of cognitive objectives. The analysis reveals that the vast majority of course objectives remain at the lower levels of the Taxonomy. These results tend to reinforce observations made over the past 30 years in relation to KO and classification education. While KO and classification educators recognize the necessity for students to develop high-level analytic and evaluative skills, there are few references to those skills in current course objectives.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  6. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.11
    0.11481567 = sum of:
      0.09142004 = product of:
        0.2742601 = sum of:
          0.2742601 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.2742601 = score(doc=562,freq=2.0), product of:
              0.48799163 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.057559684 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.33333334 = coord(1/3)
      0.023395628 = product of:
        0.046791255 = sum of:
          0.046791255 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.046791255 = score(doc=562,freq=2.0), product of:
              0.20156421 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.057559684 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
    
    Content
Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
  7. Bourouni, A.; Noori, S.; Jafari, M.: Knowledge network creation methodology selection in project-based organizations : an empirical framework (2015) 0.11
    0.11123758 = product of:
      0.22247516 = sum of:
        0.22247516 = sum of:
          0.16733113 = weight(_text_:categorized in 1629) [ClassicSimilarity], result of:
            0.16733113 = score(doc=1629,freq=2.0), product of:
              0.41755152 = queryWeight, product of:
                7.2542357 = idf(docFreq=84, maxDocs=44218)
                0.057559684 = queryNorm
              0.40074366 = fieldWeight in 1629, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                7.2542357 = idf(docFreq=84, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1629)
          0.055144023 = weight(_text_:22 in 1629) [ClassicSimilarity], result of:
            0.055144023 = score(doc=1629,freq=4.0), product of:
              0.20156421 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.057559684 = queryNorm
              0.27358043 = fieldWeight in 1629, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1629)
      0.5 = coord(1/2)
    
    Abstract
Purpose - In today's knowledge-based economy, knowledge networks (KN) are increasingly becoming vital channels for pursuing strategic objectives in project-based organizations (PBO), in which the project is the basic organizational element of operation. KN initiatives often start with the selection of a creation methodology, which involves complex decisions for successful implementation. Thus, this paper addresses this critical methodology selection and proposes a holistic framework for selecting an appropriate methodology in this kind of flatter, speedier, and more flexible organizational form. Design/methodology/approach - In the first step, the study established a theoretical background addressing the problem of KN creation in PBO. The second step defined selection criteria based on an extensive literature review. In the third step, a holistic framework was constructed based on different characteristics of existing methodologies categorized according to the selected criteria. Finally, the suggested framework was empirically tested in a project-based firm, and the case study and its results are discussed. Findings - A holistic framework was determined by including different aspects of a KN such as network perspectives, tools and techniques, objectives, characteristics, capabilities, and approaches. The proposed framework consists of ten existing KN methodologies that consider qualitative and quantitative dimensions with micro and macro approaches. Originality/value - The development of the theory of KN creation methodology is the main contribution of this research. The selection framework, which was theoretically and empirically grounded, attempts to offer a more rational and less ambiguous solution to the KN methodology selection problem in PBO forms.
    Date
    20. 1.2015 18:30:22
    18. 9.2018 16:27:22
  8. Dick, S.J.: Astronomy's Three Kingdom System : a comprehensive classification system of celestial objects (2019) 0.11
    0.111134335 = sum of:
      0.08383944 = product of:
        0.2515183 = sum of:
          0.2515183 = weight(_text_:objects in 5455) [ClassicSimilarity], result of:
            0.2515183 = score(doc=5455,freq=8.0), product of:
              0.3059338 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.057559684 = queryNorm
              0.82213306 = fieldWeight in 5455, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5455)
        0.33333334 = coord(1/3)
      0.027294898 = product of:
        0.054589797 = sum of:
          0.054589797 = weight(_text_:22 in 5455) [ClassicSimilarity], result of:
            0.054589797 = score(doc=5455,freq=2.0), product of:
              0.20156421 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.057559684 = queryNorm
              0.2708308 = fieldWeight in 5455, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5455)
        0.5 = coord(1/2)
    
    Abstract
    Although classification has been an important aspect of astronomy since stellar spectroscopy in the late nineteenth century, to date no comprehensive classification system has existed for all classes of objects in the universe. Here we present such a system, and lay out its foundational definitions and principles. The system consists of the "Three Kingdoms" of planets, stars and galaxies, eighteen families, and eighty-two classes of objects. Gravitation is the defining organizing principle for the families and classes, and the physical nature of the objects is the defining characteristic of the classes. The system should prove useful for both scientific and pedagogical purposes.
    Date
    21.11.2019 18:46:22
  9. Kules, B.; Shneiderman, B.: Users can change their web search tactics : design guidelines for categorized overviews (2008) 0.11
    0.11067914 = product of:
      0.22135828 = sum of:
        0.22135828 = product of:
          0.44271657 = sum of:
            0.44271657 = weight(_text_:categorized in 2044) [ClassicSimilarity], result of:
              0.44271657 = score(doc=2044,freq=14.0), product of:
                0.41755152 = queryWeight, product of:
                  7.2542357 = idf(docFreq=84, maxDocs=44218)
                  0.057559684 = queryNorm
                1.0602682 = fieldWeight in 2044, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  7.2542357 = idf(docFreq=84, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2044)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Categorized overviews of web search results are a promising way to support user exploration, understanding, and discovery. These search interfaces combine a metadata-based overview with the list of search results to enable a rich form of interaction. A study of 24 sophisticated users carrying out complex tasks suggests how searchers may adapt their search tactics when using categorized overviews. This mixed methods study evaluated categorized overviews of web search results organized into thematic, geographic, and government categories. Participants conducted four exploratory searches during a 2-hour session to generate ideas for newspaper articles about specified topics such as "human smuggling." Results showed that subjects explored deeper while feeling more organized, and that the categorized overview helped subjects better assess their results, although no significant differences were detected in the quality of the article ideas. A qualitative analysis of searcher comments identified seven tactics that participants reported adopting when using categorized overviews. This paper concludes by proposing a set of guidelines for the design of exploratory search interfaces. An understanding of the impact of categorized overviews on search tactics will be useful to web search researchers, search interface designers, information architects and web developers.
  10. Malsburg, C. von der: ¬The correlation theory of brain function (1981) 0.11
    0.106126025 = product of:
      0.21225205 = sum of:
        0.21225205 = product of:
          0.31837806 = sum of:
            0.08982797 = weight(_text_:objects in 76) [ClassicSimilarity], result of:
              0.08982797 = score(doc=76,freq=2.0), product of:
                0.3059338 = queryWeight, product of:
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.057559684 = queryNorm
                0.29361898 = fieldWeight in 76, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=76)
            0.22855009 = weight(_text_:3a in 76) [ClassicSimilarity], result of:
              0.22855009 = score(doc=76,freq=2.0), product of:
                0.48799163 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.057559684 = queryNorm
                0.46834838 = fieldWeight in 76, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=76)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Abstract
A summary of brain theory is given so far as it is contained within the framework of Localization Theory. Difficulties of this "conventional theory" are traced back to a specific deficiency: there is no way to express relations between active cells (as for instance their representing parts of the same object). A new theory is proposed to cure this deficiency. It introduces a new kind of dynamical control, termed synaptic modulation, according to which synapses switch between a conducting and a non-conducting state. The dynamics of this variable is controlled on a fast time scale by correlations in the temporal fine structure of cellular signals. Furthermore, conventional synaptic plasticity is replaced by a refined version. Synaptic modulation and plasticity form the basis for short-term and long-term memory, respectively. Signal correlations, shaped by the variable network, express structure and relationships within objects. In particular, the figure-ground problem may be solved in this way. Synaptic modulation introduces flexibility into cerebral networks which is necessary to solve the invariance problem. Since momentarily useless connections are deactivated, interference between different memory traces can be reduced, and memory capacity increased, in comparison with conventional associative memory
    Source
http://cogprints.org/1380/1/vdM_correlation.pdf
  11. Westman, S.; Laine-Hernandez, M.; Oittinen, P.: Development and evaluation of a multifaceted magazine image categorization model (2011) 0.10
    0.10316192 = product of:
      0.20632385 = sum of:
        0.20632385 = sum of:
          0.16733113 = weight(_text_:categorized in 4193) [ClassicSimilarity], result of:
            0.16733113 = score(doc=4193,freq=2.0), product of:
              0.41755152 = queryWeight, product of:
                7.2542357 = idf(docFreq=84, maxDocs=44218)
                0.057559684 = queryNorm
              0.40074366 = fieldWeight in 4193, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                7.2542357 = idf(docFreq=84, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4193)
          0.038992714 = weight(_text_:22 in 4193) [ClassicSimilarity], result of:
            0.038992714 = score(doc=4193,freq=2.0), product of:
              0.20156421 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.057559684 = queryNorm
              0.19345059 = fieldWeight in 4193, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4193)
      0.5 = coord(1/2)
    
    Abstract
    The development of visual retrieval methods requires information about user interaction with images, including their description and categorization. This article presents the development of a categorization model for magazine images based on two user studies. In Study 1, we elicited 10 main classes of magazine image categorization criteria through sorting tasks with nonexpert and expert users (N=30). Multivariate methods, namely, multidimensional scaling and hierarchical clustering, were used to analyze similarity data. Content analysis of category names gave rise to classes that were synthesized into a categorization framework. The framework was evaluated in Study 2 by experts (N=24) who categorized another set of images consistent with the framework and found it to be useful in the task. Based on the evaluation study the framework was solidified into a model for categorizing magazine imagery. Connections between classes were analyzed both from the original sorting data and from the evaluation study and included into the final model. The model is a practical categorization tool that may be used in workplaces, such as magazine editorial offices. It may also serve to guide the development of computational methods for image understanding, selection of concepts for automatic detection, and approaches to support browsing and exploratory image search.
    Date
    22. 1.2011 14:09:26
  12. Graf, A.M.; Smiraglia, R.P.: Race & ethnicity in the Encyclopedia of Milwaukee : a case study in the use of domain analysis (2014) 0.10
    0.10316192 = product of:
      0.20632385 = sum of:
        0.20632385 = sum of:
          0.16733113 = weight(_text_:categorized in 1412) [ClassicSimilarity], result of:
            0.16733113 = score(doc=1412,freq=2.0), product of:
              0.41755152 = queryWeight, product of:
                7.2542357 = idf(docFreq=84, maxDocs=44218)
                0.057559684 = queryNorm
              0.40074366 = fieldWeight in 1412, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                7.2542357 = idf(docFreq=84, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1412)
          0.038992714 = weight(_text_:22 in 1412) [ClassicSimilarity], result of:
            0.038992714 = score(doc=1412,freq=2.0), product of:
              0.20156421 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.057559684 = queryNorm
              0.19345059 = fieldWeight in 1412, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1412)
      0.5 = coord(1/2)
    
    Abstract
    Scholarly domains have been analyzed using various tools and techniques to reveal complex genealogies of scholarship, authorship, citation and ontology, resulting in not only deeper knowledge of each area studied, but in a better developed set of methodologies for domain exploration in general. While domain analysis itself is being used frequently in LIS, there remain many areas against which domain analytical tools have not yet been applied. This is the case with encyclopedic collections of knowledge, such as that which is being developed as the Encyclopedia of Milwaukee (EMKE) within the history department at the University of Wisconsin-Milwaukee. This descriptive study will analyze resources categorized under race and ethnicity from a comprehensive bibliography on the history of metropolitan Milwaukee that was designed to serve those who would research and write entries for the EMKE. Bibliometric and analytic techniques are employed to explore the intension and extension of the domain as it is developing.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  13. Jervis, M.; Masoodian, M.: How do people attempt to integrate the management of their paper and electronic documents? (2014) 0.10
    0.10316192 = product of:
      0.20632385 = sum of:
        0.20632385 = sum of:
          0.16733113 = weight(_text_:categorized in 1632) [ClassicSimilarity], result of:
            0.16733113 = score(doc=1632,freq=2.0), product of:
              0.41755152 = queryWeight, product of:
                7.2542357 = idf(docFreq=84, maxDocs=44218)
                0.057559684 = queryNorm
              0.40074366 = fieldWeight in 1632, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                7.2542357 = idf(docFreq=84, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1632)
          0.038992714 = weight(_text_:22 in 1632) [ClassicSimilarity], result of:
            0.038992714 = score(doc=1632,freq=2.0), product of:
              0.20156421 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.057559684 = queryNorm
              0.19345059 = fieldWeight in 1632, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1632)
      0.5 = coord(1/2)
    
    Abstract
    Purpose - This article aims to describe how people manage to integrate their use of paper and electronic documents in modern office work environments. Design/methodology/approach - An observational interview type study of 14 participants from 11 offices in eight organizations was conducted. Recorded data were analysed using a thematic analysis method. This involved reading and annotation of interview transcripts, categorizing, linking and connecting, corroborating, and producing an account of the study. Findings - The findings of the study can be categorized into four groups: the roles paper and electronic documents serve in today's offices, the ways in which these documents are managed, the problems associated with their management, and the types of fragmentation that exist in terms of their management and how these are dealt with. Practical implications - The study has identified the need for better integrated management of paper and electronic documents in present-day offices. The findings of the study have then been used to propose a set of guidelines for the development of integrated paper and electronic document management systems. Originality/value - Although similar studies of offices have been conducted in the past, almost all of these studies are prior to the widespread use of mobile and network-based shared technologies in office environments. Furthermore, previous studies have generally failed to identify and propose guidelines for integration of paper and electronic document management systems.
    Date
    20. 1.2015 18:30:22
  14. Choi, Y.; Syn, S.Y.: Characteristics of tagging behavior in digitized humanities online collections (2016) 0.10
    Abstract
    The purpose of this study was to examine user tags that describe digitized archival collections in the field of humanities. A collection of 8,310 tags from a digital portal (Nineteenth-Century Electronic Scholarship, NINES) was analyzed to find out what attributes of primary historical resources users described with tags. Tags were categorized to identify which tags describe the content of the resource, the resource itself, and subjective aspects (e.g., usage or emotion). The study's findings revealed that over half were content-related; tags representing opinion, usage context, or self-reference, however, reflected only a small percentage. The study further found that terms related to genre or physical format of a resource were frequently used in describing primary archival resources. It was also learned that nontextual resources had lower numbers of content-related tags and higher numbers of document-related tags than textual resources and bibliographic materials; moreover, textual resources tended to have more user-context-related tags than other resources. These findings help explain users' tagging behavior and resource interpretation in primary resources in the humanities. Such information provided through tags helps information professionals decide to what extent indexing archival and cultural resources should be done for resource description and discovery, and understand users' terminology.
    Date
    21. 4.2016 11:23:22
  15. Jiang, Y.; Meng, R.; Huang, Y.; Lu, W.; Liu, J.: Generating keyphrases for readers : a controllable keyphrase generation framework (2023) 0.10
    Abstract
    With the wide application of keyphrases in many Information Retrieval (IR) and Natural Language Processing (NLP) tasks, automatic keyphrase prediction has been emerging. However, these statistically important phrases are contributing increasingly less to the related tasks because the end-to-end learning mechanism enables models to learn the important semantic information of the text directly. Similarly, keyphrases are of little help for readers to quickly grasp the paper's main idea because the relationship between the keyphrase and the paper is not explicit to readers. Therefore, we propose to generate keyphrases with specific functions for readers to bridge the semantic gap between them and the information producers, and verify the effectiveness of the keyphrase function for assisting users' comprehension with a user experiment. A controllable keyphrase generation framework (the CKPG) that uses the keyphrase function as a control code to generate categorized keyphrases is proposed and implemented based on Transformer, BART, and T5, respectively. For the Computer Science domain, the Macro-avgs of , , and on the Paper with Code dataset are up to 0.680, 0.535, and 0.558, respectively. Our experimental results indicate the effectiveness of the CKPG models.
    Date
    22. 6.2023 14:55:20
  16. Proffitt, M.: Pulling it all together : use of METS in RLG cultural materials service (2004) 0.10
    Abstract
    RLG has used METS for a particular application, that is, as a wrapper for structural metadata. When RLG Cultural Materials was launched, there was no single way to deal with "complex digital objects". METS provides a standard means of encoding metadata for the digital objects represented in RCM, and METS has now been fully integrated into the workflow for this service.
    Source
    Library hi tech. 22(2004) no.1, S.65-68
  17. Johnson, E.H.: Using IODyne : Illustrations and examples (1998) 0.10
    Abstract
    IODyne is an Internet client program that allows one to retrieve information from servers by dynamically combining information objects. Information objects are abstract representations of bibliographic data, typically titles (or title keywords), author names, subject and classification identifiers, and full-text search terms.
    Date
    22. 9.1997 19:16:05
  18. Holetschek, J. et al.: Natural history in Europeana : accessing scientific collection objects via LOD (2016) 0.10
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
  19. Fachsystematik Bremen nebst Schlüssel 1970 ff. (1970 ff) 0.10
    Content
    1. Agrarwissenschaften 1981. - 3. Allgemeine Geographie 2.1972. - 3a. Allgemeine Naturwissenschaften 1.1973. - 4. Allgemeine Sprachwissenschaft, Allgemeine Literaturwissenschaft 2.1971. - 6. Allgemeines. 5.1983. - 7. Anglistik 3.1976. - 8. Astronomie, Geodäsie 4.1977. - 12. bio Biologie, bcp Biochemie-Biophysik, bot Botanik, zoo Zoologie 1981. - 13. Bremensien 3.1983. - 13a. Buch- und Bibliothekswesen 3.1975. - 14. Chemie 4.1977. - 14a. Elektrotechnik 1974. - 15 Ethnologie 2.1976. - 16,1. Geowissenschaften. Sachteil 3.1977. - 16,2. Geowissenschaften. Regionaler Teil 3.1977. - 17. Germanistik 6.1984. - 17a,1. Geschichte. Teilsystematik hil. - 17a,2. Geschichte. Teilsystematik his Neuere Geschichte. - 17a,3. Geschichte. Teilsystematik hit Neueste Geschichte. - 18. Humanbiologie 2.1983. - 19. Ingenieurwissenschaften 1974. - 20. siehe 14a. - 21. klassische Philologie 3.1977. - 22. Klinische Medizin 1975. - 23. Kunstgeschichte 2.1971. - 24. Kybernetik. 2.1975. - 25. Mathematik 3.1974. - 26. Medizin 1976. - 26a. Militärwissenschaft 1985. - 27. Musikwissenschaft 1978. - 27a. Noten 2.1974. - 28. Ozeanographie 3.1977. -29. Pädagogik 8.1985. - 30. Philosphie 3.1974. - 31. Physik 3.1974. - 33. Politik, Politische Wissenschaft, Sozialwissenschaft. Soziologie. Länderschlüssel. Register 1981. - 34. Psychologie 2.1972. - 35. Publizistik und Kommunikationswissenschaft 1985. - 36. Rechtswissenschaften 1986. - 37. Regionale Geograpgie 3.1975. - 37a. Religionswissenschaft 1970. - 38. Romanistik 3.1976. - 39. Skandinavistik 4.1985. - 40. Slavistik 1977. - 40a. Sonstige Sprachen und Literaturen 1973. - 43. Sport 4.1983. - 44. Theaterwissenschaft 1985. - 45. Theologie 2.1976. - 45a. Ur- und Frühgeschichte, Archäologie 1970. - 47. Volkskunde 1976. - 47a. Wirtschaftswissenschaften 1971 // Schlüssel: 1. Länderschlüssel 1971. - 2. Formenschlüssel (Kurzform) 1974. - 3. Personenschlüssel Literatur 5. Fassung 1968
  20. Verwer, K.: Freiheit und Verantwortung bei Hans Jonas (2011) 0.09
    Content
    Vgl.: http://creativechoice.org/doc/HansJonas.pdf.

Types

  • a 3574
  • m 382
  • el 218
  • s 164
  • x 40
  • b 39
  • i 23
  • r 23
  • ? 8
  • n 4
  • p 4
  • d 3
  • u 2
  • z 2
  • au 1
  • h 1