Search (676 results, page 1 of 34)

  • × language_ss:"e"
  • × year_i:[2000 TO 2010}
  1. Murphy, M.L.: Semantic relations and the lexicon : antonymy, synonymy and other paradigms (2008) 0.10
    0.10076965 = product of:
      0.2015393 = sum of:
        0.2015393 = sum of:
          0.16769923 = weight(_text_:400 in 997) [ClassicSimilarity], result of:
            0.16769923 = score(doc=997,freq=4.0), product of:
              0.32745647 = queryWeight, product of:
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.049953517 = queryNorm
              0.5121268 = fieldWeight in 997, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.0390625 = fieldNorm(doc=997)
          0.03384006 = weight(_text_:22 in 997) [ClassicSimilarity], result of:
            0.03384006 = score(doc=997,freq=2.0), product of:
              0.17492871 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049953517 = queryNorm
              0.19345059 = fieldWeight in 997, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=997)
      0.5 = coord(1/2)
    
    Classification
    ET 400 (BVB)
    Date
    22. 7.2013 10:53:30
    RVK
    ET 400 (BVB)
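The indented score breakdowns in these results are Lucene "explain" trees for the ClassicSimilarity (TF-IDF) ranking function. As a minimal sketch, assuming the standard ClassicSimilarity formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1))), the per-term weights above can be reproduced; the function names here are illustrative:

```python
import math

# ClassicSimilarity building blocks, as they appear in the explain trees.
def tf(freq):
    # term frequency factor: square root of the raw in-document frequency
    return math.sqrt(freq)

def idf(doc_freq, max_docs):
    # inverse document frequency: 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_weight(freq, doc_freq, max_docs, query_norm, field_norm):
    # weight = queryWeight * fieldWeight
    i = idf(doc_freq, max_docs)
    query_weight = i * query_norm                 # e.g. 6.5552235 * 0.049953517
    field_weight = tf(freq) * i * field_norm      # e.g. 2.0 * 6.5552235 * 0.0390625
    return query_weight * field_weight

# Reproduce weight(_text_:400 in 997) from the first explain tree.
w = term_weight(freq=4.0, doc_freq=170, max_docs=44218,
                query_norm=0.049953517, field_norm=0.0390625)
print(w)  # ≈ 0.16769923, matching the explain output
```

The outer coord(1/2) and coord(1/3) factors then scale each document's summed term weights by the fraction of query clauses it matched.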
  2. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.10
    0.09964347 = sum of:
      0.07933943 = product of:
        0.23801827 = sum of:
          0.23801827 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.23801827 = score(doc=562,freq=2.0), product of:
              0.42350647 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.049953517 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.33333334 = coord(1/3)
      0.020304035 = product of:
        0.04060807 = sum of:
          0.04060807 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.04060807 = score(doc=562,freq=2.0), product of:
              0.17492871 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049953517 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
    
    Content
    Cf.: http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CEAQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.91.4940%26rep%3Drep1%26type%3Dpdf&ei=dOXrUMeIDYHDtQahsIGACg&usg=AFQjCNHFWVh6gNPvnOrOS9R3rkrXCNVD-A&sig2=5I2F5evRfMnsttSgFF9g7Q&bvm=bv.1357316858,d.Yms.
    Date
    8. 1.2013 10:22:32
  3. Sehgal, R.L.: ¬An introduction to Dewey Decimal Classification (2005) 0.08
    0.08321917 = product of:
      0.16643834 = sum of:
        0.16643834 = sum of:
          0.11858127 = weight(_text_:400 in 1467) [ClassicSimilarity], result of:
            0.11858127 = score(doc=1467,freq=2.0), product of:
              0.32745647 = queryWeight, product of:
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.049953517 = queryNorm
              0.36212835 = fieldWeight in 1467, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1467)
          0.047857072 = weight(_text_:22 in 1467) [ClassicSimilarity], result of:
            0.047857072 = score(doc=1467,freq=4.0), product of:
              0.17492871 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049953517 = queryNorm
              0.27358043 = fieldWeight in 1467, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1467)
      0.5 = coord(1/2)
    
    Content
    Contents: Section A: Number Building in Dewey Decimal Classification Chapters 1. Dewey Decimal Classification: An Introduction 2. Relative Index and its Utility 3. Table 1: Standard Subdivisions 4. Table 2: Areas 5. Table 3: Subdivisions of Individual Literature 6. Table 4: Subdivisions of Individual Languages 7. Table 5: Racial, Ethnic, National Groups 8. Table 6: Languages 9. Table 7: Persons 10. Number Building in Dewey Decimal Classification 11. Classification of Books According to Dewey Decimal Classification 12. 000 Generalities 13. 100 Philosophy and Related Disciplines 14. 200 Religion 15. 300 Social Sciences 16. 400 Languages 17. 500 Pure Sciences 18. 600 Technology (Applied Sciences) 19. 700 The Arts 20. 800 Literature (Belles-Lettres) 21. 900 General Geography and History Exercises Solutions
    Date
    28. 2.2008 17:22:52
    Object
    DDC-22
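The contents of record 3 walk through DDC number building across the ten main classes and the auxiliary tables. A toy sketch of the idea, assuming a deliberately simplified `build_number` helper (the book's actual rules cover many more cases; the Table 1 standard subdivision -03, "dictionaries", is used purely for illustration):

```python
# DDC main classes as listed in the book's contents (2005-era captions).
MAIN_CLASSES = {
    "000": "Generalities",
    "100": "Philosophy and Related Disciplines",
    "200": "Religion",
    "300": "Social Sciences",
    "400": "Languages",
    "500": "Pure Sciences",
    "600": "Technology (Applied Sciences)",
    "700": "The Arts",
    "800": "Literature (Belles-Lettres)",
    "900": "General Geography and History",
}

def build_number(base: str, subdivision: str) -> str:
    """Naive number building: drop trailing zeros from the base class,
    then append a Table 1 standard subdivision without its leading dash.
    This is a simplification of the rules the book teaches."""
    trimmed = base.rstrip("0") or "0"
    return trimmed + subdivision.lstrip("-")

# e.g. 400 (Languages) + Table 1 subdivision -03 (dictionaries)
print(build_number("400", "-03"))  # -> 403
```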
  4. Moens, M.F.: Automatic indexing and abstracting of document texts (2000) 0.06
    0.059290636 = product of:
      0.11858127 = sum of:
        0.11858127 = product of:
          0.23716255 = sum of:
            0.23716255 = weight(_text_:400 in 6892) [ClassicSimilarity], result of:
              0.23716255 = score(doc=6892,freq=2.0), product of:
                0.32745647 = queryWeight, product of:
                  6.5552235 = idf(docFreq=170, maxDocs=44218)
                  0.049953517 = queryNorm
                0.7242567 = fieldWeight in 6892, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.5552235 = idf(docFreq=170, maxDocs=44218)
                  0.078125 = fieldNorm(doc=6892)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Pages
    400 p.
  5. Garfield, E.; Pudovkin, A.I.; Istomin, V.S.: Why do we need algorithmic historiography? (2003) 0.05
    0.04743251 = product of:
      0.09486502 = sum of:
        0.09486502 = product of:
          0.18973003 = sum of:
            0.18973003 = weight(_text_:400 in 1606) [ClassicSimilarity], result of:
              0.18973003 = score(doc=1606,freq=2.0), product of:
                0.32745647 = queryWeight, product of:
                  6.5552235 = idf(docFreq=170, maxDocs=44218)
                  0.049953517 = queryNorm
                0.57940537 = fieldWeight in 1606, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.5552235 = idf(docFreq=170, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1606)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Journal of the American Society for Information Science and Technology. 54(2003) no.5, S.400-412
  6. Vetere, G.; Lenzerini, M.: Models for semantic interoperability in service-oriented architectures (2005) 0.05
    0.046281334 = product of:
      0.09256267 = sum of:
        0.09256267 = product of:
          0.277688 = sum of:
            0.277688 = weight(_text_:3a in 306) [ClassicSimilarity], result of:
              0.277688 = score(doc=306,freq=2.0), product of:
                0.42350647 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.049953517 = queryNorm
                0.65568775 = fieldWeight in 306, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=306)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    Cf.: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=5386707&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D5386707.
  7. Garfield, E.; Paris, S.W.; Stock, W.G.: HistCite(TM) : a software tool for informetric analysis of citation linkage (2006) 0.04
    0.041503448 = product of:
      0.083006896 = sum of:
        0.083006896 = product of:
          0.16601379 = sum of:
            0.16601379 = weight(_text_:400 in 79) [ClassicSimilarity], result of:
              0.16601379 = score(doc=79,freq=2.0), product of:
                0.32745647 = queryWeight, product of:
                  6.5552235 = idf(docFreq=170, maxDocs=44218)
                  0.049953517 = queryNorm
                0.5069797 = fieldWeight in 79, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.5552235 = idf(docFreq=170, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=79)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Information - Wissenschaft und Praxis. 57(2006) H.8, S.391-400
  8. Beghtol, C.: ¬The Iter Bibliography : International standard subject access to medieval and renaissance materials (400-1700) (2003) 0.04
    0.04107776 = product of:
      0.08215552 = sum of:
        0.08215552 = product of:
          0.16431104 = sum of:
            0.16431104 = weight(_text_:400 in 3965) [ClassicSimilarity], result of:
              0.16431104 = score(doc=3965,freq=6.0), product of:
                0.32745647 = queryWeight, product of:
                  6.5552235 = idf(docFreq=170, maxDocs=44218)
                  0.049953517 = queryNorm
                0.5017798 = fieldWeight in 3965, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  6.5552235 = idf(docFreq=170, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3965)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Iter ("journey" or "path" in Latin) is a non-profit project for providing electronic access to materials pertaining to the Middle Ages and Renaissance (400-1700). Iter's background is described, and its centrepiece, the Iter Bibliography, is explicated. Emphasis is on the subject cataloguing process and on subject access to records for journal articles (using Library of Congress Subject Headings and the Dewey Decimal Classification). Basic subject analysis of the materials is provided by graduate students specializing in the Middle Ages and Renaissance periods, and, subsequently, subject access point systems are provided by information professionals. This close cooperation between subject and information experts would not be efficient without electronic capabilities.
    Content
    "1. Iter: Gateway to the Middle Ages and Renaissance Iter is a non-profit research project dedicated to providing electronic access to all kinds and formats of materials pertaining to the Middle Ages and Renaissance (400-1700). Iter began in 1995 as a joint initiative of the Renaissance Society of America (RSA) in New York City and the Centre for Reformation and Renaissance Studies (CRRS), Univ. of Toronto. By 1997, three more partners had joined: Faculty of Information Studies (FIS), Univ. of Toronto; Arizona Center for Medieval and Renaissance Studies (ACMRS), Arizona State Univ. at Tempe; and John P. Robarts Library, Univ. of Toronto. Iter was funded initially by the five partners and major foundations and, since 1998, has offered low-cost subscriptions to institutions and individuals. When Iter becomes financially self-sufficient, any profits will be used to enhance and expand the project. Iter databases are housed and maintained at the John P. Robarts Library. The interface is a customized version of DRA WebZ. A new interface using DRA Web can now be searched and will shortly replace the DRA WebZ interface. Iter was originally conceived as a comprehensive bibliography of secondary materials that would be an alternative to the existing commercial research tools for its period. These were expensive, generally appeared several years late, had limited subject indexing, were inconsistent in coverage, of uneven quality, and often depended on fragile networks of volunteers for identification of materials. The production of a reasonably priced, web-based, timely research tool was Iter's first priority. In addition, the partners wanted to involve graduate students in the project in order to contribute to the scholarly training and financial support of future scholars of the Middle Ages and Renaissance and to utilize as much automation as possible."
  9. Mas, S.; Marleau, Y.: Proposition of a faceted classification model to support corporate information organization and digital records management (2009) 0.04
    0.039669715 = product of:
      0.07933943 = sum of:
        0.07933943 = product of:
          0.23801827 = sum of:
            0.23801827 = weight(_text_:3a in 2918) [ClassicSimilarity], result of:
              0.23801827 = score(doc=2918,freq=2.0), product of:
                0.42350647 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.049953517 = queryNorm
                0.56201804 = fieldWeight in 2918, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2918)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Footnote
    Cf.: http://ieeexplore.ieee.org/Xplore/login.jsp?reload=true&url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel5%2F4755313%2F4755314%2F04755480.pdf%3Farnumber%3D4755480&authDecision=-203.
  10. Hawking, D.; Robertson, S.: On collection size and retrieval effectiveness (2003) 0.04
    0.038285658 = product of:
      0.076571316 = sum of:
        0.076571316 = product of:
          0.15314263 = sum of:
            0.15314263 = weight(_text_:22 in 4109) [ClassicSimilarity], result of:
              0.15314263 = score(doc=4109,freq=4.0), product of:
                0.17492871 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049953517 = queryNorm
                0.8754574 = fieldWeight in 4109, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=4109)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    14. 8.2005 14:22:22
  11. Beghtol, C.: Knowledge representation and organization in the ITER project : A Web-based digital library for scholars of the middle ages and renaissance (http://iter.utoronto.ca) (2001) 0.04
    0.035574384 = product of:
      0.07114877 = sum of:
        0.07114877 = product of:
          0.14229754 = sum of:
            0.14229754 = weight(_text_:400 in 638) [ClassicSimilarity], result of:
              0.14229754 = score(doc=638,freq=2.0), product of:
                0.32745647 = queryWeight, product of:
                  6.5552235 = idf(docFreq=170, maxDocs=44218)
                  0.049953517 = queryNorm
                0.43455404 = fieldWeight in 638, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.5552235 = idf(docFreq=170, maxDocs=44218)
                  0.046875 = fieldNorm(doc=638)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The Iter Project ("iter" means "path" or "journey" in Latin) is an internationally supported non-profit research project created with the objective of providing electronic access to all kinds and formats of materials that relate to the Middle Ages and Renaissance (400-1700) and that were published between 1700 and the present. Knowledge representation and organization decisions for the Project were influenced by its potential international clientele of scholarly users, and these decisions illustrate the importance and efficacy of collaboration between specialized users and information professionals. The paper outlines the scholarly principles and information goals of the Project and describes in detail the methodology developed to provide reliable and consistent knowledge representation and organization for one component of the Project, the Iter Bibliography. Examples of fully catalogued records for the Iter Bibliography are included.
  12. Jansen, B.J.; Booth, D.L.; Spink, A.: Determining the informational, navigational, and transactional intent of Web queries (2008) 0.04
    0.035574384 = product of:
      0.07114877 = sum of:
        0.07114877 = product of:
          0.14229754 = sum of:
            0.14229754 = weight(_text_:400 in 2091) [ClassicSimilarity], result of:
              0.14229754 = score(doc=2091,freq=2.0), product of:
                0.32745647 = queryWeight, product of:
                  6.5552235 = idf(docFreq=170, maxDocs=44218)
                  0.049953517 = queryNorm
                0.43455404 = fieldWeight in 2091, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.5552235 = idf(docFreq=170, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2091)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In this paper, we define and present a comprehensive classification of user intent for Web searching. The classification consists of three hierarchical levels of informational, navigational, and transactional intent. After deriving attributes of each, we then developed a software application that automatically classified queries using a Web search engine log of over a million and a half queries submitted by several hundred thousand users. Our findings show that more than 80% of Web queries are informational in nature, with about 10% each being navigational and transactional. In order to validate the accuracy of our algorithm, we manually coded 400 queries and compared the results from this manual classification to the results determined by the automated method. This comparison showed that the automatic classification has an accuracy of 74%. Of the remaining 25% of the queries, the user intent is vague or multi-faceted, pointing to the need for probabilistic classification. We discuss how search engines can use knowledge of user intent to provide more targeted and relevant results in Web searching.
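The three-level intent taxonomy this abstract describes could be sketched as a toy rule-based classifier. The cue-term lists and the fallback rule below are illustrative assumptions, not the authors' actual attribute set or algorithm:

```python
# Toy three-way query intent classifier in the spirit of the abstract.
# The cue lists are illustrative guesses, not the paper's derived attributes.
TRANSACTIONAL_CUES = {"download", "buy", "order", "free", "software", "mp3"}
NAVIGATIONAL_CUES = {"www", "com", "homepage", "site", "login"}

def classify_intent(query: str) -> str:
    terms = set(query.lower().split())
    if terms & TRANSACTIONAL_CUES:
        return "transactional"
    if terms & NAVIGATIONAL_CUES or len(terms) <= 1:
        return "navigational"   # short or URL-like queries tend to be navigational
    return "informational"      # the default: ~80% of queries per the study

print(classify_intent("download free mp3"))                            # transactional
print(classify_intent("acme corp homepage"))                           # navigational
print(classify_intent("history of the dewey decimal classification"))  # informational
```

Validating such a classifier against a manually coded sample, as the authors did with 400 queries, is what yields the reported 74% accuracy figure.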
  13. Olsen, K.A.: ¬The Internet, the Web, and eBusiness : formalizing applications for the real world (2005) 0.04
    0.035438795 = product of:
      0.07087759 = sum of:
        0.07087759 = sum of:
          0.04743251 = weight(_text_:400 in 149) [ClassicSimilarity], result of:
            0.04743251 = score(doc=149,freq=2.0), product of:
              0.32745647 = queryWeight, product of:
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.049953517 = queryNorm
              0.14485134 = fieldWeight in 149, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.015625 = fieldNorm(doc=149)
          0.023445083 = weight(_text_:22 in 149) [ClassicSimilarity], result of:
            0.023445083 = score(doc=149,freq=6.0), product of:
              0.17492871 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049953517 = queryNorm
              0.1340265 = fieldWeight in 149, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.015625 = fieldNorm(doc=149)
      0.5 = coord(1/2)
    
    Classification
    004.678 22
    DDC
    004.678 22
    Footnote
    Chapter 12 on "Web Presence" is a useful discussion of what it means to have a Web site that is indexed by a spider from a major Web search engine. Chapter 13 on "Mobile Computing" is very well done and gives the reader a solid basis of what is involved with mobile computing without overwhelming them with technical details. Chapter 14 discusses the difference between pull technologies and push technologies using the Web in a way that is understandable to almost anyone who has ever used the Web. Chapters 15, 16, and 17 are for the technically stout of heart; they cover "Dynamic Web Pages," "Embedded Scripts," and "Peer-to-Peer Computing." These three chapters will tend to dampen the spirits of anyone who does not come from a technical background. Chapter 18 on "Symbolic Services-Information Providers" and chapter 19 on "OnLine Symbolic Services-Case Studies" are ideal for class discussion and student assignments, as is chapter 20, "Online Retail Shopping-Physical Items." Chapter 21 presents a number of case studies on the "Technical Constraints" discussed in chapter 3, and chapter 22 presents case studies on the "Cultural Constraints" discussed in chapter 4. These case studies are not only presented in an interesting manner; they also focus on situations that most Web users have encountered but never really given much thought to. Chapter 24, "A Better Model?", discusses a combined "formalized/unformalized" model that might make Web applications such as banking and booking travel work better than the current models. This chapter will cause readers to think about the role of formalization and the unformalized processes that are involved in any application. Chapters 24, 25, 26, and 27, which discuss the role of "Data Exchange," "Formalized Data Exchange," "Electronic Data Interchange-EDI," and "XML" in business-to-business applications on the Web, may stress the limits of the nontechnically oriented reader even though the material is presented in a very understandable manner.
 Chapters 28, 29, 30, and 31 discuss Web services, the automated value chain, electronic market places, and outsourcing, which are of high interest to business students, businessmen, and designers of Web applications and can be skimmed by others who want to understand ebusiness but are not interested in the details. In Part 5, chapters 32, 33, and 34 on "Interfacing with the Web of the Future," "A Disruptive Technology," "Virtual Businesses," and "Semantic Web" were, for me, as someone who teaches courses in IT and develops ebusiness applications, the most interesting chapters in the book because they provided some useful insights about what is likely to happen in the future. The summary in part 6 of the book is quite well done, and I wish I had read it before I started reading the other parts of the book.
    The book is quite large, with over 400 pages, and covers a myriad of topics, which is probably more than any one course could cover, but an instructor could pick and choose those chapters most appropriate to the course content. The book could be used for multiple courses by selecting the relevant topics. I enjoyed the first-person, rather down-to-earth writing style and the number of examples and analogies that the author presented. I believe most people could relate to the examples and situations presented by the author. As a teacher of Information Technology, I find the discussion questions at the end of the chapters and the case studies a valuable resource, as are the end-of-chapter notes. I highly recommend this book for an introductory course that combines computing, networking, the Web, and ebusiness for Business and Social Science students as well as an introductory course for students in Information Science, Library Science, and Computer Science. Likewise, I believe IT managers and Web page designers could benefit from selected chapters in the book."
  14. Buzydlowski, J.W.; White, H.D.; Lin, X.: Term Co-occurrence Analysis as an Interface for Digital Libraries (2002) 0.04
    0.035167623 = product of:
      0.07033525 = sum of:
        0.07033525 = product of:
          0.1406705 = sum of:
            0.1406705 = weight(_text_:22 in 1339) [ClassicSimilarity], result of:
              0.1406705 = score(doc=1339,freq=6.0), product of:
                0.17492871 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049953517 = queryNorm
                0.804159 = fieldWeight in 1339, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1339)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 2.2003 17:25:39
    22. 2.2003 18:16:22
  15. Semantic role universals and argument linking : theoretical, typological, and psycholinguistic perspectives (2006) 0.03
    0.03353985 = product of:
      0.0670797 = sum of:
        0.0670797 = product of:
          0.1341594 = sum of:
            0.1341594 = weight(_text_:400 in 3670) [ClassicSimilarity], result of:
              0.1341594 = score(doc=3670,freq=4.0), product of:
                0.32745647 = queryWeight, product of:
                  6.5552235 = idf(docFreq=170, maxDocs=44218)
                  0.049953517 = queryNorm
                0.40970147 = fieldWeight in 3670, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  6.5552235 = idf(docFreq=170, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3670)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Classification
    ET 400
    RVK
    ET 400
  16. Hemminger, B.M.: Introduction to the special issue on bioinformatics (2005) 0.03
    0.033499952 = product of:
      0.066999905 = sum of:
        0.066999905 = product of:
          0.13399981 = sum of:
            0.13399981 = weight(_text_:22 in 4189) [ClassicSimilarity], result of:
              0.13399981 = score(doc=4189,freq=4.0), product of:
                0.17492871 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049953517 = queryNorm
                0.76602525 = fieldWeight in 4189, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4189)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 7.2006 14:19:22
  17. Ackermann, E.: Piaget's constructivism, Papert's constructionism : what's the difference? (2001) 0.03
    0.033058096 = product of:
      0.06611619 = sum of:
        0.06611619 = product of:
          0.19834857 = sum of:
            0.19834857 = weight(_text_:3a in 692) [ClassicSimilarity], result of:
              0.19834857 = score(doc=692,freq=2.0), product of:
                0.42350647 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.049953517 = queryNorm
                0.46834838 = fieldWeight in 692, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=692)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    Cf.: https://www.semanticscholar.org/paper/Piaget-%E2%80%99-s-Constructivism-%2C-Papert-%E2%80%99-s-%3A-What-%E2%80%99-s-Ackermann/89cbcc1e740a4591443ff4765a6ae8df0fdf5554. Below it, further pointers to related contributions. Also at: Learning Group Publication 5(2001) no.3, S.438.
  18. Subject retrieval in a networked environment : Proceedings of the IFLA Satellite Meeting held in Dublin, OH, 14-16 August 2001 and sponsored by the IFLA Classification and Indexing Section, the IFLA Information Technology Section and OCLC (2003) 0.03
    0.030484267 = product of:
      0.060968533 = sum of:
        0.060968533 = sum of:
          0.04743251 = weight(_text_:400 in 3964) [ClassicSimilarity], result of:
            0.04743251 = score(doc=3964,freq=2.0), product of:
              0.32745647 = queryWeight, product of:
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.049953517 = queryNorm
              0.14485134 = fieldWeight in 3964, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.015625 = fieldNorm(doc=3964)
          0.013536024 = weight(_text_:22 in 3964) [ClassicSimilarity], result of:
            0.013536024 = score(doc=3964,freq=2.0), product of:
              0.17492871 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049953517 = queryNorm
              0.07738023 = fieldWeight in 3964, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.015625 = fieldNorm(doc=3964)
      0.5 = coord(1/2)
    
    Content
    Contains the contributions: Devadason, F.J., N. Intaraksa and P. Patamawongjariya et al.: Faceted indexing application for organizing and accessing internet resources; Nicholson, D., S. Wake: HILT: subject retrieval in a distributed environment; Olson, T.: Integrating LCSH and MeSH in information systems; Kuhr, P.S.: Putting the world back together: mapping multiple vocabularies into a single thesaurus; Freyre, E., M. Naudi: MACS : subject access across languages and networks; McIlwaine, I.C.: The UDC and the World Wide Web; Garrison, W.A.: The Colorado Digitization Project: subject access issues; Vizine-Goetz, D., R. Thompson: Towards DDC-classified displays of Netfirst search results: subject access issues; Godby, C.J., J. Stuler: The Library of Congress Classification as a knowledge base for automatic subject categorization: subject access issues; O'Neill, E.T., E. Childress and R. Dean et al.: FAST: faceted application of subject terminology; Bean, C.A., R. Green: Improving subject retrieval with frame representation; Zeng, M.L., Y. Chen: Features of an integrated thesaurus management and search system for the networked environment; Hudon, M.: Subject access to Web resources in education; Qin, J., J. Chen: A multi-layered, multi-dimensional representation of digital educational resources; Riesthuis, G.J.A.: Information languages and multilingual subject access; Geisselmann, F.: Access methods in a database of e-journals; Beghtol, C.: The Iter Bibliography: International standard subject access to medieval and renaissance materials (400-1700); Slavic, A.: General library classification in learning material metadata: the application in IMS/LOM and CDMES metadata schemas; Cordeiro, M.I.: From library authority control to network authoritative metadata sources; Koch, T., H. Neuroth and M. Day: Renardus: Cross-browsing European subject gateways via a common classification system (DDC); Olson, H.A., D.B. Ward: Mundane standards, everyday technologies, equitable access; Burke, M.A.: Personal Construct Theory as a research tool in Library and Information Science: case study: development of a user-driven classification of photographs
    Footnote
    Review in: KO 31(2004) no.2, S.117-118 (D. Campbell): "This excellent volume offers 22 papers delivered at an IFLA Satellite meeting in Dublin, Ohio, in 2001. The conference gathered together information and computer scientists to discuss an important and difficult question: in what specific ways can the accumulated skills, theories and traditions of librarianship be mobilized to face the challenges of providing subject access to information in present and future networked information environments? The papers which grapple with this question are organized in a surprisingly deft and coherent way. Many conferences and proceedings have unhappy sessions that contain a hodge-podge of papers that didn't quite fit any other categories. As befits a good classificationist, editor I.C. McIlwaine has kept this problem to a minimum. The papers are organized into eight sessions, which split into two broad categories. The first five sessions deal with subject domains, and the last three deal with subject access tools. The five sessions and thirteen papers that discuss access in different domains appear in order of increasing intension. The first papers deal with access in multilingual environments, followed by papers on access across multiple vocabularies and across sectors, ending up with studies of domain-specific retrieval (primarily education). Some of the papers offer predictably strong work by scholars engaged in ongoing, long-term research. Gerard Riesthuis offers a clear analysis of the complexities of negotiating non-identical thesauri, particularly in cases where hierarchical structure varies across different languages. Hope Olson and Dennis Ward use Olson's familiar and welcome method of using provocative and unconventional theory to generate meliorative approaches to bias in general subject access schemes.
Many papers, on the other hand, deal with specific ongoing projects: Renardus, The High Level Thesaurus Project, The Colorado Digitization Project and The Iter Bibliography for medieval and Renaissance material. Most of these papers display a similar structure: an explanation of the theory and purpose of the project, an account of problems encountered in the implementation, and a discussion of the results, both promising and disappointing, thus far. Of these papers, the account of the Multilanguage Access to Subjects Project in Europe (MACS) deserves special mention. In describing how the project is founded on the principle of the equality of languages, with each subject heading language maintained in its own database, and with no single language used as a pivot for the others, Elisabeth Freyre and Max Naudi offer a particularly vivid example of the way the ethics of librarianship translate into pragmatic contexts and concrete procedures. The three sessions and nine papers devoted to subject access tools split into two kinds: papers that discuss the use of theory and research to generate new tools for a networked environment, and those that discuss the transformation of traditional subject access tools in this environment. In the new tool development area, Mary Burke provides a promising example of the bidirectional approach that is so often necessary: in her case study of user-driven classification of photographs, she uses personal construct theory to clarify the practice of classification, while at the same time using practice to test the theory. Carol Bean and Rebecca Green offer an intriguing combination of librarianship and computer science, importing frame representation techniques from artificial intelligence to standardize syntagmatic relationships to enhance recall and precision.
  19. Dworman, G.O.; Kimbrough, S.O.; Patch, C.: On pattern-directed search of archives and collections (2000) 0.03
    0.029645318 = product of:
      0.059290636 = sum of:
        0.059290636 = product of:
          0.11858127 = sum of:
            0.11858127 = weight(_text_:400 in 4289) [ClassicSimilarity], result of:
              0.11858127 = score(doc=4289,freq=2.0), product of:
                0.32745647 = queryWeight, product of:
                  6.5552235 = idf(docFreq=170, maxDocs=44218)
                  0.049953517 = queryNorm
                0.36212835 = fieldWeight in 4289, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.5552235 = idf(docFreq=170, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4289)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
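The relevance figures in these entries are Lucene ClassicSimilarity "explain" trees. A minimal sketch of the arithmetic one weight(...) node reports (the function name is illustrative, not Lucene's API; it just recomputes the tf-idf product the tree spells out):

```python
import math

def classic_similarity_weight(freq, idf, query_norm, field_norm):
    """Recompute one weight(...) node of a Lucene ClassicSimilarity
    explain tree: queryWeight (idf * queryNorm) multiplied by
    fieldWeight (sqrt(tf) * idf * fieldNorm)."""
    tf = math.sqrt(freq)                  # tf(freq) = sqrt(freq)
    query_weight = idf * query_norm       # queryWeight line in the tree
    field_weight = tf * idf * field_norm  # fieldWeight line in the tree
    return query_weight * field_weight

# Values copied from the explain tree for doc 4289 above
w = classic_similarity_weight(2.0, 6.5552235, 0.049953517, 0.0390625)
# The entry's final score then applies the two coord(1/2) factors: w * 0.5 * 0.5
```

With these inputs, w reproduces the listed 0.11858127, and the two coord(1/2) factors reduce it to the entry's 0.029645318.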
    
    Abstract
    This article begins by presenting and discussing the distinction between record-oriented and pattern-oriented search. Examples of record-oriented (or item-oriented) questions include: "What (or how many, etc.) glass items made prior to 100 A.D. do we have in our collection?" and "How many paintings featuring dogs do we have that were painted during the 19th century, and who painted them?" Standard database systems are well suited to answering such questions, based on the data in, for example, a collections management system. Examples of pattern-oriented questions include: "How does the (apparent) production of glass objects vary over time between 400 B.C. and 100 A.D.?" and "What other animals are present in paintings with dogs (painted during the 19th century and in our collection)?" Standard database systems are not well suited to answering these sorts of questions, even though the basic data is properly stored in them. To answer pattern-oriented questions, the accepted solution is to transform the underlying (relational) data into what is called the data cube or cross-tabulation form. We discuss how this can be done for non-numeric data, such as are found in museum collections and archives
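The cross-tabulation move the abstract describes can be sketched for non-numeric data in a few lines of Python; the painting records here are invented toy data, not from the article:

```python
from collections import Counter
from itertools import combinations

# Toy records: (object_id, set of animal terms) from painting metadata (hypothetical)
records = [
    ("p1", {"dog", "horse"}),
    ("p2", {"dog", "cat"}),
    ("p3", {"dog", "horse"}),
    ("p4", {"cat"}),
]

# Cross-tabulate pairwise co-occurrence of terms across the collection
cooc = Counter()
for _, animals in records:
    for a, b in combinations(sorted(animals), 2):
        cooc[(a, b)] += 1

# Pattern-oriented question: "What other animals appear in paintings with dogs?"
with_dogs = Counter()
for _, animals in records:
    if "dog" in animals:
        for other in animals - {"dog"}:
            with_dogs[other] += 1
```

The record-oriented question ("how many paintings feature dogs?") reads straight off the rows; the pattern-oriented one needs the aggregated co-occurrence counts, which is exactly the cross-tabulation the authors argue for.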
  20. Qin, J.: Semantic similarities between a keyword database and a controlled vocabulary database : an investigation in the antibiotic resistance literature (2000) 0.03
    0.029645318 = product of:
      0.059290636 = sum of:
        0.059290636 = product of:
          0.11858127 = sum of:
            0.11858127 = weight(_text_:400 in 4386) [ClassicSimilarity], result of:
              0.11858127 = score(doc=4386,freq=2.0), product of:
                0.32745647 = queryWeight, product of:
                  6.5552235 = idf(docFreq=170, maxDocs=44218)
                  0.049953517 = queryNorm
                0.36212835 = fieldWeight in 4386, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.5552235 = idf(docFreq=170, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4386)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The 'KeyWords Plus' in the Science Citation Index database represents an approach to combining citation and semantic indexing in describing the document content. This paper explores the similarities or dissimilarities between citation-semantic and analytic indexing. The dataset consisted of over 400 matching records in the SCI and MEDLINE databases on antibiotic resistance in pneumonia. The degree of similarity in indexing terms was found to vary on a scale from completely different to completely identical, with various levels in between. The within-document similarity in the 2 databases was measured by a variation on the Jaccard coefficient - the Inclusion Index. The average inclusion coefficient was 0.4134 for SCI and 0.3371 for MEDLINE. The 20 terms occurring most frequently in each database were identified. The 2 groups of terms shared the same terms that constitute the 'intellectual base' for the subject. Conceptual similarity was analyzed through scatterplots of matching and nonmatching terms vs. partially identical and broader/narrower terms. The study also found that both databases differed in assigning terms in various semantic categories. Implications of this research and further studies are suggested
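The abstract describes the Inclusion Index only as a variation on the Jaccard coefficient; one plausible reading (an assumption for illustration, not the paper's exact formula) measures how much of one record's term set is contained in the other's:

```python
def jaccard(a, b):
    """Classic Jaccard coefficient: overlap over union."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def inclusion_index(a, b):
    """Assumed inclusion variant: share of a's terms also found in b.
    The directionality here is an illustration, not the paper's definition."""
    a, b = set(a), set(b)
    return len(a & b) / len(a) if a else 0.0

# Hypothetical indexing terms for one matched document in each database
sci_terms = {"pneumonia", "antibiotic resistance", "penicillin"}
medline_terms = {"pneumonia", "penicillin", "drug resistance, microbial"}
```

For this toy pair the two shared terms give a Jaccard of 2/4 but an inclusion coefficient of 2/3, illustrating how an inclusion measure can sit higher than plain Jaccard on the same records.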

Types

  • a 568
  • m 68
  • el 46
  • s 32
  • b 24
  • x 2
  • i 1
  • n 1
  • r 1
