Search (3408 results, page 1 of 171)

  • year_i:[2000 TO 2010}
  1. Mas, S.; Marleau, Y.: Proposition of a faceted classification model to support corporate information organization and digital records management (2009) 0.16
    0.16239655 = product of:
      0.3247931 = sum of:
        0.062004488 = product of:
          0.18601346 = sum of:
            0.18601346 = weight(_text_:3a in 2918) [ClassicSimilarity], result of:
              0.18601346 = score(doc=2918,freq=2.0), product of:
                0.3309742 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03903913 = queryNorm
                0.56201804 = fieldWeight in 2918, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2918)
          0.33333334 = coord(1/3)
        0.07677513 = sum of:
          0.044751476 = weight(_text_:theory in 2918) [ClassicSimilarity], result of:
            0.044751476 = score(doc=2918,freq=2.0), product of:
              0.16234003 = queryWeight, product of:
                4.1583924 = idf(docFreq=1878, maxDocs=44218)
                0.03903913 = queryNorm
              0.27566507 = fieldWeight in 2918, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.1583924 = idf(docFreq=1878, maxDocs=44218)
                0.046875 = fieldNorm(doc=2918)
          0.032023653 = weight(_text_:29 in 2918) [ClassicSimilarity], result of:
            0.032023653 = score(doc=2918,freq=2.0), product of:
              0.13732746 = queryWeight, product of:
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.03903913 = queryNorm
              0.23319192 = fieldWeight in 2918, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.046875 = fieldNorm(doc=2918)
        0.18601346 = weight(_text_:2f in 2918) [ClassicSimilarity], result of:
          0.18601346 = score(doc=2918,freq=2.0), product of:
            0.3309742 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03903913 = queryNorm
            0.56201804 = fieldWeight in 2918, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=2918)
      0.5 = coord(3/6)
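
    The breakdown above is Lucene's ClassicSimilarity (tf-idf) explain output: each term weight is a query-side factor (idf x queryNorm) times a field-side factor (tf x idf x fieldNorm), and the hit score is the sum of the term weights scaled by a coordination factor. As a hedged illustration (a minimal sketch, not the database's actual code), the Python below reproduces the weight of _text_:3a in this entry from the constants printed above, using the ClassicSimilarity defaults tf = sqrt(freq) and idf = 1 + ln(maxDocs/(docFreq+1)):

      import math

      def classic_term_weight(freq, doc_freq, max_docs, query_norm, field_norm):
          # ClassicSimilarity defaults: tf = sqrt(termFreq), idf = 1 + ln(maxDocs/(docFreq+1))
          tf = math.sqrt(freq)                               # 1.4142135 for freq=2.0
          idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))  # 8.478011 for docFreq=24
          query_weight = idf * query_norm                    # 0.3309742
          field_weight = tf * idf * field_norm               # 0.56201804
          return query_weight * field_weight

      # Reproduces weight(_text_:3a in 2918) = 0.18601346:
      print(classic_term_weight(2.0, 24, 44218, 0.03903913, 0.046875))

    The final 0.16239655 for this hit then follows as the summed term weights (0.3247931) times coord(3/6) = 0.5.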
    
    Abstract
    The employees of an organization often use a personal hierarchical classification scheme to organize digital documents that are stored on their own workstations. As this may make it hard for other employees to retrieve these documents, there is a risk that the organization will lose track of needed documentation. Furthermore, the inherent boundaries of such a hierarchical structure require making arbitrary decisions about which specific criteria the classification will be based on (for instance, the administrative activity or the document type, although a document can have several attributes and require classification in several classes). A faceted classification model to support corporate information organization is proposed. Partially based on Ranganathan's facet theory, this model aims not only to standardize the organization of digital documents, but also to simplify the management of a document throughout its life cycle for both individuals and organizations, while ensuring compliance with regulatory and policy requirements.
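    The contrast the abstract draws can be illustrated with a minimal sketch (hypothetical facet names, not the authors' actual schema): a hierarchical scheme forces one primary criterion into the folder path, while a faceted record keeps each attribute independently selectable.

      from dataclasses import dataclass, field

      # Hierarchical scheme: one arbitrary path fixes the primary criterion.
      hierarchical_path = "Finance/Invoices/2009/invoice_0142.pdf"

      @dataclass
      class FacetedRecord:
          # Facets are independent attributes; none is forced to be primary.
          title: str
          facets: dict = field(default_factory=dict)

      doc = FacetedRecord(
          title="invoice_0142.pdf",
          facets={
              "activity": "accounts payable",  # administrative activity
              "doc_type": "invoice",           # document type
              "retention": "7 years",          # life-cycle / compliance rule
          },
      )

      def select(records, **criteria):
          """Return records matching every requested facet value."""
          return [r for r in records
                  if all(r.facets.get(k) == v for k, v in criteria.items())]

      print(select([doc], doc_type="invoice"))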
    Date
    29. 8.2009 21:15:48
    Footnote
    Cf.: http://ieeexplore.ieee.org/iel5/4755313/4755314/04755480.pdf?arnumber=4755480.
  2. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.13
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
  3. Schrodt, R.: Tiefen und Untiefen im wissenschaftlichen Sprachgebrauch (2008) 0.11
    Content
    See also: https://studylibde.com/doc/13053640/richard-schrodt. See also: http://www.univie.ac.at/Germanistik/schrodt/vorlesung/wissenschaftssprache.doc.
  4. Wainer, H.: Picturing the uncertain world : how to understand, communicate, and control uncertainty through graphical display (2009) 0.11
    Abstract
    In his entertaining and informative book "Graphic Discovery", Howard Wainer unlocked the power of graphical display to make complex problems clear. Now he's back with Picturing the Uncertain World, a book that explores how graphs can serve as maps to guide us when the information we have is ambiguous or incomplete. Using a visually diverse sampling of graphical display, from heartrending autobiographical displays of genocide in the Kovno ghetto to the 'Pie Chart of Mystery' in a "New Yorker" cartoon, Wainer illustrates the many ways graphs can be used - and misused - as we try to make sense of an uncertain world. "Picturing the Uncertain World" takes readers on an extraordinary graphical adventure, revealing how the visual communication of data offers answers to vexing questions yet also highlights the measure of uncertainty in almost everything we do. Are cancer rates higher or lower in rural communities? How can you know how much money to sock away for retirement when you don't know when you'll die? And where exactly did nineteenth-century novelists get their ideas? These are some of the fascinating questions Wainer invites readers to consider. Along the way he traces the origins and development of graphical display, from William Playfair, who pioneered the use of graphs in the eighteenth century, to instances today where the public has been misled through poorly designed graphs. We live in a world full of uncertainty, yet it is within our grasp to take its measure. Read "Picturing the Uncertain World" and learn how.
    LCSH
    Uncertainty (Information theory) / Graphic methods
    Communication in science / Graphic methods
    Subject
    Uncertainty (Information theory) / Graphic methods
    Communication in science / Graphic methods
  5. Vetere, G.; Lenzerini, M.: Models for semantic interoperability in service-oriented architectures (2005) 0.10
    Content
    Cf.: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5386707.
  6. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.09
    Abstract
    The explosion of possibilities for ubiquitous content production has pushed the information overload problem to a level of complexity that can no longer be managed by traditional modelling approaches. Because of their purely syntactic nature, traditional information retrieval approaches have not succeeded in treating content itself (i.e. its meaning, rather than its representation), which makes the results of a retrieval process of very low usefulness for the user's task at hand. In the last ten years, ontologies have evolved from an interesting conceptualisation paradigm into a very promising (semantic) modelling technology, especially in the context of the Semantic Web. From the information retrieval point of view, ontologies enable a machine-understandable form of content description, so that the retrieval process can be driven by the meaning of the content. However, the retrieval process is inherently ambiguous: a user who is unfamiliar with the underlying repository and/or query syntax only approximates his information need in a query. This implies the need to involve the user more actively in the retrieval process, in order to close the gap between the meaning of the content and the meaning of the user's query (i.e. his information need). This thesis lays the foundation for such an ontology-based interactive retrieval process, in which the retrieval system interacts with the user in order to interpret the meaning of his query conceptually, while the underlying domain ontology drives the conceptualisation process. In that way the retrieval process evolves from a query-evaluation procedure into a highly interactive cooperation between the user and the retrieval system, in which the system tries to anticipate the user's information need and to deliver the relevant content proactively. Moreover, the notion of the relevance of content for a user's query evolves from a content-dependent artefact into a multidimensional, context-dependent structure that is strongly influenced by the user's preferences. This cooperation is realized as the so-called Librarian Agent Query Refinement Process. In order to clarify the impact of an ontology on the retrieval process (regarding its complexity and quality), a set of methods and tools for different levels of content and query formalisation is developed, ranging from pure ontology-based inferencing to keyword-based querying in which semantics emerges automatically from the results. Our evaluation studies have shown that the ability to conceptualize a user's information need in the right manner, and to interpret the retrieval results accordingly, is a key issue in realizing much more meaningful information retrieval systems.
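    A minimal sketch of the kind of interaction described (the toy ontology, terms and function are hypothetical, not Stojanovic's actual Librarian Agent implementation): the system reads an ambiguous query term against a domain ontology and offers its narrower concepts as candidate refinements for the user to choose from.

      # Toy domain ontology: concept -> narrower concepts (hypothetical data).
      ONTOLOGY = {
          "jaguar": ["jaguar (animal)", "Jaguar (car)"],
          "jaguar (animal)": ["Panthera onca"],
      }

      def refinements(query_term, ontology):
          """Suggest narrower interpretations of an ambiguous query term."""
          return ontology.get(query_term, [])

      suggestions = refinements("jaguar", ONTOLOGY)
      if suggestions:
          # An interactive system would let the user pick one and re-run the query.
          print("Did you mean:", "; ".join(suggestions))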
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  7. Trentin, G.: Graphic tools for knowledge representation and informal problem-based learning in professional online communities (2007) 0.08
    Abstract
    The use of graphical representations is very common in information technology and engineering. Although these same tools could be applied effectively in other areas, they are not used because they are hardly known or completely unheard of. This article discusses the results of experimentation with graphical approaches to knowledge representation during research, analysis and problem-solving in the health care sector. The experimentation was carried out on concept mapping and Petri Nets, developed collaboratively online with the aid of the CmapTools and WoPeD graphic applications. Two distinct professional communities were involved in the research, both pertaining to the Local Health Units in Tuscany. One community is made up of head physicians and health care managers, whilst the other is formed by technical staff from the Department of Nutrition and Food Hygiene. It emerged from the experimentation that concept maps are considered more effective for analyzing the knowledge domain related to the problem to be faced (a description of what it is), whereas Petri Nets are more effective for studying and formalizing its possible solutions (a description of what to do). For the same reason, those involved in the experimentation have proposed the complementary rather than alternative use of the two knowledge representation methods as a support for professional problem-solving.
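    Since the abstract contrasts concept maps (describing what is) with Petri Nets (formalizing what to do), here is a minimal, hypothetical sketch of the firing rule that makes a Petri-net process model executable; the place and transition names are invented, not taken from the study.

      # Transitions consume tokens from input places and produce tokens in outputs.
      marking = {"request received": 1, "triage done": 0, "plan drafted": 0}
      transitions = {
          "triage": (["request received"], ["triage done"]),
          "draft plan": (["triage done"], ["plan drafted"]),
      }

      def fire(name, marking, transitions):
          """Fire a transition if every one of its input places holds a token."""
          inputs, outputs = transitions[name]
          if not all(marking[p] > 0 for p in inputs):
              return False
          for p in inputs:
              marking[p] -= 1
          for p in outputs:
              marking[p] += 1
          return True

      fire("triage", marking, transitions)
      print(marking)  # {'request received': 0, 'triage done': 1, 'plan drafted': 0}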
    Date
    28. 2.2008 14:16:29
  8. Yukimo Kobashio, N.; Santos, R.N.M.: Information organization and representation by graphic devices : an interdisciplinary approach (2007) 0.07
    Date
    29.12.2007 18:17:29
  9. Donsbach, W.: Wahrheit in den Medien : über den Sinn eines methodischen Objektivitätsbegriffes (2001) 0.07
    Source
    Politische Meinung. 381(2001) no.1, pp.65-74 [https://www.dgfe.de/fileadmin/OrdnerRedakteure/Sektionen/Sek02_AEW/KWF/Publikationen_Reihe_1989-2003/Band_17/Bd_17_1994_355-406_A.pdf]
  10. Rafferty, P.; Hidderley, R.: ¬A survey of image retrieval tools (2004) 0.06
    Abstract
    Issues regarding interpretation and the locus of meaning in the image sign (objectivist, constructionist or subjectivist) are clearly important in relation to reading images and are well documented in the literature (Svenonius, 1994; Shatford, 1984, 1986; Layne, 1994; Enser, 1991, 1995; Rafferty, Brown & Hidderley, 1996). The same issues of interpretation and reading pertain to image indexing tools, which are themselves the result of choice, design and construction. Indexing becomes constrained and specific when a particular controlled vocabulary is adhered to. Indexing tools can often work better for one type of document than another. In this paper we discuss the different 'flavours' of three image retrieval tools: the Art and Architecture Thesaurus, Iconclass and the Library of Congress Thesaurus for Graphic Materials.
    Date
    29. 8.2004 19:07:01
    Object
    Thesaurus for Graphic Materials
  11. Warner, J.: Information and redundancy in the legend of Theseus (2003) 0.06
    Abstract
    This paper considers an instance of non-verbal graphic communication from the legend of Theseus, in terms of information theory. The efficient cause of a failure in communication is regarded as a selection error and the formal cause as the absence of redundancy from the signals (a binary contrast between a black and a white sail) for transmission. Two considerations are then introduced. First, why should such a system of signalling have been succeeded by a graphic communication system, in alphabetic written language, so strongly marked by its redundancy? Second, why has information theory been so successful in describing systems for signal transmission but far less productive for modelling human-to-human communication, at the level of meaning or of the effects of messages on recipients? The legend is read historically, adopting specific insights, a method of interpretation, and a historical schema from Vico. The binary code used for the signal transmission is located as a rare but significant transitional form, mediating between heroic emblems and written language. For alphabetic written language, a link to the sounds of oral utterance replaces the connection to the mental states of the human information source and destination. It is also suggested that redundancy was deliberately introduced to counteract the effects of selection errors and noise. With regard to information theory, it is suggested that conformity with necessary conditions for signal transmission, which may include the introduction of redundancy, cannot be expected to yield insights into communication, at the level of meaning or the effects of messages.
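    The role of redundancy here can be made concrete with a small worked example (mine, not Warner's): a single black-or-white sail carries one bit with no redundancy, so a single selection error is unrecoverable, whereas sending the bit three times and taking a majority vote drops the error rate from p to 3p^2(1-p) + p^3.

      def majority_error(p):
          """Error probability of a 3-fold repetition code with bit-error rate p."""
          return 3 * p**2 * (1 - p) + p**3

      p = 0.1
      print(p, "->", majority_error(p))  # 0.1 -> 0.028: redundancy absorbs single errors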
  12. Schneider, J.W.: Emerging frameworks and methods : The Fourth International Conference on Conceptions of Library and Information Science (CoLIS4), The Information School, University of Washington, Seattle, Washington, USA, July 21-25, 2002 (2002) 0.05
    Footnote
    Report on the conference, with short abstracts of the 18 papers (including BELKIN, N.J.: A classification of interactions with information; INGWERSEN, P.: Cognitive perspectives of document representation; HJOERLAND, B.: Principia informatica: foundational theory of the concepts of information and principles of information services; TUOMINEN, K. et al.: Discourse, cognition and reality: towards a social constructionist meta-theory for library and information science).
    Source
    Knowledge organization. 29(2002) nos.3/4, pp.231-234
  13. Gordon, A.S.: Browsing image collections with representations of common-sense activities (2001) 0.05
    Abstract
    To support browsing-based subject access to image collections, it is necessary to provide users with networks of subject terms that are organized in an intuitive, richly interconnected manner. A principled approach to this task is to organize the subject terms by their relationship to activity contexts that are commonly understood among users. This article describes a methodology for creating networks of subject terms by manually representing a large number of common-sense activities that are broadly related to image subject terms. The application of this methodology to the Library of Congress Thesaurus for Graphic Materials produced 768 representations that supported users of a prototype browsing-based retrieval system in searching large, indexed photograph collections
    Date
    29. 9.2001 18:43:45
  14. Herrero-Solana, V.; Moya Anegón, F. de: Graphical Table of Contents (GTOC) for library collections : the application of UDC codes for the subject maps (2003) 0.05
    Abstract
    The representation of information contents by graphical maps is an extended ongoing research topic. In this paper we introduce the application of UDC codes for the development of subject maps. We use the following graphic representation methodologies: 1) multidimensional scaling (MDS), 2) cluster analysis, 3) neural networks (Self-Organizing Map, SOM). Finally, we draw conclusions about the viability of applying each kind of map. 1. Introduction. Advanced techniques for information retrieval (IR) currently make up one of the most active areas of research in the field of library and information science. New models representing document content are replacing the classic systems, in which the search terms supplied by the user were compared against the indexing terms existing in the inverted files of a database. One of the topics most often studied in recent years is bibliographic browsing, a good complement to querying strategies. Since the 80s, many authors have treated this topic. For example, Ellis establishes that browsing is based on three different types of tasks: identification, familiarization and differentiation (Ellis, 1989). On the other hand, Cove indicates three different browsing types: search browsing, general-purpose browsing and serendipity browsing (Cove, 1988). Marcia Bates presents six different types (Bates, 1989), although the classification of Bawden is the one that really interests us: 1) similarity comparison, 2) structure-driven, 3) global vision (Bawden, 1993). Global-vision browsing implies the use of graphic representations, which we will call map displays, that allow the user to get a global idea of the nature and structure of the information in the database. In the 90s, several authors worked on this research line, developing different types of maps. One of the most active was Xia Lin, who introduced the concept of a Graphical Table of Contents (GTOC), comparing the maps to a true table of contents based on graphic representations (Lin, 1996). Lin applied the SOM algorithm to his own personal bibliography, analyzed in terms of the words of the title and abstract fields, and represented in a two-dimensional map (Lin, 1997). Later on, Lin applied this type of map to create website GTOCs, through a Java application.
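    As a hedged sketch of the first methodology listed (hypothetical dissimilarity values, not the authors' UDC corpus), multidimensional scaling takes a matrix of pairwise distances between class codes and projects them onto a two-dimensional subject map:

      import numpy as np
      from sklearn.manifold import MDS

      # Hypothetical pairwise dissimilarities between four UDC classes.
      labels = ["02 Libraries", "004 Computing", "51 Mathematics", "53 Physics"]
      D = np.array([
          [0.0, 0.4, 0.8, 0.9],
          [0.4, 0.0, 0.5, 0.6],
          [0.8, 0.5, 0.0, 0.3],
          [0.9, 0.6, 0.3, 0.0],
      ])

      # Project the distance matrix onto two dimensions for a subject map.
      mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
      coords = mds.fit_transform(D)
      for label, (x, y) in zip(labels, coords):
          print(f"{label}: ({x:.2f}, {y:.2f})")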
    Date
    12. 9.2004 14:31:22
  15. Jörgensen, C.: Image access : introduction and overview (2001) 0.05
    Abstract
    We are, it appears, on the hinge of an important historical swing back towards what may be called the primacy of the image. For the last few centuries, words have been the privileged form of communication and the preferred means of education. A shift has taken place, however, within the last several decades, and images have been reasserting their primacy as immediate and influential messengers. This change was heralded some years ago in a slim volume entitled "The Telling Image: The Changing Balance between Pictures and Words in a Technological Age" (Davies, Bathurst, & Bathurst). The author of this book describes a past in which images (e.g., pictograms, ideograms) were the only form of written communication for 25,000 out of the 30,000 years of human recorded experience. The invention of the phonetic alphabet began to change this. It is only during the last 500 years, with the invention of printing, that pictures as serious "messengers" receded well into the background. One reason for this was the sheer difficulty in producing images. However, with the widespread availability of easy-to-use image creation technologies, images are again being widely used in education, training, and persuasion, not to mention entertainment. The rise in image production and use has been accompanied by the theory of hemispheric lateralization (more popularly referred to as "left-brain/right brain" abilities), which arose during the last 40 years (Jaynes, 1976; Levy, 1974; Penfield & Roberts, 1959). This theory holds that functions of cognitive processing are located primarily in either the left or right hemispheres of the brain. The brain's left hemisphere seems to be linked to language processing, and is well exercised by the overall emphasis on speech and text in education and information systems. The brain's right hemisphere handles spatial reasoning, symbolic processing, and pictorial interpretation. The widespread use and acceptance of Graphic User Interfaces (GUIs) in computer systems and the development of iconic programming languages demonstrate that visual mechanisms appeal to a broader range of cognitive abilities than text alone. In a sense, then, images are the hinge between textual representation and direct experience.
    Date
    29. 9.2001 18:39:33
  16. Zhou, L.; Zhang, D.: NLPIR: a theoretical framework for applying Natural Language Processing to information retrieval (2003) 0.04
    Abstract
    Zhou and Zhang believe that for the potential of natural language processing (NLP) to be reached in information retrieval, a framework for guiding the effort should be in place. They provide a graphic model that identifies different levels of natural language processing effort during the query-document matching process. A direct matching approach uses little NLP; an expansion approach with thesauri, a little more; but an extraction approach will often use a variety of NLP techniques, as well as statistical methods. A transformation approach, which creates intermediate representations of documents and queries, is a step higher in NLP usage, and a uniform approach, which relies on a body of knowledge beyond that of the documents and queries to provide inference and sense-making prior to matching, would require a maximum NLP effort.
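    To make the lower end of that spectrum concrete, here is a minimal, hypothetical sketch (invented thesaurus and documents) contrasting the direct-matching approach, which compares surface forms only, with the thesaurus-expansion approach one level up:

      THESAURUS = {"car": {"automobile", "vehicle"}}  # hypothetical synonym ring

      def direct_match(query_terms, doc_terms):
          # No NLP: count query terms that appear verbatim in the document.
          return len(set(query_terms) & set(doc_terms))

      def expanded_match(query_terms, doc_terms):
          # Expansion approach: add thesaurus synonyms before matching.
          expanded = set(query_terms)
          for term in query_terms:
              expanded |= THESAURUS.get(term, set())
          return len(expanded & set(doc_terms))

      doc = ["the", "automobile", "industry"]
      print(direct_match(["car"], doc))    # 0: surface forms differ
      print(expanded_match(["car"], doc))  # 1: the thesaurus bridges the gap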
  17. Galvez, C.; Moya-Anegón, F. de: ¬An evaluation of conflation accuracy using finite-state transducers (2006) 0.04
    Abstract
    Purpose - To evaluate the accuracy of conflation methods based on finite-state transducers (FSTs). Design/methodology/approach - Incorrectly lemmatized and stemmed forms may lead to the retrieval of inappropriate documents. Experimental studies to date have focused on retrieval performance, but very few on conflation performance. The process of normalization we used involved a linguistic toolbox that allowed us to construct, through graphic interfaces, electronic dictionaries represented internally by FSTs. The lexical resources developed were applied to a Spanish test corpus for merging term variants in canonical lemmatized forms. Conflation performance was evaluated in terms of an adaptation of recall and precision measures, based on accuracy and coverage, not actual retrieval. The results were compared with those obtained using a Spanish version of the Porter algorithm. Findings - The conclusion is that the main strength of lemmatization is its accuracy, whereas its main limitation is the underanalysis of variant forms. Originality/value - The report outlines the potential of transducers in their application to normalization processes.
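    A toy illustration of the kind of conflation scoring described (invented English forms and a crude suffix-stripper, standing in for the authors' Spanish FST dictionaries and the Porter stemmer): conflation accuracy here is simply the share of word forms mapped to their correct canonical form.

      # Gold standard: word form -> correct canonical (lemmatized) form.
      GOLD = {"studies": "study", "studying": "study", "went": "go"}
      LEXICON = dict(GOLD)  # an FST-backed dictionary behaves like exact lookup

      def crude_stem(word):
          """Naive suffix stripping, standing in for a stemming algorithm."""
          for suffix in ("ies", "ing", "s"):
              if word.endswith(suffix):
                  return word[: -len(suffix)]
          return word

      def accuracy(conflate):
          return sum(conflate(w) == lemma for w, lemma in GOLD.items()) / len(GOLD)

      print("stemmer:", accuracy(crude_stem))                   # 1/3: over-/understemming
      print("lexicon:", accuracy(lambda w: LEXICON.get(w, w)))  # 1.0 on known forms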
  18. Dillon, A.; Turnbull, D.: Information architecture (2009) 0.04
    Abstract
    Information architecture has become one of the latest areas of excitement within the library and information science (LIS) community, largely resulting from the recognition it garners from those outside of the field for the methods and practices of information design and management long seen as core to information science. The term "information architecture" (IA) was coined by Richard Wurman in 1975 to describe the need to transform data into meaningful information for people to use, a not entirely original idea, but certainly a first-time conjunction of the terms into the now common IA label. Building on concepts in architecture, information design, typography, and graphic design, Wurman's vision of a new field lay dormant for the most part until the emergence of the World Wide Web in the 1990s, when interest in information organization and structures became widespread. The term came into vogue among the broad web design community as a result of the need to find a way of communicating shared interests in the underlying organization of digitally accessed information.
  19. Agosto, D.E.: Bounded rationality and satisficing in young people's Web-based decision making (2002) 0.04
    Abstract
    This study investigated Simon's behavioral decision-making theories of bounded rationality and satisficing in relation to young people's decision making on the World Wide Web, and considered the role of personal preferences in Web-based decisions. It employed a qualitative research methodology involving group interviews with 22 adolescent females. Data analysis took the form of iterative pattern coding using QSR NUD*IST Vivo qualitative data analysis software. Data analysis revealed that the study participants did operate within the limits of bounded rationality. These limits took the form of time constraints, information overload, and physical constraints. Data analysis also uncovered two major satisficing behaviors: reduction and termination. Personal preference was found to play a major role in Web site evaluation in the areas of graphic/multimedia and subject content preferences. This study has related implications for Web site designers and for adult intermediaries who work with young people and the Web.
  20. Saeed, K.; Dardzinska, A.: Natural language processing : word recognition without segmentation (2001) 0.04
    Abstract
    In an earlier article about methods for the recognition of machine-written and hand-written cursive letters, we presented a model showing the possibility of processing, classifying, and hence recognizing such scripts as images. The practical results we obtained encouraged us to extend the theory to an algorithm for word recognition. In this article, we introduce our ideas, describe our achievements, and present our results of testing words for recognition without segmentation. This would lead to the possibility of applying the methods used in this work, together with other previously developed algorithms, to process whole sentences and, hence, written and spoken texts, with the goal of automatic recognition.
    Date
    16.12.2001 18:29:38

Types

  • a 2885
  • m 368
  • el 170
  • s 119
  • b 31
  • x 28
  • i 16
  • r 7
  • n 4
  • p 1