LivingKnowledge's goal is to bring a new quality to search and knowledge management technology, yielding more concise, complete and contextualised search results.
Posted February 15, 2010
The modern work on context dates back to around 1980, when various philosophers of language and of science (in particular David Kaplan, David Lewis and Yehoshua Bar-Hillel) wrote foundational papers on the problem. The issue was how to account for the fact that a linguistic expression (e.g., “I am here now”) has a different meaning depending on contextual factors (e.g., the time and place at which the sentence is uttered). The difficulty was that sentences like the one above are always true (today we would say they are true in any context) despite not being analytic. This work triggered much further work in the semantics of natural language and in formal logics (e.g., by Bar-Hillel, Andrea Bonomi and Paolo Casalegno) which tried to identify a set of indexes (very similar to what we call the “dimensions of diversity” in this proposal) whose values could affect the truth values of sentences.
Later on, in the mid-to-late 1980s, the issue was taken up in Cognitive Science by G. Fauconnier with his work and book on “Mental Spaces”, and by various others building on his work, including John Dinsmore’s work on mental spaces from a functional perspective. The main goal of Fauconnier’s work was to develop a powerful theory of human knowledge representation and linguistic processing which could handle a variety of problems in linguistics and the philosophy of language in a simple, uniform and intuitively plausible way. Dinsmore further elaborated Fauconnier’s results and tried to answer the question of what the general structure and role of mental spaces in cognition are. Dinsmore argued that mental spaces are a means for organizing knowledge in support of a general inference method called “simulative reasoning”.
In Computer Science the role of context and its impact on Knowledge Representation and management was independently studied in the early 90’s by Fausto Giunchiglia (see, e.g., [Giunchiglia 1993](1), [Ghidini and Giunchiglia 2001](2)) and John McCarthy (see, e.g., [McCarthy 1993](3)), and somewhat refined in [Benerecetti et al. 2001](4). Guha implemented a context mechanism as part of the Cyc system [Lenat and Guha 1989](5), a very large commonsense knowledge base including millions of facts. Various mechanisms for partitioning data and knowledge bases, similar in spirit to the Cyc approach but smaller in scale, were later developed. The notion of localisation, similar to the notion of context as described in [Giunchiglia 1993](1), has been exploited as a way to formalize the interaction among multiple peer-to-peer databases [Bernstein et al. 2002](6).
As far as we know, the problem of diversity in data and knowledge representation has never been dealt with, and the same applies to the problem of how context can be used to represent diversity. The only preliminary ideas about the foundations of this topic were presented in the invited talk [Giunchiglia 2006](7), which discusses, very preliminarily, how context can be used to represent diversity. The use of context and diversity proposed in this project implements and develops those ideas. As far as we know this approach is totally novel and has never been taken before; the same applies to the approach taken in this project, where opinions, bias and their evolution in time are studied as different manifestations of diversity.
(1) F. Giunchiglia. Contextual reasoning. Epistemologia, 16, 1993.
(2) C. Ghidini and F. Giunchiglia. Local Models Semantics, or contextual reasoning = locality + compatibility. Artificial Intelligence, 127(2), 221-259, 2001.
(3) J. McCarthy. Notes on formalizing contexts. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), 555-560, 1993.
(4) M. Benerecetti, P. Bouquet and C. Ghidini. On the dimensions of context dependence: partiality, approximation, and perspective. Proceedings of the Third International and Interdisciplinary Conference on Modeling and Using Context (CONTEXT'01), Springer Verlag, Lecture Notes in AI Volume 2116, July 2001.
(5) D. B. Lenat and R. V. Guha. Building Large Knowledge-Based Systems: Representation and Inference in the Cyc Project. Addison Wesley, 1989.
(6) P. Bernstein, F. Giunchiglia, A. Kementsietsidis, J. Mylopoulos, L. Serafini and I. Zaihrayeu. Data Management for Peer-to-Peer Computing: A Vision. Proceedings of the 5th WebDB Workshop, Madison, Wisconsin, USA, 89-94, 2002.
(7) F. Giunchiglia. Managing Diversity in Knowledge. Keynote talk, European Conference on Artificial Intelligence (ECAI-06), 2006.
The usual tools for resource discovery are based on classic ‘browse’ and ‘search’ methods. Such systems return terms pulled out of their context and hence seldom achieve user satisfaction [Levene 2006](1).
One main problem with present search engines is that they are uniterm-based: the end user may combine with boolean operators the words/phrases that occur in a web page, which library science calls post-coordinate indexing. Library and Information Science (LIS) researchers have proposed context-sensitive indexing methods such as POPSI [Bhattacharyya 1981](2), which perform pre-coordinate indexing (words are represented in the index along with their semantic context). However, to achieve precision in retrieval, the crux of the matter is the way information or knowledge is represented and organized.
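The distinction can be sketched with a toy index (all terms, documents and subject strings here are invented for illustration; this is not how POPSI or any cited system is actually implemented):

```python
# Post-coordinate indexing: an inverted index of isolated terms,
# combined only at query time with boolean operators.
post_index = {
    "treatment": {1, 2},
    "cancer":    {1, 3},
    "lung":      {2, 3},
}

def post_query(*terms):
    """Boolean AND over single-term postings."""
    sets = [post_index.get(t, set()) for t in terms]
    result = sets[0]
    for s in sets[1:]:
        result = result & s
    return result

# Pre-coordinate indexing: each entry is a full subject string built at
# indexing time, so the semantic context travels with the term
# (hypothetical POPSI-style strings shown).
pre_index = {
    "Medicine. Lung. Cancer": {3},
    "Medicine. Cancer. Treatment": {1},
}

print(post_query("cancer", "treatment"))              # docs matching both isolated terms
print(pre_index["Medicine. Cancer. Treatment"])       # docs for the whole context
```

The post-coordinate query loses the relation between the terms (any document containing both words matches), while the pre-coordinated entry preserves it.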
A representation based on faceted ontologies of information in a generic form would lead to semantic retrieval. The knowledge structure to be deployed for the purpose corresponds to a semantic framework that is essentially a generalisation, or abstraction, of the formal representation of domains [Prasad and Guha 2007](3). Each domain is envisaged as consisting of divisions or pieces of knowledge called facets, where a facet is a distinctive division of the domain or subject being conceptualized. Each facet in turn contains a hierarchy of concepts of the domain, and many such facets together comprise a subject domain. This model is based on Ranganathan’s theory of facetisation [Ranganathan 1967](4). The generic manifestations of such subject representations lead to faceted ontologies.
A formalized hierarchical structure of concepts forms a facet. All such facets are assembled together as required to describe information in each scenario. The facets are recognized as entities, actions and properties of the entities.
While the facets themselves are distinct divisions of the domain and contain the concepts belonging to them, there are rules that generate, for any given term, surface strings that carry its context with it. These are formalized strings that represent concepts in their entire context, generated for tracing a term or its variant forms across domains by using rules in the system about how each term is related to the others, and in turn the relations between the resources [Prasad and Madalli 2008](5).
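As a rough illustration of the idea (our own sketch with hypothetical facets and terms, not the cited systems' actual rules), a term can be traced up its facet hierarchy to generate a contextualised surface string:

```python
# Hypothetical facet hierarchies: within each facet, child -> parent
# (None marks the facet root).
facets = {
    "Entity":   {"Lung": "Body", "Body": None},
    "Property": {"Carcinoma": "Disease", "Disease": None},
    "Action":   {"Radiotherapy": "Treatment", "Treatment": None},
}

def surface_string(facet, term):
    """Trace a term up its facet hierarchy and emit a string that
    represents the concept together with its full context."""
    hierarchy = facets[facet]
    path = [term]
    while hierarchy.get(path[-1]) is not None:
        path.append(hierarchy[path[-1]])
    return f"{facet}: " + " / ".join(reversed(path))

print(surface_string("Entity", "Lung"))          # Entity: Body / Lung
print(surface_string("Action", "Radiotherapy"))  # Action: Treatment / Radiotherapy
```

Indexing such strings, rather than bare terms, is what lets retrieval distinguish the same word occurring under different facets or domains.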
Classically, libraries had systems that processed subjects or domains and built representations such as subject indices. These embedded enough contextual information, through the method of facetisation and synthesis, to form a semantic formalisation of the domain scope of the library collections. A faceted ontology model can be regarded as an emulation of this function, catering to what is known as the subject approach to information. Metadata alone will not suffice to achieve precision in retrieval, as it only describes the resource itself. It is the semantic system, with faceted knowledge representation, that can represent the content of the resource in its entire scope, so that users can enter the system at the point they want to explore and retrieve information in the required form.
(1) M. Levene. An Introduction to Search Engines and Web Navigation. Addison Wesley, New York, 2006.
(2) G. Bhattacharyya. Subject Indexing Language: its theory and practice. Proceedings of the DRTC Refresher Seminar – 13, New Developments in LIS in India, DRTC, 1981.
(3) A. R. D. Prasad and N. Guha. Expressing Faceted Subject Indexing in SKOS/RDF. In International Conference on Semantic Web and Digital Libraries, Bangalore, 21-23 February 2007.
(4) S. R. Ranganathan. Prolegomena to Library Classification. (Ranganathan Series in Library Science, 20), Asia Publishing House, London, 1967.
(5) A. R. D. Prasad and D. Madalli. Faceted Infrastructure for Semantic Digital Libraries. Library Review, forthcoming.
The idea of analyzing websites as multimodal texts made up of visual, linguistic and spatial resources, and of developing searchable corpora of multimodal texts, assumes a potential contribution of multimodal analysis to the current project, which needs to be contextualized in relation to current thinking in social semiotics [Halliday 1978](1). This thinking views the meaning-making process, and hence theories of bias and diversity, with reference to specific societies and cultures as well as to society and culture in general.
Social semiotics’ success can be measured in terms of its characterisation of the evolution of society, cultures, media and texts as interdependent processes – in a word, its disciplined study of context. Fittingly, social semiotics is itself currently undergoing a process of change to meet the challenges of new media: it is extending beyond its linguistic origins to account for the growing importance, inter alia, of sound and visual images, and to explain how and why new modes of communication come to be combined in both traditional and digital media [Kress and van Leeuwen 1996](2).
In the last 10-15 years, multimodalists such as Baldry and Thibault [Baldry and Thibault 2006a(3), Lemke 1998(4), Lemke 2000(5), O’Halloran 2001(6)] and many others have built on Halliday’s framework by providing new ‘grammars’ for semiotic modes other than language. Like language, these grammars are seen as socially formed and changeable sets of available ‘resources’ for making meaning, shaped by the semiotic metafunctions identified by Halliday [Halliday 1978](1). The approach readily takes text, context and genre evolution into account – why, for example, in Western societies the newspaper, ecology and political economy genres, while maintaining certain features, have changed their social functions and textual identities over the last 10, 20, 50 and 100 years [Baldry 2005](7).
Most multimodal researchers believe that further progress in the multimodal analysis of texts and context depends crucially, from both a theoretical and an applicative standpoint, on collaboration with computer scientists as a two-way process. Multimodalists’ work of characterising the nature of texts (taken as units of meaning, such as websites) and their relationships with society in the light of technological change will need to be seen from the standpoint of mediation, i.e., providing a way in which specialists from different sectors can “talk to each other” within a framework which increasingly supports the automated analysis of multimodal/multimedia texts, and which provides answers to such questions as how a multimodal text’s organisation as text (the textual metafunction) can be used as the basis for bias and diversity detection and retrieval.
In order to make theoretical insights easily reusable by other researchers, a two-pronged approach has been developed, concerned, on the one hand, with the construction of a scalar model of multimodal text analysis based on rigorous multimodal transcription [Baldry and Thibault 2006a](3) and, on the other, with the development of multimodal corpus linguistics [Baldry 2005(7), Baldry 2007(8), Baldry and Thibault 2006b(9)]. This framework may be taken as a common metalanguage, a way of standardising how we speak about the familiar – and less familiar – objects on a web page. A scalar-based multimodal model of websites responds to this requirement by clarifying the relationship between specific (one-off) instances (= individual texts) and recurrent types and patterns (= genres of various types) that are now in rapid evolution in websites as a result of society’s needs and the cultural pressures that these bring to bear. In the development from Web 1 (communication) to Web 2 (collaboration/interaction) and now Web 3 tools (customisation; customizable environments), new genres (mini-genres, micro-genres, macro-genres) have developed with such rapidity that the naming and labeling process lags behind.
Multimodal grammar allows predictive statements to be made about the organisation of web pages and websites in complex ways that go well beyond a naïve layman’s view of the web page. The grammar revisits the web page in terms of both static and dynamic participants, processes and circumstances, and thus attempts to explain how visual processes such as merging and shading are linked to typological/classificatory linguistic processes. Multimodality, with its integration of actional and semiotic processes (through which meaning is made and recorded), is in essence an interdisciplinary subject with ramifications in logistics, medicine, architecture, the study of paintings, linguistics, and cognitive and social semiotics, among others.
(1) M. A. K. Halliday. Language as Social Semiotic: The Social Interpretation of Language and Meaning. Maryland: University Park Press, 1978.
(2) G. Kress and T. van Leeuwen. Reading Images: The Grammar of Visual Design. London: Routledge, 1996.
(3) A. P. Baldry and P. J. Thibault. Multimodal Transcription and Text Analysis. London and New York: Equinox, 2006.
(4) J. L. Lemke. Metamedia Literacy: Transforming Meanings and Media. In D. Reinking, L. Labbo, M. McKenna and R. Kiefer (eds.), Handbook of Literacy and Technology: Transformations in a Post-Typographic World. Hillsdale, NJ: Erlbaum, 283-301, 1998.
(5) J. L. Lemke. Across the Scales of Time: Artifacts, Activities, and Meanings in Ecosocial Systems. Mind, Culture, and Activity, 7(4), 273-290, 2000.
(6) O’Halloran. On the effectiveness of mathematics. In Eija Ventola, Cassily Charles and Martin Kaltenbacher (eds.), Perspectives on Multimodality. Amsterdam: Benjamins, 91-117, 2001.
(7) A. P. Baldry. A Multimodal Approach to Text Studies in English: The Role of MCA in Multimodal Concordancing and Multimodal Corpus Linguistics. Campobasso: Palladino, 2005.
(8) A. P. Baldry. The role of multimodal concordancers in multimodal corpus linguistics. In Terry Royce and Wendy Bowcher (eds.), New Directions in the Analysis of Multimodal Discourse. New Jersey: Erlbaum, 173-193, 2007.
(9) A. P. Baldry and P. J. Thibault. Multimodal corpus linguistics. In Geoff Thompson and Susan Hunston (eds.), System and Corpus: Exploring Connections. London: Equinox, 164-183, 2006.
Bias detection has long been a key aim in political science, where the functioning of democracy requires that citizens be informed of biased representations of situations. The intrusion of information and communications technology into decision-making forums has often been argued to be prejudicial to unbiased reporting [Lassiter 1997](1).
Many political studies of bias take a balanced presentation (i.e., 50-50) as a baseline, and treat giving greater airtime to one view than another as bias; outright lying is rare, while disagreement over values is almost universal among politically sophisticated actors [Hofstetter and Buss 1978](2). However, this is a crude measure, as it can fail to control for non-partisan discussion or non-ideological structuring material. Controlling for such structural items can still reveal bias, as argued for example by [Schiffer 2006](3), who found a small pro-Democrat slant in US TV news bulletins.
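As a toy illustration of this baseline measure (our own sketch with invented numbers, not Schiffer's actual method), one can compute a side's share of partisan airtime after first setting aside the neutral, structural material:

```python
def slant(segments):
    """segments: list of (label, seconds), labels 'A', 'B' or 'neutral'.
    Returns side A's share of partisan airtime; 0.5 is the balanced baseline."""
    a = sum(s for lab, s in segments if lab == "A")
    b = sum(s for lab, s in segments if lab == "B")
    return a / (a + b) if (a + b) else 0.5

# A hypothetical bulletin: neutral anchor material is excluded from the ratio.
bulletin = [("A", 120), ("neutral", 300), ("B", 80), ("A", 60)]
print(slant(bulletin))  # 180 / 260 ≈ 0.69: a tilt toward side A
```

The crudeness the paragraph notes is visible here: everything hinges on how segments get labelled 'neutral' versus partisan in the first place.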
It has been argued that “investigator bias”, i.e., a tendency among bias investigators, as a result of their training, to look for and find bias more often than it actually occurs, needs to be guarded against (e.g., [Meissner and Kassin 2002](4)). Social psychologists have argued that social reality has different forms, and one’s social and political vantage point in relation to a specific context is almost bound to lead one to be biased, or to discover bias in others [Haslam et al. 2004](5). While political psychology shows that people are often able to assimilate new information in an efficient and unbiased manner – that is, they update prior beliefs in accordance with Bayes’ rule – and there is limited support for at least the more extreme theories that people are selective in the way they gather new information [Gerber and Green 1999](6), political judgements are often motivated, affected by goals, expectancies, prior judgements and even simple likes and dislikes [Taber et al. 2001](7). It has also been shown that detecting deception mediated via electronic media is extremely hard, whether or not the subject has been warned to expect it, and consumers of information delivered via electronic media remain extremely vulnerable [George et al. 2004](8).
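The Bayesian benchmark mentioned here can be made concrete with a small worked example (numbers invented for illustration):

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior P(claim true | evidence) via Bayes' rule."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

# Prior belief 0.5; a news report of this kind is twice as likely
# if the claim is true (0.8) as if it is false (0.4).
posterior = bayes_update(0.5, 0.8, 0.4)
print(posterior)  # 0.666...: the belief rises, but not to certainty
```

Motivated reasoning, in these terms, is any systematic departure from this update, e.g. discounting `p_evidence_if_true` for reports one dislikes.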
Discovering bias in multimedia is a particularly hard problem, as the viewer brings a point of view to images, video or non-verbal audio. The salience of a particular aspect of a scene can vary significantly depending on the context of the image or the surrounding text. For instance, an image of a heavily-armed policeman is likely to be thought significant in most contexts, but not if the image is of, say, an airport where armed policemen are routine. The psychology of bias in relation to multimedia will be very important, for example in examining the ability of images to reinforce or counter bias; it has been shown, for instance, that multimedia presentations are more effective in countering bias than text alone [Lim et al. 2000](9).
Standard approaches to multimedia content analysis will of course be important. One example is work to understand and cross the ‘semantic gap’ (including the use of Semantic Web-style technologies such as ontologies to provide some top-down understanding of content – [Hare et al. 2006b](10)). Another potential area of importance is work on the personalisation of multimedia retrieval [Sebe and Tan 2007](11), where the aim is to retrieve multimedia on the basis of user profiling and adapting system parameters to user interests. The concept of ‘user interest’ could be adapted as a way of expressing user-independent bias. Finally, there are other important techniques such as summarisation, skimming and browsing [Bailer and Thalinger 2007](12).
Multimedia can also improve the comprehensibility of information, and thereby make its presentation more effective [Lim and Benbasat 2002](13). The field of education research may also be important here, if we drop the assumption that a multimedia presentation of some biased point of view is static. If a multimedia presentation is interactive, then it has long been known in educational psychology that such a presentation can succeed in implanting bias in its readers [Strecher et al. 1999(14), Moore 2000(15)]; detecting bias in interactive presentations presents many more obstacles to success. Bias can also be part of the context – for instance, if the interaction is with a group rather than an individual, a cultural bias can be brought into the episode by the group itself [Davenport 1998](16). The context of a document is also of importance: network analysis can show which sources are referred to, which may of course allow inferences to be made about bias [Adamic and Glance 2005](17), and ontologies, folksonomies and tags associated with a particular document (particularly tags on Flickr resources) may be of value for understanding underlying biases.
For humans at least, bias detection has often been found to be easier with rich multimedia presentations, as various human factors experiments have shown (e.g., [Riegelsberger et al. 2005](18)), but these experiments provide little guidance as to how such detection could be automated; again, the field of education studies could be of value here. Annotating video with audio, particularly where assumptions can be made about the format/genre, as with news reporting, can draw on speech recognition, segmentation and clustering methods, and acoustic modelling [Chen et al. 2002](19). Changes in speaker, detectable using relatively coarse speech analysis techniques [Liu and Kubala 1999](20), may allow more efficient segmentation, essential for detecting bias within a larger sequence of coverage (which may include, e.g., interviews with a series of opposing points of view), while emotion detection is also important [Lee and Narayanan 2005(21), Shafran and Mohri 2005(22)]. With rich media presentations of the news, various types of content can aid bias detection: vital sources include text surrounding images, captions, scrolling text in news programmes, and subtitles; these can require a wide range of disciplines to address them, including NLP, speech recognition, signal processing and artificial intelligence. Linguistically-motivated annotation, coupled with domain-specific information, will generally be the best source of knowledge available [Declerck et al. 2001](23).
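In the spirit of such coarse techniques, and purely as our own simplified sketch (not the algorithm of [Liu and Kubala 1999]), a speaker boundary can be flagged wherever the statistics of adjacent windows of an acoustic feature stream diverge sharply:

```python
import statistics

def change_points(features, window=5, threshold=1.5):
    """features: a 1-D stream of frame-level values (e.g. one MFCC dimension).
    Flags frame indices where adjacent windows' means diverge sharply."""
    spread = statistics.pstdev(features) or 1.0  # normalise by overall spread
    boundaries = []
    for t in range(window, len(features) - window + 1):
        left = features[t - window:t]
        right = features[t:t + window]
        if abs(statistics.mean(left) - statistics.mean(right)) > threshold * spread:
            boundaries.append(t)
    return boundaries

# Synthetic stream: 'speaker 1' frames near 0.0, then 'speaker 2' near 5.0.
stream = [0.1, -0.2, 0.0, 0.15, -0.1] * 3 + [5.1, 4.9, 5.0, 5.2, 4.8] * 3
print(change_points(stream))  # boundaries cluster around the change at frame 15
```

Real systems use model-based criteria over full cepstral vectors rather than a single threshold, but the windowed-comparison structure is the same.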
(1) C. Lassiter. Cameras and the Infusion of Political Bias Into the Courtroom. International Journal of Law and Information Technology, 5(1), 28-82, 1997.
(2) C. R. Hofstetter and T. F. Buss. Bias in Television News Coverage of Political Events: A Methodological Analysis. Journal of Broadcasting & Electronic Media, 22, 517, 1978.
(4) C. A. Meissner and S. M. Kassin. “He’s guilty!”: Investigator Bias in Judgments of Truth and Deception. Law and Human Behavior, 26(5), 469-480, 2002.
(5) A. Haslam, T. Postmes and J. Jetten. Beyond balance: To understand “bias,” social psychology needs to address issues of politics, power, and social perspective. Behavioral and Brain Sciences, 27, 341-342, 2004.
(6) A. Gerber and D. Green. Misperceptions About Perceptual Bias. Annual Review of Political Science, 2, 189-210, 1999.
(7) C. S. Taber, M. Lodge and J. Glathar. The Motivated Construction of Political Judgements. In James H. Kuklinski (ed.), Citizens and Politics: Perspectives From Political Psychology, Cambridge: Cambridge University Press, 198-242, 2001.
(8) J. F. George, K. Marett and P. Tilley. Deception detection under varying electronic media and warning conditions. In Proceedings of the 37th Annual Hawaii International Conference on System Sciences, 9, 2004.
(9) K. H. Lim, I. Benbasat and L. M. Ward. The role of multimedia in changing first impression bias. Information Systems Research, 11(2), 115-136, 2000.
(10) J. S. Hare, P. A. S. Sinclair, P. H. Lewis, K. Martinez, P. G. B. Enser and C. J. Sandom. Bridging the Semantic Gap in Multimedia Information Retrieval: Top-down and Bottom-up approaches. In Mastering the Gap: From Information Extraction to Semantic Representation, Proceedings of the 3rd European Semantic Web Conference, Budva, Montenegro, June 2006.
(11) N. Sebe and Q. Tan. Personalized multimedia retrieval: the new trend? Proceedings of the International Workshop on Multimedia Information Retrieval, 299-306, 2007.
(12) W. Bailer and G. Thalinger. A framework for multimedia content abstraction and its application to rushes exploration. Proceedings of the 6th ACM International Conference on Image and Video Retrieval, 146-153, 2007.
(13) K. H. Lim and I. Benbasat. The Influence of Multimedia on Improving the Comprehension of Organizational Information. Journal of Management Information Systems, 19(1), 99-127, 2002.
(14) V. J. Strecher, T. Greenwood, C. Wang and D. Dumont. Interactive Multimedia and Risk Communication. Journal of the National Cancer Institute Monographs, 25, 134-139, 1999.
(15) D. Moore. A Framework for Using Multimedia Within Argumentation Systems. Journal of Computers in Mathematics and Science Teaching, 19(2), 83-98, 2000.
(16) G. Davenport. Curious learning, cultural bias, and the learning curve. IEEE Multimedia, 5(2), 14-19, 1998.
(17) L. A. Adamic and N. Glance. The Political Blogosphere and the 2004 U.S. Election: Divided They Blog. Proceedings of the 2nd Annual Workshop on the Weblogging Ecosystem: Aggregation, Analysis and Dynamics, World Wide Web Conference 2005, Japan.
(18) J. Riegelsberger, M. A. Sasse and J. D. McCarthy. Do people trust their eyes more than ears?: media bias in detecting cues of expertise. CHI ’05 Extended Abstracts on Human Factors in Computing Systems, 1745-1748, 2005.
(19) S. S. Chen, E. Eide, M. J. F. Gales, R. A. Gopinath, D. Kanevsky and P. Olsen. Automatic transcription of Broadcast News. Speech Communication, 37(1/2), 69-87, 2002.
(20) D. Liu and F. Kubala. Fast Speaker Change Detection for Broadcast News Transcription and Indexing. Sixth European Conference on Speech Communication and Technology (EUROSPEECH ’99), 1031-1034, 1999.
(21) C. Min Lee and S. S. Narayanan. Toward detecting emotions in spoken dialogs. IEEE Transactions on Speech and Audio Processing, 13(2), 293-303, 2005.
(22) I. Shafran and M. Mohri. A Comparison of Classifiers for Detecting Emotion from Speech. Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP ’05), 341-344, 2005.
(23) T. Declerck, P. Wittenburg and H. Cunningham. The automatic generation of formal annotations in a multimedia indexing and searching environment. Proceedings of the Workshop on Human Language Technology and Knowledge Management, Toulouse, 1-8, 2001.
Campaigns are prototypical scenarios for providing information in a specific context and with a specific bias. A politician tries to win votes; a company tries to win sales. Both try to advance their interests with campaigns across the media landscape. Print, TV, radio and online media form a landscape in which actors fight for a favourable position in these communication channels. Journalists read what other journalists write and often refer to the same information sources, so campaigns develop a dynamic across different media channels, connecting many different articles and entries in a complex network.
The fight for the preferred position is a fight for image. Each sentence, and even each single word, carries a connotation with a value tendency [Pannagl 2006](1). This value can strengthen or weaken support for a communication position. Language connects our ideas with our picture of the real world and of how it should be. In each campaign the actors try to give communication messages a spin in a supporting direction; dirty campaigns try to spin the competitors’ messages the other way, in a debilitating direction. The media multiply these communication messages and transport the images to potential voters and customers. Media campaigns are the art of war on the media channels, and the founding of campaign science therefore dates back to the history of war [von Clausewitz 2007](2).
The created images build a parallel world, which need not be identical with the real world: there need be no consistency between the communication message and the real intentions or facts. As a result, authenticity and credibility are the most important assets in campaigns [Faucheux 2003](3). Where what is said and what is done are consistent, the success of the communication and of the campaign tends to be higher.
Be it in politics or in business, each campaign competes for communication images – pictures in the heads of potential voters or customers which they are asked to believe. This is a battle for language, and the media are the landscape where it takes place. Both parties and companies commission researchers to analyse their media performance and to provide strategic consulting. Through manual content analysis, researchers can analyse the communication messages and the bias which creates the tendency to support or undermine a position or image – an important role in most campaigns. The Chinese philosopher Sun Tzu set out five tasks for the right campaign [Sun Tsu 1971](4); four of these are connected with measurement and analysis, and only the last is action.
Automatic methods for analysing communication messages and bias tendencies would support campaign and marketing officers in an ideal way, and would spread the use of content analysis tools in everyday communication work. Campaigners need such tools, and there is a significant market for them.
(1) H. Pannagl. Fisherman's Friends vs. Ferrero Kisses: A political content analysis of the Austrian presidential election 2004. Master thesis, University of Vienna, 2006.
(2) C. von Clausewitz. On War. Oxford University Press, Oxford, 2007.
(3) R. A. Faucheux. Winning Elections: Political Campaign Management, Strategy and Tactics. National Book Network, United States, 2003.
(4) S. Tsu. The Art of War. Oxford University Press, Oxford, 1971.
The first two will provide the foundations for the study and use of context in the representation of knowledge. Library science will provide the foundations and experience needed to organize information in categories and to realize innovative mechanisms for indexing hierarchical categorisation schemes with meaningful concept sequences, i.e., facets. Semiotics will allow for the definition of modal approaches to the discovery of knowledge diversity and of how web components and multimedia data can be used to express opinions and bias. The Science of Campaigns will carry the advances in content analysis into the art of communicating the message in the web and media landscape, where images in politics and the economy fight for their preferred position, and will connect the developed tools with everyday communication practice. Furthermore, the development of a system able to make computational sense of the theories developed in these interdisciplinary areas will require the integration of sub-disciplines of Computer Science which are very rarely integrated. For instance, the implementation and use of context as the main means to represent diversity, and the implementation and deployment of facets, will require the integration of competences from Machine Learning, Information Retrieval (IR) and Knowledge Representation. For example, in traditional IR, a retrieval decision is generally made on the basis of the query and document collection alone, ignoring the user and the search context [Shen et al. 2005](1). But information about the context can be gleaned from a number of areas of computing, including cognitive science accounts of the cognitive viewpoint, human-computer interaction, natural language processing and knowledge acquisition, amongst others [Ingwersen and Jarvelin 2005](2).
Similarly, competences and technologies in pattern recognition and computational linguistics will need to be integrated in order to extract and manage opinions, bias and diversity on the basis of an effective semiotic approach to text combined with images. It must also not be overlooked that context itself is not predefined or fixed, but has a dynamic of its own [Coutaz et al. 2005](3): context must be discovered and re-negotiated constantly as part of a process of interaction with distributed, multiscale and reconfigurable resources.
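The context-sensitive retrieval idea can be sketched as follows (a minimal illustration of re-ranking with implicit feedback; the scoring scheme and data are our own invention, not the method of [Shen et al. 2005]):

```python
from collections import Counter

def rerank(query, docs, clicked_history, alpha=0.5):
    """docs: {doc_id: text}. Score = query-term overlap plus a context bonus
    from terms seen in previously clicked documents (implicit feedback)."""
    context = Counter(w for text in clicked_history for w in text.lower().split())
    q_terms = set(query.lower().split())
    scores = {}
    for doc_id, text in docs.items():
        words = text.lower().split()
        base = sum(1 for w in words if w in q_terms)
        ctx = sum(alpha * context[w] for w in words if w in context)
        scores[doc_id] = base + ctx
    return sorted(docs, key=lambda d: scores[d], reverse=True)

docs = {
    "d1": "jaguar car dealership prices",
    "d2": "jaguar habitat rainforest conservation",
}
history = ["amazon rainforest wildlife", "big cat conservation"]
print(rerank("jaguar", docs, history))  # context disambiguates: d2 outranks d1
```

On the ambiguous query "jaguar" alone, both documents score equally; the clicked history supplies the missing context and tilts the ranking toward the animal sense.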
(1) X. Shen, B. Tan and C. Zhai. Context-sensitive information retrieval using implicit feedback. Proceedings of the 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 43-50, 2005.
(2) P. Ingwersen and K. Jarvelin. The Turn: Integration of Information Seeking and Retrieval in Context. Springer, 2005.
(3) J. Coutaz, J. L. Crowley, S. Dobson and D. Garlan. Context is key. Communications of the ACM, 48(3), 49-53, 2005.