rawlab is a research-based collective working at the intersection of art and technology. Through their artistic practice, rawlab aims to provoke contemplation on the complexities of language, perception, and the evolving relationship between humans and technology. By shedding light on the limitations, vulnerabilities, and societal implications of new technologies, media, and artificial intelligence, rawlab examines the social effects of our networked cultures.
The visual language of an image allows for multiple perspectives and interpretations. Context is crucial in determining the meaning of any visual material that we encounter. As humans, we always seek additional information in order to build and enrich a narrative.
When context is missing, obscured, lost, or forgotten, the meaning becomes distorted and difficult to determine. In machine learning, the process of extracting “regions of interest” is a mechanism used for selecting data arrays, in which only a small portion of the input material is analysed in detail.
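The region-of-interest mechanism can be illustrated with a minimal sketch (the array sizes and the function name are illustrative, not drawn from any specific framework): a small window is sliced out of the input, and everything outside it, including whatever context surrounded the region, is simply discarded.

```python
import numpy as np

def extract_roi(image, box):
    """Crop a region of interest from an image array.

    image: H x W array; box: (top, left, height, width).
    Only this small window is passed on for detailed analysis;
    the rest of the scene, and its context, is discarded.
    """
    top, left, h, w = box
    return image[top:top + h, left:left + w]

# A 100 x 100 "scene", of which only a 10 x 10 patch is examined.
scene = np.arange(100 * 100).reshape(100, 100)
patch = extract_roi(scene, (40, 40, 10, 10))
```

The cropped patch carries no trace of where it came from, which is precisely the loss of context the text describes.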
Manufactured truth and other myths about human annotations frames the underlying processes in the development of machine vision. It raises the question of the semantic framework and the importance of context when it comes to reading images. The artistic duo reconstructs the intentionally omitted context of the data, exposing the flawed methods that are typical of machine vision development.
Metaphor is for most people a device of the poetic imagination, but the ordinary conceptual system, in terms of which we both think and communicate, is fundamentally metaphorical in nature. With regard to artificial intelligence, the question arises as to whether it can eventually come to comprehend figurative language in a consistent and non-literal manner.
Decoding Ambiguity traces the semantic limits inherent in machine learning’s paradigm. The artists make use of natural language processing to produce a sequence of generated poems, which are in turn provided as instructions for a text-to-video generative model.
The vulnerabilities of computation provoke a reflective examination on the complexity of language, perception, and the (in)ability of artificial intelligence to interpret complex semantic structures.
The installation invites visitors to enter an immersive space where a series of visual experiments reveal hidden patterns and complex relations, emerging from language and image interplay.
The stock industry creates the mass of commercial imagery used in advertising and publishing across a range of media, and controls key historical and photo-journalistic archives. As a cultural practice, its procedures and, moreover, its core product, the ‘generic image’, demonstrate mechanisms of standardization, commodification, alienation, illusion and stereotypical classification. They affirm the utopian dynamic at the heart of capitalist modes of cultural production.
The Supreme Generic is a translation of language to image using a neural network trained to generate images from text descriptions. The input data consists of image captions collected from stock photography websites. We deconstructed this visual language to a set of keywords found in stock image captions and metadata, subsequently instructing a machine learning model to generate yet another interpretation of this Society of the Spectacle, meanwhile freeing it from its banality. Despite the painful familiarity, the visual outcome invites us to look at a dreamlike distortion of globalistic virtues.
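The caption-to-keyword step of this deconstruction might be sketched as follows; the captions, stop-word list, and function name are invented for illustration, and the real pipeline may work quite differently:

```python
import re
from collections import Counter

# Illustrative stop-word list; a real pipeline would use a fuller one.
STOPWORDS = {"a", "an", "the", "of", "on", "in", "at", "with", "and"}

def keywords(captions, top_n=5):
    """Reduce stock-photo captions to their most frequent keywords,
    the raw material later handed to a text-to-image model."""
    words = []
    for caption in captions:
        words += [w for w in re.findall(r"[a-z]+", caption.lower())
                  if w not in STOPWORDS]
    return [w for w, _ in Counter(words).most_common(top_n)]

# Invented examples of the stock-caption register.
captions = [
    "Happy business team celebrating success in the office",
    "Diverse business people shaking hands in a modern office",
    "Confident business woman smiling at the camera",
]
top = keywords(captions, 3)
```

Frequency counting like this strips each caption down to its most generic vocabulary, which is exactly the banality the work then hands to the generative model.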
Thinking about the stock image beyond its traditional critique opens up new directions for assessing the social value of such degraded forms of cultural production.
The collective visual archive on the Internet has become part of widely used datasets. The visual artefacts we leave online are collected by scientists and used to rationalise, replicate and automate cognitive processes such as human vision.
By fragmenting our perception into pre-defined parameters, an ‘uncanny universe’ is created. It is formed on the basis of the algorithmic abstraction of human awareness, comprehension and understanding.
Now You See Me: Re-appropriating the Visual Landscape of Our Digital World is an interactive-video installation reflecting upon the layer of absurdity that occurs within the effort to mimic human perception with algorithms.
Using a real-time object-detection model, the installation brings the world of abstraction to our physical space by placing the viewer within the flat landscape of mathematical observations.
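What such a “flat landscape of mathematical observations” amounts to can be sketched minimally: a detector reduces a camera frame to labelled boxes, and the scene becomes a map of class ids. No real detection model is invoked here; the detections, class ids, and function name are stand-ins for whatever model the installation actually uses.

```python
import numpy as np

def flatten_scene(frame_shape, detections):
    """Render detections as a flat label map: each pixel carries
    only the class id of the box covering it, and 0 elsewhere.

    detections: list of (class_id, top, left, height, width),
    the kind of output a real-time object detector produces.
    """
    landscape = np.zeros(frame_shape, dtype=np.int32)
    for class_id, top, left, h, w in detections:
        landscape[top:top + h, left:left + w] = class_id
    return landscape

# Stand-in detections: class 1 a "person", class 2 a "chair".
view = flatten_scene((120, 160), [(1, 10, 20, 50, 30),
                                  (2, 70, 100, 40, 40)])
```

Everything the viewer is, within this representation, collapses into a rectangle with an integer attached.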
Complex human activities can be characterised as compositions of simpler actions. A person is a "set of informative skeletal joints" that "present a continuous evolution of spatial configurations".
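The quoted framing, a person as joints whose spatial configuration evolves over time, can be given a minimal numeric form. This sketch assumes skeleton data as per-frame 2D joint coordinates; the feature it computes (frame-to-frame joint displacement) is one crude example of the kind of primitive from which simpler actions are composed, not the method of any particular system.

```python
import numpy as np

def joint_motion(skeleton_sequence):
    """Given a T x J x 2 array of J joint coordinates over T frames,
    return each joint's frame-to-frame displacement: a (T-1) x J
    array describing the 'continuous evolution of spatial
    configurations' as plain numbers."""
    seq = np.asarray(skeleton_sequence, dtype=float)
    return np.linalg.norm(np.diff(seq, axis=0), axis=-1)

# Two joints over three frames: one still, one moving steadily right.
frames = [
    [[0.0, 0.0], [1.0, 0.0]],
    [[0.0, 0.0], [2.0, 0.0]],
    [[0.0, 0.0], [3.0, 0.0]],
]
motion = joint_motion(frames)
```

From such arrays a classifier learns its “vocabulary of activities”; the person has already disappeared into the coordinates.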
Ground Truth is a speculative study in computer vision and an attempt to challenge the current notion of the “human” adopted by many developers. Training convolutional neural networks to detect human activity and to classify human instances, their traits, and their behaviour patterns receives sustained attention nowadays, mainly from military forces and private corporations.
Behaviour templates are projected on individuals and crowds, with the main goal of recognising patterns of interactions, anomalous events, and deviant actions.
By building a vocabulary of activities, machine learning datasets come to represent clichés of social constructs and abstract our personal experiences into lists of annotated data. This has the potential to enforce conformity to a single prevailing mode and ideology of power.
Ground Truth is a collection of camouflage techniques meant to render individuals invisible in plain sight. More precisely, it is an exploration of mechanisms for escaping the algorithmic uniforms we involuntarily acquire. Based on military and natural camouflage, the series of movements proposes alternative modes of social presence and interaction, meant as performative tools for masking visibility in order to confront such predictive technologies.