We all know those old formal photos: people standing stiffly against a solid background, wearing serious expressions. Until the middle of the 20th century, smiling was not the norm in portrait photography. With the democratization of the camera and people's desire to preserve memories of happy times, smiles became more common in photos. This raises the question: are we really happy in those photos, or is smiling simply a Western social norm? Can strangers look through our photo album and understand what we felt when the pictures were taken? They may guess, but they will never be 100% accurate. In a world that relies more and more on artificial intelligence, is it correct to assume that computers can do better? This question is at the forefront of technological research and a kind of "golden goose" for big technology companies.
In my thesis, I examined whether there is a connection between emotions and facial expressions and, if so, whether AI can classify how we feel. Today, AI can see and understand us as emojis (emojification): smiling, crying, frowning. These expressions, of course, do not cover our full emotional spectrum. We can frown when we are angry but also when we are focused; smile when we are happy but also when we are nervous. So these tools can't really understand how we feel, but they can recognize our facial expressions and even fake them. The question is: are we, thanks to these tools, creating more fake photos, or have we always faked how we feel at the moment the camera takes the picture? "Make it till you fake it" is a tool that allows people to disconnect from social norms and from the desire to please the photographer ("Say cheese!") and the audience of the photo. The tool lets people re-examine their past photos and ask questions like "Was I feeling happy here?" or "Why am I not smiling in this photo?". With the tool, they can match their faces in a photo to the actual memories they have of that moment. I hope it will raise the question of what is fake and what is real: the photo in which the facial expression was faked for the camera, or the photo that was edited afterward and no longer matches the moment?
The possibility of changing the past can be appealing today, as a counterweight to a social media culture that demands we constantly present ourselves as successful, happy people. However, it can also be a destructive tool that changes our collective memory and produces fake information. The tool and the archives I created force us to confront the abilities of contemporary AI: the bias in pre-trained models, the positive possibilities, and the negative consequences.
I started my research by exploring whether there is a connection between facial expressions and emotions, and whether AI can detect our emotions by analyzing facial expressions. It turned out that computers are not able to classify how we are feeling from a facial expression for a simple reason: we, as humans, are not able to fully understand how people feel just by looking at their faces. During my research, I came to understand how machine learning works and how much this technology depends on data that humans provide, both the pictures and the labels. Since the data is based on human tagging, it can produce a non-diverse model. For example, you can build a model by asking people to label photos of "happy people", but happiness means different things in different cultures, so what counts as "happy" in that model is only true in the culture of the people who labeled the photos. I examined a few models and explored their abilities related to facial expressions. The models that intrigued me the most were the ones that can change facial expressions. They reminded me of the work of Matt Loughrey, who added smiles to photos of genocide victims and, by doing so, completely changed the narrative of the story.
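The point about human tagging can be made concrete with a small sketch. In many labeling pipelines, the training label for an image is simply the annotators' majority vote, so a culturally homogeneous annotator pool bakes its own norms into the model. The function and the example labels below are invented for illustration, not taken from any real dataset:

```javascript
// Minimal sketch of how human labeling bakes cultural bias into
// training data: the label a model learns from is just the
// annotators' majority vote.
function majorityLabel(annotations) {
  // Count how many annotators chose each label.
  const counts = {};
  for (const label of annotations) {
    counts[label] = (counts[label] || 0) + 1;
  }
  // Return the most frequent label.
  return Object.keys(counts).reduce((a, b) => (counts[a] >= counts[b] ? a : b));
}

// Four of five annotators from the same culture read a broad grin
// as "happy"; the dissenting "nervous" reading is simply outvoted,
// so the model never learns that the same expression can mean both.
const trainingLabel = majorityLabel(["happy", "happy", "happy", "nervous", "happy"]);
```

Whatever the annotator pool agrees on becomes the "ground truth", regardless of what the photographed person actually felt.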
I then invited my friends and family to explore their photo albums and ask themselves whether they have photos in which their facial expressions do not match how they felt at the time. They didn't need to dig deep. As I had suspected at the beginning of my research, people don't always show how they feel in photos. In some photos, people tried to keep a serious face even though they were happy; in others, they put on a big smile although they were really sad.
I applied the models to the photos I collected and curated the results for an online exhibition. The exhibition allows us to change our private history. But these tools can also be destructive: they can change our collective memory and produce fake information. To show the problematic side of these tools, I created a book that pairs real photos from the recent conflict in Ukraine with the results of editing them through the tool. The original photos evoke certain emotions in viewers and shape their opinion of the situation, while the edited photos can provoke the opposite reaction and sway opinions. There is no text, and the design intervention is minimal, to let viewers see for themselves how easy it is to create a different emotional response using AI-based editing tools.
For this project I used two models: CLIP_prefix_caption and HyperStyle, which I ran on Google Colab. The website was built with HTML, CSS, and JavaScript. The book and the website were designed with Figma, Photoshop, Illustrator, and InDesign.
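To give a sense of the website's core interaction, here is a minimal JavaScript sketch of toggling between an original photo and its AI-edited version. This is an assumed illustration, not the project's actual code: the element ids ("photo", "toggle") and file names are hypothetical placeholders.

```javascript
// Sketch of a "real vs. fake" toggle for the online exhibition.
// Element ids and file names below are hypothetical placeholders.
function nextSource(current, original, edited) {
  // Swap to the edited photo if the original is shown, and back again.
  return current === original ? edited : original;
}

// Wire the button only when running in a browser.
if (typeof document !== "undefined") {
  const img = document.getElementById("photo");
  const btn = document.getElementById("toggle");
  btn.addEventListener("click", () => {
    img.src = nextSource(img.getAttribute("src"), "original.jpg", "edited.jpg");
  });
}
```

Keeping the swap logic in a small pure function like `nextSource` makes the fake/real comparison trivial to test independently of the page markup.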