Make one of those random response face effects on Instagram

No code necessary! Send me what you make; I want to try it.

Kawandeep Virdee
7 min read · Feb 28, 2020
[Image: A happy cartoon person with their arms raised and an orb above them]

In this tutorial I’ll share some resources for making a face effect with one of those random image pickers that hover above your head. If you make one, I’d love to try it. At the end of the tutorial I’ll include a link to mine.

The random image picker meme often takes the form of “Which x are you?” as in “Which Disney character are you?” When you use a face effect with this format, it begins with an image hovering above your head, usually with a question on it. When you start a video recording, the image cycles through the responses before settling on one. In some cases it might feel like the app is analyzing you to come up with a response. It’s actually just random. The response is served above your head, you react in joy or dismay, and you share this amusing video with your followers.
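To make the “it’s actually just random” point concrete, here is a tiny Python sketch of the picker’s logic. The response list and timings are invented for illustration; the real effect does this with Spark AR patches, not code.

```python
import random
import time

# Hypothetical responses for a "Which self-care activity are you?" effect
RESPONSES = ["take a walk", "call a friend", "drink some water", "stretch", "nap"]

def run_picker(cycle_seconds=2.0, frame_delay=0.15):
    """Flash through the responses for a moment, then settle on one at random."""
    end_time = time.time() + cycle_seconds
    while time.time() < end_time:
        print("cycling:", random.choice(RESPONSES))
        time.sleep(frame_delay)
    # The final "result" is just a uniform random pick, not an analysis of your face
    print("you got:", random.choice(RESPONSES))

if __name__ == "__main__":
    run_picker()
```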

Using Spark AR

Spark AR makes it easier to create effects with facial detection and augmented reality, specifically effects that overlay on your face through your phone. What’s pretty compelling is that your effects can get wider visibility and distribution through Instagram’s enormous network. It’s pretty fun, outside of that lingering creepy factor of face data being harvested by Facebook.

I like the layers of abstraction in Spark AR. It has a graphical interface for importing assets, like images or objects, to use in your effects. You can go deeper into a patch interface, and deeper still into a coding interface. This makes the creation process much more welcoming, since you don’t have to know how to code to make an effect, and you don’t have to know computer vision or machine learning. If you do know how to code, you can spend more time on other parts of the creative process, like asset creation.

Make your assets

The format of the random picker meme is a title followed by a randomly chosen response. So to begin, think about what your title is. This can be a question or a prompt. Make an image to represent it. Then make an image for each of your responses.

I drew my title on the top left, and followed it with a series of responses. My face effect will deliver a randomly chosen self-care activity.

[Image: Simple sketches of various self-care activities]
I love the look of hand-drawn sketches. It’s got that DIY zine illustration vibe.

I imported these sketches into a photo editor and created separate images for the title and each response. Here are the responses. They’re lovely 🤗 Take a moment to do one of them right now.

Whatever format your images are in, export them as JPEGs, and don’t let them get too large. In the last step you export the face effect, and if it’s too large you won’t be able to publish it. Keeping the project small is also recommended to improve its distribution.
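If you’d rather batch-convert the exported images instead of saving them one by one from a photo editor, something like the sketch below works. It uses the Pillow library; the folder names and the 1024 px cap are assumptions, so adjust them to your own assets.

```python
from pathlib import Path
from PIL import Image  # pip install Pillow

# Hypothetical folders for the raw sketches and the JPEG exports
src = Path("assets_raw")
dst = Path("assets_jpeg")
dst.mkdir(exist_ok=True)

for path in src.glob("*.png"):
    img = Image.open(path).convert("RGB")  # JPEG has no alpha channel
    img.thumbnail((1024, 1024))            # cap the longest side at 1024 px
    out = dst / (path.stem + ".jpg")
    img.save(out, "JPEG", quality=80, optimize=True)
    print(f"{path.name} -> {out} ({out.stat().st_size // 1024} KB)")
```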

Build the face effect

If you haven’t already, download Spark AR and hop into it. I found a great tutorial for this effect, so rather than relay it here, I’ll just direct you to it. Watch this.

After this video you will have imported your assets and put together the animation and trigger, and you’ll be able to test your effect in Spark AR. There are a couple of important changes to make before publishing.

Trigger the randomizer once you hit record

Rather than using a screen tap as the demo suggests, use the camera recording feature as the trigger to start the animation. This is how most of the effects that follow this meme work. This way the user’s reaction is captured, and they can choose to share that as well. It’s usually the most entertaining part.

In the Scene window on the top left, click Camera. In the Camera details on the right, under Interactions, you’ll see Producer Patch. Click Create. In your patch editor you’ll see a giant block labeled Camera.

Now you’ll switch some of the patches. The animation sequence won’t change, but the trigger part will. I’ll share what it looks like and talk through the different steps.

Here’s the whole project. Don’t worry about the details in the global image. Everything on the bottom, i.e. the animation sequence part, will stay the same. So we’re going to zoom in on the top part of the global image.

[Image: A screenshot of the project’s patch view, a global look]
Again, don’t worry about the details here. This is the global view for context. Look at the one below.
[Image: Zooming into a specific part of the project in the patch view]
This is where the changes are.

The bottom node of the Camera patch is the video recording output (thanks to this similar tutorial for introducing me to the Camera patch). Connect this to a Delay. You can tune how long the delay is. The reason you want a delay is that Instagram takes a moment after the camera button is pressed before the video recording actually starts. If there is no delay, the animation starts immediately and some of it is lost from the recording. We want to ensure some of the title card is included in the recording, so the user’s followers see what the prompt is before the response is delivered.

Connect the delay to a Pulse, which converts the boolean signal into a pulse, the kind of input the Switch patch expects. Connect this to the Switch patch. Instead of connecting the responses to the Switch patch as in the tutorial above, I moved that connection to the delay, because after the delay you want the responses (in this case the animation) to be visible.
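If it helps to see that trigger chain as ordinary code, here is a rough Python simulation of what the patch graph does. The function names and the delay value are made up for illustration; the real effect uses the Delay, Pulse, and Switch patches described above.

```python
import time

DELAY_SECONDS = 0.8  # invented value; tune it the way you would tune the Delay patch

def show_title_card():
    print("Recording started: title card (the prompt) is on screen")

def start_response_animation():
    print("Responses visible: the animation starts cycling")

def on_recording_started():
    """Rough analogue of: Camera recording output -> Delay -> Pulse -> Switch -> animation."""
    show_title_card()
    time.sleep(DELAY_SECONDS)  # Delay: keep the prompt in the recording for a beat
    # Pulse fires once when the delayed signal turns on, and the Switch flips visibility
    start_response_animation()

if __name__ == "__main__":
    on_recording_started()
```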

Test it on your phone

See how it feels on your phone. Try it out both in the Spark AR Player and on Instagram. Share the link with a few friends and get feedback. This is the stage where I realized I needed a delay and tuned it. At this stage, also record the video you’ll use in the effect submission. Here’s my video.

Make an icon

You’ll have to make an icon. Download the templates here as a reference. I’ve seen a wide range of icons. Anecdotally, from the few effects I’ve published, it seems that better-designed icons were reviewed faster and also received more usage. However, I’ve seen a ton of different icon styles while browsing other effects, so maybe it doesn’t make a difference ¯\_(ツ)_/¯.

Publish your effect

Now you’re ready to publish. You can do this through the editor under File > Upload. Follow the guidelines closely. You’ll use your icon and video in the submission process.

At this point you’ll wait. For me the time has varied from 4–5 days to 10 days. For others it’s been a month and the effect still hasn’t been reviewed. It’s hard to say what determines this, but following the guidelines and making something original will help.

I’ve had an effect rejected before. I got feedback on what to change, and made those changes. After resubmitting, the effect was accepted, yay!

Share it

After your effect is accepted you can share it. When you’re on the effect screen about to use the effect, tap the details menu, then “More”. You’ll see a “Share Effect Link” option. Here’s mine; try it out, and tag me at @whichlight so I can see it too. And then make your own effect; I want to try it.


Kawandeep Virdee

Building. Author of “Feeling Great About My Butt.” Previously: Creators @Medium, Product @embedly, Research @NECSI. http://whichlight.com.