In this part 2, I share another example of empathy at work: creating a video for guests of a surprise party to explain how local and remote guests (joining via technology) can work in concert.
Photography: All video and photography copyright Jason Kyle Frank. All rights reserved.
Music: “Sleepy Jake” and “Payday” by Silent Partner (YouTube audio library)
The making of this video
As a creator, I always enjoy seeing a behind-the-scenes look at how things are made. So I decided to share just a few of the things that went into making this video.
Since this video required a very wide range of technologies and creative endeavors, I thought it would be particularly interesting for you to see behind the scenes. It involved lighting, audio processing, graphics, color grading, animations, and more!
Please let me know in the comments if you found this section valuable!
How did I light the scene?
The natural lighting that day was mostly cloudy, but there was still some directional light. From my seated position, that light came down at an angle from behind me. So I used a gold reflector to bounce some of it back into my face. I also added one of my softbox lights.
How many tries does it take to record a “keeper” 10-second clip with your 5-year-old son?
About this many:
How many tries does it take for me to record a “keeper” 2-minute clip by myself?
Sadly, it can take just as many tries, or more. 🙂
How did I color-grade the footage?
I shot the footage with my Samsung GS4 smartphone. The ungraded footage needed exposure adjustment and color changes to make it richer and more cinematic:
As you can see in the next screenshot from Final Cut Pro, the biggest exposure adjustment was bringing down the midtones:
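One way to approximate a midtone pull like this in code is with a simple gamma curve, which darkens the middle of the tonal range while leaving the black and white points alone. This is a minimal sketch of the idea, not Final Cut Pro's actual math; the `gamma` value is an invented example, and pixel values are assumed to be normalized to [0, 1]:

```python
import numpy as np

def pull_midtones(img, gamma=1.3):
    """Darken midtones with a gamma curve (gamma > 1 darkens).

    A rough stand-in for a midtone exposure pull; 0.0 and 1.0
    (pure black and white) are left unchanged.
    """
    return np.clip(img, 0.0, 1.0) ** gamma
```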
Once the exposure had been adjusted, I tackled the colors. Here I employed a trick to help separate the subject (me in this case) from the background: I subtracted blues from the midtones, while adding them to the shadows.
This strategy does two things:
- It warms up the skin tones, which were in the midtones (subtracting blues has the effect of increasing yellows).
- It makes the background hues, which were in the shadows, contrast more with the subject. Thus the subject and background appeared more separated.
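As a rough illustration of that trick, here is a sketch in Python/NumPy. The luminance-based weighting masks and the `blue_shift` parameter are my own invented stand-ins for what the color wheels do internally, not FCP's actual implementation:

```python
import numpy as np

def separate_subject(img, blue_shift=0.05):
    """Subtract blue from midtones, add it to shadows.

    img: float RGB array in [0, 1], shape (H, W, 3).
    blue_shift: strength of the adjustment (illustrative value).
    """
    # Rec. 709 luminance, used to decide what counts as shadow vs. midtone.
    luma = img @ np.array([0.2126, 0.7152, 0.0722])
    # Smooth masks: shadows weight peaks at luma 0, midtones at luma 0.5.
    shadow_w = np.clip(1.0 - 2.0 * luma, 0.0, 1.0)
    midtone_w = np.clip(1.0 - np.abs(2.0 * luma - 1.0), 0.0, 1.0)
    out = img.copy()
    out[..., 2] += blue_shift * shadow_w    # add blue to shadows
    out[..., 2] -= blue_shift * midtone_w   # subtract blue from midtones
    return np.clip(out, 0.0, 1.0)
```

On a midtone pixel this lowers the blue channel (warming the skin tones), while on a shadow pixel it raises it (cooling the background).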
How did I record the audio?
I believe most filmmakers would agree that, between audio quality and image quality, if one of them has to suffer, it should be the image quality (I’m speaking strictly about image quality, not about lighting!). Why? Because if spoken dialog does not sound “direct” and “full”, the overall quality of the film is usually perceived as poor.
As such, before investing in an upgraded camera, I have invested in audio gear.
One of the biggest problems to overcome in recording dialog is getting a microphone close to the subject. Generally, if the microphone is more than a couple of feet away from the subject, the audio will pick up either too many room reflections or too much noise.
So here’s how I overcome that issue.
I record my voice using a lavalier mic clipped to my shirt. That mic is plugged into a Zoom H5 recorder, which has good preamps. The preamps in the H5 will generally have a much lower noise floor than almost any camera’s audio preamps.
While I’m recording video, I either put the H5 in my pocket, or I lay it next to me, as I did in this shot.
Then in Final Cut Pro, I sync up the audio that was recorded via the H5 with the audio that my phone recorded.
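Final Cut Pro does this sync automatically by matching the two waveforms. For the curious, the underlying idea can be sketched with cross-correlation: slide one recording against the other and find the offset where they line up best. This toy function is my own illustration, not FCP's implementation:

```python
import numpy as np

def sync_offset(reference, scratch):
    """Estimate how many samples `scratch` lags behind `reference`.

    Both arrays are mono audio of the same event (e.g. the H5 track
    and the phone's scratch audio). A positive result means `scratch`
    starts later than `reference`.
    """
    corr = np.correlate(scratch, reference, mode="full")
    return int(np.argmax(corr) - (len(reference) - 1))
```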
In the case of this shot, there was a lot of wind noise. My biggest tool to reduce that noise was Final Cut Pro’s expander. It acts like the opposite of a compressor, tapering off sound that falls below a threshold. But to get the sound into a predictable starting state before feeding it into the expander, I actually had to compress it some first, so that the overall level was consistent enough for a single threshold setting to work.
After that, I applied a very small amount of noise gate. And then lastly I applied another instance of a compressor, this time acting mostly as a limiter.
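To make the expander's behavior concrete, here is a naive per-sample downward expander in Python. It is a sketch of the concept only (real expanders use attack/release smoothing, and FCP's internals are not public); the threshold and ratio values are illustrative:

```python
import numpy as np

def expand(signal, threshold_db=-40.0, ratio=2.0, eps=1e-12):
    """Naive downward expander: samples below the threshold are pushed
    further down, reducing low-level noise (like wind rumble) between
    words while leaving louder dialog untouched."""
    level_db = 20.0 * np.log10(np.abs(signal) + eps)
    below = level_db < threshold_db
    # Each dB under the threshold becomes `ratio` dB under it.
    gain_db = np.where(below, (level_db - threshold_db) * (ratio - 1.0), 0.0)
    return signal * (10.0 ** (gain_db / 20.0))
```

A sample at -60 dB (20 dB under the threshold) gets pushed down another 20 dB, while a -6 dB dialog sample passes through unchanged.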
In this next screenshot you can see visually how much the wind noise was decreased:
How did I create the animated graphics?
To create the intro logo animation, I used Final Cut Pro’s masking and keyframe animation capabilities. I used two copies of the logo, one which animates the top half (“Thriving”) and the other which animates the bottom half (“Creator”):
For the bulk of the graphics in this video, I used Keynote as a primary animation mechanism. Its Magic Move transition was the main animation tool that I used.
For the actual graphics being animated, I used a combination of custom graphics (created in Sketch), shapes available in Keynote, and brand assets downloaded from Facebook’s and Google’s asset pages:
Start with the end in mind
One of the biggest tips I can offer for creating these multi-part animations is to start with the end in mind. In fact, you don’t just start with it in mind, you literally want to create the ending state first.
Create, size, and position all of the graphics as they will need to end up once everything has been animated in (which can happen over a sequence of many slides). We do this so that the Magic Move transition knows where to animate things to:
Next, duplicate that ending-state slide, and move it to a previous slide position. Then transform all of the graphics to get them into their starting-positions (how they should appear before they are animated in):
Now Magic Move understands that these objects, which appear on different slides, are actually supposed to be the same object, just in different sizes and/or positions. It can therefore effectively do its “magic” with the animation.
After creating the starting-state slide, you can duplicate that slide and change the graphics one step at a time (each time on a new duplicated slide) to create your sequence.
How to use Keynote for video
As Keynote is meant to be a tool for live presenting, the last issue to tackle is how to use it for a video like this.
To handle that, I recorded a screen-capture of the narration “live” as I advanced the slides manually. This ensured that the timing of my narration and the slides synced perfectly.
I hope that you’ve found this behind-the-scenes look valuable. What else would you like to know about how I make my videos, blog posts, or creative processes?