rOpenSci post-doc hacker Jeroen Ooms has just released a cool new package, av, which he wrote "will become the video counterpart of the magick package which [rOpenSci uses] for working with images".
av provides bindings to the FFmpeg libraries for editing videos. It's already become a renderer for
gganimate by Thomas Lin Pedersen, but
av allows more than making videos of graphics. In this post, I'll show how to use av together with webshot to make a trailer/sneak preview of a slidedeck, i.e. a short video featuring the first few slides, set to music!
Why make a trailer of your slidedeck?
No, making a trailer of a slidedeck isn’t only a pretext for trying out
av! In case you hadn't noticed, I'm all about promoting your work once you've put in the work. I've even written a whole post about blog promotion. Promoting a slidedeck after a talk, or a talk before the event, is quite similar to promoting a blog post, both in terms of benefits and technically. In particular, a tweet about it will be more powerful if it features an illustration, so why not make it animated?
Capture the first few slides
In this post, I'll use the slidedeck that Jeroen shared after giving a talk about R infrastructure at the uRos conference. I'll capture the first few slides minus the ones where he introduced himself, because I wanted the focus of the trailer to be on the talk topic. I didn't know
webshot::webshot() was vectorized; thanks to Jeroen for telling me, which shortened the code below.
```r
fs::dir_create("slides")
urls <- sprintf("https://jeroen.github.io/uros2018/#%d", c(1:2, 5:10))
webshot::webshot(urls, "slides/webshot.png")
```
At that point, I had a folder with 8 PNGs. If your talk isn't HTML but, say, PowerPoint, you could export it to PDF and use the rOpenSci
pdftools package to render the slides to images. Note that the PNGs I got didn't have the right font, which is something I could solve by using decktape instead of
webshot to render the HTML slides to PDF, and then converting to PNG with pdftools.
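As a sketch of that PDF route (assuming the deck was exported to a local slides.pdf; that filename is hypothetical), pdftools can rasterize each page to a PNG:

```r
library(pdftools)
# hypothetical input file: slides.pdf, exported from PowerPoint or decktape
pngs <- pdf_convert("slides.pdf", format = "png",
                    pages = 1:8, # only the first few slides
                    dpi = 150,
                    filenames = sprintf("slides/slide_%02d.png", 1:8))
```

pdf_convert() returns the paths of the written files, so its output can feed straight into the video step below.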
Create the video
The video creation was extremely simple and fast once I had selected a soundtrack. Note that having a soundtrack is optional!
Jeroen's talk was about building stuff, so I looked for construction sounds and selected this sample published by Audio Hero on Zapsplat.
```r
slides <- fs::dir_ls("slides")
# loop through the slides twice, holding the last one an extra second
slides <- c(slides, slides, slides[length(slides)])
# path to the downloaded soundtrack; the filename here is a placeholder
audio <- "zapsplat_construction.mp3"
av::av_encode_video(slides,
                    # it could be a fraction!
                    framerate = 1,
                    # if too short there'll be silence at the end
                    # if too long it'll be truncated
                    audio = audio,
                    output = "trailer.mp4")
```
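To sanity-check the result, av can also report the metadata of the file it just wrote, for a quick look at the duration and the video and audio streams:

```r
# inspect the encoded trailer
info <- av::av_media_info("trailer.mp4")
info$duration # total length in seconds
info$video    # codec, framerate, dimensions
info$audio    # soundtrack stream, if any
```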
In this post, I was able to write R code to take screenshots of the first few slides of a slidedeck, and to combine them in a video with sounds related to the slidedeck topic. That new
av package is quite promising!
Regarding accessibility of the produced trailer, on Twitter you can't add an alternative description to a video, so it might be nice to either replace the music with a voice-over, or to tweet once with music/sound and once with a voice-over. I guess the tweet itself can also help counterbalance the information lost, by being informative and featuring a link to the actual slidedeck. If you're as much into listening to your own voice as I am into listening to mine (i.e. not at all!), you might want to try rOpenSci's
googleLanguageR package, which helps use the Google Text-to-Speech API.
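A minimal sketch of that text-to-speech route (assuming you have set up Google Cloud authentication for googleLanguageR; the script text is made up):

```r
library(googleLanguageR)
# assumes the GL_AUTH environment variable points at your service account key
gl_auth(Sys.getenv("GL_AUTH"))
# synthesize a short voice-over for the trailer
gl_talk("A quick tour of Jeroen's talk on R infrastructure.",
        output = "voiceover.wav")
```

The resulting audio file could then be passed to av_encode_video() via its audio argument, in place of the music sample.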
Finally, if you use a cool xaringan theme like Emi Tanaka's ninja theme, you'd be a bit disappointed to create a preview that loses the nice animated transitions. Luckily, in his tech note Jeroen promised that "In future versions we also want to add things like screen capturing and reading raw video frames and audio samples for analysis in R.", so one could rely on screen capturing rather than screenshots. I'm looking forward to future versions of
av, and encourage you to give it a try!