How to digitize old VHS videos with an EasyCAP UTV007 USB converter on Linux

It’s 2018: VHS is dead! If you don’t have a functioning VHS player any more, your only option is to buy one second-hand. But if you still have old, valuable VHS tapes (e.g. family videos), you should digitize them today, while there are still working VHS players around.

Our goal is to feed the audio/video (AV) signals coming out of an old VHS player into an EasyCAP UTV007 USB video grabber, which accepts three RCA connectors (yellow for Composite Video, white for left channel audio, red for right channel audio).

EasyCAP UTV007 USB video grabber

VHS players usually have a SCART output, which luckily carries all the needed signals.

SCART connector

Via a Multi AV SCART adapter you can output the AV signals into three separate RCA cables (male-to-male), and from there feed them into the EasyCAP video grabber. If your adapter has an input/output switch, set it to “output”.

Multi AV Adapter outputting 3 RCA connectors (yellow for Composite Video, white for left channel audio, red for right channel audio)

The EasyCAP USB converter uses a UTV007 chip, which is supported by Linux out of the box. (Who said that installing drivers on Linux is a pain?) After plugging the converter into a USB port, you should get two additional devices:

  1. A video device called “usbtv”
  2. A sound card called “USBTV007 Video Grabber [EasyCAP] Analog Stereo”

To see if you have the video device, run v4l2-ctl --list-devices. It will output something like:

usbtv (usb-0000:00:14.0-7): 
       /dev/video0

To see if you have the audio device, run

pactl list | grep -C 50 'Description: USBTV007'

It will output something like:

Source #1
        State: SUSPENDED
        Name: alsa_input.pci-0000_00_14.0-usb-0_7.analog-stereo
        Description: USBTV007 Video Grabber [EasyCAP] Analog Stereo
        Driver: module-alsa-card.c
        Sample Specification: s16le 2ch 48000Hz
        Channel Map: front-left,front-right
        Owner Module: 7
        Mute: no
        Volume: front-left: 65536 / 100% / 0.00 dB,   front-right: 65536 / 100% / 0.00 dB
                balance 0.00
        Base Volume: 65536 / 100% / 0.00 dB
        Monitor of Sink: n/a
        Latency: 0 usec, configured 0 usec
        Flags: HARDWARE DECIBEL_VOLUME LATENCY 
        Properties:
                alsa.resolution_bits = "16"
                device.api = "alsa"
                device.class = "sound"
                alsa.class = "generic"
                alsa.subclass = "generic-mix"
                alsa.name = "USBTV Audio Input"
                alsa.id = "USBTV Audio"
                alsa.subdevice = "0"
                alsa.subdevice_name = "subdevice #0"
                alsa.device = "0"
                alsa.card = "3"
                alsa.card_name = "usbtv"
                alsa.long_card_name = "USBTV Audio at bus 3 device 3"
                alsa.driver_name = "usbcore"
                device.bus_path = "pci-0000:00:14.0-usb-0:7"
                sysfs.path = "/devices/pci0000:00/0000:00:14.0/usb3/3-7/sound/card3"
                device.vendor.name = "Fushicai"
                device.product.name = "USBTV007 Video Grabber [EasyCAP]"
                device.string = "hw:3"
                device.buffering.buffer_size = "22120"
                device.buffering.fragment_size = "11060"
                device.access_mode = "mmap"
                device.profile.name = "analog-stereo"
                device.profile.description = "Analog Stereo"
                device.description = "USBTV007 Video Grabber [EasyCAP] Analog Stereo"
                module-udev-detect.discovered = "1"
                device.icon_name = "audio-card"
        Ports:
                analog-input: Analog Input (priority: 10000)
        Active Port: analog-input
        Formats:
                pcm

To quickly test if you are getting any video, use a webcam application of your choice (e.g. “cheese”) and select “usbtv” as the video source under “Preferences”. Note that this will only show video, not audio.

We will use GStreamer to grab video and audio separately, and mux them together into a container format.

Install GStreamer

To install GStreamer on Debian-based distributions (like Ubuntu), run

apt-get install gstreamer1.0-tools gstreamer1.0-alsa gstreamer1.0-plugins-base gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly

Test video with GStreamer

Now, test if you can grab the video with GStreamer. This will read the video from /dev/video0 (the device name from v4l2-ctl --list-devices above) and directly output it in a window:

gst-launch-1.0 v4l2src device=/dev/video0 ! autovideosink

Test audio with GStreamer

Now, test if you can grab the audio with GStreamer. This will read the audio from the ALSA sound card hw:3 (the card number comes from alsa.card = "3" in the pactl output above and may differ on your system) and output it to PulseAudio (it should go to your currently selected speakers/headphones):

gst-launch-1.0 alsasrc device="hw:3" ! pulsesink

Convert audio and video into a file

If both audio and video tested OK separately, we can now grab them at the same time, mux them into a container format, and write the result to the file /tmp/vhs.mkv. I’m choosing a Matroska .mkv container with H.264 video and Ogg Vorbis audio:

gst-launch-1.0 -e \
matroskamux name="muxer" ! queue ! filesink location=/tmp/vhs.mkv \
v4l2src ! queue ! x264enc ! queue ! muxer. \
alsasrc device="hw:3" ! queue ! audioconvert ! queue ! vorbisenc ! queue ! muxer.

Record some video and then press Ctrl+C. The file /tmp/vhs.mkv should now contain both audio and video.
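
To double-check that both streams made it into the file, you can inspect it, for example with ffprobe (part of ffmpeg, which we will use below anyway):

ffprobe /tmp/vhs.mkv

Among the listed streams you should see one h264 video stream and one vorbis audio stream.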

It would be nice if we could see the video as we are recording it, so that we know when it ends. The command below will do this:

gst-launch-1.0 -e \
matroskamux name="muxer" ! queue ! filesink location=/tmp/vhs.mkv async=false \
v4l2src ! tee name=mytee \
mytee. ! queue ! x264enc ! queue ! muxer. \
mytee. ! queue ! autovideosink \
alsasrc device="hw:3" ! queue ! audioconvert ! queue ! vorbisenc ! queue ! muxer.

You can also re-encode the video by running it through ffmpeg:

ffmpeg -i /tmp/vhs.mkv -vb 700k -ab 100k /tmp/seekable-vhs.mkv

You can adjust the video and audio bitrates depending on the type and length of the video so that the file does not get too large. A nice side effect is that the coarser the video encoding, the more of the fine-grained noise in the VHS recording is smoothed out.
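
As a rough size estimate: at 700 kbit/s video plus 100 kbit/s audio, one hour of footage comes to about 800 kbit/s × 3600 s ÷ 8 ≈ 360 MB.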

Voilà! You should now be able to record and archive all your old family videos for posterity!

Digitization of VHS video with GStreamer.

PhantomJS alternative: Write short PyQt scripts instead (phantom.py)

For a project of mine, I needed a ‘headless’ script that renders dynamically generated HTML (via JavaScript) to a PDF file. In looking for existing headless browsers, I found PhantomJS, CasperJS and similar projects.

PhantomJS looked the most promising, but it had bugs related to the CSS @media type ‘print’ which made it useless for generating properly formatted PDFs. [1][2] These issues have been open for three years. As of 2017, the official PhantomJS binary download is still at version 2.1, which includes an outdated downstream QtWebKit, version 5.5.1. [3]

To fix the bugs myself, or to upgrade QtWebKit, I spent two full days trying to build PhantomJS, but eventually had to give up because the build system is apparently being reworked at the moment. It seems that instead of patching the bundled QtWebKit, the development team is moving towards making “PhantomJS a normal Qt application with system-installed Qt and QtWebKit”. [4]

So, I started questioning PhantomJS. What magic does it really do? By inspecting the source code, I found out that its entire concept hinges on just two Qt functions: QWebFrame::addToJavaScriptWindowObject() and QWebFrame::evaluateJavaScript(). The former exposes Qt objects to WebKit’s JavaScript space. For example, PhantomJS’s JavaScript API method injectJs is simply a wrapper for an underlying evaluateJavaScript call inside a C++ method of a C++ object which has previously been attached via addToJavaScriptWindowObject.

In short, PhantomJS’s custom (and in my opinion rather sparse) JavaScript API simply calls Qt’s C++ functions.

The obvious disadvantage of this approach is that extending or fixing the API requires recompiling the C++ code.

It is obviously much simpler to directly script Qt, for example via PyQt.
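
To illustrate the idea, here is a minimal PyQt5 sketch (my own illustration, not code taken from PhantomJS; the class name Bridge and the window object name py are arbitrary). It needs only the python3-pyqt5.qtwebkit dependency described further below, plus xvfb-run on a display-less machine:

import sys
from PyQt5.QtCore import QObject, pyqtSlot
from PyQt5.QtWidgets import QApplication
from PyQt5.QtWebKitWidgets import QWebPage


class Bridge(QObject):
  # A Python method that JavaScript running inside WebKit may call
  @pyqtSlot(str)
  def log(self, msg):
    print("from JS:", msg)


app = QApplication(sys.argv)
page = QWebPage()
frame = page.mainFrame()
bridge = Bridge()

# Expose the Python object as window.py in the page's JavaScript space
frame.addToJavaScriptWindowObject("py", bridge)

# Evaluate arbitrary JavaScript in that space; it can call back into Python
frame.evaluateJavaScript("py.log('hello from WebKit'); document.title = 'done';")
print(frame.evaluateJavaScript("document.title"))  # prints 'done'

These two calls are all the “magic” there is; everything else in PhantomJS is plumbing around them.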

So, I started writing a short PyQt application, and after just 90 lines of Python code, I had what I needed: a headless browser using an up-to-date version of WebKit, which did not have the shortcomings of the version in PhantomJS.

The script is published on my blog and as a GitHub gist. I don’t expect to get 20,000 GitHub stars like others, but honestly, following (perhaps too simple) logic, my phantom.py script should get more. 😉

References:

[1]: https://github.com/ariya/phantomjs/issues/12685

[2]: https://github.com/ariya/phantomjs/issues/13524

[3]: https://github.com/ariya/phantomjs/tree/2.1.1/src/qt

[4]: https://github.com/ariya/phantomjs/issues/14458#issuecomment-242391757

phantom.py: A lean replacement for bulky headless browser frameworks

This is a simple but fully scriptable headless QtWebKit browser using PyQt5 in Python3, specialized in executing external JavaScript and generating PDF files. A lean replacement for other bulky headless browser frameworks. (Source code at the end of this post as well as in this GitHub gist.)

Usage

If you have a display attached:

./phantom.py <url> <pdf-file> [<javascript-file>]

If you don’t have a display attached (i.e. on a remote server):

xvfb-run ./phantom.py <url> <pdf-file> [<javascript-file>]

Arguments:

  • <url> Can be an http(s) URL or a path to a local file
  • <pdf-file> Path and name of PDF file to generate
  • [<javascript-file>] (optional) Path and name of a JavaScript file to execute

Features

  • Generate a PDF screenshot of the web page after it is completely loaded.
  • Optionally execute a local JavaScript file specified by the argument <javascript-file> after the web page is completely loaded, and before the PDF is generated.
  • console.log output will be printed to stdout.
  • Easily add new features by changing the source code of this script, without compiling C++ code. For more advanced applications, consider attaching PyQt objects/methods to WebKit’s JavaScript space by using QWebFrame::addToJavaScriptWindowObject().

If you execute an external <javascript-file>, phantom.py has no way of knowing when that script has finished doing its work. For this reason, the external script should execute console.log("__PHANTOM_PY_DONE__"); when done. This will trigger the PDF generation, after which phantom.py will exit. If the __PHANTOM_PY_DONE__ string is not seen on the console within 10 seconds, phantom.py will exit without generating a PDF. This behavior could be implemented more elegantly without console.log, but it is the simplest solution.

It is important to remember that since you’re just running WebKit, you can use everything that WebKit supports, including the usual JS client libraries, CSS, CSS @media types, etc.

Dependencies

  • Python3
  • PyQt5
  • xvfb (optional for display-less machines)

Installation of dependencies in Debian Stretch is easy:

apt-get install xvfb python3-pyqt5 python3-pyqt5.qtwebkit

Finding the equivalent for other OSes is an exercise that I leave to you.

Examples

Given the following file /tmp/test.html:

<html>
  <body>
    <p>foo <span id="id1">foo</span> <span id="id2">foo</span></p>
  </body>
  <script>
    document.getElementById('id1').innerHTML = "bar";
  </script>
</html>

… and the following file /tmp/test.js:

document.getElementById('id2').innerHTML = "baz";
console.log("__PHANTOM_PY_DONE__");

… and running this script (without an attached display) …

xvfb-run python3 phantom.py /tmp/test.html /tmp/out.pdf /tmp/test.js

… you will get a PDF file /tmp/out.pdf with the contents “foo bar baz”.

Note that the second occurrence of “foo” has been replaced by the web page’s own script, and the third occurrence of “foo” by the external JS file.

Source Code

"""
# phantom.py

Simple but fully scriptable headless QtWebKit browser using PyQt5 in Python3,
specialized in executing external JavaScript and generating PDF files. A lean
replacement for other bulky headless browser frameworks.

Copyright 2017 Michael Karl Franzl

Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

"""

import sys
from PyQt5.QtCore import QUrl
from PyQt5.QtWebKitWidgets import QWebPage
from PyQt5.QtWidgets import QApplication
from PyQt5.QtPrintSupport import QPrinter
from PyQt5.QtCore import QTimer
import traceback

  
class Render(QWebPage):
  def __init__(self, url, outfile, jsfile):
    self.app = QApplication(sys.argv)
    
    QWebPage.__init__(self)

    self.jsfile = jsfile
    self.outfile = outfile
    
    qurl = QUrl.fromUserInput(url)
    
    print("phantom.py: URL=", qurl, "OUTFILE=", outfile, "JSFILE=", jsfile)
    
    # The PDF generation only happens when the special string __PHANTOM_PY_DONE__
    # is sent to console.log(). The following JS string will be executed by
    # default, when no external JavaScript file is specified.
    self.js_contents = "setTimeout(function() { console.log('__PHANTOM_PY_DONE__') }, 500);"
    
    if jsfile:
      try:
        f = open(self.jsfile)
        self.js_contents = f.read()
        f.close()
      except:
        print(traceback.format_exc())
        self._exit(10)
        
    self.loadFinished.connect(self._loadFinished)
    self.mainFrame().load(qurl)
    self.javaScriptConsoleMessage = self._onConsoleMessage
    
    # Run for a maximum of 10 seconds
    watchdog = QTimer()
    watchdog.setSingleShot(True)
    watchdog.timeout.connect(lambda: self._exit(1))
    watchdog.start(10000)
    
    self.app.exec_()
    
    
  def _onConsoleMessage(self, txt, lineno, filename):
    print("CONSOLE", lineno, txt, filename)
    if "__PHANTOM_PY_DONE__" in txt:
      # If we get this magic string, it means that the external JS is done
      self._print()
      self._exit(0)
  
  
  def _loadFinished(self, result):
    print("phantom.py: Loading finished!")
    print("phantom.py: Evaluating JS from", self.jsfile)
    self.frame = self.mainFrame()
    self.frame.evaluateJavaScript(self.js_contents)
    

  def _print(self):
    print("phantom.py: Printing...")
    printer = QPrinter()
    printer.setPageMargins(10, 10, 10, 10, QPrinter.Millimeter)
    printer.setPaperSize(QPrinter.A4)
    printer.setCreator("phantom.py by Michael Karl Franzl")
    printer.setOutputFormat(QPrinter.PdfFormat)
    printer.setOutputFileName(self.outfile)
    self.frame.print(printer)
    
  def _exit(self, val):
    print("phantom.py: Exiting with val", val)
    self.app.exit(val) # Qt exit
    exit(val) # Python exit
    
    
def main():
  if (len(sys.argv) < 3):
    print("USAGE: ./phantom.py <url> <pdf-file> [<javascript-file>]")
  else:
    url = sys.argv[1]
    outfile = sys.argv[2]
    jsfile = sys.argv[3] if len(sys.argv) > 3 else None
    r = Render(url, outfile, jsfile)


if __name__ == "__main__":
  main()