Live Video Reflector
In 1998, I was instructed to create a mechanism for serving live video that would be available
to any web viewer, without requiring plugins or third-party applications. The end result had to be
fast, reliable, and secure. It also needed to provide fast service to the 20,000+ paying customers,
and a reduced service, for free, to several million other viewers through its parent web site.
There were several pieces that made this work.
Source
The source was a variety of hardware. The main source was an Axis Communications 240 Cam Server,
with 4 BNC ports for video input, and a network port for output.
Off-the-shelf Sony HandyCams were attached to each port.
Frames were collected via FTP from the device.
The second source was a no-brand video capture device with Sony optics mounted in a steel case, with
only network and serial interfaces on the back. Frames were collected via HTTP from this device.
Finally, some Linux workstations with BT848-chipset video capture cards were used, with Sony
HandyCams attached.
Over time, the Linux workstations were replaced by Windows 98 workstations. This reduced
the number of BT848 inputs to one, but USB cameras took their place. This decision was
made so the capture machines could also be used for normal Internet access by barely
computer-literate users. Frames were sent to the primary server via FTP.
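None of the original capture scripts survive, but the collection loop was simple in spirit: poll each
device for its latest JPEG and drop it into the directory the reflector reads from. The sketch below
shows that idea in Java; the camera URL, spool paths, and polling interval are all hypothetical, and
the real devices were polled over FTP or HTTP, or pushed frames themselves.

    import java.io.InputStream;
    import java.net.URL;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardCopyOption;

    public class FramePoller {
        public static void main(String[] args) throws Exception {
            // Hypothetical camera URL and spool path; the real device addresses
            // and directory layout are not recorded anywhere.
            URL camera = new URL("http://camera1.internal/image.jpg");
            Path tmp   = Path.of("/video/spool/cam1.jpg.tmp");
            Path spool = Path.of("/video/spool/cam1.jpg");

            while (true) {
                try (InputStream in = camera.openStream()) {
                    // Write to a temp file first, then rename, so the reflector
                    // never picks up a half-written frame.
                    Files.copy(in, tmp, StandardCopyOption.REPLACE_EXISTING);
                    Files.move(tmp, spool, StandardCopyOption.REPLACE_EXISTING);
                } catch (Exception e) {
                    // A missed poll just means a stale frame for a moment.
                }
                Thread.sleep(250); // roughly 4 fps, about what the hardware could do
            }
        }
    }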
Video Reflection
Frames were taken in by the primary server, and distributed to secondary servers via NFS mounts.
To avoid network contention issues, a second network was established only for the video frame sharing.
One set of servers provided service to the paying members, and a second set provided service to the
free viewers. This was done pragmatically, with Perl scripts. The free-viewer servers only got one
update every 15 seconds, while the members' servers were updated as frequently as a new frame was available.
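The original Perl scripts are long gone, so the following is only a rough sketch, in Java, of the
throttling logic described above. The directory names are hypothetical stand-ins for the NFS-mounted
spool and export paths.

    import java.nio.file.*;
    import java.util.HashMap;
    import java.util.Map;

    public class FrameReflector {
        // Hypothetical NFS-exported directories; the real layout is not recorded.
        static final Path SPOOL   = Path.of("/video/spool");    // incoming frames
        static final Path MEMBERS = Path.of("/video/members");  // every new frame
        static final Path FREE    = Path.of("/video/free");     // at most one per 15s

        public static void main(String[] args) throws Exception {
            Map<Path, Long> lastSeen     = new HashMap<>(); // mtime of last frame pushed
            Map<Path, Long> lastFreePush = new HashMap<>(); // last time a frame went to free

            while (true) {
                try (DirectoryStream<Path> frames = Files.newDirectoryStream(SPOOL, "*.jpg")) {
                    for (Path frame : frames) {
                        long mtime = Files.getLastModifiedTime(frame).toMillis();
                        if (mtime == lastSeen.getOrDefault(frame, -1L)) continue; // no new frame
                        lastSeen.put(frame, mtime);

                        // Members always get the newest frame.
                        Files.copy(frame, MEMBERS.resolve(frame.getFileName()),
                                   StandardCopyOption.REPLACE_EXISTING);

                        // Free viewers get at most one update every 15 seconds.
                        long now = System.currentTimeMillis();
                        if (now - lastFreePush.getOrDefault(frame, 0L) >= 15_000) {
                            Files.copy(frame, FREE.resolve(frame.getFileName()),
                                       StandardCopyOption.REPLACE_EXISTING);
                            lastFreePush.put(frame, now);
                        }
                    }
                }
                Thread.sleep(200); // poll the spool a few times per second
            }
        }
    }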
Client
The expectation was that the site would work for anyone with a web browser. When providing service
to that many people, it was well understood that no matter how well you document something, it won't
be read. This had already been proven many times simply by putting a link like "CLICK HERE TO CONTINUE"
on pages which required a click, and still getting email from users asking how to get to the next page.
While it sounds trivial, when dealing with a large user base, even a small percentage of confused users
makes for a lot of support calls and emails.
The client was initially written as a Java applet. It took several revisions to make it work fluidly on
all the platforms we encountered. Internally, we tested on everything we could: MSIE 3.x, 4.x, and 5.x,
and Netscape 4.x, on Windows 95/98/NT, Mac, Linux, and Irix. Occasional trouble reports came in and were
handled very rapidly. Over the first month of operation, the problems were reduced to "Enable Java in your
browser".
After about 2 years of operation, some overly enthusiastic users decompiled the Java applet, found the
frame source, and started distributing their own Java applets to pull our video feeds. Countermeasures were
put in place over the following days, starting with a set of hourly rotating keys for the frame source. The
enthusiastic users scripted their decompilation to find the key and automatically update their own applets.
We expanded the key set to 10 keys, of which only one was really used and 9 were decoys. This slowed the
enthusiastic users down dramatically, and gave us quite a bit of breathing room.
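The actual key scheme was never written down, and the description above is all that survives. Purely as
an illustration of the decoy idea, the sketch below derives 10 per-hour tokens from a shared secret, only
one of which (an index also derived from the secret) the frame server would accept; every name and the
derivation itself are hypothetical.

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;

    public class HourlyKeys {
        // Entirely hypothetical secret and derivation; this shows the shape of the
        // scheme (rotating keys plus decoys), not the real one.
        static final String SECRET = "not-the-real-secret";

        // Derive the 10 tokens for a given hour; 9 of them are decoys.
        static String[] tokensForHour(long hour) throws Exception {
            MessageDigest md = MessageDigest.getInstance("MD5");
            String[] tokens = new String[10];
            for (int i = 0; i < 10; i++) {
                byte[] digest = md.digest((SECRET + ":" + hour + ":" + i)
                                          .getBytes(StandardCharsets.UTF_8));
                StringBuilder hex = new StringBuilder();
                for (byte b : digest) hex.append(String.format("%02x", b));
                tokens[i] = hex.toString();
            }
            return tokens;
        }

        // Only this index is honored by the frame server; the rest are ignored.
        static int realIndexForHour(long hour) {
            return (int) (((hour ^ SECRET.hashCode()) % 10 + 10) % 10);
        }

        public static void main(String[] args) throws Exception {
            long hour = System.currentTimeMillis() / 3_600_000L;
            String[] tokens = tokensForHour(hour);
            System.out.println("Valid key this hour: " + tokens[realIndexForHour(hour)]);
        }
    }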
In the following weeks, the Java applet was abandoned and replaced with JavaScript, which allowed
better security through the browser. This remained undefeated for the lifetime of the site.
In both cases, frame swapping was used. While one frame was shown, a second frame would be downloaded
and then swapped in, to give the user the appearance of fluid motion. This is consistent with what we see
in movies and television: movie theaters generally show their video at 24 fps, NTSC (American television)
uses 29.97 fps, and PAL uses 25 fps.
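The original applet and JavaScript are lost, so the sketch below is only a minimal illustration of the
double-buffering idea, in Java: the next frame is downloaded completely before it replaces the one on
screen, so the viewer never sees a partial image. The frame URL and polling interval are assumptions.

    import java.awt.Graphics;
    import java.awt.image.BufferedImage;
    import java.net.URL;
    import javax.imageio.ImageIO;
    import javax.swing.JFrame;
    import javax.swing.JPanel;

    public class FrameSwapViewer extends JPanel {
        private volatile BufferedImage current; // the frame currently on screen

        @Override
        protected void paintComponent(Graphics g) {
            super.paintComponent(g);
            BufferedImage img = current;
            if (img != null) g.drawImage(img, 0, 0, getWidth(), getHeight(), null);
        }

        void startPolling(URL frameUrl) {
            Thread t = new Thread(() -> {
                while (true) {
                    try {
                        BufferedImage next = ImageIO.read(frameUrl); // download off-screen
                        if (next != null) {
                            current = next;  // swap only once fully decoded
                            repaint();
                        }
                        Thread.sleep(250);   // roughly match the source frame rate
                    } catch (Exception e) {
                        // Transient network errors: keep showing the last good frame.
                    }
                }
            });
            t.setDaemon(true);
            t.start();
        }

        public static void main(String[] args) throws Exception {
            // Hypothetical frame URL; the real member/free frame paths are not shown here.
            URL url = new URL("http://reflector.internal/members/cam1.jpg");
            FrameSwapViewer viewer = new FrameSwapViewer();
            JFrame frame = new JFrame("Live view");
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.add(viewer);
            frame.setSize(320, 240);
            frame.setVisible(true);
            viewer.startPolling(url);
        }
    }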
Our effective frame rate varied based on several factors. On the source side, it was limited by the
following (a rough worked example appears after the list):
- The available bandwidth from the broadcast site (T1 - 1.544Mb/s)
- The capability of the capture device (4 to 8 fps)
- The size of the captured frame (320x240, varying filesize based on compression and content)
- The number of cameras (up to 40)
- The utilization of the line for other uses; frequently, people at the source side would watch the feed
  themselves and do other tasks, such as web browsing or chat.
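As a rough worked example, with the frame size an assumption since the exact figures were not recorded:
a 320x240 JPEG at moderate compression runs on the order of 10 KB, or roughly 80 kilobits. A T1 at
1.544 Mb/s can therefore push about 1,544 / 80, or roughly 19 frames per second in total. Spread across
40 cameras, that is under half a frame per second per camera before counting any other traffic on the
line, and well below the 4 to 8 fps the capture hardware could produce.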
We optimized our reflector farm to handle anything that was thrown at it. If any part of it passed 80%
utilization, extra reflectors were added. They were configured in such a way that they could easily be
added or removed, and idle spares were kept on hand.
The client side was an extremely limiting factor. International viewers frequently had very slow frame
rates due to line quality, and the other major problem was the connection quality of the end user. At the
time, quite a few users could only get dial-up service and had no access to the new DSL or cable modem
services. While you might think this would hurt business, users were impressed that they could get real
live video at all, even if it was one frame every few seconds.
Unfortunately, there are no surviving pictures of any of the hardware or software involved in this project. It is long since obsolete.
There was another live video project where I led the technology portion, but management embezzled all the funds, and everything fell apart just before the site was to go live.