VFX Blog
VFX Fundamentals
We were tasked with producing a VFX action sequence for the
robot invasion project. To achieve this, we needed to know how to control the
camera with regard to the brightness and sharpness of the image, as these are
fundamental requirements for any production. For example, it is no good having
a very blurry image that is nearly black; we need footage that is adequately
sharp and bright enough to read.
The factors influencing how bright or dark a picture turns out are the lens
aperture (how wide the hole in the lens is, much like the pupil of an eye),
the shutter speed, and the sensitivity of the film or camera sensor to light.
Many years ago, before digital cameras, film was used. It was coated with
silver halide crystals, and the size of those grains gave the film different
properties. Smaller crystals reacted more slowly to light but, like having many
pixels on a screen, gave better resolution. Larger crystals reacted faster to
light but, just as a monitor with only ten pixels does not give a very good
picture, produced a coarser image. Thus there is a trade-off between quality
and light sensitivity, and it is quite an art to select the right film
sensitivity for different lighting conditions. Similar rules apply to today's
digital sensors, which carry an ISO rating comparable to the film speeds used
in the past.
As we were working outside, we had to rely on available sunlight, and clouds,
shadows and so on would restrict the amount of light reaching the camera, so we
had to adjust the ISO setting of the camera accordingly. Once an ISO setting is
chosen, it has to be paired with a shutter speed, or a shutter angle in the case
of a cinema camera. With a low ISO setting of, say, 100, the shutter would have
to stay open longer to gather an adequate amount of light.
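As a rough illustration of how these settings trade off, the sketch below computes an exposure value normalised to ISO 100 using the standard formula EV100 = log2(N^2 / t) - log2(ISO / 100). The aperture, shutter and ISO numbers are example values only, not our actual camera settings.

```python
import math

def exposure_value(f_number: float, shutter_s: float, iso: float) -> float:
    """Exposure value normalised to ISO 100 (EV100).

    Two combinations of settings with the same EV100 record the same
    overall brightness; a higher EV100 means the settings admit less light.
    """
    return math.log2(f_number ** 2 / shutter_s) - math.log2(iso / 100)

# Example settings only (placeholder values, not from our shoot).
print(exposure_value(f_number=4.0, shutter_s=1 / 50, iso=100))   # ~9.6
print(exposure_value(f_number=2.8, shutter_s=1 / 50, iso=200))   # ~7.6, two stops more light
```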
Another factor already mentioned is the aperture: we could simply widen it to
let more light reach the sensor, but this decreases the depth of field, meaning
that not all objects in the frame will appear sharp. As you can see, achieving
correct exposure is a balancing act between ISO setting, lens aperture and
shutter speed.
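To show how the aperture affects sharpness, here is a small sketch of the usual thin-lens depth-of-field calculation. The focal length, f-numbers and circle of confusion below are illustrative values, not the lens we actually used.

```python
import math

def depth_of_field(focal_mm, f_number, subject_m, coc_mm=0.03):
    """Approximate near/far limits of acceptable sharpness (thin-lens model).

    coc_mm is the circle of confusion; 0.03 mm is a common full-frame value.
    Returns (near, far) in metres; far is infinity past the hyperfocal distance.
    """
    f = focal_mm / 1000.0          # focal length in metres
    c = coc_mm / 1000.0            # circle of confusion in metres
    hyperfocal = f * f / (f_number * c) + f
    near = subject_m * (hyperfocal - f) / (hyperfocal + subject_m - 2 * f)
    if subject_m >= hyperfocal:
        return near, math.inf
    far = subject_m * (hyperfocal - f) / (hyperfocal - subject_m)
    return near, far

# Same subject distance, two apertures: the wide aperture (f/2) gives a
# much shallower zone of sharpness than the stopped-down one (f/11).
print(depth_of_field(50, 2.0, 3.0))    # roughly (2.8, 3.2) m
print(depth_of_field(50, 11.0, 3.0))   # roughly (2.2, 4.9) m
```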
Another factor to take into consideration is colour balance. Light has a colour
temperature and gives slightly different tints at different temperatures. If
you take an old camera loaded with ordinary daylight-balanced film and shoot a
photo indoors under an ordinary tungsten light, the whole picture will appear
with an orange tint, because tungsten light has a different colour temperature
to that of the sun. Sunlight is said to have a colour temperature of around
5400 K, whereas tungsten light is more like 2900 K. Nowadays colour balance is
more or less automatic with digital cameras, but in the past it had to be set
by the camera operator.
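As a simple illustration of what white balancing does, the sketch below neutralises a colour cast by scaling the red and blue channels so that a patch known to be neutral grey comes out grey. The pixel values are made-up examples, not data from our footage.

```python
import numpy as np

def white_balance(image: np.ndarray, neutral_rgb) -> np.ndarray:
    """Scale R and B so a patch that should be neutral grey becomes grey.

    image is an H x W x 3 float array; neutral_rgb is the average RGB of a
    region known to be colourless (e.g. a grey card) under the shoot lighting.
    """
    r, g, b = neutral_rgb
    gains = np.array([g / r, 1.0, g / b])   # leave green alone, balance R and B to it
    return np.clip(image * gains, 0.0, 1.0)

# A warm, tungsten-looking frame: red lifted, blue suppressed (example values).
frame = np.full((2, 2, 3), [0.8, 0.6, 0.4])
balanced = white_balance(frame, neutral_rgb=(0.8, 0.6, 0.4))
print(balanced[0, 0])   # -> [0.6 0.6 0.6], the orange cast has been removed
```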
Lenses are a bit different to the human eye, and some form of adjustment is
required to see things that are nearer or further away; this is done by
changing the focal length of the lens. In most cases this involves moving the
lens elements to different positions, either by electric motor or simply by
turning a ring on the lens by hand, to change the focal length and zoom the
lens from one setting to another.
There are several different categories of lens. Standard lenses are designed to
show more or less what a normal person's eye would see; a telephoto lens brings
the subject closer and makes it look bigger, and a super telephoto does this
even more. Using a telephoto lens means a narrower angle of view: a 400 mm
lens, for example, covers only about a 6 degree angle. Conversely, wide-angle
lenses give a wider angle of view than a standard lens; where a standard lens
covers about 50 degrees, wide angles go to 90 or even 120 degrees, which suits
landscape photography better.
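The angle of view follows directly from the focal length and the sensor size. The sketch below uses the standard formula 2 * atan(d / 2f), assuming a full-frame sensor width of 36 mm purely for illustration.

```python
import math

def angle_of_view(focal_mm: float, sensor_dim_mm: float) -> float:
    """Angle of view in degrees along one sensor dimension."""
    return math.degrees(2 * math.atan(sensor_dim_mm / (2 * focal_mm)))

FULL_FRAME_WIDTH = 36.0   # mm, horizontal; an assumption for this example

for focal in (24, 50, 400):
    print(focal, "mm ->", round(angle_of_view(focal, FULL_FRAME_WIDTH), 1), "degrees")
# 24 mm -> ~73.7, 50 mm -> ~39.6, 400 mm -> ~5.2 degrees (horizontal)
```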
A 360 degree camera records an HDRI (high dynamic range image) so that the
lighting conditions on location can be replicated with a skydome light in Maya
for image-based lighting.
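For reference, a minimal maya.cmds sketch of wiring an HDRI into a skydome light might look like the following. It assumes Arnold's aiSkyDomeLight node is available, and the file path `environment.hdr` is just a placeholder.

```python
import maya.cmds as cmds

# Create an Arnold skydome light (assumes the mtoa plug-in is loaded).
dome_shape = cmds.shadingNode('aiSkyDomeLight', asLight=True)

# Create a file texture node and point it at the captured HDRI.
hdr_file = cmds.shadingNode('file', asTexture=True)
cmds.setAttr(hdr_file + '.fileTextureName', 'environment.hdr', type='string')  # placeholder path

# Drive the dome colour with the HDRI so the CG lighting matches the plate.
cmds.connectAttr(hdr_file + '.outColor', dome_shape + '.color', force=True)
```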
Preproduction and planning
In order to achieve success with any project, it is highly recommended to plan
each stage to avoid problems. For example, without a risk assessment and a
subsequent warning to personnel, an accident could leave the company open to
litigation. In addition, planning with a Gantt chart lets most people see at a
glance where the project should be on the timeline, which helps deadlines to be
met. We therefore started with a Gantt chart for planning purposes, followed by
a risk assessment to minimise the risk of accidents and litigation, and a shot
list for filming to ensure the plan could be followed.
[Image: Gantt chart]
[Image: Risk assessment]
[Image: Risk assessment]
[Image: Shot list]
Production
Having planned the shoot, we were able to go out, follow the shot list, adhere
to the risk assessment and try to stay within the Gantt chart schedule. This
involved taking all the equipment and the shot list to the location, shooting
the scenes first with the cinema camera and then using a 360 degree Fly camera
to capture the environment for image-based lighting.
Post Production
Post production was dealt with in several stages, the primary stage being the
robot itself. One of the cutest robots around is BB8 from Star Wars, and
although we were given this to model, there would have been an uproar if it had
not been chosen for us. My BB8 lookalike was modelled in Maya and textured in
Photoshop before some simple rigging, which involved a NURBS circle to which
the body and head were parented. Following this, a small amount of simple
animation was done in Maya. Although the animation for the robot was simple, it
had to be matched to the scene we had shot, so I created a simpler depiction of
the scene using cubes and a plane to give some idea of what the final scene
would look like, as sketched below. This enabled the robot to be sized and
placed correctly for the final effects.
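A proxy scene like this can be blocked out in a few lines of maya.cmds. The dimensions and positions below are placeholder values, not measurements from the actual location.

```python
import maya.cmds as cmds

# Ground plane standing in for the concrete area in front of the buildings.
ground = cmds.polyPlane(width=20, height=20, name='proxy_ground')[0]

# A cube standing in for the building the robot appears from behind.
building = cmds.polyCube(width=4, height=6, depth=4, name='proxy_building')[0]
cmds.move(5, 3, -8, building)          # placeholder position

# A small sphere as a stand-in for the robot, for scale and placement checks.
robot_proxy = cmds.polySphere(radius=0.5, name='proxy_robot')[0]
cmds.move(-2, 0.5, -6, robot_proxy)    # placeholder position
```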
The next stage of post production was editing the camera footage in Adobe
Premiere. The existing footage was a little dark and gloomy because of the
weather, so we used Lumetri Color to bring some life and colour back into the
scene. These shots were then exported as PNG sequences to use as VFX plates in
Nuke.
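Bringing a PNG sequence in as a plate can also be scripted through Nuke's Python API; a minimal sketch is below, where the file pattern and frame range are placeholders rather than our actual plate names.

```python
import nuke

# Read the graded plate exported from Premiere as a numbered PNG sequence.
# The '####' pattern stands for the frame number padding; the path is a placeholder.
plate = nuke.nodes.Read(file='plates/robot_scene.####.png', first=1, last=240)

# PNGs exported from an edit are usually in a display (sRGB) space rather than linear.
plate['colorspace'].setValue('sRGB')
```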
Moving on from the robot and the general background, we then turned to the
phone artwork, which was done in Adobe After Effects and Photoshop. This
involved measuring the phone screen, creating a matching piece of artwork with
some text in Photoshop, and then animating it in After Effects to match the
live scene. The artwork was then placed into the scene in Nuke, and the thumb
was rotoscoped so that continuity was maintained.
Finally, the robot was brought into Nuke, where rotoscoping was used to place
it correctly in the frame (appearing from behind a building).
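A typical node setup for this kind of shot, sketched in Nuke Python below, holds the robot render out behind a rotoscoped building matte and then merges it over the plate. The node names and file paths here are assumptions for illustration only.

```python
import nuke

plate = nuke.nodes.Read(file='plates/robot_scene.####.png', first=1, last=240)
robot = nuke.nodes.Read(file='renders/robot.####.exr', first=1, last=240)

# Roto node holding the hand-drawn matte of the building edge.
building_matte = nuke.nodes.Roto()

# Stencil: remove the robot wherever the building matte has alpha,
# so the robot appears to come out from behind the building.
holdout = nuke.nodes.Merge2(operation='stencil')
holdout.setInput(0, robot)           # B input: the robot render
holdout.setInput(1, building_matte)  # A input: the building matte

# Composite the held-out robot over the live-action plate.
comp = nuke.nodes.Merge2(operation='over')
comp.setInput(0, plate)    # B input: the background plate
comp.setInput(1, holdout)  # A input: the robot
```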
Upon reflection, the weather was barely adequate for the lighting needed to film well, which necessitated extra post production work to brighten the scene, although this was a useful way to learn the techniques involved. The location was good in parts: having the robot appear from behind the buildings was ideal, but the large expanse of concrete at the front did detract slightly from the filming, and future projects should take this into account.
The rigging and animation were good practice; as they were not very taxing, most of the effort could go into the other requirements.
I found the project interesting because of the different technologies involved, and I am pleased with the finished product, which came together without any major problems.