
OPTICAL BENTHIC IMAGERY SURVEY IN A LACUSTRINE BASIN USING AN AUTONOMOUS UNDERWATER VEHICLE

by

WESTON JOHN PIKE
B.Sc., Thompson Rivers University, 2005

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in THE FACULTY OF GRADUATE STUDIES (Civil Engineering)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)
August 2011

© WESTON JOHN PIKE, 2011

ABSTRACT
Photographs are needed to map and characterize fine-scale benthic features and underwater habitats. Acoustic imaging methods lack sufficient resolution and colour, and cannot define many low-reflectance features. Other optical methods, such as LiDAR, also lack the spectral information important in the identification of biological features. Historically, photographs of benthic surfaces have been collected over small areas or along single-line transects. Here, techniques are developed and optimized to perform an extensive optical benthic survey remotely with an Autonomous Underwater Vehicle (AUV) over the area of a lacustrine basin. The technique was applied to surveys at Pavilion and Kelly Lakes, B.C., and Lake Tahoe, CA, USA. The major challenges of the photographic surveys included overcoming AUV performance and stability issues associated with steep bathymetry, through-water light attenuation, limited light availability, and camera system limitations. Photographic imaging with a small AUV and CCD camera was optimized for the lacustrine environment through manipulation of the non-optimal hardware. Benthic features were identified and mapped in Pavilion Lake, revealing profundal zonation patterns of previously unexplored epipelic flora.
PREFACE
A small portion of the mapping efforts and exploration of Pavilion Lake described in this thesis was presented in 'The Pavilion Lake Research Project: a deep dive towards the Moon and Mars', accepted for a Geological Society of America (GSA) special issue. The article is authored by Darlene Lim, Allyson Brady, and the PLRP team, of which Weston Pike is a member. My contribution to the published work is a brief summary of my AUV survey activities and exploration findings, drawn from Chapter 3 of this document. It amounts to one paragraph (roughly 1% of the manuscript); the contributions of the remaining 54-member PLRP team complete the manuscript.

TABLE OF CONTENTS
Abstract .................................................. ii
Preface ................................................... iii
Table of contents ......................................... iv
List of tables ............................................ vi
List of figures ........................................... vii
Acknowledgements .......................................... ix
1. Introduction ........................................... 1
  1.1. Underwater imaging ................................. 2
  1.2. Platforms for collecting benthic images ............ 6
    1.2.1. Towed platforms ................................ 7
    1.2.2. Remotely operated vehicles ..................... 13
    1.2.3. Manned submersibles ............................ 16
    1.2.4. Autonomous underwater vehicles ................. 21
  1.3. Collecting in-water optical imagery ................ 25
  1.4. Light in water ..................................... 26
  1.5. Camera systems ..................................... 31
    1.5.1. Lens ........................................... 33
    1.5.2. Camera sensor .................................. 35
2. Methods ................................................ 38
  2.1. Imaging platform UBC-Gavia ......................... 39
    2.1.1. UBC-Gavia modules .............................. 40
  2.2. Camera system ...................................... 42
    2.2.1. Lens parameters ................................ 43
    2.2.2. Camera sensor parameters ....................... 47
  2.3. Case Study 1: Pavilion Lake ........................ 48
    2.3.1. Image quality optimization ..................... 50
      Image analysis and rectification ................... 58
    2.3.2. Mission design ................................. 67
  2.4. Case Study 2: Kelly Lake ........................... 73
  2.5. Case Study 3: Lake Tahoe ........................... 78
3. Results ................................................ 81
  3.1. Benthic photographic mapping: Pavilion Lake ........ 81
4. Discussion ............................................. 86
5. Conclusions ............................................ 91
References ................................................ 93
Appendix: UBC-Gavia operating procedures .................. 103
  A.1 Assembly/disassembly of camera system ............... 104
  A.2 Additional calculations ............................. 106
  A.3 Lens adjustments .................................... 107
  A.4 Image scale calibration ............................. 110
  A.5 Mission specific lens calculations .................. 111
  A.6 Camera configuration ................................ 114
  A.7 UBC-Gavia recommended settings ...................... 118
  A.8 Additional image examples ........................... 120

LIST OF TABLES
Table 1 Performance capabilities and operational parameters of towed platforms ......... 9
Table 2 Performance capabilities and operational parameters of ROVs .................... 15
Table 3 Performance capabilities and operational parameters of manned submersibles .... 19
Table 4 Performance capabilities and operational parameters of AUVs .................... 24
Table 5 UBC-Gavia camera system properties ............................................. 45
Table 6 Mission summary ................................................................ 73

LIST OF FIGURES
Figure 1 Imaging platforms used in benthic surveys ..................................... 7
Figure 2 Extinction coefficients for pure water ........................................ 28
Figure 3 Transmittance of light through pure water ..................................... 28
Figure 4 A digital underwater camera system ............................................ 32
Figure 5 UBC-Gavia ..................................................................... 40
Figure 6 UBC-Gavia camera view port .................................................... 45
Figure 7 Pavilion Lake ................................................................. 50
Figure 8 Optimized images from UBC-Gavia ............................................... 53
Figure 9 Altitude-dependent imaging scale for UBC-Gavia ................................ 56
Figure 10 Mission parameters' effect on image quality .................................. 57
Figure 11 Histograms of benthic images ................................................. 60
Figure 12 Dark image ................................................................... 62
Figure 13 Oversaturated image .......................................................... 63
Figure 14 Unfocused image .............................................................. 65
Figure 15 Optimized image .............................................................. 66
Figure 16 UBC-Gavia mission tracks in Pavilion Lake .................................... 68
Figure 17 Unimodal bottom following from open-water descent ............................ 70
Figure 18 Bimodal bottom following down-slope .......................................... 70
Figure 19 Bimodal up-slope bottom tracking ............................................. 71
Figure 20 Kelly Lake ................................................................... 75
Figure 21 Images from Kelly Lake ....................................................... 77
Figure 22 Kelly Lake bathymetry ........................................................ 78
Figure 23 Images from Lake Tahoe ....................................................... 80
Figure 24 Benthic non-network type classifications ..................................... 82
Figure 25 Benthic network type classifications ......................................... 83
Figure 26 Missions classified according to benthic type ................................ 84
Figure 27 Pavilion Lake Central Basin bathymetry ....................................... 85
Figure 28 Image mosaic of a Deep Mound ................................................. 89
Figure 29 A 2009 mission image from the DeepWorker submersible ......................... 92
Figure 30 Nose cone lens removal ....................................................... 105
Figure 31 The lens removed from the CCD camera ......................................... 107

ACKNOWLEDGEMENTS
This work was accomplished with the much appreciated support of those around me. Dr. Bernard Laval offered an endless supply of advice and insight, which was essential for me to keep moving upward in ideas and quality of work. Our discussions were enlightening and will always be of benefit to me. Alex Forrest was always willing to answer my questions concerning Gavia and things aquatic; I am very thankful for his assistance. I also gained some great memories working with him through long days and nights on the water, where I experienced what can be accomplished through perseverance, all while enjoying some amazing field sites. I have also greatly appreciated the enthusiastic support of Dr. Darlene Lim, who believed in me and offered me unforgettable experiences at PLRP and NASA. Thanks to Andrew Hamilton, who always had good-humoured and sound advice on pretty much anything; from this I benefited doubly. Thanks to my parents and family for their genuine enthusiasm and interest in my work all along the way, and to Annika for encouragement, listening, and putting up with my abstracts.

1. INTRODUCTION
Benthic regions are the least known and mapped areas of Earth's surface. A vast 70% of the Earth's surface is covered by water and therefore comprises some form of benthic habitat. These regions have not been investigated to nearly the same extent as Earth's (or, for that matter, the Moon's or Mars's) terrestrial environments; essentially the largest area of our planet's surface remains unknown to human eyes. Opening this watery world to visual inspection through the use of cameras provides a high-resolution means of exploring and surveying this expansive region of our planet. Unfortunately, the effective use of cameras underwater is a difficult task; the main challenges are associated with imaging in a harsh, remote environment, through a medium much denser than air. It is due to these challenges that we know more about the visual nature of celestial bodies millions of miles from us than of the water at our feet. The fact that the first underwater camera was used over a century ago illustrates that, despite the availability of the basic technology, the associated challenges remain a significant barrier to fully realizing broad-scale underwater imaging. Modern technological advances in underwater vehicles and automation have made underwater imaging more approachable and efficient. Despite this, extensive underwater imaging has been performed on only a negligible fraction of ocean and freshwater benthic surfaces, most of it consisting of simple single-line transects. This thesis describes the approaches and procedures used to develop a photographic mapping protocol for autonomous submersible robots that covers an area using multiple overlapping transects. In this manner, not only are benthic features along transects described, but an entire region can be examined at photographic resolution.
The outcome of the optimized procedures is demonstrated by a basin-wide map of benthic features identified through photographs in Pavilion Lake.

1.1. Underwater imaging
Images are commonly acquired in unknown environments when initial inspection is the goal, facilitating the classification and characterization of imaged features. Analyzing the optical nature of the world is one of mankind's most familiar tools; it may be for this reason that optical instruments are often the priority payload on unmanned exploration missions, including the first missions to land on the Moon, Venus, Mars, and Titan. Images put the natural world into a context that we understand by our fundamental nature, allowing us to recognize patterns and features upon which to base further investigation. It was perhaps this drive to describe and portray a new environment to his fellow man that led Louis Boutan to take the first successful underwater images, in 1893 in the French Riviera (Vine 1975; Mertens 1970). These first photographs were remarkable at the time in that he was able to successfully submerge a camera and a light source, collecting images even at night. A number of underwater cameras were developed over the subsequent 40 years, producing various results, but all were confined to depths still accessible to divers. Deep-sea imaging did not occur until the late 1930s, when Ewing, Vine, and Worzel (1946) began using images for the scientific description of biological specimens, geological characteristics, and oceanographic measurements in the deep ocean. These pioneers recognized the value of deep-water photographs, which would greatly increase our visual range in the ocean, opening to science for the first time glimpses of this vast region of our planet. Their very first images settled a long-standing debate over whether ocean currents existed at depth by clearly capturing ripple marks in sediment.
So began a long process of technological development, aided greatly by the eventual development of the strobe by Harold E. Edgerton, as well as camera digitization, underwater vehicle development, and design enhancements allowing greater depths to be reached. Yet even today, efforts to image the ocean's deepest benthic surfaces remain daunting. In May 2009, Nereus, a hybrid remotely operated/autonomous vehicle developed by the Woods Hole Oceanographic Institution, became only the third vehicle ever to reach the bottom of the Mariana Trench, collecting photographs at a depth of 10,902 meters. With the entire world's benthic regions within reach of current imaging technology, we are now left with the immense task of documenting the physical nature of these regions and the habitats and fauna that exist there. The motivation behind the exploration and documentation of Earth's benthic regions is multifaceted. Today, the application of deep-water imaging is of interest to researchers in habitat analysis, species distributions, archeological explorations, geophysical explorations (iceberg scour, hydrothermal vents, benthic morphology), and hydrodynamic and geomorphologic mapping. Essentially, optical imagery is desirable in those studies where high-resolution inspection or classification of bottom features is required. Although optical images provide the highest resolution (in colour), they are not the only type of imagery available for describing benthic surfaces. Acoustic imaging is also an important and more widely used method for benthic studies. Most studies take advantage of the two methods together, yet must depend on optical imagery to describe fine-scale features. A field in which high-resolution images are of paramount interest is marine archeology. In this discipline considerable effort is made to develop detailed surveys of discovered artifacts in situ (Singh et al. 2000; Ballard et al. 2000; Ward and Ballard 2004; Green 2004; Delaporta et al.
2006; Newman et al. 2008; Ballard 2008). Whereas acoustic imaging methods may often be used in search efforts, the resolution available in optical images is necessary for the detailed study and documentation of target sites. The wrecks of the Titanic and the Bismarck (Ballard 2008) are two high-profile examples of archaeological explorations that depended almost solely on the optical imaging capabilities of submersible vehicles for the detailed documentation of the wreck site. Another example is the wreck of the S.S. Edmund Fitzgerald in Lake Superior, which was examined through images taken with robotic submersible vehicles. Subsequent dives on the wreck by manned submersibles, and the observations made by their passengers, revealed no additional details beyond those already documented in the unmanned submersibles' images (Braulik 2007). Optical images provide sufficient detail of abyssal benthic regions even for the most detail-demanding fields. Optical sensors have also been employed in geophysical research; one of the first applications of underwater imaging was to investigate sedimentation processes in the deep ocean (Ewing et al. 1946). The results from this early study were immediately impactful when deep-sea images revealed sand ripples on the ocean bottom, effectively putting an end to the contention that ocean currents did not exist at depth. Another geophysical application of underwater imaging is in the discovery and mapping of hydrothermal vents and ocean seeps (White et al. 1998; Coleman and Ballard 2001; German et al. 2007; Melchert et al. 2008). Tectonic activity and associated features such as rifts and rupture zones have also been examined through the use of optical sensors (Humphris et al. 2002; Felley et al. 2008; Mosher et al. 2008). Yet another class of submerged features that benefits from optical imaging is deep-water reefs and coral mounds, which have been mapped and identified through the use of this technology (Fosså et al.
2005; Huvenne et al. 2005; Reed et al. 2005; Grasmueck et al. 2006). The available resolution of underwater optical images is apparent when the large body of work in species identification, along with the characterization and mapping of habitats, is considered. Studies using camera-based species identification for population and/or distribution purposes are numerous (Johnson et al. 2003; Lauth et al. 2004a; Lauth et al. 2004b; Trenkel et al. 2004; Reed et al. 2005; Stein et al. 2005; Fonseca et al. 2008; Rossi et al. 2008). Similar to the work in species identification, habitat and benthic structure surveys require high resolution for the identification and classification of fine-scale benthic/submerged features (Auster et al. 1997; Harrold et al. 1998; Cailliet et al. 1999; Kostylev et al. 2001; Parry et al. 2002; Parry et al. 2003; Singh et al. 2004; Rosenkranz and Byersdorfer 2004; Spencer et al. 2005; Ambrose et al. 2005; Schleyer et al. 2006; Lirman et al. 2007; Jones et al. 2007; Wilson et al. 2009). Acoustic imagery, rather than optical imagery, is more commonly applied to underwater benthic mapping due to its ability to collect large swaths of data from elevations of 10s to 1000s of meters above the benthic surface. This method is suitable for mapping large areas and is considerably less demanding on platform performance relative to the requirements of mapping with optical imagery, which requires both precision bottom following and platform stability within 10 meters of the bottom. Unfortunately, current acoustic technology lacks sufficient resolution to describe all the features significant in benthic habitats (Diaz et al. 2004; Andrews 2003). For this reason, acoustic surveys are often accompanied by optical imagery to ground truth the data sets and describe more subtle features (Kostylev et al. 2001; Dupre et al. 2008; Grasmueck et al. 2006; Sanchez et al. 2009).
These ground-truthing images are commonly gathered with a drop camera or ROV. From these images, generalizations are often applied to the entire sonar data set. Studies utilizing such methods are able to map large areas, but must extrapolate from the small amount of optical imagery collected if they wish to identify habitat zones throughout the acoustic survey area.

1.2. Platforms for collecting benthic images
To satisfy the requirements of each individual underwater imaging application, there are several technologies that researchers may utilize in their data collection methods. These methods must consider what type of imagery is to be collected, whether acoustic, optical, or both, and a means by which to deploy the respective instrumentation. This necessitates the use of an underwater imaging platform. Platforms by which these instruments are deployed include surface vessels, underwater towed platforms, remotely operated vehicles (ROVs), manned submersibles, and autonomous underwater vehicles (AUVs). The technological development of imaging platforms has produced a number of different configurations, which are discussed below. To capture photographs of benthic habitats, a number of challenges must be overcome in the design of imaging platforms. These include challenges associated with the optical imaging system and those associated with the platform itself. The optical imaging system must perform in an environment where light is attenuated and scattered, requiring close proximity to target surfaces to maintain image quality. The imaging platform itself must be robust and pressure resistant, support photographic equipment and light sources, capture and store/transfer images linked to accurate positional data, and maintain consistent set altitudes. A number of underwater platforms have been developed to meet these challenges and are reviewed in the following sections.
These include: towed platforms, remotely operated vehicles (ROVs), manned submersibles, and autonomous underwater vehicles (AUVs) (see Figure 1). Inherent in the physical design of each platform are strengths and limitations in their respective abilities to successfully mediate the challenges introduced by underwater photographic surveys. These tradeoffs, along with platform-specific operational parameters, influence their individual effectiveness at collecting benthic photographs. These details are presented below for each type of imaging platform.

FIGURE 1 IMAGING PLATFORMS USED IN BENTHIC SURVEYS
This figure shows four common survey tools, each of which can be used to gather underwater photographic data. A major difference amongst underwater platforms is the use, or lack, of a tether. Towed platforms and ROVs are tethered and thus connected to a surface vessel; manned submersibles and AUVs are not (although some hybrids exist). The different design of each of these platforms affects their overall efficiency as near-bottom photographic surveying tools.

1.2.1. Towed platforms
Towed camera systems may be deployed in many configurations, though their basic construction remains essentially similar. The towed camera system is composed of a frame with a mounted camera and light source, attached via cable, with or without data transfer capability, to a support vessel (Figure 1). There are two types of towed camera platforms: off-bottom and on-bottom. Off-bottom platforms are towed above the bottom and depend upon the support vessel for altitude control. On-bottom platforms are equipped with skids, which remain in contact with the benthic surface as they slide along the sea floor. To present the wide range of towed platforms in use today, Table 1 lists characteristics of these vessels as used in research applications. The towed camera system is the most widely used of the imaging platforms.
This widespread usage can primarily be attributed to their relative simplicity, along with low cost of operation and construction, making these platforms an attractive option for researchers. Disadvantages, on the other hand, include difficulty in accurately controlling trajectories, positional errors, and unstable vehicle dynamics. Due to these limitations, towed systems are not suitable for close inspection (Newman et al. 2008) and are unable to effectively adapt to sharply variable bathymetric contours. One of the main components affecting much of a towed platform's performance is its tethered connection. The towed system is tethered, and thus its operation depends on its surface connection with a ship. This has both disadvantages and advantages. Dependence upon surface vessels subjects towed platforms to surface water conditions and renders them unusable in ice-covered seas or lakes where surface vessels cannot navigate. Conversely, towed platforms connected to research vessels with cables capable of data and power transfer gain certain advantages: these systems gain power and data storage capacity limited only by the shipboard facilities. This increases mission duration compared with systems that depend upon self-contained data storage and power supplies. For example, a towed system connected via power and data transfer cables to the surface vessel performed transects of over 500 km (Rosenkranz et al. 2008; see also Table 1).

TABLE 1 PERFORMANCE CAPABILITIES AND OPERATIONAL PARAMETERS OF TOWED PLATFORMS
System parameters mark the capabilities of the platform, whereas operational parameters are the specifics under which the respective authors utilized the platform. (-) denotes unavailable data.

| Author | Type | Weight in air (kg) | Camera angle | Tow speed (m/s) | Camera altitude (m) | Depth (m) | Transect length (km) |
|---|---|---|---|---|---|---|---|
| Piepenburg and Schmid 1997 | Off-bottom | - | 0° | - | 1.5 | 14-45 | 0.03-0.20 |
| Jones et al. 2007 | Off-bottom | - | 0° | 0.25 | 3.0-6.0 | 1006-1660 | - |
| Grizzle et al. 2008 | Off-bottom | - | - | 0.77 | - | - | - |
| Fonseca et al. 2008 | Off-bottom | - | 45° | 1.54 | 1.0 | 420 | - |
| Sumida et al. 2008 | Off-bottom | - | 0° | 0.26-0.51 | 2.0 | 526-641 | - |
| Sanchez et al. 2009 | Off-bottom | 350 | 0° | 0.26-0.77 | 2.0-6.0 | 140-570 | 0.72-4.82 |
| Cranmer et al. 2003 | On-bottom | - | 22.5° | 0.77 | 0.82 | 150-160 | 1.50-2.50 |
| Rosenkranz and Byersdorfer 2004 | On-bottom | - | 3° | 0.78 | 1.15 | - | 0.67 |
| Lauth et al. 2004 | On-bottom | 500 | 22° | 0.75-1 | 1.0 | 450-1150 | - |
| Spencer et al. 2005 | On-bottom | 19 | 35° | 0.6 | 0.42 | 3.0-13 | 0.03-0.20 |
| Rooper et al. 2007 | On-bottom | - | 35° | 0.5-0.75 | <2.0 | 110-225 | - |
| Rosenkranz et al. 2008 | On-bottom | 500 | - | 1.39-2.36 | 1.15 | 40-140 | 550 |

With the aid of a tether, towed platforms are able to collect extensive imagery. The quality of this imagery, however, depends upon a number of other parameters, chiefly those associated with the configuration of the camera system and with platform stability. The camera system of a towed platform consists of a video camera, a still camera, or both. These cameras are often high resolution and are mounted in a range of orientations. Typically, on-bottom platforms have a camera altitude of 0.5-2 m (Table 1). Off-bottom cameras have more flexibility, as altitude is controlled by the ship's cable length rather than the structure of the platform. Camera angles vary from 0° to 45° from the vertical axis, facing forward over the bottom (Table 1). Camera orientations other than 0° (perpendicular to the benthic surface) associate each image with a certain distortion factor; if the images are to be used to physically measure benthic features, they must all be rectified to represent the true, undistorted benthic surface. Beyond distortion caused by camera orientation, there are physical aspects of the platform that may also affect image quality.
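The altitude and camera-angle effects described above can be approximated with a simple pinhole-camera sketch: the imaged footprint grows linearly with altitude, and tilting the camera from vertical stretches the along-track dimension. The focal length and sensor dimensions below are hypothetical illustrations, not the specifications of any platform in Table 1.

```python
import math

def ground_footprint(altitude_m, focal_length_mm, sensor_w_mm, sensor_h_mm):
    """Benthic area covered by one nadir (0 degree) image, from the
    pinhole relation: footprint = altitude * sensor dimension / focal length."""
    return (altitude_m * sensor_w_mm / focal_length_mm,
            altitude_m * sensor_h_mm / focal_length_mm)

def oblique_stretch(tilt_deg):
    """First-order stretch of the along-track image dimension for a camera
    tilted tilt_deg from vertical; 0 degrees means no distortion."""
    return 1.0 / math.cos(math.radians(tilt_deg))

# Hypothetical 8 mm lens on a 6.4 x 4.8 mm CCD, towed 3 m above the bottom
w, h = ground_footprint(3.0, 8.0, 6.4, 4.8)
print(f"{w:.2f} m x {h:.2f} m")        # 2.40 m x 1.80 m
print(f"{oblique_stretch(45.0):.2f}")  # 1.41 (45 degree tilt)
```

This also makes the rectification requirement concrete: a 45° camera stretches the far edge of each frame by roughly 40%, which is why tilted imagery must be rectified before benthic features are measured.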
Tow lines of on-bottom sleds may contact the sediment ahead of the sled, causing sediment re-suspension. The resulting turbidity renders images unusable, as the benthic surface is obscured (Rosenkranz and Byersdorfer 2004). Another problem documented by the same authors is that rough sea conditions continually lift the sled off the bottom, also resulting in unusable data. These same conditions affect the operation of off-bottom platforms as well: the action of swells oscillates the platform vertically, making consistent altitude control difficult. Sanchez et al. (2009) found their off-bottom towed platform inoperable when waters were not calm. In addition, towed platforms can be adversely affected by the texture of the benthic surface. Where the bottom is rough or rocky, on-bottom platforms may not be operational, as they require consistent low-grain-size sediment to limit friction, vibration, and/or the risk of becoming ensnared. In high-relief bathymetry, off-bottom platforms experience difficulty maintaining altitude control. Yet another complication is the instability of orientation in towed vehicles, which is particularly applicable to off-bottom systems. Instabilities in the platform include yaw, roll, and pitch. The resulting deviation of camera angle relative to the bottom varies amongst images, making them distorted representations of the benthic surface. If a three-axis compass is installed, or a measured laser pattern is projected on the benthic surface, the known camera orientation and pattern dimensions may be used to normalize images in post-processing. Each of these factors may affect image quality and consequently make interpretation of the benthic image data difficult, or associated with a measure of error. Collecting usable image data is only one part of a successful survey, which must also include the accurate geo-referencing of those images.
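The laser-pattern normalization mentioned above reduces to a small calculation: two parallel lasers of known physical spacing give the image scale, and a measured tilt gives a first-order foreshortening correction. This is a minimal sketch under a flat-bottom, small-angle assumption; the function names and numbers are illustrative, not taken from any of the cited systems.

```python
import math

def scale_from_lasers(laser_spacing_m, pixel_separation_px):
    """Image scale (m/pixel) implied by two parallel laser dots of known
    physical spacing appearing a measured number of pixels apart."""
    return laser_spacing_m / pixel_separation_px

def pixels_to_metres(length_px, scale_m_per_px, tilt_deg=0.0):
    """Convert a pixel measurement to metres, with a first-order
    foreshortening correction for a camera tilted from vertical."""
    return length_px * scale_m_per_px / math.cos(math.radians(tilt_deg))

# Lasers 10 cm apart appear 200 px apart; a benthic feature spans 400 px
s = scale_from_lasers(0.10, 200)
print(f"{pixels_to_metres(400, s):.3f} m")        # 0.200 m (nadir camera)
print(f"{pixels_to_metres(400, s, 45.0):.3f} m")  # 0.283 m (45 deg tilt corrected)
```

A full rectification would use the complete three-axis attitude (yaw, roll, pitch) and a homography rather than a single cosine, but the scaling step is the same in principle.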
Accurate geo-referencing of image data is essential for creating meaningful surveys. The quality of geo-referencing depends upon the level of accuracy with which the platform's position can be determined. The platform's position is described by both a horizontal and a vertical component. Horizontal positioning data for towed systems are acquired by linking shipboard GPS locations to image frames. The actual location of the towed platform is estimated using the length and angle of the cable from the research vessel to the platform. The GPS location of the vessel is then modified by an offset corresponding to the calculated location of the platform. Vertical depth and altitude positioning varies amongst systems. For on-bottom platforms, altitude is determined by the platform structure, although this remains constant only when the sled is in contact with the sea bottom and its sink depth into the seabed does not vary. Off-bottom platforms, on the other hand, are dependent upon shipboard altitude control, which is usually obtained by estimating the position of the platform over known bathymetry, by live image feedback from the platform, or with a platform-mounted altimeter. The issues involved with vertical positional inaccuracies can be addressed by utilizing altimeters and other instruments to track the bottom and vehicle position. However, horizontal position accuracy for towed platforms is not well constrained, with an error range of 5 m (Rosenkranz and Byersdorfer 2004) to 8 m (Rooper et al. 2007). These estimates may vary in dependability, considering that Grasmueck et al. (2006) state a positional accuracy of ±50 m for a drop camera tracked with a combination of an ultra-short baseline acoustic tracking system and an acoustic pinger. Greater depths of operation will be affected by greater positional inaccuracies, linked to the difficulties of estimating the behavior of long lengths of cable.
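The cable-offset estimate described above can be sketched numerically. This is a hedged illustration, not the procedure used by any of the cited authors: the function and coordinate names are invented, and the straight-cable assumption ignores catenary sag, which is one source of the growing error at depth.

```python
import math

def towed_platform_position(ship_north_m, ship_east_m, heading_deg,
                            cable_length_m, cable_angle_deg):
    """Estimate a towed platform's position from ship GPS and cable geometry.

    cable_angle_deg is measured from the vertical. The horizontal 'layback'
    is the cable length projected onto the horizontal plane, applied directly
    astern of the ship. Assumes a straight, taut cable (real cables sag).
    """
    layback = cable_length_m * math.sin(math.radians(cable_angle_deg))
    depth = cable_length_m * math.cos(math.radians(cable_angle_deg))
    heading = math.radians(heading_deg)
    # The platform trails behind, so the offset opposes the ship's heading.
    east = ship_east_m - layback * math.sin(heading)
    north = ship_north_m - layback * math.cos(heading)
    return north, east, depth

# Ship at the local-grid origin heading due north, towing 100 m of cable
# inclined 30 degrees from vertical:
north, east, depth = towed_platform_position(0.0, 0.0, 0.0, 100.0, 30.0)
# layback = 50 m astern (south of the ship), depth ~ 86.6 m
```

A 5-degree error in the assumed cable angle on a 100 m cable already shifts the estimated layback by several meters, consistent with the 5-8 m horizontal errors quoted above.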
A final consideration for towed platforms is the effect such systems have on the environment they observe. On-bottom sleds may weigh up to 500 kg (Rosenkranz et al. 2008; Lauth et al. 2004). When these are dragged across the bottom they will disturb sediment and benthic fauna. Such methods may not be suitable for more delicate benthic regions. Furthermore, the sensitivity and flight response of some fauna to the approach of these platforms may bias density surveys.

In summary, the towed platform is the simplest of the survey platforms and one widely used. Its advantages stem mainly from its simplicity in deployment and data collection, requiring little crew or deployment infrastructure. However, there are aspects related to the tether and its connection to a surface ship that may impede the collection of high quality images in many situations, or render transect image collection impossible, such as in ice-covered seas or lakes. Based on the large body of work using towed cameras, the contribution of this survey tool to benthic studies is significant (Table 1). To improve the control of the imaging platform and reduce the platform's dynamic dependence on a tether, thrusters may be introduced to the system.

1.2.2. Remotely operated vehicles

To endow a camera platform with more control, maneuverability and stability, thrusters can be attached. ROVs comprise such a system when connected to the surface through a tether (Figure 1). Thrusters operate individually and are usually placed in an orientation that allows unrestricted movement in three-dimensional space. Human pilots on the surface support vessel maneuver the ROV through live video feedback, piloting it towards desired targets. ROVs are used in both small area and large area exploration and surveying. Typical small area applications involve collecting point photographs to complement other sets of data (Tappin et al. 2001; Ward and Ballard 2004; Delaporta et al. 2006; Mosher et al.
2008) while large area applications include surveying and mapping (Harrold et al. 1998; Trenkel et al. 2004; Stein et al. 2005; Huvenne et al. 2005; Rossi et al. 2008). The camera system on ROVs, similar to towed platforms, is often oriented at an angle or maneuverable to a number of orientations. If an accurate representation of the benthic surface is required, as is necessary for mapping, some form of rectification must be performed on all imagery. Some characteristics of ROV systems in use for photographic surveys are provided in Table 2.

ROVs, like towed systems, are tethered vehicles, and are dependent upon a support vessel for real-time navigation and power. Thus they are subject to many of the same advantages and disadvantages associated with tethers. The tether provides power and data transfer capability, so vehicle energy and data capacity are not limited to that of onboard storage. Also, real-time feedback allows operators to selectively choose and explore targets of interest. On the other hand, the tether detrimentally affects vehicle dynamics and image quality. For example, Stein et al. (2005) captured a total of 540 minutes of ROV video data, of which only 60 minutes satisfied a series of requirements to ensure high-quality images. In this instance the ROV tether was causing extensive sediment re-suspension as it contacted the ground behind the vehicle. In periods where ROV velocity was reduced, this sediment cloud would overtake the platform, rendering video unusable. Additionally, difficulties were encountered maintaining constant altitude, which further affected the consistency of images. A similar quality-control methodology is given by Perry et al. (2003), wherein photographic data were removed for two reasons: first, when position error exceeded 5 m, and second, when thruster wash destabilized sediment, consequently obscuring the seabed.
TABLE 2 PERFORMANCE CAPABILITIES AND OPERATIONAL PARAMETERS OF ROVS
System parameters mark the capabilities of the platform whereas operational parameters are the specifics under which the respective authors utilized the platform. (-) denotes unavailable data.

Author | Vehicle | Manufacturer | Depth max (m) | Weight (kg) | Speed (m/s) | Camera altitude (m) | Transect length (m)
Melchert et al. 2008 | Quest 5 | Schilling Robotics | 4000 | 3500 | - | 1.0 | -
Rossi et al. 2008 | Sprint 103 | - | - | - | 0.18 | 0.3-0.5 | 100-1000
Mosher et al. 2008 | Magellan 825 | Oceaneering | 7000 | - | - | - | -
Huvenne et al. 2005 | VICTOR 6000 | IFREMER | 6000 | 4600 | 0.7 | - | 3000-14000
Trenkel et al. 2004 | VICTOR 6000 | IFREMER | 6000 | 4600 | 0.25 | 0.8 | 300-24000
Johnson et al. 2003 | - | - | 90 | 45 | 1.3 | - | 1000
Parry et al. 2003 | Phantom XTL | Deep Ocean Engineering | 230 | 45 | - | - | 3
Tappin et al. 2001 | Dolphin 3K | JAMSTEC | 3300 | 3700 | 0.8-1.5 | 3.0-6.0 | 7000
www.Oceaneering.com | Hydra Minimum | Oceaneering | 3000 | 250 | - | 2.0 | -
www.Jamstec.go.jp | KAIKO 7000 | JAMSTEC | 7000 | 4000 | - | 2.0-6.0 | -
www.whoi.edu | Nereus | Woods Hole Oceanographic Institution | 10 000+ | 2800 | 1.5 | - | -

To mitigate the negative impacts of the tether on vehicle dynamics, the Institute for Exploration has developed a hybrid system in which an ROV is attached below a towed optical platform. The towed camera system acts as a buffer between the ROV and the support vessel, reducing, to a certain extent, the negative tether impacts of tether drag and support vessel motion (Ward and Ballard 2004).

In summary, ROVs are highly maneuverable platforms, ideal for exploration of targeted sites. Their maneuverability allows vertical surveys (Johnson et al. 2003; Rossi et al. 2008), a task which would be of great difficulty using towed platforms or AUVs. Furthermore, live manipulation allows for physical benthic sampling, another capability shared with manned submersibles (Harrold et al. 1998; Huvenne et al. 2005; Melchert et al.
2008; Mosher et al. 2008). Unfortunately, these systems remain tethered and are subject to the limitations thereof.

1.2.3. Manned submersibles

In manned submersibles the human pilot is removed from the surface and put into the platform itself (Figure 1). The manned submersible approach to underwater exploration has three major consequences. Firstly, the requirement for a tether is effectively eliminated. Secondly, the platform must be large enough to include appropriate space for equipment and operators. Thirdly, when un-tethered, mission duration is limited by finite onboard energy storage and life support capacity.

Free of a surface tether, manned submersibles are not hindered by tether-induced bottom disturbances, restrictive vehicle control, or surface conditions. Without these challenges, bottom-following ability is limited only by thruster dynamics and pilot control. Consistent near-bottom navigation is achievable: Love et al. (2009) followed transects at 1 m altitude while maintaining speeds between 0.26 and 0.51 m/s. By removing the tether, manned submersibles gain these advantages in independence and mobility, but must also overcome challenges associated with maximizing a limited supply of onboard power, which must support platform systems including thrusters, instruments and life support. Life support systems and capacity for passengers result in these platforms' large size and complexity, requiring significant support vessel infrastructure, technical support crew and cost. Due to these factors, manned submersibles are generally only used in large, well-funded projects, restricting their usage in the broader scientific community. The applications of manned submersibles include geophysical exploration and sampling (Barrie et al. 1992; Mauffret et al. 2001) as well as species and habitat surveys (Juniper et al. 1992; Vinogradov 2005; Love and Yoklavich 2008; Love et al. 2009).
Each of these applications requires a specific suite of submersible performance parameters. The manned submersibles used in these applications are numerous, making it difficult to generalize their description. Table 3 provides technical specifications for a number of manned research submersibles, including important performance parameters such as depth capability and operational duration. Maximum depth defines what benthic habitats are accessible to a specific vehicle, whereas duration determines the area which can be surveyed in a single dive. The depths accessible by manned submersibles extend down to 6,500 m (Table 3). This is considerably less than the current ROV depth range, which just recently extended past 10,000 m (though it can be noted that the 10,000 m capable Nereus ROV can also function as an AUV). The 6,500 m depth capability provides access to 99% of the world's seafloor. It is interesting to consider that a manned submersible did reach the 10,000 m depth mark as early as 1960, during Jacques Piccard and Don Walsh's epic descent into the Mariana Trench in the Trieste. This feat has never been repeated, and there are no manned submersibles capable of it today. The motivation for driving manned submersibles to these depths is the unmatched observational advantage of human presence.

The opportunity for real-time, in-person exploration is the greatest advantage of the manned submersible. On-board observers have the least restricted visual range of all platforms, allowing for the most efficient exploration. The advantage of the visual range available in manned submersibles is demonstrated in a study by Cailliet et al. (1999): when comparing transects in the same region between recorded video data and manned submersible observations, the greatest number of species was identified by the latter method.
Although these observations are an advantage for exploration and identification, Juniper et al. (1992) state that the hull-mounted camera remained the main source of mapping data, as this data is more easily manipulated and quantified. Video transects obtained from manned submersibles commonly do not follow the more traditional straight-line transect methods of the systematic approach to photographic surveys (Juniper et al. 1992; Barrie et al. 1992). Often manned submersible transects are more organic, in that they follow features or simply explore at whim.

TABLE 3 PERFORMANCE CAPABILITIES AND OPERATIONAL PARAMETERS OF MANNED SUBMERSIBLES
System parameters mark the capabilities of the platform whereas operational parameters are the specifics under which the respective authors utilized the platform. (-) denotes unavailable data.

Author | Vehicle | Manufacturer | Depth max (m) | Weight in air (kg) | Speed (m/s) | Operational time (h) | Cruise range (km)
www.whoi.edu | Alvin | Woods Hole Oceanographic Institute | 4 500 | 17 000 | 1 | 4-10 | 5
www.seamagine.com | Triumph 3 | SEAmagine | 914 | 6 800 | - | 6.0 | -
www.deltaoceanographics.com | Delta | Delta Oceanographics | 365 | 2 222 | 1.8 | - | -
www.npolar.no/geonet | JAGO | Max Planck Institute | 400 | 3 000 | 0.5 | - | -
www.fau.edu | Johnson-Sea-Link | Harbor Branch | 914 | 12 727 | 0.5 | - | -
www.deepoceanexpeditions.com | MIR | Rauma-Repola | 6 000 | 18 600 | 2.5 | - | -
www.ifremer.fr | Nautile | IFREMER | 6 000 | 19 500 | 0.9 | 5 | 7.5
www.jamstec.go.jp | Shinkai 6500 | JAMSTEC | 6 500 | 26 700 | 1.3 | 8.0 | -
www.soest.hawaii.edu | Pisces V | International Hydrodynamics | 1 900 | 13 000 | 1 | 7-10 | -
www.nuytco.com | DeepWorker 2000 | Nuytco Research | 600 | 1 700 | 1.5 | 6-8 | -

The systematic transect surveys performed by towed sleds and ROVs accomplish transects over 500 km long (Table 1 and Table 2). Similar large-area mapping projects are limited in manned submersibles due to their limited operational time, typically under 10 hours (Table 3).
For example, Love and Yoklavich (2008) and Love et al. (2009) recorded transects with manned submersibles at speeds of 0.26-0.56 m/s, speeds at which observers are able to identify small features. Even in the longest duration submersibles, which are operational for 10 hours, these speeds would not enable long distance ranges. Table 3 lists two quoted manned submersible ranges, of 5 and 7.5 km. The range of the manned submersible must also take into account the time required for the controlled ascent and descent stages, which reduce the amount of available bottom time from the 10 hour maximum operational time. These range limitations considerably increase the amount of time required to survey a given area. Additionally, before and after each deployment of a manned submersible, significant maintenance must be performed to assure its integrity and support for human passengers. This results in a higher cost, time, and support investment to survey with manned submersibles than with the other platforms discussed so far.

In summary, the manned submersible remains perhaps the best exploratory tool for small areas at depths down to 6,500 m, as the real-time, in-person observations allow for efficient exploration and identification opportunities. Additionally, challenging terrain can be overcome by pilots, allowing access to steep and vertical bathymetry. Although these properties benefit benthic exploration, duration limits and dependence upon human piloting reduce its potential use as a large scale imagery surveying and mapping tool. Realistically, surveys on the scale of hundreds of kilometers are not within the range of current manned research submersibles. This would require both extended capacity of onboard resources and pilots able to withstand the fatigue associated with tracking the bottom with centimeter-scale accuracy for consistent planar imagery. An efficient system for extensive benthic mapping would have no tether, no pilot fatigue factor, and extended range.
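The range arithmetic behind these limits is straightforward. The sketch below uses hypothetical numbers (10 h endurance, 2 h lost to ascent/descent, 0.5 m/s survey speed) chosen only to be of the same order as the figures quoted above; the function name is invented.

```python
def bottom_survey_range_km(endurance_h, transit_h, speed_m_s):
    """Distance surveyable on-bottom in a single dive.

    endurance_h: total dive endurance (h); transit_h: time consumed by
    controlled descent and ascent (h); speed_m_s: survey speed (m/s).
    """
    bottom_time_s = (endurance_h - transit_h) * 3600.0
    return bottom_time_s * speed_m_s / 1000.0

# 8 h on bottom at 0.5 m/s yields 14.4 km per dive at best; slower
# observation speeds and shorter endurance bring this toward the
# 5-7.5 km ranges quoted in Table 3.
print(bottom_survey_range_km(10, 2, 0.5))  # 14.4
```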
Such a system, drawing on the strengths of many of the other platforms, is the autonomous underwater vehicle (AUV). The advantages and limitations of this platform are discussed next.

1.2.4. Autonomous underwater vehicles

Autonomous underwater vehicles (AUVs) are un-tethered, un-manned robotic platforms capable of independent navigation. AUVs use sophisticated software managing sensory feedback to navigate and carry out mission functions pre-programmed by controllers. The level of control achievable by software allows these platforms to follow three-dimensional trajectories at a precision not available from human-controlled platforms. In addition, this approach offers a unique advantage in that missions can be performed autonomously, greatly reducing necessary surface support. However, lacking a tether, AUVs, like manned submersibles, have finite onboard storage. Unlike manned submersibles, though, AUV systems and instruments are typically much less energy demanding, and longer mission durations are achievable (Table 3 and Table 4). Also unlike the previously discussed platforms, there is no live feedback. Thus, before deployment, careful consideration must be given to the mission parameters. AUVs, as benthic survey tools, are the newest of the discussed technologies and have not been utilized to the extent of the other platforms. In a report outlining underwater photographic survey methods for NOAA (Waddington and Hart 2003), AUVs do not so much as garner a mention amongst the other imaging platforms. This leaves open an attractive opportunity to utilize these platforms for imaging surveys.

The precise navigational ability of AUVs is based on their ability to record and utilize both horizontal and vertical positional data. Horizontal positioning and navigation depend upon a suite of instruments including 3-axis compasses, inclinometers, accelerometers, Doppler Velocity Logs (DVL), and inertial navigation systems.
It is through the use of these sensors that AUVs can follow pre-designated trajectories with precision. Typically these trajectories are associated with a horizontal position error of less than 2%, down to 0.1%, of distance travelled. Additionally, an acoustic positioning system may be put in place to provide positional data feedback directly to and from the AUV. Vertical positioning is aided by altimeters and pressure sensors. These instruments provide precise depth and altitude positions. Using these instruments, AUVs can compensate for bathymetric contours and fly at the fixed altitudes essential for benthic photography. Each image is automatically associated with position, time and any other variable collected; this serves to limit potential errors in post-processing, or in inferring position through other means.

The configurations and operational characteristics of AUVs are numerous (Table 4). The physical design of AUVs also varies widely, from gliders to vehicles built to resemble and mimic animal physiology. The most common design, however, is the torpedo-shaped platform (Figure 1). In the torpedo configuration, propulsion is generated by a propeller located at the stern. Control surfaces are located at the stern or distributed along the hull. This is an efficient design in that it is fast and uses less energy due to relatively small drag, allowing larger distances to be covered. Conversely, from an imaging perspective, this design is less stable than some others and maneuverability is limited. Furthermore, to retain sufficient steerage, minimum speeds must be maintained to generate the needed forces over the relatively small control surfaces. Therefore, imaging with such a platform requires operators to consider mission goals and operational constraints in order to collect the highest quality images for given bathymetric conditions.
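The quoted dead-reckoning performance can be made concrete with a short calculation. The error fractions below are the 0.1%-2% figures from the text; the transect lengths are arbitrary examples, and the function name is invented.

```python
def drift_error_m(distance_m, error_fraction):
    """Horizontal position uncertainty accumulated by dead reckoning,
    expressed as a fixed fraction of distance travelled."""
    return distance_m * error_fraction

for d in (1_000, 10_000):            # 1 km and 10 km transects
    lo = drift_error_m(d, 0.001)     # 0.1% of distance: best case
    hi = drift_error_m(d, 0.02)      # 2% of distance: worst case
    print(f"{d} m transect: {lo:.0f}-{hi:.0f} m position error")
# 1 km: 1-20 m; 10 km: 10-200 m. The growth with distance is why
# supplementary acoustic positioning fixes are valuable on long missions.
```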
To mitigate some of these challenges, two platforms based on multiple connected torpedo-shaped hulls were designed particularly for benthic imaging. These AUVs are the Autonomous Benthic Explorer (ABE) and SeaBED, both of which were designed by engineers at the Woods Hole Oceanographic Institute. These two AUVs have performed a wide array of high quality photographic survey missions (Clarke et al. 2009; German et al. 2008; Newman et al. 2008; Yoerger et al. 2007; Armstrong et al. 2006; Singh et al. 2004). The effectiveness of the multiple-hulled design is illustrated by the fact that the SeaBED AUV can maintain set bottom-following altitudes to within a few centimeters, even on slopes as steep as 75°, while flying at speeds under 1 m/s (Singh et al. 2004).

In summary, computer controlled navigation allows AUVs to follow precise survey transects pre-designed to meet survey objectives. The images collected are automatically referenced to position and time, along with any other environmental variables measured. Support necessary for AUV operation does not include lengthy cables and the resultant infrastructure, as are required with towed platforms and ROVs.

TABLE 4 PERFORMANCE CAPABILITIES AND OPERATIONAL PARAMETERS OF AUVS
System parameters mark the capabilities of the platform whereas operational parameters are the specifics under which the respective authors utilized the platform. (-) denotes unavailable data.
Additional references: 1. http://auvac.org/resources/browse/configuration

Author | Vehicle | Manufacturer | Depth max (m) | Weight in air (kg) | Speed (m/s) | Operational time (h) | Cruise range (km)
McPhail 2007 | Autosub6000 | National Oceanography Centre, Southampton | 6000 | 2000 | 2.0 | 256 | 1000
1 (see caption) | Theseus | International Submarine Engineering | 2000 | 8600 | 2.0 | - | 1360
1 | Explorer | International Submarine Engineering | 5000 | 1250 | 2.5 | - | 360
1 | Seahorse | Pennsylvania State University Applied Research Lab | 1000 | 4400 | 2.0 | 125 | 926
1 | Remus 6000 | Hydroid | 6000 | 862 | 2.3 | 22 | -
1 | Hugin 3000 | Kongsberg Maritime | 3000 | 1400 | 2.0 | 50 | -
1 | Gavia | Hafmynd | 1000 | 49 | 3.0 | 6 | 25
Singh et al. 2004 | Seabed | Woods Hole Oceanographic Institute | 2000 | 200 | 1.2 | 80 | -
Yoerger et al. 2007 | ABE | Woods Hole Oceanographic Institute | 6000 | 450 | 1.0 | 50 | 30

Also, many AUVs may be deployed by hand, though some of the larger platforms require larger support vessels with cranes, similar to manned submersibles and work-class ROVs. Once the AUV is deployed, operators are free to carry out other activities while the platform completes a given mission. The limitations of AUVs include operational duration caps due both to battery capacity and to data storage capacity. However, the longest duration AUVs may perform transects lasting over a week, traversing over a thousand kilometers. The lack of live feedback from AUVs requires that missions be initiated with care and that data collection be confirmed after each mission to make sure objectives are met. The current study utilizes the advantageous features of AUVs to perform repetitive tasks with precision, in order to develop methods for the extensive benthic photographic mapping of a lacustrine basin. Regardless of the platform chosen for benthic surveys, collecting photographs in-water is a daunting task. The next section provides an overview of the major challenges.

1.3.
Collecting in-water optical imagery

Whatever the particular platform for collecting optical images, such surveys are accompanied by a suite of challenges which must be overcome to collect usable benthic images. Many of these have already been alluded to. In their pioneering report, Ewing, Vine, and Worzel (1946) summed up the difficulties of obtaining pictures from the ocean bottom: "The problems are to find an interesting subject, and to put the camera in focus with it, to provide proper illumination, to hold the camera reasonably steady while the exposure is made, and to get the camera back afterwards." Additionally, Vine (1975) emphasized the need for accurate positional data. These authors were working with drop cameras, and many of the problems remain the same today, though a more modern perspective may restate them. With modern technology, retrieval of the imaging platform has become far more routine and is associated with less risk. Yet positional data quality remains a concern, as does imaging platform stability. Essentially, the modern challenges are to image in a harsh, remote environment, while recording precise positional data, through a medium much denser than air, with a limited sensor footprint. The extent of these challenges is highlighted by the relative scarcity of large area photo transects even after more than 100 years of possessing the technological ability.

Beyond what may be considered the mechanical aspects of an underwater imaging system, there are properties of the dense water medium itself which pose severe limitations on underwater photography. These include loss of contrast and strong, non-linear attenuation of the light signal. Being approximately 800 times denser than air, water greatly influences the transmission of electromagnetic energy.
Whereas a camera in air or outer space is able to photograph land features through the entire depth of our atmosphere, some hundreds of kilometers above earth, even the clearest distilled water limits photography to distances barely exceeding 10 meters. In coastal waters and lakes the distance is realistically significantly less than 10 meters. The actual photographic distance in the wide spectrum of natural waters varies greatly. The details of light's interaction with water are a major concern for underwater photography and will be dealt with in the next section.

1.4. Light in water

When light travels through water, it is attenuated. In the clearest waters of the open ocean, light itself may penetrate to depths of over 700 meters, a surprising observation made from manned submersibles (Mertens 1970). Conversely, to capture an identifiable image with a camera, the camera-to-subject distance generally cannot exceed 10 meters at maximum. More commonly, especially in inland and coastal water, loss of light intensity and image quality is significant over distances as small as one meter. Not only is total light intensity attenuated, but specific wavelengths of light are attenuated by varying degrees. To understand how this will affect photographic systems, the extent of attenuation in water can be quantified. This can be measured as the loss of initial irradiance over a distance or, more specifically, by determining the loss in irradiance from a collimated beam of a single wavelength over a given distance (Duntley 1963; Mertens 1970). Each wavelength has a specific attenuation. Attenuation from Wetzel (2001) is

Iz = Io e^(−ηz),     (1.1)

where η is the extinction coefficient (1/m), Io is the surface irradiance (W m⁻²) and Iz is the irradiance at depth z. Examples of extinction coefficients for pure and natural waters can be found in a number of works (Hulburt 1945; Wetzel 2001).
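Equation 1.1 can be evaluated directly to see how strongly path length and wavelength matter. The extinction coefficients below are rough order-of-magnitude values for pure water assumed for illustration, not calibrated measurements.

```python
import math

def irradiance_at_depth(i0, eta, z):
    """Equation 1.1: Iz = Io * exp(-eta * z).

    i0  : initial irradiance (W m^-2)
    eta : extinction coefficient (1/m)
    z   : path length through water (m)
    """
    return i0 * math.exp(-eta * z)

# Assumed extinction coefficients for pure water: blue (~450 nm) roughly
# 0.02 1/m, red (~700 nm) roughly 0.5 1/m (order-of-magnitude only).
for label, eta in (("blue", 0.02), ("red", 0.5)):
    frac = irradiance_at_depth(1.0, eta, 10.0)   # 10 m path, unit irradiance
    print(f"{label}: {frac:.1%} of light survives a 10 m path")
```

Over a 10 m path the blue fraction remains above 80% while the red fraction falls below 1%, illustrating why camera-to-subject distances of even a few meters strongly reshape the recorded spectrum.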
Figure 2 plots the extinction coefficients for wavelengths in distilled water. Another, perhaps more intuitive, interpretation of the attenuation of light through water is the percent of light transmitted per meter (Figure 3). To further describe light propagation underwater, total attenuation can be decomposed into two individual physical processes. The total attenuation of light in water is the sum of both the absorption (β) and scattering (σ) of photons,

ηλ = βλ + σλ,     (1.2)

where λ represents a specific wavelength and ηλ is the respective extinction coefficient.

FIGURE 2 EXTINCTION COEFFICIENTS FOR PURE WATER. A measure of attenuation (extinction coefficient η, 1/m) in pure water is shown across the visual range of the electromagnetic spectrum (wavelength λ, 450-800 nm). Data from Tam and Patel (1979) and James and Birges (1938) cited in Wetzel (2001).

FIGURE 3 TRANSMITTANCE OF LIGHT THROUGH PURE WATER. The transmission of light in pure water (% per meter) is shown across the visual range of the electromagnetic spectrum. Data from Jerlov (1968) cited in McFarland (1986).

The first of the two processes, absorption, is due to the transfer of photon energy to heat in a process that is thermodynamically irreversible. Photons are absorbed when they encounter molecules or atoms which resonate at frequencies corresponding to the photon energy level. The absorbed photon imparts a higher, unstable energy configuration upon the atom or molecule. The unstable energy configuration is returned to a stable one by the release of heat energy equivalent to the energy of the photon. Through this mechanism, photons are essentially removed and replaced with heat. It should be noted that not all such in-water photon interactions result in the straightforward emission of heat.
Another mechanism by which photons are effectively removed from the underwater environment is photosynthesis. Where photosynthetically active flora are present, photons are absorbed and converted to chemical potential energy rather than heat. The electron transfer process in photosynthesis also produces a small amount of heat, though not to the extent of processes outside the photosynthetic system. These are the mechanisms by which photons are absorbed in water and made unavailable to underwater photographic sensors.

In the process of scattering, as opposed to absorption, photons are not "lost" or physically changed. Instead, they are redirected from their original vector. The details of this process are considerably more involved than those of photon absorption and will be only briefly reviewed here. Scattering of photons in water occurs by three methods: (1) interaction with non-absorbing particles smaller than a photon's wavelength (Rayleigh scattering), these particles consisting of the water molecules themselves and other impurities; (2) interaction with considerably larger particles (Mie scattering), such as complex dissolved molecules, organisms and other suspensoids larger than a photon's wavelength; and (3) transit through a non-homogeneous index of refraction, which does not require particles to be present but rather that changes in the index of refraction exist due to steep gradients of temperature and/or salinity. Each of these processes is detrimental to underwater images and depends largely upon the characteristics of a given water sample. Even measurements of pure distilled water produce some scattering (Mertens 1970), due to the water molecules themselves, although in these samples scattering accounts for only a few percent of the attenuation caused by absorption. It is the larger suspensoids causing Mie scattering that account for the greater part of scattering's contribution to total light attenuation in natural waters.
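Because the extinction coefficient in Equation 1.1 is wavelength dependent, the three channels of a colour image attenuate at different rates, and the effect can in principle be partially inverted. The sketch below is a hedged illustration only: the per-channel coefficients are invented placeholders, and a real correction would also need to account for scattering and veiling light, which this ignores.

```python
import math

# Assumed per-channel extinction coefficients (1/m); illustrative only,
# not calibrated values for any real water body.
ETA = {"r": 0.50, "g": 0.07, "b": 0.02}

def compensate_pixel(rgb, path_length_m):
    """Scale each channel by exp(eta * d) to undo Eq. 1.1 attenuation.

    rgb: (r, g, b) intensities in [0, 1]; path_length_m: camera-to-subject
    distance. Clipping at 1.0 stands in for sensor saturation.
    """
    gains = (math.exp(ETA["r"] * path_length_m),
             math.exp(ETA["g"] * path_length_m),
             math.exp(ETA["b"] * path_length_m))
    return tuple(min(1.0, c * k) for c, k in zip(rgb, gains))

# A mid-grey object seen through 2 m of water arrives blue-shifted;
# applying the inverse gains restores the neutral balance.
observed = (0.5 * math.exp(-ETA["r"] * 2),
            0.5 * math.exp(-ETA["g"] * 2),
            0.5 * math.exp(-ETA["b"] * 2))
print(compensate_pixel(observed, 2.0))
```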
Since a particular water volume may contain any combination or amount of particles of various sizes, scattering functions vary over a wide range amongst bodies of water.

From an underwater imaging perspective, there are several consequences of the nature of in-water light attenuation. The non-linear attenuation of light in water causes poor colour composition in underwater photos. The shortest wavelengths (blue) are least affected by attenuation through water, and thus their relative intensity in underwater images is higher. Conversely, red wavelengths have the highest extinction coefficients, and thus their intensity is more heavily attenuated by the volume of water being imaged through (Figure 2). This results in a tendency of underwater images to appear blue or green (green also being less affected by attenuation relative to red). Essentially, the white balance, or relative intensities of the different colours represented in an image, is not an accurate representation of the actual colours of the imaged object. To mitigate this effect, underwater camera systems are equipped with strobe lights to increase the intensity of all wavelengths of light, helping to recover lost red colour. Additionally, moving the camera and its light source close to the image target reduces the light attenuation distance, resulting in less light being lost.

Beyond the non-linear attenuation of colour are the effects on image clarity. Scattering is a random process that affects the contrast of underwater images. For a clear photograph to be formed, photons from the imaged object must be transferred to precise corresponding locations on an image plane. When scattering is introduced, photons not originating from the imaged object may fall upon the image plane. Therefore, not all the photons composing an underwater image originate from the imaged object, as the random scattering processes redistribute them.
This serves to reduce contrast, since not all photons from the imaged object are received (due to absorption and scattering) and additional photons not corresponding to the object at all are received (due to multiple scattering). These are the challenges imposed by the physical propagation of light through water when collecting photography underwater (Figure 4). To capture this light, basic equipment is required. A review of the camera system follows in the next section.

1.5. Camera systems

The camera system records a light image and is composed of two main parts: the camera sensor and the lens. Generally, in-water camera systems are slightly modified surface-type cameras installed in water-tight encasements (Figure 4). Each component of the camera system is modifiable by a number of parameters. Depending on the specific underwater application, these parameters may be optimized for best performance. This is of primary interest to the current work, as it allows usable underwater photographs to be captured. The next section provides a brief review of some of these important aspects of an underwater camera system, beginning with the lens and followed by the camera sensor.

FIGURE 4 A DIGITAL UNDERWATER CAMERA SYSTEM. This figure illustrates the basic components of an underwater camera system (lens, aperture, focal length, angular field of view, sensor plane, light source, and analog-to-digital conversion) along with the detrimental aspects of in-water light propagation (absorption and scattering), principally as concerns the transfer of light from the imaged object to the sensor plane.

1.5.1. Lens

The camera lens focuses light onto the camera sensor. There are a number of lens parameters that modify light reaching the camera sensor and thus affect the quality of underwater images. The important parameters for this discussion are magnification, focal length, aperture, and angle of view.
Magnification: Magnification is the ratio of the size of an object being imaged to the corresponding size of the object’s image formed by the camera system.

Focal length: Indicates the distance required for collimated rays passing through a lens to converge to a single point (for convergent lenses; Figure 4). Essentially, this is a measure of how strongly a lens bends incoming light. The focal length is simply this distance in mm. Large focal lengths are associated with high image magnification, whereas small focal lengths provide a wide angle of view.

Aperture: The aperture is an opening in an opaque material. The purpose of the aperture is to control the amount of light reaching the camera sensor by adjusting the size of this opening (Figure 4). The area of this opening determines the amount of incoming light energy distributed over the image plane. Commonly, the aperture is constructed of a series of overlapping leaves which may be expanded and contracted to adjust the opening area while maintaining a relatively circular geometry. The size of the opening, and thus the amount of light available to the camera sensor, is expressed as the ratio of the focal length to the aperture diameter, called the f-number (f#). Another important aspect of aperture is its effect on depth of field. Depth of field is the range of distances perpendicular to the in-focus plane in which an imaged object has an acceptable sharpness. The depth of field is inversely related to aperture size. For AUV operations with a fixed focal length, a large depth of field is desirable to ensure sharp images even as AUV altitude varies while attempting to follow complex bathymetry. However, in low light conditions a large aperture is important to maximize image contrast. Therefore, the inverse relation between aperture and depth of field necessitates a trade-off between image focus and contrast that must be optimized.
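These aperture and depth-of-field relationships can be sketched with the standard thin-lens formulas. This is an illustrative calculation only: the f-number range and 6 mm focal length match the lens described later in this work, while the circle-of-confusion value is an assumed figure for a small sensor.

```python
def f_number(focal_mm, aperture_diameter_mm):
    """f-number: ratio of focal length to aperture opening diameter."""
    return focal_mm / aperture_diameter_mm

def depth_of_field(focal_mm, f_num, coc_mm, subject_mm):
    """Thin-lens near/far limits of acceptable focus, in mm.

    far is infinite when the subject is at or beyond the hyperfocal distance.
    """
    H = focal_mm ** 2 / (f_num * coc_mm) + focal_mm  # hyperfocal distance
    near = subject_mm * (H - focal_mm) / (H + subject_mm - 2 * focal_mm)
    far = float("inf") if subject_mm >= H else subject_mm * (H - focal_mm) / (H - subject_mm)
    return near, far

# A 6 mm lens wide open (5 mm opening) is f/1.2; stopped down to f/16, the
# same lens focused at 1 m keeps a far wider range of distances acceptably
# sharp, at the cost of admitting much less light.
wide_near, wide_far = depth_of_field(6.0, 1.2, 0.01, 1000.0)      # ~0.75-1.50 m
narrow_near, narrow_far = depth_of_field(6.0, 16.0, 0.01, 1000.0)  # ~0.18 m to infinity
```

The comparison makes the trade-off concrete: at f/1.2 the in-focus band around a 1 m subject is well under a metre wide, while at f/16 everything beyond roughly 0.2 m is sharp.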
Angle of view: The angle of view describes the amount of a scene that is imaged (Figure 4). A greater angle results in more of the imaged scene being captured than a smaller, narrow angle of view. This is related to the focal length, as it controls the extent to which rays are bent. It is also related to the image sensor, as its dimensions are analogous to a projector screen: the larger the sensor, the greater the area of the scene it can capture. To measure the angle of view, a length dimension of the sensor must be chosen (horizontal, vertical, or diagonal) to which the angle of view corresponds. The angle of view is then

α = 2 arctan( x_CCD / (2 f) ),     (1.3)

where x_CCD is a dimension of the CCD sensor and f is the focal length.

1.5.2. Camera sensor

The camera sensor records an optical image focused by the lens. The function of digital camera sensors is beyond the scope of this application, but a rudimentary overview is beneficial to understand the camera settings used in this study. In digital cameras, the sensor may be one of two types: CMOS or CCD. In this paper, the only sensor considered is the CCD (charge-coupled device) sensor. This sensor is associated with electronic circuitry, which collects the data from the sensor to present it in a way we recognize as a photograph. A CCD is a device built on a silicon chip, on which there are a number of “electron wells”, each representing a pixel, recording the number of incoming photons over some time period. The incoming photons are recorded through the collection of electrons in the electron wells. Electrons, or charge, are created through the photoelectric effect when photons hit the silicon material of the CCD. The quantum efficiency, or percentage of photons which generate a usable charge, is over 90% in the most advanced CCD configurations, compared to roughly 10% efficiency in film systems (Janesick 2001).
This efficiency makes digital cameras an ideal system for imaging in water, which is often a light-limited environment. To form an image, the charge collected in each pixel is converted to a voltage. This voltage can additionally be amplified with a gain setting, which will be discussed later. The voltage is then converted from an analogue to a digital signal by an analog-to-digital converter (A/D). For typical 8-bit images, each pixel then receives a digital value from 0 to 255. The final digitized matrix comprises the image. Software can then be utilized to further manipulate images. Below is a review of some specific digital camera settings which are relevant to the current work and the image sensor used. More complete details on CCD function can be found in Janesick (2001). To best fulfill the needs of an underwater photographic application, there are a number of properties of the digital camera that may be manipulated. The use of these settings is most effective when mission objectives and operational parameters are kept in mind. The properties of digital cameras can vary greatly between manufacturers; thus only those settings important to the current work are detailed below.

Shutter: This setting dictates the length of time that light is integrated, or collected, by the camera sensor. In a traditional camera this is a mechanical obstruction which opens and closes to allow light to pass. In a CCD camera with an electronic shutter, the sensor gathers light for a designated amount of time before the charges are transferred to a light-shielded area of the sensor to be quantified. Controlling the amount of time that light falls on the sensor is one of the most effective ways to control image brightness. The advantage of manipulating this setting (along with aperture), more so than the others, is that no digital processing is involved, potentially reducing effects of digital noise in the resultant image.
In other words, an optical optimization is generally more effective than optimizations which require digital processing.

Brightness: This function raises the light level of the entire image. Essentially, the setting controls the amount of black in an image; increasing brightness reduces the black level. For standard CCD systems, the units for this setting are in percent. The percent value represents a percent increase of the analog-to-digital (A/D) converter’s minimum digital number. As a result, increasing the brightness setting will cause all aspects of the image to be brighter, including black. This is a useful setting when operating in the very low light conditions common in underwater applications. This function should be used with care, however, as it has a tendency to reduce image contrast, causing images to appear washed out.

Gain: The camera gain setting controls how photon hits are recorded. After electrons are converted to voltages, the A/D can apply a higher amplification to these values as they are converted to digital numbers. This increases the camera response to the incident light by giving each voltage, or photon, additional “weight”. Brightness differs in that it is a linear increase in a pixel’s minimum number value, applied after the A/D conversion. The gain setting is a useful way to increase camera sensitivity in light-limited environments such as may be encountered underwater. Each of these camera sensor parameters is utilized most effectively in conjunction with an understanding of platform performance limitations and behaviour, light propagation in water, and camera system properties. With these considerations in mind, methods can be developed to maximize the efficiency and effectiveness of photographic surveys. The next section details the process of performing a benthic survey in a lacustrine basin by mitigating underwater imaging challenges and optimizing system performance.
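The interaction of shutter, gain, and brightness can be sketched with a simplified toy model (not the actual signal chain of any particular camera): shutter scales the collected charge optically, gain amplifies the signal before digitization, and brightness adds an offset after digitization. The electrons-per-digital-number scale factor is an arbitrary assumption for illustration.

```python
def digital_number(photons_per_s, shutter_s, gain_db, brightness_dn,
                   electrons_per_dn=32.0, max_dn=255):
    """Toy CCD exposure model: photons -> electrons -> gain -> A/D -> offset."""
    electrons = photons_per_s * shutter_s              # charge integrated during shutter
    amplified = electrons * 10 ** (gain_db / 20.0)     # analog gain before A/D conversion
    dn = amplified / electrons_per_dn + brightness_dn  # brightness offset after A/D
    return min(max_dn, round(dn))

# Doubling the shutter time doubles the signal optically (no digital
# processing), while +6 dB of gain achieves nearly the same brightness
# electronically, amplifying noise along with signal.
base = digital_number(20000, 0.04, 0, 0)   # 25
slow = digital_number(20000, 0.08, 0, 0)   # 50
gain = digital_number(20000, 0.04, 6, 0)   # 50
```

The sketch mirrors the preference stated above: when platform speed permits, lengthening the shutter is the cleaner way to brighten an image, with gain and brightness as fallbacks.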
2. METHODS

In this study, solutions are developed to extensively map and explore the benthic region of a lake through AUV-based underwater photography. These benthic surveys were completed using UBC-Gavia, which had not previously been dedicated to extensive photographic surveys in low light environments. For this reason, a methodology was developed first to test platform and camera system performance, and second to optimize performance within the limitations of the respective systems. Initial testing missions were carried out in Pavilion Lake, BC. The major challenges to overcome included effectively deploying the imaging platform in challenging bathymetry, limiting the detrimental effects of underwater light attenuation, and maximizing the performance of the light-limited camera system. The initial imagery was of insufficient quality to be of use in the identification of benthic features. These results were similar to those of previous attempts to photograph the deepest benthic surfaces of Pavilion Lake. From these initial deployments, successful photography was made possible by refinements made to the mission and AUV camera system parameters preceding each mission. Over 51,000 images were collected in Pavilion Lake alone, providing insight into the nature and distribution of identifiable benthic features. The optimization protocols were tested further by conducting photographic surveys at two other sites, Kelly Lake and Lake Tahoe. We begin with a description of the imaging platform used to perform the imaging survey, along with details of its camera system.

2.1. Imaging platform UBC-Gavia

UBC-Gavia is a small, man-portable autonomous underwater vehicle (AUV) manufactured by Hafmynd Ltd. of Reykjavik, Iceland (Figure 5). The vehicle is depth rated to 500 m, with a maximum rated speed of 3 m/s and an operational endurance of 6 hours. The hull structure is torpedo-like in form and composed of a number of interlocking modules.
The complete system is composed of six modules carrying the following scientific payload:
- Conductivity, temperature, and pressure sensor (Seabird SBE49 FastCat)
- Optical backscatter meter (WET Labs ECO BB3)
- Sidescan sonar (Imagenex 220/990 kHz dual-frequency)
- Camera (Point Grey Research Scorpion 20SO, Sony 1/1.8” CCD sensor)
- LED strobe array
- Acoustic Doppler current profiler (1200 kHz RDI Workhorse Navigator)

UBC-Gavia deployments have primarily been in lacustrine environments (Forrest and Laval 2007a, Forrest and Laval 2007b, Forrest et al. 2008). The configuration of the vehicle in freshwater includes a buoyancy module, which along with the above listed scientific payload yields a length of 2.4 m, a diameter of 0.2 m, and a weight of 55 kg. UBC-Gavia has fixed buoyancy and therefore must be externally ballasted with lead weights prior to deployment. The target ballast is for her to be only slightly positively buoyant; this maximizes performance and allows her to rise to the surface upon mission completion or in the event of an unanticipated mission error.

FIGURE 5 UBC-GAVIA. Gavia is composed of six interlocking modules; in this figure each module is identified: 1. Nose module; 2. Battery module; 3. Buoyancy module; 4. ADCP module; 5. Control module; 6. Propulsion module.

2.1.1. UBC-Gavia modules

Each of UBC-Gavia’s six modules houses different instrumentation, which comprises the AUV payload. These instruments have one of two functions: measuring environmental parameters, or collecting data to be integrated into navigational control functions. To integrate these individual modules, the system software relies on a hierarchy of “crew members”, which control and monitor module sensory and navigation functions. The system software polls each module and its respective payload, bringing systems online and thereby enabling the controller to review and manipulate payload function.
Each module in the UBC-Gavia configuration used during this survey is listed below along with its payload and function.

1) Nose module: The nose module houses the CCD camera and a forward-facing, single-beam collision avoidance sonar. The CCD camera faces downward through a 6 cm diameter glass view port in a planar orientation above the bottom. The glass view port is surrounded and partially protected by a metal tow hook.

2) Battery module: The battery module contains six lithium-ion rechargeable battery stacks which provide approximately 6-7 hours of operational time. There is also a group of standard AA batteries which supply emergency power, enabling emergency systems to function for approximately 72 hours upon main battery depletion. Attached externally and dorsally to the battery module is a Seabird SBE49 FastCat conductivity, temperature, and pressure sensor. The FastCat samples at 16 Hz in its autonomous sampling mode and is calibrated over the following ranges: -5 to +35 °C, 0 to 9 S/m, and 0 to 600 m.

3) Buoyancy module: This module provides the buoyancy offset necessary for operations in freshwater. The module is removed for salt water deployments.

4) ADCP module: This module contains two 1200 kHz RDI Workhorse Navigator Acoustic Doppler Current Profilers (ADCP), one facing downwards and one facing upwards. The ADCP is essential for aiding AUV navigation upon submergence, as it also functions as a Doppler Velocity Log (DVL). The DVL records water-track velocity, depth, heading, pitch, and roll. The downwards-facing DVL records bottom-track velocity (when in range) and altitude. The bottom-track velocity can be determined at vehicle altitudes between 0.5 and 30 m.

5) Control module: The control module contains the communications tower, sidescan sonar, acoustic transponder, optical backscatter meter, and a strobe light synchronized with the nose module camera.
The communications tower projects dorsally above the vehicle and supports a clear plastic housing within which there are LED port, starboard, and stern navigation lights, a Wi-Fi antenna, a GPS antenna, and an Iridium satellite modem antenna. A Wi-Fi connection is the primary data transfer method when in the field and is used to check data collection and mission performance as well as to upload and execute new missions. The onboard WAAS/EGNOS GPS provides position fixes and navigation feedback when the vehicle is at the surface. Its position coordinates are also sent through the satellite phone when the vehicle completes a mission or enters an emergency broadcast mode in response to an unexpected mission error or delayed retrieval. The sidescan sonar is manufactured by Imagenex and is operational at one of two frequencies, 220 and 990 kHz. The sonar has a maximum range of 100 m per side at 220 kHz. The acoustic transponder can be used for submerged positioning by ranging between two moored LBL (long baseline) transponders of known and fixed location. The optical backscatter meter provides measurements at three wavelengths (470 nm, 530 nm, and 660 nm) and is capable of sampling at a maximum rate of 8 Hz. The optical backscatter meter records the amount of light returning from a water volume due to illumination by a light source. The amount of light scattered depends on the suspended solids concentration (SSC) and the particles’ shape and reflectivity. Finally, the strobe is located at the bottom of the control module and faces forward at an angle towards the nose module to illuminate the camera field of view.

6) Propulsion module: The propulsion module contains the propeller and control planes. The propeller is protected within a circular cowling which supports four control surfaces. The control surfaces are orientated in an X configuration and situated in the propeller outwash.

2.2. Camera system

Of all the peripheral payload sensory instruments, the camera system is of the greatest significance to the current study. The camera is enclosed in the nose cone and is comprised of two major components: the sensor and the lens. The camera is a Scorpion 20SO model from Point Grey Research, using a Sony 1/1.8” ICX274 charge-coupled device (CCD) sensor (Figure 6). This camera has a gain range of -10 to 25 dB, a shutter speed range of 0.03 ms to 3296 ms, a signal-to-noise ratio of 57 dB, and a rated power consumption of 3.5 W. The manufacturer quotes a resolution of 1628x1236 in colour or black and white; the integrated camera system on UBC-Gavia records images of 800x600 pixels due to constraints associated with the onboard processor and the set imaging mode. The strobe flash is synchronized with the camera and situated on the control module, aft of the nose cone. This separation is an important configuration, as it reduces the light reflected from particulates in the water column back to the camera. Reflected light, or backscatter, produces white spots or over-saturated pixels in the recorded images. By separating the light strobe some distance and orientating it at an angle, the strobe illuminates the particulates in the water largely from the side, thereby reducing the surface area of these suspensoids which may reflect light back to the camera. Many of the camera system parameters can be modified to achieve optimal levels of performance for specific mission criteria. These modifications are made to each of the camera components, the lens and the camera sensor; these two components are dealt with separately and described in greater detail below.

2.2.1. Lens parameters

The lens is manufactured by Fujinon and is attached to the camera sensor through a C-mount connection. The specifications for this lens are summarized in Table 5. There are two important parameters of the lens which can be manually optimized.
These are the focus and aperture (Appendix A.3). The focus is adjusted to account for the AUV-to-imaging-plane distance, and the aperture can aid in controlling the amount of light reaching the sensor. Additionally, the aperture size together with the focal distance determines the depth of field. Optical optimization using these two parameters was performed frequently, as the lens tended to become unfocused after consecutive deployments. In order to adjust the lens, some simple disassembly and reassembly of the Gavia nose cone is required (Figure 6; also see Appendix A.1). Adjustments were made prior to a mission according to the specific mission requirements and limitations; these details are discussed in section 2.3.1.

TABLE 5 UBC-GAVIA CAMERA SYSTEM PROPERTIES. This table displays the parameters which are a part of the lens, the sensor, or the system as a whole.

Lens
  Focal length                    6 mm
  Magnification                   0.04x
  Aperture                        F1.2 - F16
  Image circle diameter           6.92 mm*
Sensor
  Dimensions                      8.50 x 6.80 mm
  Unit cell size                  4.4 x 4.4 µm
System
  Angle of view (horizontal)      61.4° (~45° in-water)
  35 mm equivalent focal length   28.8 mm
* See Appendix A.2

FIGURE 6 UBC-GAVIA CAMERA VIEW PORT. This displays the underside of Gavia’s nose cone; the glass viewport cover and lens have been removed. A) CCD sensor. B) C-mount lens thread.

Focus is essential to form a clear, high-contrast image on the camera sensor for a given AUV-to-subject distance. The focus is set once the AUV-to-subject distance, or flight path altitude above the benthic surface, is known. Since focus is adjusted in air with the AUV partially disassembled, the optical consequences of the submerged system must be considered. When submerged, the light received by the camera sensor passes through water, the glass view port, and then into air. To take into account the refractive index of the air/water interface, the in-air focus is calibrated to 76% of the flight altitude.
This corrective measure ensures that the magnified underwater images will be in focus; it assumes a refractive index of 1.33 for water. The aperture adjustment has a number of consequences. A larger aperture allows more light to reach the camera sensor but at the same time reduces the depth of field. A smaller aperture allows less light to reach the sensor but increases the depth of field. The depth of field changes with both this aperture adjustment and the focus distance. The trade-offs between each of these parameters are best considered within the requirements of a specific deployment. Once the flight altitude is known, the focus can be set along with the aperture, depending on light conditions. The effect of the focus and aperture settings on the depth of field can be examined using the lens equations in section 1.5.1. A MATLAB script is included in Appendix A.5 which uses these equations to determine the depth of field when the circle of confusion (CoC), focal length, f#, and flight altitude are input. Additionally, the script provides an in-air focus distance for the given flight path altitude along with the hyperfocal distance. The script’s output aids in exploring the consequences of aperture and flight altitude to meet various mission requirements. This provides a basis for lens optimization to meet specific survey goals. For complete camera system optimization, these physical lens settings are accompanied by software settings, which are presented next.

2.2.2. Camera sensor parameters

Camera sensor performance is controlled by changing settings in a configuration file. This file can be modified over a secure shell network connection using software such as PuTTY (see Appendix A.6). Within the configuration file are a number of parameters which can be changed (for a complete list see Appendix A.6). Of these parameters, the ones that most significantly impact image quality include brightness, gain, and shutter speed (Section 1.5.2).
Optimization of these parameters on a mission-by-mission basis began with extensive image testing and refinement at Pavilion Lake, the primary case study test site. A site description of Pavilion Lake is presented next, followed by examples of images which have been analyzed to illustrate the usefulness of the various camera sensor properties, with recommendations for optimizations.

2.3. Case Study 1: Pavilion Lake

Pavilion Lake is a hardwater, ultra-oligotrophic, dimictic lake situated in Marble Canyon in the southern interior of British Columbia (50° 52.011’ N, 121° 44.639’ W; elevation 2641 m). Recorded Secchi depths at Pavilion Lake are over 15 m. The lake is comprised of three basins (North, South, and Central); both the north and south basins are connected to the central basin via shallow sills of 6-10 m (Figure 7). The central basin is the largest and deepest of the three, with recorded depths of 58 m over a relatively flat basin bottom. Established at Pavilion Lake, the Pavilion Lake Research Project (PLRP) is a consortium of researchers working to understand the organosedimentary structures within the lake. This group has been involved in an effort to map and understand the limnological characteristics of Pavilion Lake since 2004. Concurrent with this study, identification and mapping of organosedimentary features along the lake walls was being carried out by SCUBA divers and manned DeepWorker 2000 submersibles. Mapping efforts prior to the current survey include lake-wide sonar surveys using multi-angle swath bathymetry sonar (Mullins and Bird, 2007). These data provide researchers with georeferenced bathymetry and acoustic backscatter covering most of Pavilion Lake (Figure 7). This survey has been used as a base map by PLRP for planning the majority of exploration and scientific objectives.
However, this base map does not include data for large regions of the deepest, flat area of the Central Basin (the Central Plain), and does not provide habitat-scale detail on benthic organisms and attributes. Prior to the current work, a number of missions were run with UBC-Gavia to photograph the benthic characteristics of the central basin, but these returned poor quality imagery, primarily due to low light levels at 58 m depth and an un-optimized camera system configuration. These images were unusable for positive identification or classification of the benthic environment. Alex Forrest has, however, had some success capturing images in the shallower and more complex bathymetry of the lake’s walls, where ambient light was present. Pavilion Lake offers an ideal opportunity to test and optimize the performance of UBC-Gavia in conducting optical surveys, while at the same time contributing to the larger data sets of prior and concurrent mapping and benthic classification objectives. The purpose of the current work is to image, classify, and map this previously unexplored lacustrine benthic basin. The analysis of the imaging platform and camera system performance during this survey culminated in the development of optimization protocols for conducting photographic benthic surveys with an AUV. Optimization of image quality is achieved through the development of best practices for both the camera system and the platform. The details for achieving optimized solutions are presented in two sections. First, solutions are presented for maximizing image quality. Second, considerations are discussed for effective mission designs which place the platform in the desired spatial context to facilitate the optimal use of the on-board camera.

FIGURE 7 PAVILION LAKE. Multi-angle swath backscatter data from Pavilion Lake, used by the Pavilion Lake Research Project to plan science and exploratory objectives.
Note the missing backscatter data (white) at the lake centre, where the lake is widest. Backscatter data from Mullins and Bird (2007).

2.3.1. Image quality optimization

AUV missions were run during field operations at Pavilion Lake in June and July 2008 to record images of the benthic surface. The first images collected showed no detail at all. These first discouraging results were addressed through repeated missions at various altitudes in an attempt to “find” the benthic surface. After altitudes were adjusted and focus reconfirmed, the benthic surface became discernible. Even with these adjustments, the quality of images was still insufficient to positively identify benthic characteristics, as can be seen in Figure 8, column 1. To isolate the root causes of these image quality issues, a number of investigative missions were run. During these missions, each of the lens parameters and camera sensor parameters discussed in sections 2.2.1 and 2.2.2 was tested by changing single parameters. In addition, these missions investigated aspects of water clarity, light availability at depth, and platform performance in stability, surge (forward motion), and altitude control. Once these aspects were better understood through analysis of each mission’s results, settings were manipulated to obtain a solution for the conditions of each deployment, examples of which can be seen in Figure 8, column 2. This section provides details on the factors that impact image quality and the various methods used to mitigate each, beginning with platform performance. Image quality is impacted by platform dynamics. Platform stability, or the ability to maintain a stable orientation, allows images of consistent scale to be captured; deviations from this cause image distortion. Images are distorted when the central axis of the camera lens is not perpendicular to the tangential plane of the benthic surface.
For UBC-Gavia, where the camera is mounted in planar view, geometric image distortion occurs only when the platform’s orientation deviates from parallel to the bottom. When this occurs, objects in an image are at different distances from the lens, and thus the pixel scale for each object differs, resulting in objects of the same size appearing larger or smaller based on their location. When this type of distortion is strong, a benthic horizon can often be seen, indicating significant angular deviation from a planar view (Figure 8, 2a and 3a). Another effect with more pressing detrimental results, in that it is not easily rectified and often leaves images or parts thereof unsalvageable, is uneven lighting. This is due to uneven strobe lighting and to vignetting effects caused by an image projection from the lens onto the CCD chip which is small compared to the chip dimensions. An example of this effect is most apparent in Figure 8, 3a. Stability issues with UBC-Gavia are encountered in two instances: when the platform executes a sharp turn (roll), or when it fails to maintain a parallel benthic orientation while tracking up or down slopes (pitch). During a turn, the platform banks and thus the roll angle changes. Image data gathered while the platform is executing a turn are still usable for benthic identification purposes; however, these must be discarded or rectified if accurate measurement of benthic features is required. Similarly, when the pitch angle relative to the benthic surface is offset from a parallel orientation, distorted images may still be usable for benthic identification.

FIGURE 8 OPTIMIZED IMAGES FROM UBC-GAVIA. These images show the progression of optimization in Pavilion Lake. Column 1 displays initial images collected, for each row, from an area of the lake similar to that shown in the following two columns. Column 2 shows images after platform and camera optimization. Column 3 shows the final result of images through software post-processing.
Black bar represents 1 meter.

So far, potential problems associated with platform pitch and roll have been discussed. Platform surge is responsible for another detrimental effect called motion blur. Motion blur is an important image artifact which causes features to appear unfocused. This artifact is impacted not only by platform surge but also by platform altitude and various specifications of the camera system; more details on these camera system factors are discussed below. Motion blur is caused by platform surge during the time the camera shutter is open (the light integration time) to record an image. This effect is especially pronounced with longer integration times, which are typical in low light environments. Essentially, the camera sensor receives photons during the integration time from the entire scene, which is changing position over time from the moving platform’s perspective. Thus all the features in the scene are mapped to the changing locations they appear to occupy. This results in the imaged scene being stretched along the platform’s velocity vector. For example, with a surge of 1.6 m/s and a shutter time of 0.04 s, the features in the scene will be “stretched” 6.4 cm. When interpreting the impact motion blur will have on the recorded image quality, the factors of surge and shutter speed must be considered against the resolution of imaged features. This resolution is determined by the camera sensor resolution, the lens angle of view, and the platform altitude. To incorporate a more meaningful form of image resolution, the minimum resolvable feature (MRF) is used. The MRF is defined here as the corresponding ground sample distance (GSD) of a single pixel multiplied by a factor of three to allow a practical limit for resolving features. The pixel length was multiplied by three because features described by at least three pixels were found to be discernible.
Similarly, on the Mars Reconnaissance Orbiter High Resolution Imaging Science Experiment, which has a GSD of 30 cm pixel⁻¹, objects of 1 meter could be resolved (McEwen et al., 2003). In other words, objects composed of 3-4 pixels were resolvable. For UBC-Gavia, the length scale of individual pixels at various focus distances was confirmed using a set of images taken with an embedded scale over various focus distances (see Appendix A.4). The MRF is dependent on the vehicle altitude, as this defines the GSD. Figure 9 plots this relationship along with the ground sample area (GSA), which is the area the image sensor will sample from the benthic surface. Since GSD changes with altitude, so too does the impact of motion blur. For example, when imaging from a low altitude with an MRF on the mm scale, a motion blur of 6.4 cm will cause imaged objects to stretch across many pixels. Conversely, when imaging from a higher altitude with an MRF of 6.4 cm, the same motion blur will not be discernible. The effect of motion blur can be quantified as the ratio of motion blur to MRF, called the motion blur index:

MB_index = (V_s · S_t · κ) / (3 · 2d · tan(β/2)),    (2.1)

where V_s is surge (cm/s), S_t is shutter time (s), κ is the number of pixels across one dimension of the sensor array, and β is the underwater lens angle accounting for index of refraction effects through a plane glass port,

β = 2 arcsin( sin(α/2) / 1.33 ),

where α is the in-air angle of view (degrees) (Equation 1.3) of the same sensor transect as κ, and d is the platform altitude (cm). The MB_index is used to compare the effects of velocity and shutter time with the geometry of the lens angle of view and the sensor’s GSD to determine the severity of the motion blur artifact for given mission variables. Figure 10 illustrates the MB_index in relation to altitude, shutter integration time and surge.
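Equation 2.1 can be sketched directly in code. This is an illustration only; the sensor width κ and in-air angle of view α used below are placeholder example values, not UBC-Gavia’s actual specifications.

```python
import math

def underwater_angle_deg(alpha_deg: float, n_water: float = 1.33) -> float:
    """Underwater angle of view through a plane glass port:
    beta = 2*arcsin(sin(alpha/2) / n)."""
    return 2 * math.degrees(math.asin(math.sin(math.radians(alpha_deg) / 2) / n_water))

def mb_index(surge_cm_s, shutter_s, kappa_px, alpha_deg, altitude_cm):
    """Motion blur index (Eq. 2.1): blur length divided by the MRF."""
    beta = math.radians(underwater_angle_deg(alpha_deg))
    gsd_cm = 2 * altitude_cm * math.tan(beta / 2) / kappa_px  # cm per pixel
    mrf_cm = 3 * gsd_cm  # minimum resolvable feature
    return surge_cm_s * shutter_s / mrf_cm

# Placeholder camera geometry: 1024 px across, 40 degree in-air angle of view.
print(mb_index(surge_cm_s=160, shutter_s=0.04, kappa_px=1024,
               alpha_deg=40, altitude_cm=250))
```

As the text notes, the index scales linearly with surge and shutter time and inversely with altitude.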
Unity indicates motion blur equal to the minimum resolvable feature; this produces images of acceptable quality. In images with an MBindex less than 4, motion blur artifacts remain minor, and as such this value was used as the upper limit for acceptable motion blur. Images with an MBindex greater than 6 display significant motion blur artifacts and appear unfocused (see the Unfocused section below).

FIGURE 9 ALTITUDE DEPENDENT IMAGING SCALE FOR UBC-GAVIA Plotted in this figure are the underwater imaging scale and resolution for UBC-Gavia at various altitudes. MRF indicates the minimum feature size that can be identified for the given altitudes. GSA is the ground sample area, indicating the area of the benthic surface captured by a single image frame for the given altitudes.

FIGURE 10 MISSION PARAMETERS’ EFFECT ON IMAGE QUALITY The MBindex has a linear dependence on three variables: shutter speed, velocity and altitude. The vertical line represents the maximum recommended MBindex value of ~4.

Image processing routines can be run to reduce motion blur (Pereyra and Jacoby 2008); however, these algorithms are not without residual image artifacts. For this reason it is useful to calculate the MBindex, or refer to Figure 10, for specific deployments in order to limit potential motion blur artifacts. Since the equation was derived with the MRF in terms of camera system geometry, it is applicable to any camera/lens system mounted in a planar view. When choosing an optimal mission-specific altitude, motion blur and image resolution must be considered along with additional factors, including water clarity and benthic topography. In Pavilion Lake, images were impacted detrimentally by light attenuation and an underpowered light source.
The altitude that best mitigated these effects was chosen by analyzing the image quality from a number of trial deployments flown at different altitudes. For Pavilion Lake, an altitude of 2.5 m or less allowed quality strobe-dependent images to be recorded. In addition to image quality, another criterion used for altitude selection was mission risk. Decreasing altitude allows higher quality images to be captured with more illumination; however, it also increases the risk of platform collision with benthic topographic features. These considerations, taken together with all the factors discussed above, are important for collecting optimized images, bringing imaging results closer to those in Figure 8, column 2. However, to fully maximize image quality, further optimization is required using the camera sensor software together with the factors discussed so far. To make these additional optimizations, a method is required through which individual images can be analyzed to determine their quality and the accuracy with which they represent the real-world imaged features. Image quality assessment and comprehensive image optimizations are presented in the following section.

Image analysis and rectification

Underwater image quality is difficult to determine, as visual comparison amongst images is based only upon the images themselves, with no physical reference to the imaged benthic surface. To assist in comparing image quality, histograms were drawn of the pixel intensity counts of the red, green and blue (RGB) components of RGB images (Figure 11). Histograms aid in analyzing the relative intensities of these three colours, which can provide insight into how accurately images depict a benthic scene, or how well their white balance is composed. The white balance refers to an accurate representation of colour in an image; with a correct white balance, white in an image will appear white.
A proper white balance occurs when the intensities of all colours are represented without bias to one individual colour (Figure 11A). Images collected underwater are subject to poor white balance due to the non-linear attenuation of individual wavelengths (see section 1.4). The shortest wavelengths (blue and green) are least attenuated, and thus their intensity in underwater images is higher relative to red (Figure 11B). For this reason underwater images have a tendency to appear blue or green. In Figure 11B, the histogram shows this loss of green and red relative to blue intensity, resulting in an offset white balance which does not accurately represent the spectral properties of the benthic area imaged. An image that has been optimized limits this colour offset (Figure 11A). As a guide for optimizing images, histograms are drawn for a number of images with various image quality issues, and for each of these images recommendations for optimization are presented. These examples are applicable to strobe-dependent environments; all images were collected in the central basin of Pavilion Lake.

FIGURE 11 HISTOGRAMS OF BENTHIC IMAGES These image histograms represent the pixel counts (y axis) for each colour channel (red, green and blue) and the intensity (x axis) distribution of those counts. A) All three colour channels in this histogram share similar intensity distributions. This represents a good image, where the colours in the image are similar to those of the feature imaged. B) This histogram represents an image of poor quality. The intensity distributions of the three colours are not even; red intensities are significantly lower. This is due to the high attenuation of red wavelengths in water.
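A per-channel histogram of this kind is straightforward to compute. The sketch below is an illustration only (not the software used in the thesis); it also includes a crude white-balance check based on the spread of channel means:

```python
import numpy as np

def rgb_histograms(image: np.ndarray) -> dict:
    """Per-channel intensity histograms (256 bins) for an 8-bit RGB
    image stored as an H x W x 3 uint8 array, as in Figure 11."""
    return {name: np.bincount(image[..., i].ravel(), minlength=256)
            for i, name in enumerate(("red", "green", "blue"))}

def white_balance_spread(hists: dict) -> float:
    """Spread between channel mean intensities; values near zero
    suggest an unbiased (well white-balanced) image."""
    levels = np.arange(256)
    means = [(h * levels).sum() / h.sum() for h in hists.values()]
    return float(max(means) - min(means))
```

A blue-tinted underwater frame would show a large spread, flagging the red loss seen in Figure 11B.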
Dark images

The darkness of the image shown in Figure 12 makes it difficult to interpret the nature of the plane imaged. A noticeable feature of this image is the digital noise generated from a high gain setting (25 dB). This is visible in the light white bars which pass diagonally from upper left to lower right of the image. Noise is also visible in the fine vertical streaks, which appear almost as a texture covering the entire image. The colour histograms from this image show the three colour bands of which it is composed and their relative intensity counts for the entire image (Figure 12). The red intensity is lower than the green intensity, which is in turn lower than the blue intensity. This does not represent a good white balance; the image colour is therefore not an accurate spectral representation of the area being imaged. Corrections to improve the type of quality issues affecting this image can be made through a number of settings. One important correction is to adjust the imaging platform altitude. The histogram is a good indication that this distance is too great, since there is a loss in the intensity of the colours that are attenuated most strongly in water. The additional height will leave images with a blue/green appearance as the reds are underrepresented. For black and white images the colour bias is not visible; however, reds will still be underrepresented in the image. The platform altitude should be reduced according to the turbidity of the water. Separate from this particular example, another factor causing dark images in general is too little gain. For our system, we have found that in general the gain setting should not exceed 17 dB; higher settings are associated with detrimental noise artifacts, as seen in this example.
The gain may be set higher in situations where the signal-to-noise ratio is higher, such as instances where there is additional light available, lower flight altitudes, or exceptional water clarity. Additionally, if the vehicle surge is low, the shutter speed can be decreased, allowing more light to be collected on the CCD and thereby brightening the image. Finally, the brightness setting can also be increased to raise the light level of the entire image.

Altitude (m): 4.54 | MBindex: 4.30 | Depth (m): 51.3 | Surge (m/s): 1.52 | Gain (dB): 25.9 | Shutter (s): 0.040 | Brightness (%): 0
FIGURE 12 DARK IMAGE This image from Pavilion Lake is dark with little discernible detail.

Over saturated images

This image (Figure 13) appears brighter than the last example but still has some quality issues. Image contrast is not high, making features hard to identify. In the darker areas of the image some noise artifacts can be seen, predominantly in the form of thin vertical streaks. The noise artifacts and the oversaturation of light in this image are caused by high gain and slow shutter speeds. The slow shutter speed also causes some motion blur artifacts, visible in the form of blurring in the vertical direction. As can be seen in the associated histogram, the red intensity relative to green and blue is higher than in the previous example. This is due to the reduced platform altitude above the image plane: in this image more red light is able to reach the sensor than in the previous example, where the longer light path results in more red attenuation. To correct images of this nature, the gain must be reduced to the maximum recommended 17 dB or lower, depending upon the light levels. The shutter speed should be increased to a value closer to 0.01 s for the velocity the platform is traveling at in this example, to keep the MBindex within the recommended range (under 4).
These two modifications will eliminate much of the noise and overexposure, restoring a significant amount of the lost contrast. Reducing the shutter integration time will also reduce the extent of motion blur, giving the image a more focused appearance.

Altitude (m): 2.51 | MBindex: 6.49 | Depth (m): 53.8 | Surge (m/s): 1.27 | Gain (dB): 25.9 | Shutter (s): 0.040 | Brightness (%): 0
FIGURE 13 OVER SATURATED IMAGE This image from Pavilion Lake is over saturated; contrast is weak and details are lacking.

Unfocused

Unfocused images are lacking in clarity and contrast (Figure 14). Figure 14A is over-exposed, while Figure 14B is under-exposed. For Figure 14A, the red intensity in the histogram is lower than the blue/green intensities, indicating that the platform altitude is too high. For Figure 14B, the histogram intensities are relatively equal, indicating the better white balance associated with the lower altitude from which this image was captured. There is also a difference in shutter speed and surge between these two images: for Figure 14A the platform is traveling faster with a slower shutter speed than in Figure 14B, which is associated with a reduced velocity and increased shutter speed. Slow shutter speeds and high velocities cause motion blur, which also contributes to an unfocused image (e.g. Figure 14A). To correct images with these issues, a correct lens focus should first be confirmed. The lens focus can change over successive deployments, and the image can become out of focus if the flight path altitude is altered. The high gain setting (25 dB in both images), with its associated digital noise, also contributes to an unfocused appearance and should be reduced. With gain reduced and focus confirmed, the remaining fix is to find a correct balance between shutter speed and surge for a given altitude (see Equation 2.4 and Appendix A.5).
For Figure 14A motion blur effects are visible, whereas in Figure 14B they are not, due to the lower MBindex. As an additional consideration, all underwater images will show some degree of scattering, especially as the distance and/or turbidity of the water between the camera and the subject increases. As mentioned previously, this results in a loss of contrast in the imaged object, reducing image quality and causing an unfocused appearance.

A: Altitude (m): 3.03 | MBindex: 6.56 | Depth (m): 53.1 | Surge (m/s): 1.55 | Gain (dB): 25.9 | Shutter (s): 0.040 | Brightness (%): 0
B: Altitude (m): 2.66 | MBindex: 0.74 | Depth (m): 53.8 | Surge (m/s): 1.22 | Gain (dB): 25.9 | Shutter (s): 0.005 | Brightness (%): 0
FIGURE 14 UNFOCUSED IMAGES In images A and B and their associated histograms, details are lacking and contrast is poor.

Optimized image

This image (Figure 15) shows high contrast and represents good settings. The image is still a little dark, which is due to underpowered strobe output. The colour histogram displays a good white balance. However, a few potential improvements could be achieved by decreasing surge and shutter speed. Additionally, the platform altitude could be decreased to increase the light level of the image, thereby increasing contrast and colour intensity; decreasing the platform altitude increases the effective illumination from the underpowered strobe. Note that changing altitude also requires an adjustment of shutter speed and surge to stay within acceptable MBindex ranges (< 4). With the optimizations detailed in each of the sample images presented here and in the previous section, results can be achieved such as those seen in Figure 8, column 2. These settings are summarized in Appendix A.7. Additional image examples can be found in Appendix A.8.
Altitude (m): 2.25 | MBindex: 1.78 | Depth (m): 53.9 | Surge (m/s): 1.25 | Gain (dB): 17.0 | Shutter (s): 0.010 | Brightness (%): 0
FIGURE 15 OPTIMIZED IMAGE This image, taken at Pavilion Lake, represents a quality image where features are readily identifiable, allowing for the identification and classification of benthic features.

A final image optimization was made to demonstrate the potential of software post-processing on three sample images (Figure 8, column 3). Image processing consisted of creating a duplicate of the original image. The duplicate image was overlain on the original image and a Gaussian blur filter was used to reduce the effects of scatter; the transparency between the two layers was then adjusted to maximize image detail. Each of the two images was manipulated separately to adjust the colour levels in each of the red, green and blue (RGB) channels. Predominantly, the red intensity was increased to restore the red lost to attenuation in the water column. This effectively creates an image with colours true to the benthic surface, while the process of overlaying the original image with a duplicate enhances the contrast. As can be seen in Figure 8, this procedure is effective in reducing the scattering effect present in underwater images, which reduces contrast. This post-processing of the images is a final step in image optimization, which along with the preceding information can generate images such as those in Figure 8 (3a-c). In the following section, considerations for the design of platform flight paths are presented, discussed and analyzed.

2.3.2. Mission design

So far the specifics of image quality have been discussed through an analysis of the information contained within the images. The focus of this section is the various aspects of operating the AUV in order to maximize the potential for quality image collection.
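As an aside before the mission details, the duplicate-overlay post-processing described above can be sketched in a few lines of numpy. This is a minimal illustration under assumed parameters (the blur width, layer opacity and red gain are invented example values; the thesis work was performed interactively in image-editing software):

```python
import numpy as np

def gaussian_blur(channel: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """Separable Gaussian blur of a 2-D float array."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    tmp = np.apply_along_axis(lambda row: np.convolve(row, k, mode="same"), 1, channel)
    return np.apply_along_axis(lambda col: np.convolve(col, k, mode="same"), 0, tmp)

def enhance(image: np.ndarray, opacity: float = 0.5, red_gain: float = 1.4) -> np.ndarray:
    """Overlay a Gaussian-blurred duplicate on the original at the given
    opacity (suppressing scatter), then boost the red channel to restore
    red lost to through-water attenuation. image: H x W x 3 floats in [0, 1]."""
    out = np.empty_like(image, dtype=float)
    for i in range(3):
        out[..., i] = (1 - opacity) * image[..., i] + opacity * gaussian_blur(image[..., i])
    out[..., 0] *= red_gain  # channel 0 = red
    return np.clip(out, 0.0, 1.0)
```

In practice the opacity and per-channel gains would be tuned per image, as described above, to maximize detail while restoring a neutral white balance.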
Images were collected through designed missions, each consisting of a deployment of the platform with a programmed set of parameters defining its flight path trajectories. The mission objectives were to image, map and classify the benthic surface of the Central Basin; thus the key factor in mission design was to facilitate the optimal use of the on-board camera system. Image collection over the survey area was accomplished through repeated, offset, across-basin transects (Figure 16). All transects were run with the platform in bottom track mode; in this mode the vehicle adjusts its altitude to maintain a set altitude above the benthic surface. The chosen altitude set point depends upon the properties of the water and the mission objectives. To best determine the effect of different mission parameters, images were analyzed post mission. This was accomplished in a series of initial test missions with the objective of better understanding platform performance and its implications for image collection. The following section details performance issues and rectification strategies learned in the course of this work to overcome bathymetric challenges while maximizing image quality return.

FIGURE 16 UBC-GAVIA MISSION TRACKS IN PAVILION LAKE Gavia mission tracks across the Central Basin, primarily focusing on the area with no sonar coverage. Backscatter data from Mullins and Bird 2007.

To maximize platform performance with respect to imaging requirements, the AUV’s in-water dynamics must be understood, along with some knowledge of a site’s bathymetry. The bathymetry becomes important in photographic applications as the platform must approach and maintain a close, parallel proximity to the benthic surface. UBC-Gavia is well suited for the relatively gently sloping lower walls of Pavilion Lake’s Central Basin, which approach the flatter basin bottom. Its torpedo-shaped hull allows for stable, energy efficient flight over small slope angles.
Conversely, steep vertical changes in bathymetry pose a challenge. For UBC-Gavia, ascent and descent angles are limited to less than ~25°. Mission flight paths were thus restricted by this limitation and were designed accordingly, based on preexisting knowledge of the bathymetry. The bathymetry of the Central Basin has relatively steep walls descending to a mild slope that flattens out at the basin’s bottom (Figure 17). The planar basin bottom can be traversed effectively by UBC-Gavia, with altitude variations within ca. 10 cm of the bottom tracking set point. For example, Figures 17-19 display flight track profiles along with the associated histograms of vehicle altitude; in these figures an even vehicle altitude above the horizontal lake bottom is evident. However, the AUV flight track profiles also indicated difficulty in the AUV’s ability to track the bottom along the steep walls of the basin (Figure 18 and Figure 19). In these figures, flight paths with vehicle altitudes higher than the set altitude are visible during down-slope tracking, and lower altitudes occur during up-slope tracking. This indicates that, when attempting to follow a set altitude above the bottom, the slope of the bottom exceeds the ascent or descent angle capability of the platform. This type of performance is indicated by histograms of flight altitude displaying a bimodal distribution (Figure 18 (a) and Figure 19 (a)). Down-slope tracking resulted in a few mission aborts due to vehicle altitude exceeding the expected limits of the flight path’s set altitude. Reduced altitudes on up-slope tracking posed a risk of bottom collision; during the course of field operations at Pavilion Lake, bottom collisions occurred three times, requiring retrieval measures. All altitude histograms also display a strongly leptokurtic distribution, indicating good performance of the AUV control algorithms in maintaining altitude set points.
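The bimodality used here as a diagnostic can be checked programmatically. The sketch below is an illustration only (the bin width and peak threshold are assumed values, not those used in the thesis); it counts local maxima in an altitude histogram:

```python
import numpy as np

def altitude_modes(altitudes, bin_width=0.25, min_frac=0.05):
    """Count local maxima in a flight-altitude histogram (0-7 m range,
    as in Figures 17-19). More than one peak suggests the AUV struggled
    to hold the bottom-track set point on steep slopes."""
    counts, _ = np.histogram(altitudes, bins=np.arange(0, 7 + bin_width, bin_width))
    floor = min_frac * counts.max()  # ignore tiny bumps
    return sum(1 for i in range(1, len(counts) - 1)
               if counts[i] > counts[i - 1]
               and counts[i] >= counts[i + 1]
               and counts[i] >= floor)
```

An even bottom-track mission (Figure 17) yields one mode near the set point; a down-slope mission (Figure 18) adds a second mode at higher altitudes.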
To mitigate platform limitations when encountering steep bathymetry, subsequent modifications were made to the flight paths.

FIGURE 17 UNIMODAL BOTTOM FOLLOWING FROM OPEN-WATER DESCENT Open-water descent and bottom tracking with a set flight altitude of 2.5 m (Mission 1626). a) Histogram of altitude counts. b) Profile of vehicle flight path (depth) above the lake bottom (total depth) for the same mission. The open-water descent does not require bottom tracking along the steep basin walls, and thus there is less difficulty in maintaining the set altitude. The histogram distribution is unimodal. Note: All bottom tracking figures (Figure 17, Figure 18 and Figure 19) display a slight positive skew associated with the AUV’s initial approach to the bottom; during this approach, as it navigates through the water column, sporadic bottom readings are collected before the set bottom track altitude is reached.

FIGURE 18 BIMODAL BOTTOM FOLLOWING DOWN-SLOPE Down-slope bottom tracking altitude distributions with a set bottom tracking altitude of 2.5 m (Mission 1741). (a) shows a positive minor mode and a major negative mode; the second peak at higher altitude indicates AUV difficulty in maintaining the set flight altitude as the AUV traverses down-slope. This is due to the downward slope angle exceeding the angle of descent capability of the AUV. In (b) the vehicle’s descent profile is displayed.
FIGURE 19 BIMODAL UP-SLOPE BOTTOM TRACKING Up-slope bottom tracking altitude distribution for a set flight altitude of 2.2 m (Mission 1417). (a) shows a negative minor mode and a major positive mode; a secondary distribution of lower altitudes indicates AUV difficulty in maintaining set point altitudes as it traverses up-slope. This is due to the up-slope angle exceeding the ascent angle capabilities of the AUV (b).

Modifications were needed to address the issues associated with both down-slope and up-slope tracking. To eliminate the occurrence of down-slope tracking failures, the descent trajectories were performed in open water at some distance away from the basin walls (Figure 17). With this mission design the AUV would descend to the set flight path altitude and bottom track across the basin at the set altitude. At the end of an across-basin transect, a 180° turn was executed to follow a parallel return transect offset by several meters. The turn was executed before the vehicle reached the opposite basin wall, to confine the survey area to the basin bottom and to reduce the risk of bottom contact with the steep slope of the opposite wall. The return transect was then terminated before the basin walls near the start point were encountered. This was an important modification which reduced the risk of AUV bottom encounter, as there is a significant risk of this when ascending along a steep slope while bottom tracking at low altitudes (ca. < 3 m) (Figure 19). Using the open-water descent mission design, bottom tracking failures were eliminated. All missions consisted of a single outbound transect and a single return transect.
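The out-and-back pattern described above can be sketched as a short waypoint generator. This is a hedged illustration of the geometry only (the coordinate frame, leg length and offset below are invented example values; it is not the Gavia mission planner):

```python
def out_and_back(start_x: float, y: float, leg_m: float, offset_m: float):
    """Waypoints for one across-basin transect pair: descend in open
    water, run an outbound leg, turn 180 degrees short of the far wall,
    and return on a parallel leg offset by several metres."""
    return [
        (start_x, y),                     # open-water descent point
        (start_x + leg_m, y),             # end of outbound leg (begin turn)
        (start_x + leg_m, y + offset_m),  # start of offset return leg
        (start_x, y + offset_m),          # terminate before near wall, ascend
    ]

print(out_and_back(0.0, 0.0, 300.0, 5.0))
```

Successive missions would step the `y` origin across the basin to build up survey coverage from repeated, offset transect pairs.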
There are two advantages to this: first, the search and recovery area is smaller should a mission error occur, and second, the time from mission execution to surface return is shorter. The quick execution-to-return time was important in that an analysis of vehicle performance between missions allowed solutions to be generated and corrections to be made, thus avoiding extended periods of un-optimized data collection. Another mission design initially considered was one in which the AUV performed a “lawnmower” type pattern across the Central Basin. This design consists of extended duration missions where the AUV performs multiple, connected, parallel transects. However, it was not chosen due to the higher positional errors which accumulate and become significant as the vehicle remains submerged for extended periods of time. In total, 64 missions were executed at Pavilion Lake (Table 6), of which 45% failed. This statistic may be misleading in that it includes all those missions for which initial tests and performance analysis were the objective. Once solutions were implemented to address AUV performance as discussed above, only one mission failure occurred throughout the remaining duration of the survey. In summary, the most effective tactic for gathering image surveys in unknown conditions is to begin with short trial missions from which mission data and images can be quickly uploaded and analyzed between deployments. This allows for the rapid retrieval of necessary information on the unknown subsurface conditions, platform behavior and bathymetry challenges, and limits time spent collecting unusable data. The platform and camera system working effectively in conjunction enables the highest quality images to be recorded. Through the lessons learned at Pavilion Lake, final optimized settings were obtained (Figure 8 and Figure 15).
These images provided sufficient image quality to interpret fine-scale benthic features; at the same time, these optimizations provide some guidelines for conducting surveys at other locations. To further develop the imaging capability of the AUV platform, several other sites were imaged. The modifications and additional considerations in survey methods and platform optimization for the subsequent test sites are reviewed below.

TABLE 6 MISSION SUMMARY Mission summary for Pavilion Lake benthic mapping surveys. (Area mapped is based on an image footprint of 6.5 m².)

                      Number of missions   Distance traveled (km)   Area mapped (km²)
Total missions        64                   27.480                   0.069
Successful missions   35                   24.440                   0.061
Failed missions       29                   -                        -

Total submerged mission time: 6:21 hours
Successful submerged mission time: 5:39 hours
Basin area: 0.45 km²
Area mapped per hour: 0.013 km² (12,886 m²)

2.4. Case Study 2: Kelly Lake

Kelly Lake is located at 51° 0.363’ N, 121° 46.772’ W, in the southern interior of B.C., Canada. It is approximately 16 km west of the town of Clinton, at an elevation of 1068 m, situated at the bottom of a high relief valley. Kelly Lake is a hardwater lake with a length of 1.5 km, a maximum width of 400 m and a surface area of 44 hectares (Figure 20). Secchi depths, at 13 m, are similar to those recorded in Pavilion Lake. Kelly Lake is situated 15 km north of Pavilion Lake and has similar basin bathymetry, with steep walls dropping to a flatter basin bottom. The lake is deepest in the North Basin, with a maximum recorded depth of 41 meters. Within Kelly Lake are microbialites similar in morphology to those found in Pavilion Lake. The microbialites in Kelly Lake were discovered by members of the Pavilion Lake Research Project in 2004; since then this site has been visited in conjunction with research activities occurring at Pavilion Lake. UBC-Gavia was used to investigate the northern extent of Kelly Lake, where the maximum depth was recorded.
The survey was completed using the techniques and settings developed and optimized for Pavilion Lake (Figure 15). Missions were run in June 2008 and consisted of simple away-from-shore and back transects. These were modified slightly after AUV flight path characteristics were analyzed. The analysis revealed issues with AUV navigation of the steep slope angles of the basin walls; these slopes exceeded the ascent/descent capabilities of the AUV, similar to the basin slopes in Pavilion Lake. In subsequent missions, the descent stage of the flight path was initiated away from the basin wall and the ascent stage was initiated before the basin walls were encountered. Furthermore, all transects were run parallel to the lake’s longest axis (north-south), which allowed the bottom time of each transect to be maximized. Additionally, this meant only one major basin wall would be transected, and the ascent stage would encounter only a minor basin wall on the south side (Figure 20 and Figure 22).

FIGURE 20 KELLY LAKE In this figure the missions run in north Kelly Lake are shown. GIS data from GeoBC, WSA Stream Routes (1:50K), date of data 2005-09-01.

After each mission, image quality was analyzed. Initial images using settings successful in Pavilion Lake were sufficient to allow identification of benthic features (Figure 21). However, these images were not of the same clarity as those collected in Pavilion Lake (Figure 8), suggesting there may be stronger light attenuation in the water column of Kelly Lake. The difference in images between the first and latter study sites is primarily seen in the original images before post-processing. Potential solutions may entail re-deploying in periods when turbidity is reduced, or further decreasing the flight path altitude. A decreased flight path altitude reduces the volume of water being imaged through and thus limits the impact of high turbidity.
Finally, using the depth readings from the combined imaging missions, a bathymetric map for Kelly Lake was generated (Figure 22).

FIGURE 21 IMAGES FROM KELLY LAKE In these images features are discernible, allowing for the identification of benthic features; however, in some images details are not as visible as in those from Pavilion Lake. Higher turbidity in the bottom 3 meters of Kelly Lake may be responsible for these differences. Images 1 through 3 (a) are the original images, whereas images 1 through 3 (b) are the corresponding post-processed images. Black bar = 1 m.

FIGURE 22 KELLY LAKE BATHYMETRY This figure displays bathymetry data gathered during imaging missions.

2.5. Case Study 3: Lake Tahoe

Lake Tahoe is located east of San Francisco on the border between California and Nevada (39° 6.532’ N, 120° 2.239’ W) at an elevation of 1895 m. The lake has a maximum length and width of 35 and 18 km, respectively, with a maximum depth of 501 m. The current work, however, was focused only on the lake’s perimeter in shallow water (ca. < 60 m). Lake Tahoe is well known for its clear waters, with Secchi depths greater than 35 m (Tahoe Environmental Research Center 2009). In collaboration with the University of Delaware and the Tahoe Environmental Research Center, photographic surveys were completed around the lake’s circumference in August 2009. The study’s aim was to assess the extent of dispersion of an invasive clam species (Figure 23). The initial camera configuration for Lake Tahoe was set to match the optimized settings from Pavilion Lake. These settings were chosen because of a number of similarities in the imaging environment of each site, including relatively clear waters and strobe-dependent lighting. The photographic surveys in Lake Tahoe, however, presented an additional challenge: imaging in shallow water where ambient light was present while retaining inter-image consistency.
When ambient light is present, light levels between photographs vary depending on depth, time of day and meteorological conditions. Light intensity on the benthic surface also varies due to light refracted at the air/water interface of the lake's surface. To eliminate these issues, missions were run at night. The night missions effectively normalized the images, with the only illumination source originating from the on-board strobe. An additional advantage of night missions was the reduced water traffic on the lake compared to the heavy traffic of daylight hours. Initial images collected using the optimized Pavilion Lake settings were usable and allowed benthic features to be identified. However, the images displayed a slightly unfocused quality. To determine which settings might cause this slight loss in image quality, images were taken in a controlled setting in shallow water. In shallow water the platform can be positioned at the surface over the benthic surface, at the altitude for which the lens has been focused. In this configuration, the action of the strobe and the clarity of the water can be visually confirmed. Additionally, since the AUV is at the surface, a live connection can be established over Wi-Fi and immediate image feedback can be gathered. Through this procedure it was determined that the focus could be improved. Upon recalibrating the focus, images improved; examples can be seen in Figure 23. Analysis of the study's results is ongoing, and preliminary findings were presented at the 2010 PPNW conference (Forrest et al. 2010).

FIGURE 23 IMAGES FROM LAKE TAHOE
In image (a) the invasive clam species under study can be seen as small white features. In image (b) local fauna was photographed. Black bar = 1 m.

3. RESULTS
The previous sections have dealt with the image optimization process; this section visually presents the optimized imagery collected.
In this section only data collected in Pavilion Lake will be considered, as this was the primary site of investigation. The image data is first classified and then mapped to provide context for the distribution of benthic types in Pavilion Lake's Central Basin. Additionally, bathymetric maps are generated using the depth data associated with each image.

3.1. Benthic photographic mapping: Pavilion Lake
In total, over 51,000 quality images were collected through photographic transects in Pavilion Lake. A subset of these images was viewed in order to determine and classify benthic characteristics. These initial observations revealed a number of distinct and identifiable benthic characteristics, which were classified into seven benthic types. These characteristics included the presence or absence of certain features, substrate types or identifiable surficial patterns. The seven benthic types are as follows: (1) Rubble, which included rubble larger than a decimeter and small-scale gravel overlaying sediment. (2) Microbialites, whole or fragmented, overlaying sediment. (3) Sediment, which persisted where no other features or patterns existed; exposed sediment was identifiable as a bright, relatively homogeneous surficial cover. (4) A pattern consisting of many small (decimeter-scale), light colored circular features. Another pattern consisted of a series of intersecting lines reminiscent of a network covering the benthic surface. This reticulated network pattern was divided into three further categories, (5) low, (6) medium and (7) high, relating to coverage intensity, which indicated the level of the pattern's surficial dominance in a given image frame. Each of these identifiable benthic characteristics described a benthic type and, together, a framework within which all images could be rated. In instances where shipwrecks or other anthropogenic artifacts were discovered, the surficial type upon which they were resting was recorded.
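The seven-type rating framework above can be captured as a simple lookup table. The numeric codes match the enumeration in the text, though the short labels are informal summaries rather than the thesis's own terms:

```python
# Numeric codes (1-7) for the benthic types enumerated above; the short
# labels are informal summaries of the descriptions in the text.
BENTHIC_TYPES = {
    1: "rubble over sediment",
    2: "microbialites over sediment",
    3: "exposed sediment",
    4: "small light circular features",
    5: "reticulated network, low coverage",
    6: "reticulated network, medium coverage",
    7: "reticulated network, high coverage",
}

def label(code):
    """Map a rating code to its benthic-type label, rejecting unknown codes."""
    if code not in BENTHIC_TYPES:
        raise ValueError(f"unknown benthic type code: {code}")
    return BENTHIC_TYPES[code]

print(label(3))
```

Constraining ratings to this fixed set keeps every classified image comparable to the reference images for its type.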
Reference images were determined for each of the seven benthic types (see Figure 24 and Figure 25) to provide a consistent standard against which all images could be graded. Each image was then given a single rating corresponding to its dominant benthic type.

FIGURE 24 BENTHIC NON-NETWORK TYPE CLASSIFICATIONS
Seven benthic types were determined, by which all usable images were classified. In this figure, four benthic types are represented; these images provide a reference for each of their respective benthic types. Black bar represents 1 m.

FIGURE 25 BENTHIC NETWORK TYPE CLASSIFICATIONS
Seven benthic types were determined, by which all usable images were classified. In this figure, the remaining three benthic types are represented; these images provide a reference for each of their respective benthic types. Black bar represents 1 m.

To classify each image according to the benthic type classification scheme established above, a MATLAB program was used. This program consecutively called and displayed images from a specific mission and prompted the user to input one of the designated benthic types. The data product consisted of an array of geo-referenced benthic type ratings. Additionally, platform depth and altitude data were included for each image. From this data set, benthic substrate type could be mapped and displayed in reference to the lake as a whole (see Figure 26).

FIGURE 26 MISSIONS CLASSIFIED ACCORDING TO BENTHIC TYPE
Benthic map displaying the distribution of the seven benthic type classifications.

In addition to the map of benthic types, the density of across-basin transects provided sufficient data to create a bathymetric map of the Central Basin. Bathymetry was produced for the survey area by combining the depth and altitude parameters recorded with each image. MATLAB scripts were written to distribute these geo-referenced total depth values onto a grid from which contours could be generated (Figure 27).
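The rating program described above was implemented in MATLAB; a Python sketch of the same idea follows. The record keys and the injected `get_rating` callable are illustrative assumptions, not the thesis's actual data fields:

```python
def classify_images(image_records, get_rating):
    """Prompt for one benthic-type rating (1-7) per displayed image.

    `image_records`: iterable of dicts with hypothetical keys 'file',
    'lat', 'lon', 'depth_m', 'altitude_m'. `get_rating` is a callable
    (e.g. the built-in `input`) returning the user's rating; injecting it
    lets the loop also run non-interactively.
    """
    rated = []
    for rec in image_records:
        rating = int(get_rating(f"Benthic type for {rec['file']} (1-7): "))
        if not 1 <= rating <= 7:
            raise ValueError(f"rating out of range: {rating}")
        # Keep position, depth and altitude with the rating so the result
        # is a geo-referenced array ready for mapping.
        rated.append({**rec, "benthic_type": rating})
    return rated

records = [{"file": "img_0001.jpg", "lat": 50.86, "lon": -121.74,
            "depth_m": 55.2, "altitude_m": 3.1}]
print(classify_images(records, lambda prompt: "5"))
```

Carrying the navigation fields through with each rating is what makes the output directly mappable, as in Figure 26.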
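The gridding step can be sketched without the original MATLAB scripts. Here the geo-referenced total depths (vehicle depth plus altitude, under hypothetical field names) are averaged into latitude/longitude cells, from which contours could then be drawn:

```python
from collections import defaultdict

def grid_total_depths(records, cell_deg=0.0005):
    """Average total depth (vehicle depth + altitude) per lat/lon grid cell.

    `records` holds one dict per image; 'lat', 'lon', 'depth_m' and
    'altitude_m' are hypothetical key names for the geo-referenced values
    logged with each photograph.
    """
    sums = defaultdict(lambda: [0.0, 0])
    for r in records:
        cell = (round(r["lat"] / cell_deg), round(r["lon"] / cell_deg))
        sums[cell][0] += r["depth_m"] + r["altitude_m"]
        sums[cell][1] += 1
    return {cell: total / n for cell, (total, n) in sums.items()}

records = [
    {"lat": 50.8651, "lon": -121.7411, "depth_m": 55.2, "altitude_m": 3.1},
    {"lat": 50.8651, "lon": -121.7411, "depth_m": 55.6, "altitude_m": 2.9},
]
print(grid_total_depths(records))
```

Averaging repeated soundings within a cell smooths altimeter noise before contouring, which is why the dense transect coverage supports a usable bathymetric map.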
FIGURE 27 PAVILION LAKE CENTRAL BASIN BATHYMETRY
Bathymetry generated from photographic missions.

4. DISCUSSION
With the use of a camera-equipped AUV, underwater surfaces can be explored and investigated in photographic detail. The methods developed in this paper were effective at improving unidentifiable images to the extent that fine-scale benthic features could be identified. These methods required considerable manipulation of the automated platform system settings. Once the optimizations were made, data collection proceeded with little error or need for further modification of the AUV system. The data collected were successfully rendered into a map of benthic types (Figure 26), revealing the nature of the benthic surface throughout the Central Basin. The photographic resolution of this survey could not be duplicated by the previous sonar survey because of the poor acoustic reflectivity of the soft substrate. Additionally, the photographs were able to resolve details of colour associated with benthic surface features. Furthermore, the photographs enabled the unexpected discovery of benthic flora situated in the deepest areas of the Basin (Figure 25 and Figure 26). The high resolution and colour sensitivity of the camera allowed for the detection of this unexpected benthic epipelic flora. This example illustrates the value of collecting benthic images and their importance in describing the processes that occur there. The methods developed here will hopefully serve to accelerate the useful application of UBC-Gavia as a benthic imaging tool. The development of methods for capturing photographs in such low-light environments opens up deep regions, which are generally inaccessible or more dangerous to divers. The camera optimization settings developed here appear to be robust in their application to other sites. The optimized settings for Pavilion Lake produced usable imagery in both Kelly Lake and Lake Tahoe.
Lake Tahoe in particular shared similarities in water clarity, allowing for high-quality image collection. On the other hand, Kelly Lake appeared to have considerable light attenuation at depth. Despite this, the imaging system still performed well enough to enable the identification of benthic features. In this work the Central Basin of Pavilion Lake was imaged extensively, with 35 interconnecting transects totaling over 27 km. The concentrated extent of photographic transects by AUV over the Central Basin performed in this study is unique in the literature, where most photographic imagery is collected in single transects, in grouped parallel transects, or in intersecting transects over a smaller area such as a shipwreck. The extent of imagery collected in this study allows for high-resolution identification and mapping of benthic features. However, it also requires extensive image analysis time: the identification and classification of 51,000 images took approximately 30 hours. Additionally, much time was spent extracting and managing the large database of associated imaging information. Fewer transects, and thus fewer images, would still identify and define the physical nature of the Central Basin, though at a lower spatial resolution. The advantage of using an AUV for imaging is that it can perform autonomously. This aids the imaging task because the small footprint of recorded images requires that the survey area be traversed extensively if a description at photographic resolution is desired. Once the imaging mission and camera system settings were optimized, missions were run consecutively by the author alone with little operator effort. This could continue until the platform battery was depleted, at which point a recharge was required before subsequent deployment. Using towed or ROV platforms requiring constant operator control would have required additional operators and effort, with decreased positional accuracy.
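The analysis burden quoted above works out to only a couple of seconds per image, as a quick check of the reported figures shows:

```python
# Figures reported in the text: 51,000 images classified in roughly
# 30 hours, collected along 35 transects totaling over 27 km.
n_images = 51_000
hours = 30
transects = 35
track_km = 27

sec_per_image = hours * 3600 / n_images
km_per_transect = track_km / transects

print(f"{sec_per_image:.2f} s per image")
print(f"{km_per_transect:.2f} km per transect on average")
```

At roughly two seconds per image, the 30-hour total is dominated by sheer image count rather than per-image effort, which is why reducing the number of transects trades analysis time directly against spatial resolution.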
Due to the speed at which images were collected (3.75 frames per second), many images overlap and may be combined to form mosaics. Mosaicing is the stitching together of images along the vehicle track, affording a truly natural view of the benthic surface. To collect images for this type of work, a number of considerations must go into the initial mission plans to allow Gavia to collect suitable images. Such optimizations for mosaicing include slowing the vehicle down or increasing its altitude; both measures ensure greater overlap between images. By increasing the image overlap, image stitching software is better able to match pixels. Since there is a considerable amount of vignetting and hue/brightness variation amongst images, stitching software also normalizes colour saturation and other effects. Figure 28 shows an example of a track mosaic in the Central Basin of Pavilion Lake, and features discovered in Kelly Lake.

FIGURE 28 IMAGE MOSAIC OF A DEEP MOUND
This figure shows a deep mound at 47 m in Pavilion Lake. This image was created using ICE, a free image mosaicing tool developed by Microsoft (http://research.microsoft.com/enus/um/redmond/groups/ivm/ICE/). On the next page are two mosaics from Kelly Lake showing a "ship" wreck and encrusted macrophytes. These images have been post-processed to bring out detail of the imaged features; the original images can be seen in the appendix.

5. CONCLUSIONS
UBC-Gavia was able to successfully collect photographic images in a light-limited lake basin. Images in ambient light environments had been collected with UBC-Gavia previously and are not associated with the same technical challenges; in a light-limited environment, however, images had not previously been collected with much success. The methods and optimizations developed here made possible, for the first time, a visual survey of Pavilion Lake's Central Basin using UBC-Gavia. This imagery was used to produce a map of benthic characteristics revealing the spatial distribution of surficial features. The AUV performed well within the Central Basin bathymetry and was able to gather sufficient data to facilitate and guide future exploratory objectives. In the summer of 2009, Deepworker submersibles were diverted from other objectives based on the findings of the AUV survey missions described in this paper and visited the Central Basin, confirming and adding to the description of benthic characteristics (Figure 29). Future work would most importantly investigate the potential association of environmental parameters with the unique epipelic patterns discovered at depth. Furthermore, the developments described in this paper will enable UBC-Gavia to be deployed more effectively in future benthic exploration activities. The present imagery survey offered a unique opportunity to explore and inspect a remote and inaccessible benthic surface at 60 m depth. With this opportunity, unexpected benthic flora was encountered, rewarding the first visual explorations of a new environment.

FIGURE 29 A 2009 MISSION IMAGE FROM THE DEEPWORKER SUBMERSIBLE
This image was captured in the deep basin area of network benthic type.

REFERENCES

Ambrose WG, Quillfeldt CV, Clough LM, Tilney PVR, Tucker T. 2005. The sub-ice algal community in the Chukchi Sea: large- and small-scale patterns of abundance based on images from a remotely operated vehicle. Polar Biology. 28: 784-795. Andrews B. 2003. Techniques for spatial analysis and visualization of benthic mapping data. Newport (RI): Science Applications International Corporation. 63 p. Armstrong RA, Singh H, Torres J, Nemeth RS, Can A, Roman C, Eustice R, Riggs L, Garcia-Moliner G. 2006. Characterizing the deep insular shelf coral reef habitat of the Hind Bank marine conservation district (US Virgin Islands) using the Seabed autonomous underwater vehicle. Continental Shelf Research. 26: 194-205. Auster PJ, Malatesta RJ, Donaldson CLS. 1997. Distribution responses to small-scale habitat variability by early juvenile silver hake, Merluccius bilinearis. Environmental Biology of Fishes. 50: 195-200. AUVAC AUV systems by manufacturer. (Retrieved February 18 2010) http://auvac.org/resources/browse/configuration/ Ballard RD, McCann AM, Yoerger D, Whitcomb L, Mindell D, Oleson J, Singh H. 2000. The discovery of ancient history in the deep sea using advanced deep submergence technology. Deep Sea Research Part I: Oceanographic Research Papers 47(9): 1591-620. Ballard RD. 2008. Archaeological oceanography. New Jersey: Princeton University Press. 296 p. Barrie JV, Lewis CFM, Parrott DR, Collins WT. 1992. Submersible observation of an iceberg pit and scour on the Grand Banks of Newfoundland. Geo-Marine Letters. 12: 1-6. Braulik S. 2007. Underwater imaging on the Great Lakes to locate deep wrecks. Bachelor of Science thesis, University of Wisconsin-La Crosse. Cailliet GM, Andrews AH, Wakefield WW, Moreno G, Rhodes KL. 1999. Fish faunal and habitat analyses using trawls, camera sleds and submersibles in benthic deep-sea habitats off central California. Oceanologica Acta. 22(6): 579-592. Clarke EM, Tolimieri N, Singh H. 2009.
Using the Seabed AUV to assess populations of groundfish in untrawlable areas. In Beamish RJ, Rothschild BJ (eds). The Future of Fisheries Science in North America. Fish and Fisheries Series. Springer Science + Business Media B.V. p 357-372. Coleman DF, Ballard RD. 2001. A highly concentrated region of cold hydrocarbon seeps in the southern Mediterranean Sea. Geo-Marine Letters. 21: 162-167. Cranmer TL, Ruhl HA, Baldwin RJ, Kaufmann RS. 2003. Spatial and temporal variation in the abundance, distribution and population structure of epibenthic megafauna in Port Foster, Deception Island. Deep-Sea Research II. 50: 1821-1842. Delaporta K, Jasinski ME, Soreide F. 2006. The International Journal of Nautical Archaeology 35(1): 79-87. Delta Oceanographics Submersible. (Retrieved February 7 2010) http://www.deltaoceanographics.com/sub.html Diaz RJ, Solan M, Valente RM. 2004. A review of approaches for classifying benthic habitats and evaluating habitat quality. Journal of Environmental Management 73: 165-81. Dupre S, Buffet G, Mascle J, Foucher J-P, Gauger S, Boetius A, Marfia C, the AsterX AUV team, the Quest ROV team, the BIONIL scientific party. 2008. High-resolution mapping of large gas-emitting mud volcanoes on the Egyptian continental margin (Nile Deep Sea Fan) by AUV surveys. Marine Geophysical Researches 29: 275-290. Duntley SQ. 1963. Light in the sea. Journal of the Optical Society of America. 53(2): 214-233. Ewing M, Vine A, Worzel JL. 1946. Photography of the ocean bottom. Journal of the Optical Society of America. 36(6): 307-327. Felley JD, Vecchione M, Wilson Jr RR. 2008. Small-scale distribution of deep-sea demersal nekton and other megafauna in the Charlie-Gibbs Fracture Zone of the Mid-Atlantic Ridge. Deep-Sea Research II. 55: 153-160. Fonseca P, Correia PL, Campos A, Lau PY, Henriques V. 2008. Fishery-independent estimation of benthic species density: a novel approach applied to Norway lobster Nephrops norvegicus. Marine Ecology Progress Series. 369: 267-71.
Forrest AL, Laval B, Pieters R, Lim D.S.S. 2008. Convectively driven transport in temperate lakes. Limnology and Oceanography 53(5, part 2) : 2321-2332. Forrest A, Witmann M, Allen B, Schmidt V, Raineault NA, Pike W, Hamilton A, Kost LP, Laval BE, Trembanis AC, Schladow G. 2010. Benthic imagery survey of Asian clam in Lake Tahoe. Physical Processes in Natural Waters 2010. Reykjavik, Iceland Forrest AL, Laval B. 2007a. Charting lacustrine environments with UBC-Gavia, in Collins KJ and Griffiths G. (eds). Proceedings of the international workshop on autonomous underwater vehicle science in extreme environments held at the Scott Polar Research Institute, Cambridge, 11-13 April 2007. London: Society for Underwater Technology 202: 99-105. Forrest, AL, Laval, B. 2007b. Seasonal thermal structure of Pavilion Lake, in Collins K.J. and Griffiths G. (eds). Proceedings of the international workshop on autonomous underwater vehicle science in extreme environments held at the Scott Polar Research Institute, Cambridge, London: Society for Underwater Technology 202: 106-112. Fosså JH, Linberg B, Christensen O, Lundalv T, Svellingen I, Mortensen PB, Alvsvag J, 2005. Mapping of Lophelia reefs in Norway: experiences and survey methods, in Freiwald A.B. and Roberts J.M. (eds)., Cold-Water Corals and Ecosystems. Berlin, Heidelberg., Springer-Verlag: 359-392. Grasmueck M, Eberli G.P, Viggiano DA, Correa T, Rathwell G, and. Luo J. 2006. Autonomous underwater vehicle (AUV) mapping reveals coral mound distribution, morphology, and oceanography in deep water of the Straits of Florida. Geophysical Research Letters 33: L23616, 6 pgs  96 German C, Yoerger DR, Jakuba M, Bradley A, Shank TM. 2007. Hydrothermal exploration using WHOI’s ABE AUV. in Collins KJ and Griffiths G., (eds). Proceedings of the international workshop on autonomous underwater vehicle science in extreme environments held at the Scott Polar Research Institute, Cambridge, 11-13 April 2007. 
London: Society for Underwater Technology 202 : 83-90. German CR, Yoerger DR, Jakuba M, Shank TM, Langmiur CH, Nakamura K. 2008. Hydrothermal exploration with the Autonomous Benthic Explorer. Deep-Sea Research I. 55: 203-219. Green J. 2004. Maritime archaeology: A technical handbook. Second edition. San Diego USA: Elsevier Academic Press. Grizzle RE, Brodeur MA, Abeels HA, and Greene JK. 2008. Bottom habitat mapping using towed underwater videography: subtidal oyster reefs as an example application. Journal of Coastal Research. 24(1): 103–109. Harrold C, Light K, Lisin S. 1998. Organic enrichment of submarine-canyon and continental- shelf benhic communities by macroalgal drift imported from nearshore kelp forests. Limnology and Oceanography. 43(4): 669-678. Hulburt EO, 1945. Optics of distilled and natural water. Journal of the Optical Society of America. 35(2): 698-705. Humphris SE, Fornari DJ, Scheirer DS, German CR, Parson LM. 2002. Geotectonic setting of hydrothermal activity on the summit of Lucky Strike Seamount (37°17´N, Mid-Atlantic Ridge) Geochemistry Geophysics Geosystems. 3(8): 1049, doi:10.1029/2001GC000284. HURL Pisces V specifications (Retrieved February 7 2010) http://www.soest.hawaii.edu/HURL/pisces_V_specs.html Huvenne VAI, Beyer A, de Haas H, Dekindt K, Henriet J-P, Kozachenko M, Roy K.O-L, Wheeler AJ, TOBI/Pelagia 197, CARACOLE. 2005. The seabed appearance of different coral bank provinces in the Porcupine Seablight, NE Atlantic: results from the sidescan sonar and ROV seabed mapping, in Freiwald AB and Roberts JM (eds)., Cold-Water Corals and Ecosystems. Berlin, Heidelberg., Springer-Verlag: 359-392.  97 IFREMER fleet. (Retrieved February 7 2010). http://www.ifremer.fr/fleet/systemes_sm/engins/nautile.htm Jago research submersible. (Retrieved February 7 2010). http://npolar.no/geonet/pdf_files/Info_JAGO.pdf James HR, Birge EA. 1938. A laboratory study of the absorption of light by lake waters. Wisconsin Academy of Sciences, Arts, and Letters. 
31(1): 1-154. (cited in Wetzel 2001). Jamstec 7000 m class remotely operated vehicle Kaiko 7000II. (Retrieved January 20 2009). http://www.jamstec.go.jp/e/about/equipment/ships/kaiko7000.html Jamstec manned research submersisble SHINKAI 6500. (Retrieved February 7 2010). http://www.jamstec.go.jp/e/about/equipment/ships/shinkai6500.html Janesick JR. 2001. Scientific charge-coupled devices. The Society for Photo-optical Instrumentation Engineers. Bellingham, Wa., USA. Jerlov NG. 1968. Optical Oceanography. New York (NY): Elsevier. 164 p. (Cited in McFarland 1986) Johnson, S.W., M.L. Murphy, D.J. Csepp. 2003. Distribution, habitat, and behavior of rockfishes, sebastes spp., In near shore waters of southeastern Alaska: observations from a remotely operated vehicle. Environmental Biology of Fishes. 66: 259-279 Johnson sea link submersibles Harbour Branch. (Retrieved February 7 2010). http://www.fau.edu/hboi/OceanTechnology/OTsubops.php Jones DOB, Bett BJ, Tyler PA. 2007. Megabenthic ecology of the deep Faroe-Shetland channel: a photographic study. Deep-Sea Research I. 54: 1111-1128. Juniper SK, Tunnicliffe V, Southward EC. 1992. Hydrothermal vents in turbidite sediments on a Northeast Pacific spreading centre: organisms and substratum at an ocean drilling site. Canadian Journal of Zoology. 70: 1792-1809. Kostylev VE, Todd BJ, Fader GB, Courtney RC, Cameron GDM, Pickrill RA. 2001. Benthic habitat mapping on the Scotian Shelf based on multibeam bathymetery, surficial geology and sea floor photographs. Marine Ecology Progress Series 219: 121-137.  98 Lauth RR, Ianelli J, Wakefield WW. 2004a. Estimating the size selectivity and catching efficiency of a survey bottom trawl for thornyheads, sebastolobus spp. using a towed video camera sled. Fisheries Research. 70: 27-37. Lauth RR, Wakefield WW, Smith K. 2004b. Estimating the density of thornyheads, sebastolobus spp. using a towed video camera sled. Fisheries Research. 70(1) (11): 39-48. 
Lirman D, Gracias NR, Gintert BE, Gleason ACR, Reid RP, Negahdaripour S, Kramer P. 2007. Environmental Monitoring and Assessment. 125: 59-73. Love MS, Yoklavich M. 2008. Habitat characteristics of juvenile cowcod, Sebastes levis (Scorpaenidae), in Southern California. Environmental Biology of Fishes. 82: 195-202. Love MS, Yoklavich M, Schroeder DM. 2009. Demersal fish assemblages in the Southern California Bight based on visual surveys in deep water. Environmental Biology of Fishes. 85: 55-68. Mauffret A, Leroy S, Vila J, Hallot E, Lepinay BM, Duncan RA. 2001. Prolonged magmatic and tectonic development of the Caribbean igneous province revealed by a diving submersible survey. Marine Geophysical Researches. 22: 17-45. McEwen A, Hansen C, Bridges N, Delamere WA, Eliason E, Grant J, Gulick V, Herkenhoff K, Keszthelyi L, Kirk R, Mellon M, Smith P, Squyres S, Thomas N, Weitz C. 2003. MRO's high resolution imaging science experiment (HiRISE): science expectations. Sixth International Conference on Mars, Pasadena, CA. McFarland WN. 1986. Light in the sea: correlations with the behaviors of fishes and invertebrates. American Zoologist. 26: 389-401. Melchert B, Devey CW, German CR, Lackschewitz KS, Seifert R, Walter M, Mertens C, Yoerger DR, Baker ET, Paulick H, Nakamura K. 2008. First evidence for high-temperature off-axis venting of deep crustal/mantle heat: the Nibelungen hydrothermal field, southern Mid-Atlantic Ridge. Earth and Planetary Science Letters. 275: 61-69. Mertens LE. 1970. In-water photography. Wiley-Interscience, New York (NY). 464 p. MIR submersibles. (Retrieved February 7 2010). http://www.deepoceanexpeditions.com/MIR_sub.pdf Mosher DC, Austin Jr. JA, Fisher D, Gulick SPS. 2008.
Deformation of the Northern Sumatra Accretionary Prism from high-resolution seismic reflection profiles and ROV observations. Marine Geology. 252: 89-99. Mullins G, Bird J. 2007. 3D sidescan with a small aperture: Imaging microbialites at pavilion lake. Oceans 2007. September 29 2007. p. 1-6. Newman JB, Gregory TS, Howland J. 2008. The development of towed optical and acoustical vehicle systems and remotely operated vehicles. In Archeological Oceanography. ed. R. D. Ballard. New Jersey: Princeton University Press. p 15-29. Nuytco Research products DeepWorker 2000. (Retrieved February 7 2010) http://www.nuytco.com/products/subs.shtml# Oceaneering Hydra Minimum. (Retrieved January 7 2009) http://www.oceaneering.com/oceandocuments/brochures/rov/ROV%20- %20Minimum.pdf Parry DM, Nickell LA, Kendall MA, Burrows MT, Pilgrim DA, Jones MB. 2002. Comparison of abundance and spatial distribution of burrowing megafauna from diver and remotely operated vehicle observations. Marine Ecology Progress Series. 244: 89-93. Parry DM, Kendall MA, Pilgrim DA, Jones MB. 2003. Identification of patch structure within marine benthic landscapes using a remotely operated vehicle. Journal of Expermental Marine Biology and Ecology. 285-286: 497-511. Pereyra MA, Jacoby DG. 2008. An interative SNR Estimation algorithm for Wiener Deconvolution of self-similar images distorted by camera shake blurring. 8 th  WSEAS International Conference on Signal, Speech and Image Processing. Santander, Spain: p 97-100. Piepenburg D, Schmid MK. 1997. A photographic survey of the epibenthic megafauna of the Arctic Laptev Sea Shelf: distribution, abundance, and estimates of biomass and organic carbon demand. Marine Ecology Progress Series. 147:63-75.  100 Reed JK, Shepard AN, Koenig CC, Scanlon KM, Gilmore RG. 2005. Mapping, habitat characterization, and fish surveys of the deep-water Oculina coral reef Marine Protected Area: a review of historical and current research, in Freiwald A and Roberts JM. (eds). 
Cold-water corals and ecosystems. Berlin, Heidelberg. Springer-Verlog. p443-465. Rooper CN, Boldt JL, Zimmermann M. 2007. An assessment of juvenile Pacific Ocean perch (sebastes alutus) habitat use in a deepwater nursery. Estuarine, Coastal and Shelf Science. 75: 371-80. Rosenkranz GE, Byersdorfer SC. 2004. Video scallop survey in the eastern Gulf of Alaska, USA. Fisheries Research. 69: 131-40. Rosenkranz GE, Gallager SM, Shepard RW, Blakeslee M. 2008. Development of a high-speed, megapixel benthic imaging system for coastal fisheries research in Alaska. Fisheries Research. 92: 340-344. Rossi S, Tsounis G, Orejas C, Padson T, Gili J-M, Bramanti L, Teixido N, Gutt J. 2008. Survey of deep-dwelling red coral (Corallium rubrum) populations at Cap de Creus (NW Mediterranean). Marine Biology. 154: 533-545. Sanchez FA, Serrano A, Ballesteros MG. 2009. Photogrammetric quantitative study of habitat and benthic communities of deep Cantabrian Sea hard grounds. Continental Shelf Research. 29: 1174-1188. Schleyer MH, Heikoop JM, Risk MJ. 2006. A benthic survey of Aliwal Shoal and assessment of the effects of a wood pulp effluent on the reef. Marine Pollution Bulletin. 52: 503-514. SEAmagine details Triumph 3. (retrieved February 7 2010) http://www.seamagine.com/triumph_d.html Singh H, Adams J, Mindell D, Foley B. 2000. Imaging underwater for archaeology. Journal of Field Archaeology. 27(3) : 319-328.  Singh H, Armstrong R, Gilbes F, Eustice R, Roman C, Pizarro O, Torres J. 2004. Imaging coral: Imaging coral habitats with the Seabed AUV. Subsurface Sensing Technologies and Applications. 5(1): 25-42.  101 Spencer ML, Stoner AL, Ryer CH, Munk JE. 2005. A towed camera sled for estimating abundance of juvenile flatfishes and habitat characteristics: Comparison with beam trawls and divers. Estuarine, Coastal and Shelf Science. 64: 497-503. Stein DL, Felley JD, Vecchione M. 2005. ROV observations of benthic fishes in the Northwind and Canada Basins, Arctic Ocean. Polar Biology. 
28: 232-237. Sumida PYG, Bernardino AF, Stedall VP, Glover AG, Smith CR. 2008. Temporal changes in benthic megafaunal abundance and composition across the West Antarctic Peninsula Shelf: results from video surveys. Deep-Sea Research II. 55: 2465-2477. Tahoe Environmental Research Center. 2009. Tahoe: state of the lake report 2009. UC Davis. (Retrieved August 19 2009). http://terc.ucdavis.edu/stateofthelake/StateOfTheLake2009.pdf Tam AC, Patel CKN. 1979. Optical absorptions of light and heavy water by laser optoacoustic spectroscopy. Applied Optics. 18: 3348-3358. Tappin DR, Watts P, McMurty GM, Lafoy Y, Matsumoto T. 2001. The Sissano, Papua New Guinea tsunami of July 1998: offshore evidence on the source mechanism. Marine Geology. 175: 1-23. Trenkel VM, Francis CRIC, Lorance P, Maheves S, Rochet M-J, Tracey DM. 2004. Availability of deep-water fish to trawling and visual observation from a remotely operated vehicle (ROV). Marine Ecology Progress Series. 284: 293-303. Vine AC. 1975. Early history of underwater photography. Oceanus. 18(3): 2-10. Vinogradov GM. 2005. Vertical distribution of macroplankton at the Charlie-Gibbs Fracture Zone (North Atlantic), as observed from the manned submersible "MIR-1". Marine Biology. 146: 325-331. Waddington T, Hart K. 2003. Tools and techniques for the acquisition of estuarine benthic habitat data. Newport (RI): Science Applications International Corporation. 63 p. Ward C, Ballard RD. 2004. Deep-water archaeological survey in the Black Sea: 2000 season. The International Journal of Nautical Archaeology. 33(1): 2-13. Wetzel RG. 2001. Limnology: Lake and River Ecosystems. 3rd ed. San Diego (CA): Elsevier. 1006 p. Wilson SJK, Fredette TJ, Germano JD, Blake JA, Neubert PLA, Carey DA. 2009. Plan-view photos, benthic grabs, and sediment-profile images: using complementary techniques to assess response to seafloor disturbance. Marine Pollution Bulletin. 59: 26-37. White SN, Humphris SE, Kleinrock MC. 1998.
New observations on the distribution of past and present hydrothermal activity in the TAG area of the Mid-Atlantic Ridge. Marine Geophysical Researches. 20: 41-56. Woods Hole Oceanographic Institution Alvin Specifications. (Retrieved February 7 2010). http://www.whoi.edu/page.do?pid=8422 Woods Hole Oceanographic Institution Nereus Specifications. (Retrieved January 20 2010). http://www.whoi.edu/page.do?pid=10822 Yoerger DR, Jakuba M, Bradley AM, Bingham B. 2007. Techniques for deep sea near bottom survey using an autonomous underwater vehicle. In: Thrun S, Brooks R, Durrant-Whyte H, editors. Robotics Research, STAR 28. Berlin: Springer-Verlag. p 416-429.

APPENDIX: OPERATING PROCEDURES
This appendix material is provided as a guide detailing the specifics of using UBC-Gavia. It will enable users to employ Gavia effectively for photographic surveys by presenting the information needed to modify the Gavia system to achieve the results presented in the current work. The appendix will be inserted into the UBC-Gavia operations manual. This supplementary material comprises seven sections detailing assembly/disassembly of the camera lens, additional lens calculations, lens adjustments, image scale calibration, a flight path optimization script, camera configuration, recommended settings and additional example images.

A.1 ASSEMBLY/DISASSEMBLY OF CAMERA SYSTEM
To adjust the lens on UBC-Gavia, some disassembly/assembly of the nose module is required. Complete nose cone module removal should only be necessary to service the CCD hardware. For all lens adjustments described below, it is recommended to leave the nose module attached to the control and battery modules (or a power source) so as to keep the system operational between adjustments. This procedure is performed with the thruster module detached so that the exposed cross-sectional surface can be used to stand the vehicle upright on a flat surface. This allows the camera to be oriented parallel with a wall for camera calibrations. To access the camera lens, remove the black metal ring holding the viewport plate glass in place using a flat tool that spans the viewport diameter and fits into the two slots on either side of the ring. Support the circular plate glass under the black metal ring as it is removed, as the glass is now unsupported. Under the plate glass is an o-ring (Figure 30). Ensure this o-ring is lubricated and the lens surfaces are clean upon re-assembly. The lens can be removed using needle-nosed pliers pushed into the two slots on either side of the lens on the inside of the lens barrel.
The lens threads into the CCD board by a standard C-mount connector (25.4 mm diameter thread). Rotate the lens until it is free from its mounting surface on the CCD board. Once the lens is removed it can be adjusted to suit mission requirements; this process is detailed in Section A.3.

FIGURE 30 NOSE CONE LENS REMOVAL. A) Threaded ports (4) for securing the tow hook (removal is optional); B) lens; C) o-ring seat channel.

A.2 ADDITIONAL LENS CALCULATIONS

Image circle diameter: this equation describes the dimensions of the light image projected by the lens onto the image plane or CCD sensor. This measure is important when considering the lens/sensor combination and will be needed to ensure that any replacement lens is suitable for the Gavia CCD sensor. It is calculated as

    D_ic = 2 f tan(α/2),     (5.1)

where f is the focal length and α is the angle of view.

A.3 LENS ADJUSTMENTS

Once the lens is removed from the viewport, two manual adjustments are available: aperture and focus (Figure 31). The lens barrel is also equipped with two set screws (Figure 31): one secures the focus adjustment and one secures the aperture adjustment. Adjustment protocols for aperture and focus follow.

FIGURE 31 THE LENS REMOVED FROM THE CCD CAMERA. At the top of the photograph is the lens face; at the bottom is the C-mount thread. A) Aperture adjustment; B) set screw; C) focus adjustment. Note: a second set screw, which has been removed, is located directly below the one shown.

APERTURE
The aperture adjustment ring is locked until the set screw on the side of the lens barrel is loosened. The aperture adjustment ring is the one closest to the threaded C-mount end of the lens (Figure 31). By looking into the front of the lens the aperture can be seen opening and closing. Opening the aperture to its maximum diameter transmits the greatest amount of light to the CCD. Note: changing the aperture after the focus is set can alter the focus.

FOCUS
The focus adjustment ring is located towards the front of the lens with respect to the aperture adjustment ring (Figure 31). Focus can be delicate in that small changes may have a large effect on image clarity. A clear focus is easier to obtain with a smaller aperture; when the aperture is wide open, the image will only be focused within a narrow range of distances from the lens due to depth-of-field restrictions (the depth of field can be calculated with the script provided in Section A.5). The focus distance is chosen based on the desired flight altitude, which must be within the depth of field of the lens configuration. Because of the frequent changes and the need to confirm their impact, focusing of the lens is best performed with Gavia powered up. This requires several precautions, especially when removing and replacing the lens.
Care should be taken to eliminate static charge and the risk of dropping instruments onto the live circuits inside the open nose cone. The nose module can be pointed towards a vertical surface that has some texture by which to compare the clarity of different focus increments (a vertical meter stick, for instance). With the planned mission objectives in mind, Gavia can be positioned a measured distance from the vertical surface corresponding to the corrected underwater flight path altitude. The corrected focus distance takes into account the magnification of the underwater system due to the difference in index of refraction between the camera and the imaged features. The correction factor is 76 % of the desired flight path altitude. For example, if the desired flight path altitude is 3 m, the lens should be focused in air for 2.28 m, as this is the apparent underwater distance due to magnification (assuming an index of refraction of 1.33 for water). Once the lens has been adjusted and reinstalled, the camera can be activated through the control center. It is recommended to leave the camera active for at least 20 seconds before deactivating it again, to avoid freezing the processor. The images can then be browsed in an internet browser using the camera's IP address. Viewing the images allows the user to gauge the progress of the focus changes. To speed up the process of focusing, a record should be kept after each adjustment of which direction the lens focus ring was turned. Once image clarity begins to improve, each consecutive adjustment should be made in small increments (on the order of 1/32 of a revolution). Once the lens is focused, the image scale can be determined; this is detailed in the following section.

A.4 IMAGE SCALE CALIBRATION

Determining image scale is beneficial in that it allows the user to measure length scales of imaged features. If the camera and lens details are available, image scale can be calculated.
However, to confirm the pixel length scale, or in cases where the lens and camera system details are not known, image scale can be measured manually by calibrating the images against a fixed known scale such as a meter stick. After the camera has been focused, the known fixed scale can be positioned perpendicular to the lens at marked distances from the lens. Typically the known scale reference (meter stick) was moved in 0.5 m increments between 0.5 m and 3 m from the lens. With free software such as ImageJ (http://rsbweb.nih.gov/ij/), a built-in ruler feature can be used to measure the known distance on the meter stick, from which the software can automatically generate a pixel length scale. This is repeated for the different distances at which the known reference scale was placed. Afterwards, any object in an image with a known altitude can be measured by pixel count or a ruler tool (ImageJ) and multiplied by the known pixel length scale. For accurate measurement of features, the air/water interface magnification factor must also be taken into account, similar to the correction factor needed for in-air focus distances discussed previously.

A.5 MISSION PLANNING CALCULATIONS

Below is a MATLAB script that calculates the depth of field, the hyperfocal distance, and the corrected in-air focus distance for a given flight altitude. The hyperfocal distance is the distance at which a lens can be focused to yield the maximum theoretical depth of field. The depth of field can be calculated from the hyperfocal distance and the near and far distances of acceptable sharpness as

    H_f = f^2 / (f# c) + f,     (5.2)

    D_n = s (H_f - f) / (H_f + s - 2f),     (5.3)

    D_f = s (H_f - f) / (H_f - s),     (5.4)

where D_n is the near distance from the lens of acceptable sharpness, D_f is the far distance from the lens of acceptable sharpness, f is the lens focal length, f# is the f-number, c is the circle of confusion, and s is the altitude or focus distance.

The script calculates these parameters when the following lens values are input: circle of confusion, focal length, f-number, and flight path altitude. The specific lens parameters for UBC-Gavia are noted within the script. The information calculated by the script will aid in determining an aperture setting that best meets the altitude requirements imposed by the environmental conditions of a given mission. For example, if altitude is expected to be variable, a smaller aperture is preferred as this will produce the largest depth of field. Conversely, if light is limited, a larger aperture is necessary to allow sufficient light to reach the sensor, and flight altitudes must then be constrained within the smaller depth of field.

    close all
    clear all
    %% Lens mission parameters
    % Created by Weston Pike, March 18 2010
    %
    % This script calculates useful lens configuration information and limits
    % based on user input of basic lens parameters and desired flight path
    % altitudes. The script calculates the in-air focus distance to which the
    % lens should be focused for the specified mission altitude, and gives the
    % range of distances within which objects will remain in focus. It also
    % displays the hyperfocal distance. When a lens is focused to its
    % hyperfocal distance its depth of field is maximized; the hyperfocal
    % depth of field extends from half the hyperfocal distance to infinity.
    %
    % The F number is focal length/aperture diameter. When looking into the
    % UBC-Gavia lens the widest aperture diameter setting is 5 mm; when
    % decreasing the aperture, estimates of the aperture diameter, and thus
    % the F number, can be made based on this.
    % Equations from Greenleaf, Allen R., Photographic Optics, The MacMillan
    % Company, New York, 1950.

    disp('For the Gavia camera system:')
    disp('Recommended circle of confusion, 0.004 mm; F number range, F1.2 - F16; focal length, 6 mm.')
    disp('Enter the following lens parameters:')
    ue1 = input('Enter circle of confusion (mm): ');
    ue2 = input('Enter focal length (mm): ');
    ue3 = input('Enter F number: ');
    ue4 = input('Enter flight path altitude (m): ');
    disp(' ');
    format short g
    airF = ue4*.76;
    hypf = (ue2^2/(ue3*ue1)) + ue2;
    disN = airF*(hypf*.001 - ue2*.001)/(hypf*.001 + airF - 2*ue2*.001);
    disF = airF*(hypf*.001 - ue2*.001)/(hypf*.001 - airF);
    dof = disF - disN;
    disp('In-air focus distance (m): ');
    disp(airF);
    disp('Near distance of acceptable sharpness when focused for specified flight altitude (m):');
    disp(disN);
    disp('Far distance of acceptable sharpness when focused at specified flight altitude (m):');
    disp(disF);
    disp('Total depth of field (m):');
    disp(dof);
    disp('Hyperfocal Distance (m):');
    disp(hypf*.001);
    disp('Nearest objects in focus if lens is focused at the hyperfocal distance (m):');
    disp(hypf*.001/2);

A.6 CAMERA CONFIGURATION

The preceding sections detailed the pre-mission optimization of the camera hardware. This section details the steps needed to change the camera system's software configuration. The camera settings are stored in an XML configuration file that can be accessed through a terminal emulator such as PuTTY (http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html). Instructions for this process are provided below. Note: these instructions are specific to UBC-Gavia.

PuTTY
Login name: root
Login password: gaviaauv
Change directory: cd /var/iac/config/
Open the camera configuration file: vi camera.xml
Changes can be made once this file is opened by pressing "i"
Press "ESC" and then ":x" when changes are complete
Enter "less camera.xml" to double-check that changes are correct, with no formatting errors; entering "q" ends the session
After this process enter "service crew stop"
Wait for the system to confirm that crew is stopped, then enter the same command a second time to ensure the stop was successful
Enter "service crew start"
All readings should be green for OK

Once the camera XML file is accessed there are a number of settings available to manipulate. Some settings, such as shutter, gain, and exposure, can only be manipulated after a "0" for manual is input to the corresponding parameter mode. A "1" in the parameter mode of these settings will cause the corresponding parameter to function automatically, using the manufacturer's algorithms to adjust to changing conditions. The parameter mode is listed as the parameter name followed by "auto". Additional details for the parameters found in the configuration file are provided below.

Format and mode: these two values act in conjunction to set the image dimensions, along with pixel bit depth and other special modes. In my experience UBC-Gavia seems to limit the usable modes, and many will silently fail. It is recommended to stick with the preset combination.
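Before adjusting individual parameters, the edit-then-restart workflow above can also be scripted. The sketch below, in Python for illustration, shows the general pattern of flipping a parameter's "auto" mode to manual (0) and writing a value. The element names (`gain`, `gain_auto`, etc.) are assumptions made for this example; the real camera.xml schema on UBC-Gavia may differ and should be checked with `less camera.xml` first.

```python
# Hypothetical sketch of editing a camera configuration XML file.
# Element names here are illustrative assumptions, NOT the confirmed
# UBC-Gavia camera.xml schema.
import xml.etree.ElementTree as ET

SAMPLE = """<camera>
  <shutter_auto>1</shutter_auto>
  <shutter>0.0100145</shutter>
  <gain_auto>1</gain_auto>
  <gain>0</gain>
</camera>"""

def set_manual(root, param, value):
    """Switch a parameter to manual mode (0) and set its value."""
    root.find(param + "_auto").text = "0"   # 0 = manual, 1 = auto
    root.find(param).text = str(value)

root = ET.fromstring(SAMPLE)
set_manual(root, "gain", 16.995)            # recommended low-light gain (dB)
print(ET.tostring(root, encoding="unicode"))
```

After writing the file back, the `service crew stop` / `service crew start` sequence described above would still be required for the change to take effect.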
Transform: this setting allows the user to choose from four image modes: none (0), grayscale downsample (1), gamma (2), and colour downsample (3).

Frame rate: the frame rate will silently fail if it is not set to accepted values in frames per second (fps); these are 1.875, 3.75, and 7.5. A setting of 15 fps may also be possible depending upon the format/mode combination.

Brightness: this function raises the light level of the entire image; as a result all aspects of the image will be brighter, including black. This is a useful setting when working in very low light conditions, but it should be used with care as it has a tendency to reduce image contrast, causing images to appear washed out. Essentially the setting controls the amount of black in an image: a high setting reduces black. The units for this setting are percent, representing a percent increase of the A/D converter's minimum digital number. Input values can range from 1 to 100.

Exposure: this setting automatically adjusts the gain and shutter values to increase or decrease the image light level. If either the gain or shutter setting has been set to manual, this setting will have no effect. The exposure setting may be a quick way to improve images in some cases; however, in more demanding environments it is generally more effective to control the gain and shutter settings individually. This parameter is controlled by changing the "exposure value", which may range from 1 to 2.

Shutter: this setting controls the length of time the sensor collects light. The advantage of manipulating this setting (along with aperture) over the others is that no digital processing is involved, allowing for more clarity in the resultant image by reducing noise levels. A mission constraint to consider for this setting is the vehicle's velocity.
When imaging in low light conditions it is advantageous to slow the vehicle down as much as possible, so as to reduce the motion blur associated with slower shutter speeds. This parameter is altered by entering a time interval in seconds. The shutter range is 0.03 ms to 533 ms (note: this can depend on the frame rate).

Gain: the camera gain setting deals with how photon hits are recorded; these can be amplified if each electron moved by a photon is counted with a scaled-up factor, increasing the camera's response to the incident light. Brightness differs in that it increases the signal recorded by the camera, whereas gain amplifies the signal directly from the CCD sensor. This is a useful setting for very low light conditions, but is overwhelming where ambient light is higher. When imaging in low light conditions it is recommended to change this setting, as it greatly increases the ability to gather usable images. However, with a high gain there will be significant noise in the image when light is limited. A gain of 17 dB appears to be the maximum setting for the UBC-Gavia camera before the image begins to deteriorate significantly due to noise. The maximum setting is 25.9 dB; the minimum is -10 dB.

White balance: this setting allows different colours to be amplified non-linearly. Generally post-processing is used instead, as essentially the same effect can be generated there.

Gamma: controls the linearity of the displayed information. During most imaging missions this setting remains at 1, which is gamma off. Gamma adjustments are generally more effective in post-processing. Inputs are accepted between 0.5 and 4.

JPEG quality: this setting controls the compression of the images and can be 0-100, where 100 is uncompressed. It may be useful if image storage is limited.

A.7 UBC-GAVIA RECOMMENDED SETTINGS

In the most challenging imaging environments the automatic functioning of the camera system can become problematic. Optimal settings to overcome this were developed from extensive photo surveys in different conditions. These settings have performed well in a number of light-limited conditions and appear to be robust for strobe-dependent images where the strobe output is underpowered. The lessons learned in these conditions are detailed below for each important camera configuration parameter. Following this discussion are a number of images with a list of their settings; these will aid in comparing the effectiveness of different camera settings while enabling the identification of problematic image features.

Shutter: shutter values are best left on auto when ambient light is available. When imaging in strobe-dependent low light environments, the shutter integration time can be set to a typical value of 0.01 s; if the vehicle speed is reduced this may be increased to 0.04 s. When imaging with ambient light (where the ambient light level is significantly greater than the strobe light) this setting is best left on auto, as the manual settings suggested here will cause images to be overexposed.

Gain: allows the user to control the amount of amplification the A/D converter applies to a photon count (current value). In ambient light conditions this setting can be left on auto; when flying close (<2 m) or in very clear water it may also be left on auto. However, in more challenging light-limited conditions, gain is most effective set manually at 17 dB. In light-limited environments a setting above 17 dB causes noise that detrimentally affects image quality. Images with too much gain can be recognized by a general graininess, along with lighter bar artifacts running diagonally through the image.
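The shutter and vehicle-speed trade-off above can be quantified with a quick back-of-envelope check: the along-track smear during one exposure is surge speed times shutter time, converted to pixels via the image scale from the A.4 calibration. The pixel scale value below is purely illustrative, not a calibrated UBC-Gavia number.

```python
# Back-of-envelope motion-blur check for shutter selection.
# pixel_scale_mm (mm of seabed per pixel) is assumed to come from the
# A.4 image-scale calibration; 3.0 mm/px here is illustrative only.
def blur_pixels(surge_m_s, shutter_s, pixel_scale_mm):
    """Along-track smear, in pixels, accumulated during one exposure."""
    smear_mm = surge_m_s * shutter_s * 1000.0
    return smear_mm / pixel_scale_mm

# Values typical of the example image logs in A.8:
print(blur_pixels(1.5, 0.04, 3.0))   # fast surge, slow shutter -> 20.0 px
print(blur_pixels(0.5, 0.01, 3.0))   # slowed vehicle, faster shutter -> ~1.7 px
```

This illustrates why slowing the vehicle is recommended whenever a long shutter integration time is forced by low light: at survey speed the smear can exceed the feature scale of interest.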
Brightness: brightness settings may be useful in some situations to brighten images in light-limited environments. If images appear washed out, this setting should be reduced along with gain and shutter.

A.8 ADDITIONAL IMAGE EXAMPLES

For further reference, a selection of images showing various image quality issues is listed below together with the associated settings. For some of the images, diagnostic notes are provided identifying the issues and rectification strategies. The first two images are mosaics from Kelly Lake which represent good camera settings, but demonstrate the negative impact higher turbidity can have through attenuation of the light signal and associated colour loss.

Image 1: 200806251734 frame001074_0.jpg — altitude 4.5697, depth 47.7451, lat 5052.1834N, lon 12144.5187W, surge 1.541, brightness 0, exposure 0.176636, gain 25.9161, gamma 1, shutter 0.0401893, white-balance_bu 89, white-balance_rv 79, jpeg_quality 90.
Note: altitude too high; surge too fast; high gain; slow shutter; low exposure.

Image 2: 200805312020 frame001109_0.jpg — altitude 3.12791, depth 53.4006, lat 5052.1069N, lon 12144.5598W, surge 1.555, brightness 0, exposure 1.12585, gain 25.9161, gamma 1, shutter 0.0401893, white-balance_bu 89, white-balance_rv 79, jpeg_quality 90.
Note: altitude too high; too fast; high gain; slow shutter.

Image 3: 200805312020 frame001191_0.jpg — altitude 2.80087, depth 52.4863, lat 5052.1419N, lon 12144.5410W, surge 1.593, brightness 0, exposure 1.49719, gain 25.668, gamma 1, shutter 0.0401893, white-balance_bu 89, white-balance_rv 79, jpeg_quality 90.
Note: too fast; high gain.

Image 4: 200806251519 frame000297_0.jpg — altitude 2.82147, depth 53.3866, lat 5052.1068N, lon 12144.6138W, surge 1.545, brightness 0, exposure 0.977295, gain 25.9161, gamma 1, shutter 0.0401893, white-balance_bu 90, white-balance_rv 80, jpeg_quality 90.

Image 5: 200806251519 frame000507_0.jpg — altitude 2.67465, depth 53.6769, lat 5052.0188N, lon 12144.6636W, surge 1.549, brightness 0, exposure 0.266785, gain 25.9161, gamma 1, shutter 0.0401893, white-balance_bu 90, white-balance_rv 80, jpeg_quality 90.

Image 6: 200806251542 frame000312_0.jpg — altitude 5.42502, depth 50.6416, lat 5052.0997N, lon 12144.6178W, surge 1.562, brightness 0, exposure 0.691162, gain 25.9161, gamma 1, shutter 0.0401893, white-balance_bu 89, white-balance_rv 79, jpeg_quality 90.

Image 7: 200806260916 frame000214_0.jpg — altitude 5.0174, depth 0.249178, lat 5052.5420N, lon 12145.0138W, surge 0.582, brightness 0, exposure 1.45947, gain 0, gamma 1, shutter 0.00456434, white-balance_bu 90, white-balance_rv 80, jpeg_quality 90.

Image 8: 200806261754 frame000857_0.jpg — altitude 3.11649, depth 52.9264, lat 5051.9743N, lon 12144.5516W, surge 1.256, brightness 0, exposure 0.279236, gain 25.9161, gamma 1, shutter 0.0401893, white-balance_bu 90, white-balance_rv 80, jpeg_quality 90.

Image 9: 200806271545 frame000292_0.jpg — altitude 2.50883, depth 53.8266, lat 5052.1140N, lon 12144.5545W, surge 1.271, brightness 0, exposure 1.20947, gain 25.9161, gamma 1, shutter 0.0401893, white-balance_bu 90, white-balance_rv 80, jpeg_quality 90.

Image 10: 200806271626 frame000903_0.jpg — altitude 2.42105, depth 53.3589, lat 5052.1893N, lon 12144.6837W, surge 1.225, brightness 0, exposure 0.431885, gain 25.9161, gamma 1, shutter 0.0101894, white-balance_bu 90, white-balance_rv 80, jpeg_quality 90.

Image 11: 200806271706 frame000251_0.jpg — altitude 2.80115, depth 53.4744, lat 5052.1021N, lon 12144.6347W, surge 1, brightness 0, exposure 0.955566, gain 16.995, gamma 1, shutter 0.0101894, white-balance_bu 90, white-balance_rv 80, jpeg_quality 90.

Image 12: 200806271741 frame000431_0.jpg — altitude 2.65628, depth 53.7697, lat 5051.9767N, lon 12144.5543W, surge 1.226, brightness 0, exposure 0.337036, gain 25.9161, gamma 1, shutter 0.00493932, white-balance_bu 90, white-balance_rv 80, jpeg_quality 90.

Image 13: 200806271809 frame000308_0.jpg — altitude 2.68465, depth 53.699, lat 5052.0922N, lon 12144.6136W, surge 1.236, brightness 0, exposure 1.59991, gain 25.9161, gamma 1, shutter 0.0401893, white-balance_bu 90, white-balance_rv 80, jpeg_quality 90.

Image 14: 200806281124 frame000540_0.jpg — altitude 2.08544, depth 53.9183, lat 5052.1454N, lon 12144.7203W, surge 1.254, brightness 0, exposure 1, gain 16.995, gamma 1, shutter 0.0101894, white-balance_bu 90, white-balance_rv 80, jpeg_quality 100.

Image 15: 200806281301 frame001217_0.jpg — altitude 2.18479, depth 53.7279, lat 5052.1157N, lon 12144.7225W, surge 1.222, brightness 0, exposure 2, gain 16.995, gamma 1, shutter 0.0100145, white-balance_bu 90, white-balance_rv 80, jpeg_quality 100.

Image 16: 200806281327 frame002056_0.jpg — altitude 2.07326, depth 53.6133, lat 5052.1417N, lon 12144.7302W, surge 1.225, brightness 0, exposure 2.15869, gain 16.995, gamma 1, shutter 0.0100145, white-balance_bu 90, white-balance_rv 80, jpeg_quality 100.

Image 17: 200806300757 frame001237_0.jpg — altitude 2.24694, depth 53.967, lat 5052.0984N, lon 12144.6382W, surge 1.246, brightness 0, exposure 2, gain 16.995, gamma 1, shutter 0.0100145, white-balance_bu 90, white-balance_rv 80, jpeg_quality 100.

Image 18: 200806300845 frame004250_0.jpg — altitude 2.26343, depth 54.0185, lat 5052.0840N, lon 12144.6271W, surge 1.22, brightness 0, exposure 2, gain 16.995, gamma 1, shutter 0.0100145, white-balance_bu 85, white-balance_rv 83, jpeg_quality 100.

Image 19: 200806301051 frame003416_0.jpg — altitude 2.03322, depth 54.3125, lat 5052.0544N, lon 12144.4590W, surge 1.118, brightness 0, exposure 2, gain 16.995, gamma 1, shutter 0.0100145, white-balance_bu 90, white-balance_rv 80, jpeg_quality 100.

Image 20: 200907110512 frame000374_0.jpg — altitude 2.50198, depth 53.3991, lat 5052.1140N, lon 12144.6416W, surge 1.223, brightness 2.00195, exposure 2, gain 16.995, gamma 1, shutter 0.0100145, white-balance_bu 90, white-balance_rv 80, jpeg_quality 100.
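The diagnostic notes attached to the example images can be reproduced programmatically by screening each settings record against simple thresholds. The sketch below does this in Python; the thresholds are illustrative, drawn loosely from the A.7 recommendations (gain above 17 dB is noisy, a 0.04 s shutter is slow at survey speed), and the altitude limit is a hypothetical value chosen for this example.

```python
# Sketch: screen per-image settings records for the problems noted in the
# diagnostic comments. Thresholds are illustrative, not definitive limits.
MAX_GAIN_DB = 17.0     # above this, noise dominates (per A.7)
MAX_SHUTTER_S = 0.04   # slow shutter -> motion blur at survey speed
MAX_ALTITUDE_M = 4.0   # hypothetical limit for adequate strobe light

def diagnose(rec):
    """Return a list of likely quality problems for one image record."""
    issues = []
    if rec["gain"] > MAX_GAIN_DB:
        issues.append("high gain")
    if rec["shutter"] >= MAX_SHUTTER_S:
        issues.append("slow shutter")
    if rec["altitude"] > MAX_ALTITUDE_M:
        issues.append("altitude too high")
    return issues

# First example image from the list above:
rec = {"altitude": 4.5697, "gain": 25.9161, "shutter": 0.0401893}
print(diagnose(rec))   # -> ['high gain', 'slow shutter', 'altitude too high']
```

Run over a whole mission's logs, a screen like this flags frames worth re-examining before they are mosaicked.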

