Lab Visibility

Generation of atmospheric visibility maps from webcam imagery.

Introduction

In geography, viewshed analysis is a widely used technique for determining which locations are visible from an observer position, taking into account obstacles derived from a digital elevation model (DEM). Applications of viewshed modeling include visual pollution assessment as well as network planning and design in the telecommunications industry. Although viewshed models provide important information, they are limited by the assumption of clear-sky conditions, ignoring highly localized, short-lived weather events that reduce visibility, such as rain, fog, smog, and snow. The resulting reduction in visibility is described by atmospheric visibility, also known as the meteorological optical range: the maximum horizontal distance at which objects and features are perceptible from a given location, depending on the degree of air transparency. It serves as an indicator of air quality and is a crucial parameter in air traffic control and road safety. The timely availability of such measurements is therefore relevant for environmental and weather-related stakeholders.

Atmospheric visibility is traditionally monitored by means of visibilimeters, transmissometers, lidars, and human observations. Instrument-based measurements, however, are restricted to horizontal profiles and therefore misrepresent the total visible area, while human observers are prone to subjectivity and cannot provide continuous automatic measurements. Recently, unconventional weather data sources have gained attention for their promise of delivering highly localized weather observations, including opportunistic sensing devices such as personal weather stations, smartphones, drones, and cameras. In particular, dedicated weather webcams can provide real-time retrievals of local atmospheric conditions (Chu et al., 2017), and it has even been speculated that in situ camera-based weather data could be assimilated into high-resolution numerical weather prediction models (Aragon, 2021). This study presents a sparse method to estimate real-time atmospheric visibility from webcam imagery, combined with viewshed analysis and inverse distance weighting (IDW) interpolation, to generate atmospheric visibility maps. The study area is located on the Austrian-German border near Salzburg (Figure 2).

Materials and methods

The workflow, presented in Figure 1, consists of three main steps and several sub-steps: (1) data source and preprocessing, (2) visibility calculation, and (3) viewshed analysis and IDW interpolation.

Figure 1. Methodology

1. Data source and preprocessing

Raw image

The camera is located near the Reichenhaller Haus and the Hochstaufen summit in Germany, at an altitude of 1,750 m above sea level. It was installed by the German Alpine Club (DAV) and has a wide view of several locations, including Salzburg, Piding, and Freilassing, as well as various mountains and lakes.

Cropping

The first step is to trim the raw image to a size of 1000 × 500 pixels.
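In NumPy (which OpenCV images are represented as), cropping is a slicing operation. A minimal sketch, where the frame dimensions and crop offsets are placeholders for the actual camera frame:

```python
import numpy as np

# Stand-in for the raw webcam frame (in practice loaded with cv2.imread)
raw = np.zeros((1080, 1920, 3), dtype=np.uint8)

# NumPy slicing is rows (y) first, then columns (x); the offsets below
# are illustrative and depend on where the view of interest sits
x0, y0 = 400, 300
cropped = raw[y0:y0 + 500, x0:x0 + 1000]
print(cropped.shape)  # (500, 1000, 3)
```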

Masking

Next, specific locations must be selected for calculating atmospheric visibility. This step was carried out by drawing a raster mask in the web-based graphics editor Photopea.
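Once the mask is exported from the editor, applying it programmatically is straightforward. A minimal sketch, assuming the mask is a single-channel image with 255 inside the selected regions (the array values here are synthetic stand-ins):

```python
import numpy as np

# Stand-ins for the cropped frame and the hand-drawn binary mask
frame = np.full((500, 1000, 3), 128, dtype=np.uint8)
mask = np.zeros((500, 1000), dtype=np.uint8)
mask[100:350, 200:450] = 255  # one hand-selected location

# Blank out every pixel outside the selected regions
masked = frame.copy()
masked[mask == 0] = 0
```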

Tiling

The image is then divided into 250 × 250 pixel patches. This step is required so that single locations can be processed individually when calculating a visibility score.
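Tiling the 1000 × 500 frame into 250 × 250 patches yields eight tiles. A minimal sketch of this step:

```python
import numpy as np

# Stand-in for the cropped 1000 x 500 frame
frame = np.zeros((500, 1000, 3), dtype=np.uint8)

# Walk the frame in 250-pixel steps, row-major, collecting patches
tile = 250
patches = [
    frame[y:y + tile, x:x + tile]
    for y in range(0, frame.shape[0], tile)
    for x in range(0, frame.shape[1], tile)
]
print(len(patches))      # 8
print(patches[0].shape)  # (250, 250, 3)
```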

Figure 2. Camera range and selected locations. The area is delineated by Thiessen polygons to emphasize the area of influence per location.

2. Visibility calculation

The general idea behind the atmospheric visibility estimation is to first apply a 3 × 3 sharpening kernel to the images, and then compute a mean keypoint score between a clear-sky reference image and the image under analysis using the Scale-Invariant Feature Transform (SIFT) (Lowe, 2004) in combination with the Oriented FAST and Rotated BRIEF (ORB) method (Rublee et al., 2011). SIFT and ORB extract keypoints by performing a series of image enhancement and blurring operations to identify edges, corners, and points. For example, SIFT techniques include Gaussian convolution, downsizing, nearest-neighbor approximation, Hough transform voting, linear least squares, and Bayesian probability analysis. ORB, on the other hand, is a faster alternative to SIFT but not as robust to scale variations; it finds corners by thresholding circular areas and looking for contiguous runs of pixels using the Features from Accelerated Segment Test (FAST) detector with a Harris corner filter. A visibility score ranging from 0 (invisible) to 100 (visible) is derived by averaging Eq. 1 and Eq. 2.

Eq. 1 SIFT = (K2 / K1) * 100

Eq. 2 ORB = (K2 / K1) * 100

where K2 is the number of keypoints in the image under analysis and K1 is the number of keypoints in the clear-sky reference image. Examples of ORB and SIFT keypoints are shown below:

ORB: Clear sky

ORB: Reduced visibility

SIFT: Clear sky

SIFT: Reduced visibility

The proposed method leverages the ability of SIFT and ORB to detect distinctive features. In this study, areas where fewer keypoints are found compared with the reference image are treated as locations with reduced visibility. To estimate a visibility score at various depths or regions observable from the camera, the tiles of the selected locations are analyzed through the procedure described above: Fuderheuberg (1.65 km), Piding (5.07 km), Baggersee (8.61 km), Fürstenbrunn (11.40 km), Freilassing (13.70 km), and Gaisberg (20.49 km). An example is provided in the next figure:

Gaisberg SIFT keypoints:

Reference (left): 89 keypoints

Comparison (right): 4 keypoints

Visibility score: 4.4%

Gaisberg ORB keypoints:

Reference (left): 1726 keypoints

Comparison (right): 7 keypoints

Visibility score: 0.4%

3. Viewshed and IDW

After deriving visibility scores for each location, the next step is to perform a viewshed analysis, which corresponds to the locations visible on a clear-sky day (Figure 3). The visibility scores are then interpolated using the IDW method with a power of 2.3 and a cell size of 10 meters. The results of the interpolation are presented in the next section.
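The study performs the interpolation in a GIS, but the IDW formula itself is simple: each unknown cell is a weighted average of the known scores, with weights proportional to 1/d^p. A minimal pure-Python sketch with the power of 2.3 used above (the coordinates and scores are hypothetical):

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.3):
    """Inverse distance weighting: weighted mean with weights 1 / d**power."""
    xy_known = np.asarray(xy_known, dtype=float)
    values = np.asarray(values, dtype=float)
    out = []
    for q in np.asarray(xy_query, dtype=float):
        d = np.linalg.norm(xy_known - q, axis=1)
        if np.any(d == 0):                 # query coincides with a sample point
            out.append(values[np.argmin(d)])
            continue
        w = 1.0 / d ** power
        out.append(np.sum(w * values) / np.sum(w))
    return np.array(out)

# Hypothetical scores at three locations, interpolated at one grid cell
pts = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
scores = [90.0, 20.0, 50.0]
print(idw(pts, scores, [(1.0, 1.0)]))
```

In the actual workflow this runs over every 10 m cell of the viewshed raster, which ArcPy's IDW tool handles directly.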

Figure 3. Viewshed model = clear sky day (red = invisible & green = visible).

Results

Two days with reduced atmospheric visibility are presented, together with their respective webcam images, in Figures 4 and 5.

Figure 4. Visibility map for September 30, 2019 at 18:10.

Figure 5. Visibility map for September 9, 2017 at 10:50.

Conclusion, limitations and further work

Image-to-map translation has previously been attempted to deliver an overhead map of the world (Saha et al., 2021), and SIFT has been used for haze removal (Musunuri and Kwon, 2021). However, to the best of my knowledge, this is the first study to generate atmospheric visibility maps from webcam imagery. Interestingly, keypoint detectors perform reasonably well for this particular task. Selecting more locations for patch analysis would improve the interpolation of the visibility values. For this lab assignment, the selected images were taken in September at various times during 2016, 2017, 2018, and 2019; this choice avoids seasonal effects that may hamper the extraction of visibility values (e.g., shadows and snow). This limitation can be overcome simply by selecting a representative reference image for a particular time of day or season. The end goal is to automatically generate these maps for thousands of locations where open-access cameras are available and distribute them via an API in compliance with OGC standards. The automation of the workflow can be achieved with the combined use of the OpenCV and ArcPy libraries. Further work aims to replicate this study with open-source GIS software and for multiple locations. In conclusion, webcams have the potential to become a rich data source for operational meteorology, accounting for microscale weather phenomena, with results that may benefit aviation and public safety.

References

A. Saha, O. M. Maldonado, C. Russell, R. Bowden (2021). Translating Images into Maps. arXiv preprint. https://arxiv.org/abs/2110.00966

D. G. Lowe (2004). Distinctive Image Features from Scale-Invariant Keypoints. International Journal of Computer Vision. https://doi.org/10.1023/B:VISI.0000029664.99615.94

E. Rublee, V. Rabaud, K. Konolige, G. Bradski (2011). ORB: An efficient alternative to SIFT or SURF. IEEE International Conference on Computer Vision. https://doi.org/10.1109/ICCV.2011.6126544

M. Aragon (2021). Ground-based cloud analysis for potential assimilation in high resolution NWP models (BSc thesis). https://maxaragon.com/thesis.html

Y. R. Musunuri, O. Kwon (2021). Haze Removal Based on Refined Transmission Map for Aerial Image Matching. Applied Sciences, 11, 6917. https://doi.org/10.3390/app11156917

W. T. Chu, X. Y. Zheng, D. S. Ding (2017). Camera as weather sensor: Estimating weather information from single images. Journal of Visual Communication and Image Representation, 46, 233–249. https://doi.org/10.1016/j.jvcir.2017.04.002
