Start of the SpectroCam Time Series: Bentelo Grassland.

Our goal:

Support dairy farmers in improving their management of cattle feed production by integrating the farmers' expert and historical knowledge with state-of-the-art use of platforms such as drones, imaging sensors, image analysis and GIS.

Means:

Borre brs (see dronexpert.nl) are integrating our SpectroCam UAV version with their configurable DJI drones. They have also set up a grassland test field near their location in Bentelo.

Method:

Through the growing season, fly the sensor at least about weekly at 20 m or 40 m. At Z = 20 m the spectrometer has a projected FOV width of 2 m across the direction of movement; at Z = 40 m the width is 4 m. The forward speed of the drone, in combination with the spectrum integration time, ensures spectral ground-FOV samples of 2 m x 2 m or 4 m x 4 m. Each spectral sample corresponds to about 32 x 32 pixels in the coaxial RGB image. After correcting all images for radiometric errors (vignetting), the images are transformed into an (ortho)photo mosaic. The interface between SpectroCam and the DJI M100 provides latitude, longitude, height and quaternion position and attitude information.
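The footprint geometry follows from the flight height: a 2 m swath at Z = 20 m corresponds to a full across-track FOV angle of roughly 2*atan(1/20), about 5.7 degrees. Below is a minimal Python sketch of this bookkeeping; the function names and the 1 s spectrum interval are illustrative assumptions, not the actual acquisition settings.

```python
import math

def across_track_width(height_m, fov_full_angle_deg=5.72):
    """Projected across-track FOV width on the ground for a given flight height."""
    return 2.0 * height_m * math.tan(math.radians(fov_full_angle_deg) / 2.0)

def forward_speed_for_square_footprint(width_m, sample_interval_s):
    """Forward speed at which the along-track sample length equals the swath width.

    Simplification: the along-track length is taken as speed * sample interval,
    ignoring the instantaneous along-track FOV of the slit.
    """
    return width_m / sample_interval_s

for z in (20.0, 40.0):
    w = across_track_width(z)                       # ~2 m at Z = 20 m, ~4 m at Z = 40 m
    v = forward_speed_for_square_footprint(w, 1.0)  # assumed 1 s per spectrum (illustrative)
    print(f"Z = {z:.0f} m: footprint {w:.1f} m x {w:.1f} m, forward speed {v:.1f} m/s")
```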

Location of the grass test field near Bentelo, with the scan pattern of the drone + SpectroCam.

As all data are referenced to the same UTM grid, we can and should add layers of relevant data including time series. The above Google picture hints at how the local brook was instrumental in forming the landscape. Geomorphology provides info on what to expect in terms of soil patterns and soil properties.

The test field was prepared starting from a bare field, sown with two types of grass mixture. The planned three levels of manure application will give a total of six main variants.

What can be learned from the first “bare soil” data set?

RGB corrected > orthomosaic: this is used to test the assumption of little variation in soil spectral reflectance. Spatial variation is estimated from the likelihood of clusters in RGB space: the next figure shows a subset of the mosaic with segment boundaries derived from k-means clustering with 5 classes.

Figure 2. Subsection of the mosaic (rotated -45 deg) with k = 5 clusters in RGB space from k-means clustering.
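A minimal sketch of such a clustering, assuming the mosaic subset is available as an ordinary RGB image file; the filename, the Pillow and scikit-learn dependencies and k = 5 are assumptions about one possible implementation, not the actual processing chain.

```python
import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

# Load a subset of the orthomosaic (hypothetical filename).
rgb = np.asarray(Image.open("mosaic_subset.png").convert("RGB"), dtype=float)
pixels = rgb.reshape(-1, 3)

# Cluster the pixels in RGB space with k = 5, as in Figure 2.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(pixels)
labels = kmeans.labels_.reshape(rgb.shape[:2])  # per-pixel class map; segment boundaries follow from label changes

# Mean RGB per cluster, comparable to the Soil01 / Soil02 values listed below.
for k, centre in enumerate(kmeans.cluster_centers_):
    print(f"cluster {k}: mean RGB = {np.round(centre).astype(int)}")
```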

Soil01 cluster has mean RGB = [176, 134, 138].

Soil02 cluster has mean RGB = [199, 133, 138].

Conclusion: the bare-soil segments are quite homogeneous and vary only in the reflected photon flux in the red band, so the size and position of the spectral sample's FOV on the ground are not critical.

 

How do clusters in RGB relate to clusters in the SpectroCam’s 1024 spectral channels?

Figure 3. Clustering on a model: if the error of a 2nd-order polynomial fit to the spectrum is < 0.007, the sample is bare soil; otherwise it is likely a mix of soil and vegetation reflection/absorption.
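A sketch of the test behind this figure, assuming each spectral sample is a reflectance array over the wavelength axis and that the quoted 0.007 threshold applies to an RMS fit error (that error definition is an assumption of this sketch).

```python
import numpy as np

def is_bare_soil(wavelength_nm, reflectance, max_error=0.007):
    """Classify a spectrum as bare soil if a 2nd-order polynomial fits it closely.

    A smooth, nearly quadratic reflectance curve is taken as bare soil; vegetation
    adds absorption features (chlorophyll, red edge) that a 2nd-order fit cannot follow.
    """
    coeffs = np.polyfit(wavelength_nm, reflectance, deg=2)
    fitted = np.polyval(coeffs, wavelength_nm)
    rms_error = np.sqrt(np.mean((reflectance - fitted) ** 2))
    return rms_error < max_error
```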

 

Using the m3xSpectroCam: basics.

M3X has started to apply the registered spectrometer + camera method to the OceanOptics STS spectrometer; see http://oceanoptics.com/product/sts-developers-kit/

The developers kit is based on the Raspberry Pi, and the software environment is mostly Python and web based. This new programming environment prompted a review of actual user requirements, assuming that most of our users work in the field of remote sensing.

The current data acquisition interface is based on the USB2000+ spectrometer, an RGB camera and the C# language. A screenshot of the GUI is shown below:

Question: how black is black paint?

The figure shows the inside of the STS case, next to a white reflectance target. The red rectangle indicates the FOV of the spectrometer.

The goal (or question) of the user is to know whether the paint absorbs photons over the range of about 400-1000 nm. The user therefore requires an estimate of the reflectance of the material surface (white nylon + paint) in the FOV of the spectrometer, and an estimate of the accuracy of that reflectance.

To reach this goal, the following functions or methods, derived from sub-goals, are needed (a code sketch of the full chain is given after this outline).

1. reflectance_relative_to_white = photon_flux_object / photon_flux_white.

1.1 photon_flux = photon_count / integration_time / solid_angle; array.

1.2 photon_flux_white: same type as 1.1, but from a different object (the white reference).

1.1.1 integration_time: stored in the specific data_record; milliseconds.

1.1.2 solid_angle: can be derived from the FOV rectangle; common to all data records.

1.1.3 photon_count = electron_count_photons / quantum_efficiency; method: divide array.

1.1.3.1 quantum_efficiency: can be determined by irradiance calibration; method: calibrate array = (electron_count_photons / integration_time) / (photon_count / integration_time), with the same aperture for calibration and application.

1.1.3.2 electron_count_all = NLC(voltage_count_all); Non_Linear_Correction method: spline, array.

1.1.3.2.1 electron_count_all = voltage_edc + electron_count_dark + electron_count_photons; array. electron_count_dark, method: save a data record with the aperture closed, so there is no photon flux, only a thermal electron flux.

1.1.3.2.2 electron_count_photons = EDC(electron_count_all) - EDC(electron_count_dark); method EDC: subtract the electronic dark voltage; array.

After solving all sub-goals, the main goal reflectance_relative_to_white (array) is solved using

methods: array.subtract, array.multiply, array.divide, array.spline;

calibration data: EDC(index), NLC(spline coefficients);

reference data: stored Dark_Reference(electron_count, integration_time);

temporary data: (white)_Reference(electron_count, integration_time).

The model is only complete with the wavelength calibration:

Wavelength(index) = Spline(index, coefficients).
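The chain of sub-goals reads as a small per-channel pipeline. Below is a minimal Python sketch, assuming all per-channel quantities are numpy arrays, the NLC is available as a callable spline and the EDC as a per-channel offset; the record layout (a dict with "counts" and "t_int_ms") and the function names are illustrative assumptions, not the actual m3x or OceanOptics API.

```python
def electron_count_photons(voltage_counts, dark_voltage_counts, nlc, edc_offset):
    """Sub-goals 1.1.3.2.x: non-linearity correction (NLC), then subtraction of the
    electronic dark offset (EDC) and of the thermal dark record, per channel."""
    all_electrons = nlc(voltage_counts) - edc_offset        # EDC(electron_count_all)
    dark_electrons = nlc(dark_voltage_counts) - edc_offset  # EDC(electron_count_dark)
    return all_electrons - dark_electrons

def photon_flux(voltage_counts, dark_voltage_counts, integration_time_ms,
                quantum_efficiency, solid_angle_sr, nlc, edc_offset):
    """Sub-goals 1.1 and 1.1.3: photons per unit time and per unit solid angle."""
    electrons = electron_count_photons(voltage_counts, dark_voltage_counts, nlc, edc_offset)
    photon_count = electrons / quantum_efficiency
    return photon_count / integration_time_ms / solid_angle_sr

def reflectance_relative_to_white(object_rec, white_rec, dark_rec,
                                  quantum_efficiency, solid_angle_sr, nlc, edc_offset):
    """Main goal 1: ratio of object photon flux to white-reference photon flux.

    Assumption of this sketch: the dark record was taken at the same integration
    time as the object and white records."""
    flux_object = photon_flux(object_rec["counts"], dark_rec["counts"],
                              object_rec["t_int_ms"], quantum_efficiency,
                              solid_angle_sr, nlc, edc_offset)
    flux_white = photon_flux(white_rec["counts"], dark_rec["counts"],
                             white_rec["t_int_ms"], quantum_efficiency,
                             solid_angle_sr, nlc, edc_offset)
    return flux_object / flux_white
```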

The aperture FOV is modelled as a rectangular slit with centre (Xedge, Yedge), located at the 0.50 reflectance response of the X or Y edges between dark and white objects.

Edge response: a 0.50 mix of black and white.

If the dark reflectance is near 0, the 0.50 mix is found where reflectance_mix = 0.5.

In the general case:

reflectance_Z(X, Y, m) = m*X + (1 - m)*Y; probability, array.

Goal: find the mixing ratio m, given X, Y, Z.

Method: for each array element, m*(X - Y) = (Z - Y), or m = (Z - Y)/(X - Y); arrays X, Y, Z (sketched in code below).

The goal is reached and the problem solved if

data registers: arrays X, Y, Z; result: array m with 0 <= m <= 1 and (X - Y) != 0;

methods: array.subtract, array.divide; if (X - Y) == 0 then X == Y and Z = X = Y.
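A sketch of the element-wise solution, with the degenerate (X - Y) == 0 channels flagged rather than divided; numpy arrays are assumed.

```python
import numpy as np

def mixing_ratio(x, y, z):
    """Solve Z = m*X + (1 - m)*Y per array element: m = (Z - Y) / (X - Y).

    A physically valid mix satisfies 0 <= m <= 1. Where X == Y the ratio is
    undefined (then Z = X = Y); NaN is returned for those elements.
    """
    denom = x - y
    with np.errstate(divide="ignore", invalid="ignore"):
        m = (z - y) / denom
    return np.where(denom == 0.0, np.nan, m)
```

For the edge-response case above, with the dark reflectance near 0, the position along the scan where m crosses 0.50 marks the FOV edge.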

Todo

Mixing triangle: goal: find the mixing parameters (arrays u, v) for an unknown array U, given arrays X, Y, Z of reflectance.

[key sumnorm  ]

u =

 

 

RS fundamentals

RS fundamentals are physics and knowledge engineering.
Physics is about hypothesis generation and testing by performing measurements.
Knowledge engineering is about translating application domain questions or hypotheses to likelihoods given prior knowledge and RS measurements.

In optical RS the quantum nature of light is fundamental. Sensors allow estimation of the number of photons captured in a sensor volume by measuring the charge generated in the charge wells of typical charge-coupled devices. In the spectral analysis of emitted or reflected photons, the estimation of matter- and geometry-specific absorptance provides the most information about matter and processes.

In the thermal infrared part of the photon energy spectrum, emitted photons represent the local outgoing radiant energy. With proper modelling of emittance it is possible to estimate the temperature of the optical boundary layer.
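As a hedged sketch of that last step, inverting Planck's law for a single-band thermal radiance measurement; the 10 micrometre band and the unit emittance default are illustrative assumptions, and reflected downwelling radiation is neglected.

```python
import math

# Physical constants (SI units)
H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
K = 1.380649e-23     # Boltzmann constant, J/K

def brightness_temperature(radiance, wavelength_m=10e-6, emittance=1.0):
    """Invert Planck's law for spectral radiance L (W m^-2 sr^-1 m^-1) at one wavelength.

    Dividing the measured radiance by the emittance approximates the blackbody
    radiance of the surface; reflected downwelling radiation is neglected here.
    """
    blackbody_radiance = radiance / emittance
    c1 = 2.0 * H * C**2 / wavelength_m**5
    c2 = H * C / (wavelength_m * K)
    return c2 / math.log(1.0 + c1 / blackbody_radiance)
```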