Wednesday, October 21, 2015

Activity 6: Using the GEMS software to construct GeoTIFFs and to field check your GEMS data

Introduction

The GEMS sensor is made by Sentek Systems out of the Twin Cities. GEMS stands for Geo-localization and Mosaicing Sensor. It is mounted to the UAV in a Nadir position (Figure 1) and is very light, only 170g, so it can be used on a wide variety of platforms. It is composed of a 1.3-megapixel RGB camera and a monochrome (MONO) camera which collect images simultaneously. Having its own GPS and accelerometer allows for image capturing that is fully autonomous. All images are saved to a flash drive in .JPG format for easy use in the GEMS software tool.

The GEMS software is designed to make creation of image mosaics simple for the user. The software finds the .JPG files on the flash drive and creates "orthomosaics" in RGB, NIR, and NDVI formats. It also gives the GPS locations of the images and the flight path.

Software Workflow

When the user purchases the GEMS sensor they also receive a free version of the GEMS software. The software is very easy to use and, I think, has a very good user interface. To create mosaics from the .JPGs collected during a flight, the user completes the following steps.
1. Locate the .bin file in the Flight Data folder on the USB. The file structure on the USB is named by GPS time: Week = X is the GPS week number, and TOW = Hours - Minutes - Seconds is the time of week (see the sketch after this list for converting these values into a normal calendar date).
2. Once that .bin file is loaded, you will see the flight path of the mission you are working with (Figure 2).
3. Next, before creating the mosaic, run an NDVI initialization, where the software performs a spectral alignment. This assures that the spectral values are consistent, and the software also chooses which NDVI color map best suits the given data set.
4. After the initialization is complete, go back to the run menu and click on the generate mosaics tab. For most users, fine alignment is the best option and will give the best results.
5. Once the mosaics are created you can view them in the software by going to the images tab and selecting the mosaic you would like to view.
6. One last optional step is exporting the Pix4D file. This file contains all the GPS locations of the images and can be opened in Pix4D, an image processing software, to create a 3D representation from the images.
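As a side note on that file naming: the GPS week counts weeks since January 6, 1980, and the time of week counts time since midnight Sunday. Here is a minimal Python sketch of the conversion, assuming the TOW is available in seconds and using the 17-second GPS-to-UTC leap second offset that applied in 2015; the example values are hypothetical.

    from datetime import datetime, timedelta

    GPS_EPOCH = datetime(1980, 1, 6)  # start of GPS week numbering

    def gps_week_tow_to_utc(week, tow_seconds, leap_seconds=17):
        # GPS time runs ahead of UTC by the accumulated leap seconds
        # (17 s as of 2015), so subtract them at the end.
        return GPS_EPOCH + timedelta(weeks=week, seconds=tow_seconds - leap_seconds)

    # Hypothetical example: week 1867, 3 days 14 h 30 m into the week
    tow = 3 * 86400 + 14 * 3600 + 30 * 60
    print(gps_week_tow_to_utc(1867, tow))  # -> 2015-10-21 14:29:43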
Figure 2 This is the resulting flight path in the GEMS software when the .bin file is loaded. This is the flight from which all the imagery in Figures 3-11 was derived.

Generating Maps

Once the mosaic has run there will be a tile folder wherever you set the file location. Inside this folder is a .tif file. These .tif files have GPS locations embedded in them; in other words, they are georeferenced. This is nice because the user can then easily put them in ArcMap or a similar software and create maps with them. They will be laid on top of a basemap and be in the right place on the surface of the earth because of the GPS data they contain (a quick way to check this georeferencing is sketched below). This does not mean they are orthorectified, however. In order for an image to be orthorectified it must be tied to a DEM, or digital elevation model. This corrects for elevation in the image, making it a true representation of the surface of the earth. In the GEMS software you are not adding elevation data to these images, so they are not orthorectified mosaics as the software claims. In ArcMap the user can create maps of the different mosaics, edit the shape of the mosaic as they see fit, and also pull the images into ArcScene, which will give a poor 3D representation of the area in the image, but that can be fixed if the user lays it on top of a DEM for the area. Figures 3-11 are the maps I created using the .tif files produced by the GEMS software.
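For anyone who wants to verify that georeferencing outside of ArcMap, here is a minimal sketch using the open-source rasterio Python package; the file name is hypothetical.

    import rasterio  # open-source library for reading GeoTIFFs

    with rasterio.open("rgb_mosaic.tif") as src:  # hypothetical file name
        print(src.crs)        # coordinate reference system, e.g. EPSG:4326
        print(src.transform)  # affine transform from pixel to map coordinates
        print(src.bounds)     # geographic extent of the mosaic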
Figure 3 This is the RGB Mosaic
The RGB mosaic is easy to compare to the basemap imagery provided by ESRI. If you were to zoom in, it would become very clear that the image resolution of the basemap is not nearly as good as that of the data collected with the UAV. This increased pixel resolution makes NDVI analysis and looking at vegetation more accurate. The UAV image overall has much higher detail than the basemap. The GEMS sensor only has a 1.3-megapixel camera on it, so with a camera like the SX260, which has 12-megapixel resolution, you get even higher detail and an even clearer image.
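The detail you get per pixel, the ground sample distance, depends on the sensor width, the focal length, the flying height, and the pixel count. Here is a small sketch of that relationship; the sensor and lens numbers are hypothetical illustrations, not published GEMS or SX260 specs.

    def ground_sample_distance(sensor_width_mm, focal_length_mm,
                               altitude_m, image_width_px):
        # Ground distance covered by one pixel, in metres per pixel.
        return (sensor_width_mm * altitude_m) / (focal_length_mm * image_width_px)

    # Illustrative numbers only: a 1.3 MP camera (1280 px wide, 4.8 mm
    # sensor, 8 mm lens) vs. a 12 MP camera (4000 px wide, 6.2 mm sensor,
    # 10 mm lens), both flown at 60 m.
    print(ground_sample_distance(4.8, 8.0, 60, 1280))   # ~0.028 m/px
    print(ground_sample_distance(6.2, 10.0, 60, 4000))  # ~0.009 m/px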
Figure 4 This is the FC1 NDVI Mosaic
The color scheme for the FC1 NDVI shows healthy vegetation as oranges and reds. This is backwards to how most people think, but from the image you can see that the grass areas are orange, meaning they are healthy, and the blacktop or concrete areas, as well as the roof of the pavilion, are blues and blacks, meaning poor to no health. An NDVI basically compares how strongly an area reflects near-infrared light versus red light. When plants are going through photosynthesis, their chlorophyll absorbs red light while their leaf structure reflects near-infrared strongly, and that contrast is what the software uses to create the NDVI.
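The standard NDVI formula is (NIR - Red) / (NIR + Red), which ranges from -1 to 1. A minimal sketch in Python, with made-up reflectance values for illustration:

    import numpy as np

    def ndvi(nir, red):
        # NDVI = (NIR - Red) / (NIR + Red); values range from -1 to 1.
        nir = nir.astype(float)
        red = red.astype(float)
        return (nir - red) / (nir + red + 1e-10)  # epsilon avoids divide-by-zero

    # Made-up reflectance values for illustration:
    print(ndvi(np.array([0.50]), np.array([0.08])))  # ~0.72, healthy grass
    print(ndvi(np.array([0.20]), np.array([0.18])))  # ~0.05, pavement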
Figure 5 This is the FC1 NDVI Mosaic in ArcMap

This version of the FC1 is slightly different than the one above. When I brought it into ArcMap, I changed the color scheme for the reflectance values. This gives you a better idea of which areas are healthy and which aren't. Again, in this color scheme red is healthy and green is dead.
Figure 6 This is the FC2 NDVI Mosaic
FC2 shows the reflectance levels in the way most people are used to seeing them: green is good and red is dead. This color scheme makes sense. You can see that the same areas that were red on the FC1, meaning healthy, are green in this NDVI. The red areas on this map are the same as the blue or black areas on the FC1. The two maps are displaying the same values, just in a different color scheme.
Figure 7 This is the FC2 NDVI Mosaic in ArcMap
I took the FC2 and changed the colors in ArcMap just as I did with the FC1. You can see there is better vegetation value variance in this color scheme than in the GEMS software image. ArcMap breaks the values down into more categories, which gives you the wider range of colors in the map. Again, red is dead and green is good.
Figure 8 This is the Fine NDVI Mono Mosaic
The fine NDVI mono shows the reflectance levels on a high-to-low grayscale; the GEMS software does not assign a color scheme to this NDVI. Healthy, high-reflectance areas are white, and low health is gray to black. This fine NDVI is better than just the mono NDVI (Figure 10), and by better I mean more accurate.
Figure 9 This is the Fine NDVI Mono Mosaic in ArcMap
This is the same NDVI as in Figure 8, only I assigned a color ramp to the values in ArcMap. The green areas are healthy and the oranges and reds are dead; even the path, which is yellow, is also dead. The mono NDVI gives you the ability to choose whichever color scheme you prefer in ArcMap to display the data.
Figure 10 This is the NDVI Mono Mosaic
This is the normal mono NDVI. It isn't as detailed as the fine mosaic. Again, it is showing the reflectance levels. Below, I brought it into ArcMap and changed the color ramp.
Figure 11 This is the NDVI Mono Mosaic in ArcMap
If you look at the UAV mosaic in any of the maps above, you will see some striping that doesn't look quite right. The stripes run from the lower left to the upper right of the images. This is an error in the mosaic created when the images were being stitched together. If you look very closely, you can see that those stripes are where two or more images come together. This could be an error from when the GEMS software ran the NDVI initialization, which is supposed to make all the values uniform. There are not big dead patches of grass in the soccer park.

Discussion/ Critique

Pros

The GEMS sensor and software package give the user a very simple and straightforward way to collect aerial imagery and also run simple analysis on the data. Its light weight, small size, and fully autonomous operation make it ideal for a wide variety of platforms and situations.

The software is very easy to use and the user interface is very easy to follow. A person with only a basic knowledge of UAVs and vegetation data analysis is able to use the software and produce useful mosaics. The file structure that is established while collecting data and running the analysis makes it very simple to stay organized and find the different file types created.

Cons

There are two areas that would greatly improve the usefulness of this sensor. First is the very poor pixel resolution. 1.3 megapixels is terrible when you think about how advanced camera technology is. This low resolution limits what this sensor can be used for. The cameras may be sufficient for agricultural applications; however, this sensor can and should be used in other industries. Surveying mines and doing volumetric analysis is one area where this sensor would be great with better cameras on it. In order to do volumetric analysis you need very high-detail images, 20 to 40 megapixels. The SX260 camera, which is 12 megapixels, could also be used. This would not be a hard problem to fix with the GEMS sensor. With phone camera technology as advanced as it is, one of those cameras could easily replace the existing camera and make the unit more versatile.

Another area where this sensor would be great with better cameras is search and rescue applications. High-resolution images are needed when you are looking for an object in the woods or elsewhere. This sensor is so light that it does not hinder flight time very much, and with better cameras it could be very useful in this industry. Even in agricultural applications better cameras would be ideal. Right now you can look at the overall health of a large area, but in some fields of agriculture, like vineyards, they want to look at each individual plant's health. You need a much higher-resolution camera in order to do that. My recommendation is that a camera of at least 12 megapixels be implemented into the GEMS sensor. This would greatly expand the areas in which it could be used.
The other area that needs improvement is also linked to the camera. The field of view of the present camera is only 30 degrees, which is a very narrow cone looking basically straight down. This reduces distortion in the images, but it also makes the flight grids far too tight with this sensor. Close flight lines mean longer flight times to cover small areas. Many platforms don't have flight time past 20 minutes to a half hour, and when using the GEMS sensor they may have to do 2 or 3 flights to cover an area that could be covered in one flight with a different camera. I would suggest a camera with at least a 60-degree field of view; the sketch below shows how much that widens the spacing between flight lines. Some may argue that the images will be distorted, but that is not true. If the image overlap is sufficient, usually about 75 percent, the distortion will be very minimal.
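Here is a minimal sketch of that geometry, assuming a flat-ground footprint of 2 * altitude * tan(FOV/2) and the 75 percent sidelap mentioned above; the altitude is an illustrative value.

    import math

    def line_spacing(altitude_m, fov_deg, sidelap=0.75):
        # Width of ground covered by one image, keeping only the
        # non-overlapping fraction as the distance between flight lines.
        footprint = 2 * altitude_m * math.tan(math.radians(fov_deg / 2))
        return footprint * (1 - sidelap)

    # Illustrative numbers at 60 m altitude with 75 percent sidelap:
    print(line_spacing(60, 30))  # 30 degree FOV -> ~8 m between lines
    print(line_spacing(60, 60))  # 60 degree FOV -> ~17 m between lines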

For the software, the only suggestion I have for improvement is the file naming convention for when the flights were taken. The format as it is right now needs an online translator (or a conversion script like the one in the workflow section above) to get it into a date the average person can understand. A format of Month-Day-Year and Hours-Minutes-Seconds would be much easier to understand.

In conclusion, I do like this sensor and software, but my continued use of it will hinge on whether or not improvements to the camera are made. The present camera really limits what this sensor can be used for, which is unfortunate because there are a lot of very convenient features that make this sensor a go-to for imagery collection on a UAV.





Wednesday, October 14, 2015

Activity 5: Obliques for 3D modeling construction

Introduction

This week we used a completely different image gathering technique. All the other image collection missions have been flown at 90 degrees, or in a Nadir format (Figure 1). Nadir imagery is great for looking at spatial relationships between objects on the ground, but it does not pick up on 3D objects very well. You only get the roof of the structure. This week we collected the images at a 45-degree angle, or in an oblique format (Figure 2). Collecting oblique images allows for the construction of 3D models, which is a big up-and-coming area of study in the UAV industry. By taking the photos at a 45-degree angle you can get the sides of the buildings from all angles and also fly the roof structure, giving you a near perfect 3D representation of the structure you are imaging.
Figure 1 This is a Nadir image collected a few weeks ago of the pavilion from about 60m up. The camera is facing 90 degrees to the Earth's surface and does not pick up on 3D objects very well. In a 3D rendering of this image, the roof of the pavilion would be higher than the area around it, but the sides of the building would not be modeled; they would just be black spaces or nothing.

Figure 2 This is an oblique image collected at 45 degrees. Multiple pictures taken from this angle allow for the construction of 3D models.

Study Area

The study area for our fifth and final field activity of the semester was again the Eau Claire Sports Center soccer complex (Figure 3). It was a beautiful day; you couldn't ask for better weather in the middle of October. It was around 65 degrees with slight winds, light wispy clouds, and a lot of sun. We flew the image collection mission right away to assure we had proper lighting on the structure, because the sun goes down pretty early this time of year. For the oblique imaging we chose the pavilion in the middle of the fields. It is a + shaped building approximately 5m high to the roof peaks. It is in a good location to fly because there are no trees or flag poles nearby to interfere with the image gathering.
Figure 3 This is the soccer complex where we conducted this week's flights. The red circle is around the pavilion we flew, which we will use to create a 3D model in the next few weeks.

Methods

We used two different platforms this week to gather imagery, as well as two different cameras. The first mission was flown with the Iris (Figure 4) platform made by 3D Robotics. Attached to the platform was a GoPro HERO3. The GoPro isn't the most ideal sensor for this application because it does not have an internal GPS. In order to make a 3D model with the GoPro, GPS locations from the flight log are joined to the JPG files to create the model. The Mission Planner app, also made by 3DR, has a function called structure scan. In this function you find the structure you want to fly on the base map of the app. Then you click a center point directly over the center of the structure. You then set the radius of the circle, how high the platform should fly, and how many times it should change altitude during the flight. For the Iris flight, Dr. Hupy set it to start at 5m altitude and take a picture every 2 seconds as it flew in a circle around the pavilion. The Iris would then increase altitude by 4 meters each time it went around the structure, all the way up to 26m. It is easy to visualize if you think of the drone flying a corkscrew pattern around the building (a rough sketch of this waypoint pattern follows Figure 4). Once it reached 26m, Dr. Hupy set a cross pattern across the top of the roof to ensure that the roof structure was adequately imaged.

After the Iris completed its auto mission, I flew it manually around the building at approximately 2m to get the lowest part of the pavilion as well as under the overhangs, so that when we create the model even the areas under the roof canopy will be imaged and not just come up as black space. The autopilot app for gathering oblique imagery was really cool to see work, as this was the first time any of us had ever used it. As always, I was at the controls ready to take over if the Iris got too close to the building. The flight was only about 5 minutes of autopilot and it collected about 150 images, so doing a structure model like this doesn't take a platform with long battery life. That is why the Iris is a good platform to use for this kind of data collection. Its compact size is also ideal.
Figure 4 This is the Iris with the GoPro mounted on a gimbal. The gimbal allows the user to rotate the camera up and down and left and right to get the desired angle for imagery collection.
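To visualize what the structure scan function is doing, here is a rough Python sketch of that corkscrew pattern, using the 5m start, 26m ceiling, and 4m steps from our flight; the 15m radius and 12 points per circle are hypothetical, and a real mission planner works in lat/long rather than these local x/y offsets.

    import math

    def corkscrew_waypoints(radius_m, start_alt=5, end_alt=26, alt_step=4,
                            points_per_circle=12):
        # One full circle of waypoints around the structure centre at each
        # altitude band, climbing alt_step metres between bands.
        waypoints = []
        alt = start_alt
        while alt <= end_alt:
            for i in range(points_per_circle):
                theta = 2 * math.pi * i / points_per_circle
                waypoints.append((radius_m * math.cos(theta),
                                  radius_m * math.sin(theta),
                                  alt))
            alt += alt_step
        return waypoints

    print(len(corkscrew_waypoints(15)))  # 6 altitude bands x 12 points = 72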


The other platform we used to fly the pavilion was the DJI Phantom (Figure 5). The Phantom is another very compact platform with a short battery life, ideal for this application. It is also equipped with a very high resolution camera that comes as part of the platform. The Phantom geotags all the images it takes, or in other words assigns a lat/long position to each image, which makes 3D model creation much easier because the image processing software reads those geotags and arranges the images according to them (a sketch of reading those geotags out of the images follows Figure 5). There is no need to use the log file GPS points like when you use the GoPro. The Phantom mission was flown completely in manual. There is an autopilot app for the Phantom, but Dr. Hupy has been having trouble with the app crashing mid-flight, which is a safety hazard and does us no good if we can't image the whole structure. Everyone in the class got a chance to fly the Phantom and collect images. The same corkscrew pattern was flown, as best as could be done, and the cross-hatch over the roof was also done just like in the first mission. The camera was kept at approximately 45 degrees during the flight to ensure we were collecting oblique imagery and not Nadir. The Phantom flight took about 15 minutes and just under 200 images were collected.
Figure 5 This is the DJI Phantom. The camera is also on a gimbal, but this camera is part of the drone, unlike the GoPro. The geotag feature of this platform makes it ideal for image collection, and the very compact size makes it perfect for structure modeling because the user can get into small areas.
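For the curious, those geotags live in the EXIF metadata of each JPG. A minimal sketch of pulling them out, assuming the exifread Python package and a hypothetical file name:

    import exifread  # library for reading EXIF metadata from JPGs

    with open("DJI_0001.JPG", "rb") as f:  # hypothetical file name
        tags = exifread.process_file(f, details=False)

    # Geotagged images carry their position in the EXIF GPS tags:
    print(tags.get("GPS GPSLatitude"))
    print(tags.get("GPS GPSLongitude"))
    print(tags.get("GPS GPSAltitude"))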

Discussion

Oblique imagery collection is a totally different method than Nadir image collection, and you have to have a different mindset when flying this type of mission. There is more attention to detail because you have to make sure every inch of the structure is imaged. One image for each surface will work, but multiple overlapping images of each surface will give you a better 3D model and greater detail. Both of our flights probably had way more images than are necessary. The Iris autopilot flight collected around 150 images, and you could probably create a 3D model from 50 or so images, but our model will be better. Image overlap is important in oblique imaging just like when collecting Nadir images. It would've been interesting to see the Phantom fly an autopilot mission to see how it flies compared to the Iris. The Phantom has a tendency to fly smoother than the Iris; this has especially been true when I have flown it in manual. I am excited to see the 3D model in a couple of weeks when we run this data. This summer Dr. Hupy imaged a shed at the South Middle School garden and it turned out really well, so it will be interesting to see how this one turns out and compare the two.

Conclusion

Oblique imagery is a great way to create 3D models. There are many applications for this type of imagery collection. Bridge inspections, roof inspections, and insurance damage claims are a few areas where I see this technique being very useful. With the high picture resolution and incredible detail capabilities of some of the platforms available, you could easily find cracks, dents, or whatever you are looking for in the inspection, and not only be able to look at it live but also have a 3D model that you can look back on later. The realty business could easily use this for creating virtual tours of properties, or at least for offering a click-and-drag 3D model of the house on their website. This would make it more interactive for the home buyer, who would not just look at a bunch of JPGs but could pan and zoom on the whole structure, again with amazing detail. As the UAS field continues to grow and rapidly expand, it will be interesting to see what other applications this can be applied to.

Wednesday, October 7, 2015

Activity 4: Gathering Ground Control Points (GCPs) using various Global Positioning System (GPS) Devices

Introduction

This week's activity was all about the collection of Ground Control Points (GCPs). GCPs are vital to the collection of high quality aerial data. They are used to tie the images collected down to the surface of the earth, and they can increase the spatial accuracy of those photos to within millimeters of the actual location on the Earth's surface. The opposite is also true: if the GCPs you collect are not accurate, they can distort the data and make it useless. Use of GCPs is especially important when working with temporal analysis, such as vegetation change. The pixels in all of those images need to line up exactly over each other from each week's image in order to display the vegetation change accurately. After we collected the GCPs, we conducted a flight with the Matrix platform so that we had some data to use with these GCPs later in the semester.

Study Area

This week we met at the community garden and swamps in Fairfax Park (Figure 1), behind South Middle School, which is about 5 minutes from campus. During my summer work as the GEI technician and my research with Dr. Hupy, we flew this area every week, so it is a familiar area to me. The weather was pretty decent: it was about 63 degrees with winds of 3 to 5 MPH. It was a mix of sun and clouds, but being late in the day and approaching the winter months, the sun was quite low in the sky.
Figure 1 This was our study area for Activity 4. The yellow area is where the GCPs were placed and where the imagery collection flight was flown with the Matrix platform and an SX260 camera.

Methods

The first thing we did when we got to the study area was set out GCP markers (Figure 2). These markers need to be objects easily seen in the imagery that the UAS collects. If they are too small or hidden by ground cover, they do you no good. This gets into the considerations for placing GCP targets. First is what I mentioned above: the targets have to be visible. That means avoiding trees or other ground plants that would cover the target, as well as making sure the targets are large enough that they can easily be seen in the imagery. Next you want to think about the spacing of the markers. It is mandatory to have at least 3 GCPs in an area where you are collecting data. In places where there are dramatic elevation changes, more should be used. The GCPs are what give the computer the elevation data for an area, so if you want to accurately model the terrain in an area, more GCPs are essential. They need to be spread out as well. Putting them right next to each other or in a line does not do any good. You want them spread out, and many times they will end up being placed in a triangle pattern to ensure proper spacing. Aerial images get more distorted as you get farther away from the center of the image, so we can put a couple of GCPs toward the edge of the study area to help correct this; however, you want to make sure the majority of the GCPs are spread equally throughout the study area.

Taking notes while placing the markers is good practice. If for some reason you lose a marker, or your lat/long positions of the markers get messed up, having a drawing of the approximate location of the GCPs is important. Field notes are always important to take while in the field. Even if you don't think something is important to record, you never know; it could really be helpful later on during image processing, or years later if you want to remember how you did a specific field activity. I made a drawing of the area we were working in (Figure 3). It may not be super accurate, but it gives me a general idea of where all the markers should be.
Figure 2 This is one of the ground targets we used that will show up in the aerial imagery. We collected the GPS location for each of the 6 markers like this one that we placed out.
Figure 3 This is the field drawing I made while we were collecting the GCP points. You can see it gives the general arrangement of the area and has the approximate locations of the GCP markers. Sketches like this should be made whenever you are out in the field.

Taking into consideration what I discussed above, the class set out 6 GCPs around the study area. Then we collected the GPS positions for each of them with 4 different units. This was done by placing the GPS unit directly over the center of the ground target and collecting the point (Figure 4). The first was the Topcon survey grade GPS unit (Figure 5). This was our gold standard for GPS accuracy. This unit collects points to within a few millimeters of the actual location. It is a very expensive unit, around 5,000 dollars, but when precision matters this is the unit to use. The second unit was the Bad Elf survey grade unit (Figure 6), which runs about 500 dollars. This is supposed to be a pretty good unit with an error variance of about 1 meter. We were expecting this to be the second best accuracy-wise, but when we ran the data and compared it with the Topcon points and the other units, it wasn't extremely accurate. The Garmin GPS unit (Figure 7) we used was the second most accurate. This unit is around 200 dollars to purchase, and looking at the results map (Figure 9) comparing the units, it is much more accurate than the Bad Elf and not bad compared to the Topcon unit. Finally, we used Dr. Hupy's iPhone (Figure 8) to collect the GCP locations. Cell phones should never be used to collect GCP coordinates. They are terrible. The only reason Dr. Hupy used this method was to show the class how inaccurate they are compared to the other units we have available.
Figure 4 Dr.Hupy demonstrating how to collect points with the Topcon GPS unit. Notice how the unit is directly over the center of the target.
Figure 5a This is the Tesla ground station component of the survey grade GPS unit we have on campus. You can collect GPS points with just this device, but they are not super accurate. This component gives you the easy user interface.

Figure 5b This is the Topcon survey GPS unit itself. This connects to the ground station via Bluetooth. With this unit you get accuracy within millimeters of the actual location on the Earth's surface.
Figure 6 This is the Bad Elf survey GPS unit. It is nice and compact, which makes it convenient for field work, and its accuracy is supposed to be pretty good. In this activity we found the accuracy to not be as good as it is supposed to be.
Figure 7 These are Garmin GPS units similar to the one we used for this activity. Their compact size makes them good field tools, and the accuracy is pretty good for the price of the unit.
Figure 8 This is your average iPhone. NEVER USE THIS TO COLLECT GCP POINTS. The accuracy is terrible.
Figure 9 These are the results from the 4 different GPS units we used. You can see the Garmin unit was second best to the Topcon unit, and the Bad Elf came after the Garmin. The iPhone points are terrible, as expected. These locations are where the GCP markers were placed, and when the imagery is processed we will tie the images to these GPS points.
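To put numbers on these comparisons, the horizontal error between two fixes of the same marker can be computed with the haversine formula. A minimal Python sketch, with hypothetical coordinates rather than our actual marker positions:

    import math

    def haversine_m(lat1, lon1, lat2, lon2):
        # Great-circle distance in metres between two lat/long points.
        R = 6371000  # mean Earth radius in metres
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dlat = math.radians(lat2 - lat1)
        dlon = math.radians(lon2 - lon1)
        a = (math.sin(dlat / 2) ** 2
             + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2)
        return 2 * R * math.asin(math.sqrt(a))

    # Hypothetical coordinates for one marker: Topcon fix vs. iPhone fix
    print(haversine_m(44.7720, -91.4420, 44.7721, -91.4419))  # ~13.6 m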

The final portion of the activity was to fly a mission over the swamps where we placed the GCPs. As always, we went through all the preflight checks and took all the same safety precautions as any other time we fly, having a PIC, PAC, and spotter. Please see Activity 2 and Activity 3 for the preflight procedures and image collection details. It was getting close to dark when this mission was flown, so it will be interesting to see how the images turn out. I am anticipating there will be a lot of shadows and dark areas that the processing software will have a hard time with, so we may lose chunks of the data. That will be determined when the data is processed in a couple of weeks.

Discussion

This was a very hands-on activity for the class. It was good to have an activity that the whole class could do at the same time, instead of being split into groups like some past activities. The biggest thing I took away from this activity is that GCP placement and the quality of your GPS unit are what make or break your GCP data, and eventually the accuracy of your imagery data. We have access to some of the best GPS units money can buy. The Topcon unit is top of the line, and it shows in the accuracy of the points it collects. This isn't always necessary. For our purposes in this exercise, using the Garmin or Bad Elf GCP points would give good accuracy in our aerial imagery, and the Topcon is overkill. In a situation like the vegetation change analysis I talked about at the beginning, the Topcon would have to be used because the other units aren't accurate enough. Another instance where you would want the Topcon is when creating a terrain model of a mine or another area with large elevation changes. Just like choosing a UAV platform and sensor is mission specific, so is the GPS unit used for GCPs. Knowing the level of accuracy you need, and which unit will give you that accuracy, is vital to good field work and data collection. Location and number of GCPs is the other piece that is very important. A lot of the placement is common sense, but just being aware of your surroundings and the lay of the study area, as always in field work, will greatly increase the quality of the work you conduct. Being able to tie all this information into the data processing step is another valuable skill that we will learn in the next couple of weeks.