Point Cloud From Drones
Point Cloud, What Is It? A point cloud is a series of data points that have been calculated via set methods and assigned to relative locations in space using a specific coordinate system. These data points can be utilised for several processes, including 3D imaging for CAD models, which may eventually be used to manufacture spare parts, or for BIM (Building Information Modeling), used in the development of new infrastructure. Point clouds are even used in land survey, digital elevation models and topographical outputs.
Point Cloud, How Is It Achieved? Point clouds can be achieved using various techniques and are traditionally processed via photogrammetric methods: the science of capturing images of a specific subject at different angles, whilst using a high-overlap method of up to 90%. The software recognises the common areas across the multiple images and assigns common data points to each, resulting in a fluid point cloud.
Historically, point clouds have been processed using terrestrial active remote sensing scanners such as laser or radar, which measure points on a subject's surface, resulting in deliverable 3D point cloud files.

Drone technology adopts the traditional photogrammetry method when considering point cloud outputs, integrating specific programs that allow the drone pilot to properly plan a flight with sufficient front lap and side lap, whilst taking into consideration flight height, speed of data collection, whether or not the drone should stop to capture each image, and various other variables. All of these can affect the density, resolution and detail within the point cloud, which in turn affects any other deliverable formats derived from the point cloud process.
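As a rough illustration of how front lap and side lap translate into capture spacing during flight planning, the sketch below computes exposure and flight-line spacing for a nadir camera. The sensor size, focal length, flight height and overlap figures are illustrative assumptions, not values from the article:

```python
# Sketch: exposure spacing for a nadir photogrammetry flight plan.
# Sensor size, focal length, height and overlaps are illustrative assumptions.

def ground_footprint(sensor_mm, focal_mm, height_m):
    """Ground coverage (m) of one image dimension at a given flight height."""
    return sensor_mm / focal_mm * height_m

def capture_spacing(footprint_m, overlap):
    """Distance (m) between exposures for a given fractional overlap."""
    return footprint_m * (1.0 - overlap)

# Example: 13.2 x 8.8 mm sensor, 8.8 mm lens, flying at 100 m
height = 100.0
along_track = ground_footprint(8.8, 8.8, height)    # 100 m along track
across_track = ground_footprint(13.2, 8.8, height)  # 150 m across track

front_lap_spacing = capture_spacing(along_track, 0.80)  # ~20 m between photos
side_lap_spacing = capture_spacing(across_track, 0.70)  # ~45 m between lines
```

Tighter spacing means more images and longer flights, but typically a denser, more reliable point cloud, which is exactly the trade-off described above.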
Point Cloud Via Drone, How Accurate Is It? Point cloud accuracy via photogrammetric techniques can be extremely high when the cloud is rectified to its respective grid. In the UK that grid is OSGB36 (Ordnance Survey Great Britain 1936), and we have carried out geospatial surveys and processed point clouds that have been accurate to within 5 mm. It all depends on the quality and accuracy of the data acquisition and on the position and accuracy of the assigned ground control points.
GCPs, or Ground Control Points, are essential to line up the model with its respective grid. They are established by choosing or laying down high-contrast tiles and gathering their X, Y and Z coordinates via a GPS rover or similar device. The coordinates are input into the chosen software and the project is then optimised, resulting in absolute accuracy. If GCPs are not used, the point cloud is still accurate, but only relative to its own model.
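The optimisation step that lines a relative model up with surveyed GCP coordinates can be illustrated with a least-squares similarity (Helmert-style) transform. This is a generic sketch using the Umeyama/Procrustes method, not the algorithm of any particular photogrammetry package:

```python
import numpy as np

def fit_similarity(model_pts, grid_pts):
    """Least-squares similarity transform (scale, rotation, translation)
    mapping relative model coordinates onto surveyed GCP grid coordinates
    (Umeyama/Procrustes method). Inputs are matched (n, 3) arrays."""
    mu_m, mu_g = model_pts.mean(axis=0), grid_pts.mean(axis=0)
    A, B = model_pts - mu_m, grid_pts - mu_g
    U, S, Vt = np.linalg.svd(B.T @ A)
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(U @ Vt))  # guard against a reflection
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / (A ** 2).sum()
    t = mu_g - scale * R @ mu_m
    return scale, R, t

def apply_transform(pts, scale, R, t):
    """Map model points into the grid frame."""
    return scale * (R @ pts.T).T + t
```

With three or more well-spread GCPs, the residuals after `apply_transform` give a direct check of absolute accuracy, which is why GCP placement matters so much.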
Point Cloud and Other 3D Deliverables… The benefits of adopting drone technology, either to assist with your 3D project or as a standalone option, can pay dividends and can help you capitalise on specific parts of the project budget. The benefits, as always with drone technology, are cost saving and time efficiency.
Deliverables that can derive from drone-processed 3D data include all topographical outputs, such as the Digital Surface Model (DSM), Digital Elevation Model (DEM), Digital Terrain Model (DTM), Orthomosaic (orthorectified imagery presented as a GeoTIFF), contour files and mesh detailing, and all can be merged with terrestrial data.
How can today's advanced technology solve the challenges that many organizations face after obtaining vast 3D point cloud datasets, including the management, storage, registration, fusion and extraction of useful and actionable information?
Instruments for digitizing the 3D real environment are becoming smaller, more lightweight, lower-cost and more robust. Accordingly, they are finding increasingly widespread usage, not only on surveying tripods for the highest accuracy, but also on mobile platforms such as autonomous vehicles, drones, helicopters, aircraft, robotic vacuum cleaners, trains, mobile phones, satellites and Martian rovers. Lidar uses laser scanning, while photogrammetry records images from one or more cameras which may be moving. Each laser scan records tens of millions of data point positions and colours in a point cloud, and hundreds of such point clouds may be combined. This article discusses the challenges that many companies and organizations face after obtaining vast 3D point cloud datasets, including the management, storage, registration, fusion and extraction of useful and actionable information.
The first challenges users face in performing 3D point cloud data processing include:
Data Storage: The amount of data recorded grows exponentially with time, creating large data repositories.
Processing: The computing power required increases as new algorithms with useful functionality are released and with the volume of data.
Sharing: multiple stakeholders, spread geographically around the world and often working from mobile platforms, all need to view the most up-to-date data at the same time.
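To see why storage becomes a challenge so quickly, a back-of-envelope estimate helps. The per-scan and per-point figures below are illustrative assumptions consistent with the article's "tens of millions of points per scan" and "hundreds of scans", not measured values:

```python
# Back-of-envelope storage estimate for one laser-scanning project.
# All figures are illustrative assumptions, not measured values.

points_per_scan = 50_000_000
bytes_per_point = 3 * 8 + 3   # x, y, z as float64 plus RGB as 3 bytes
scans = 300

total_bytes = points_per_scan * bytes_per_point * scans
print(f"{total_bytes / 1e12:.1f} TB")  # prints 0.4 TB
```

And that is raw geometry for a single project, before indexing, derived products, versioning or backups, which is how repositories grow into the multi-terabyte range.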
Previously, a software application ran on a dedicated server in a data centre but, if the computer hardware broke down, the user either had to find a backup (which had to be standing by and ready) or would suffer an interruption in service. Many companies guarantee a 24/7 level of service and so cannot tolerate this. However, Cloud Computing now gives users access, over a network, to applications running on a set of shared or pooled servers in a globally communicating network of data centres, giving speed and productivity improvements, resulting in increased competitiveness.
Figure 1: 30 Terrestrial laser scans of a central London library, fully automatically aligned using the Vercator software.
Big data analytics
Users face the difficult challenge of how to boil down the vast amounts of 3D point cloud data to generate useful and actionable information. Current methods for creating Digital Twin BIM models of buildings require users to inspect vast 3D point clouds to manually recognize and mark the outline positions of surfaces, straight edges, walls, floors, ceilings, pipes, and objects, which is time-consuming and susceptible to error. Some semi-automatic methods on laptops require users to recognize and mark part of these and the program finds the rest. Again, such objects can be mislabelled. Fully automatic methods are becoming available on laptops but do not find all the useful information, so users must add and correct what is found. Sometimes the automatic method makes so many mistakes it is quicker for the user to find and mark the structures manually.
“Useful information” in one application may be different from that in another application. For example, in autonomous vehicles, it is an accurate 3D terrain model which can be used for safe navigation. In electricity pylon scanning, it is whether the pylon has its safety warning sign in place clearly visible and whether nearby vegetation is gradually encroaching on the power lines. In railway scanning, it is whether there has been any slippage or sag as well as an estimate of when gradually encroaching vegetation will become a hazard. Electricity supply companies and Network Rail are under UK government obligations to regularly inspect their assets and to perform preventative maintenance to ensure continuity of supply and travel.
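For the vegetation-encroachment case, the core geometric check is simply the distance from each scanned vegetation point to the power line. A minimal sketch, modelling the line as a single straight segment and using an illustrative (not regulatory) 3 m clearance threshold:

```python
import numpy as np

def point_to_segment(p, a, b):
    """Shortest distance from point p to the line segment a-b."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def clearance_alert(veg_points, line_a, line_b, min_clearance_m=3.0):
    """Return the minimum clearance and a flag per vegetation point.
    The 3 m threshold is an illustrative assumption, not a regulatory value."""
    d = np.array([point_to_segment(p, line_a, line_b) for p in veg_points])
    return d.min(), d < min_clearance_m
```

Tracking how the minimum clearance shrinks between repeat surveys is what turns this into the "when will vegetation become a hazard" estimate the article mentions.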
Geometrical object recognition
Correvate has developed a suite of machine learning geometric image processing methods for fully automated basic object recognition – walls, floors (figures 2 and 3), edges (figure 4) and pipes (see figure 5).
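Planar structures such as floors and walls are commonly extracted from point clouds with RANSAC-style model fitting. The sketch below is a generic RANSAC plane finder, offered as an illustration of the technique, not as Correvate's method:

```python
import numpy as np

def ransac_plane(points, n_iters=200, tol=0.02, rng=None):
    """Find the dominant plane (e.g. a floor) in an (N, 3) point cloud by
    RANSAC. Returns (unit normal n, offset d with n.p = d, inlier mask)."""
    rng = np.random.default_rng(rng)
    best_mask, best_n, best_d = None, None, None
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue  # degenerate (collinear) sample
        n = n / norm
        d = n @ p0
        mask = np.abs(points @ n - d) < tol  # points within tol of the plane
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_n, best_d = mask, n, d
    return best_n, best_d, best_mask
```

Removing the inliers and re-running the search finds the next plane, which is roughly how walls, floors and ceilings can be peeled off a scan one surface at a time.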
Figure 2: Automatic Wall and Floor Recognition in a recently poured concrete shell of a building under construction in London (16 aligned scans).
Figure 3: Automatic Wall Recognition in a recently poured concrete shell of a building under construction in London (16 aligned scans).
Figure 4: Automatic Edge Detection followed by fitting of straight-line segments in UCL circular/octagonal library under the iconic central dome (21 aligned scans)
Figure 5a (top): pipe scan; 5b (bottom): automatic pipe recognition in a boiler room point cloud of 3.5 million points; 98% of cylinders correctly found (2 aligned scans, red and blue).
Artificial neural networks are extremely simplified models of living brains, which are trained and learn like people rather than being programmed by a master programmer. The learned knowledge or skills are stored in a distributed manner in the strengths or weights of the neuron interconnections. Some artificial neural networks learn on their own while others require a teacher or instructor to tell them when they are right or wrong. Gradually, they get better and better at performing a task during the iterative learning cycles, which usually take a long time and require thousands of examples of the training data.

Artificial neural networks are particularly good at recognition, classification and optimization tasks. However, their performance depends crucially on how they are trained, the types and the amount of training data. Many types of neural network have been developed and, most recently, Convolutional Neural Networks (CNN) used to perform Deep Learning have become very popular and achieve very good results. In the case of object recognition, if the neural networks are only trained with examples of objects one wants to find, then all input data will be classified as one of those objects, even if it is not one of those objects. So, the performance of the neural network is only as good as the way it was trained and the data that was used to train it.

Neural networks are not as new as you might imagine given their current popularity in the media. Over 30 years ago, Selviah (1989) proved that the weighted interconnection layer of neural networks performs the same operation as a collection of correlators, operating in parallel, matching images from a database with input data; the non-linear part of the neurons then decides which image matches the input most closely. The clever part is the way in which the training automatically works out what images to store in the database in the first place.
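That correlator view of the weighted layer can be seen in a toy sketch: the rows of a weight matrix act as stored templates, the layer output is the parallel correlation of the input with each template, and a winner-take-all non-linearity picks the closest match. The four-pixel "images" below are made up purely for illustration:

```python
import numpy as np

# Toy weight matrix whose rows are stored "template" images (made-up data).
templates = np.array([
    [1, 1, 0, 0],   # template 0
    [0, 0, 1, 1],   # template 1
    [1, 0, 1, 0],   # template 2
], dtype=float)
templates /= np.linalg.norm(templates, axis=1, keepdims=True)

def best_match(x):
    """Correlate the input against every template in parallel, then apply a
    winner-take-all non-linearity to pick the closest stored image."""
    x = x / np.linalg.norm(x)
    scores = templates @ x          # parallel correlations (dot products)
    return int(np.argmax(scores))   # winner-take-all "neuron"

best_match(np.array([0.9, 1.1, 0.0, 0.1]))  # noisy version of template 0
```

In a trained network the templates are not hand-picked like this; learning discovers them, which is the "clever part" noted above.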
In the conference room image, figure 6, you see the impressive recognition results after training a new type of CNN with data from the Stanford Large-Scale 3D Indoor Spaces Dataset (S3DIS), using 70,000 3D objects of 13 types in 11 types of room: structural objects (ceiling, floor, wall, beam, column, window, door) and movable objects (table, chair, sofa, bookcase, board and clutter). Each category of object is marked in a different colour; for example, ‘chairs’ are marked in yellow, ‘boards’ in orange, ‘beams’ in red, ‘doors’ in green, ‘walls’ in dark green, ‘floor’ in blue, etc. The accuracy of classification of objects is around 93.5%, comparable to human accuracy. The objects to be recognized can be chosen for each application simply by changing the training database.
Figure 6: AI Automatic 3D object recognition. Plan view of original point cloud data for a conference room and 3D recognised objects. The ceiling was removed for clarity in viewing the inside of the room.
Artificial intelligence in the cloud

As the AEC sector embraces digital technology, the amount of data produced grows exponentially, creating large data repositories. To generate useful and actionable information from this ‘big data’ requires leveraging smart analytical tools such as AI that are becoming more accessible, especially when hosted from the cloud. Both the cloud computing infrastructure and artificial intelligence supply the tools to leverage and enable digital technology by providing convenient methods of working at scale, thus lowering the barriers to entry for users to these new ways of working. Artificial intelligence (AI) neural networks and deep learning require vast databases of thousands of examples for training, which can be conveniently stored in elastic, expandable cloud storage on demand. AI software requires highly parallel processing on many parallel processors to carry out the training in a reasonable time, again easily available in cloud computing infrastructures.
Intelligent combination and use of available techniques such as laser scanning, automatic alignment, cloud computing and artificial intelligence can not only speed up analysis of vast data sets but also improve accuracy and release human activity to ensure that a product is correct and useful.
BENEFITS OF USING THE CLOUD
Auto-Application Updates: applications are updated automatically, so the user always has access to the most up-to-date optimised software and bug fixes.
Responsivity: dedicated development support teams continuously monitor user experience to optimise and, if necessary, rewrite code.
Scalability, flexibility and agility: scalable elastic cloud environments on pools of servers, storage and networking resources automatically scale up and down according to the number of users and the volume of their usage as users' needs change.
Capital expenditure free: users have access to the highest-power computers. There is efficient use of hardware as users do not need to purchase, manage and maintain large amounts of computer and storage hardware, resulting in lower hardware, power, cooling and IT management costs. Users only pay for what they use as the cloud resources automatically scale, so it is easier for small businesses to manage their business at any time of day, from anywhere.
High speed: multiple computers run in parallel so many different parts of the same point cloud can be processed at the same time, and many simultaneous users have no effect on speed or quality.
Security: the data is stored and communicated securely with a level of encryption chosen by the user. If security is a paramount concern, the software can run on a private cloud without internet connections in-house. Clouds can be configured to make use of certain data centres, such as within one country if intercountry security is a concern.
Availability: if one server is busy or not available then another server takes its place to provide full availability.
Disaster Recovery: data is stored in multiple locations at the same time, so if storage hardware in one data centre breaks down, the calculation proceeds with little interruption as the data is backed up elsewhere. Data archiving facilities are automatically provided.
Latency: if latency is important, the cloud can be configured so that local clouds provide low latency to the user.
Increased collaboration: many users, located globally, and mobile users can store, process, share and view datasets at the same time without any loss of speed or responsivity.
Reliability: the application software can make use of resources on cloud computing infrastructure provided by different vendors in different global regions.
Forward compatible: an open cloud architecture is forward compatible to match higher-power computing resources as they are rolled out.
Sharing: all point cloud datasets are secure in one place and accessible at any time from anywhere.
Author: David Selviah
Last updated: 04/08/2020
MAY 30, 2018
Drones are often celebrated for their ability to capture a new vantage point on the world, revealing the beauty of our planet from high above. But they are only the latest development in a long history of aerial photography. For hundreds of years, airborne cameras have made awe-inspiring images of our planet, revealed the devastating scale of natural disasters, and tipped the scales in combat. And in some surprising ways, the history of aerial photography dovetails with the last century of human history more broadly.
It wasn’t long after commercial photography was invented in the mid-19th century that “adventurous amateurs” launched cameras into the sky using balloons, kites and even rockets, according to Paula Amad’s 2012 overview of the history of aerial photography, published in the journal History of Photography. Gaspar Félix Tournachon, more commonly known as “Nadar,” is credited with taking the first successful aerial photograph in 1858 from a hot air balloon tethered 262 feet over Petit-Bicêtre (now Petit-Clamart), just outside Paris; his original photos have been lost. James Wallace Black’s 1860 aerial photograph, taken from the tethered hot air balloon Queen of the Air 2,000 feet above Boston, is the oldest surviving aerial photograph.
George Lawrence later perfected a method of taking panoramas from above by strapping large-format cameras with curved film plates to kites. His most famous such photograph captured the damage caused by the devastating 1906 San Francisco earthquake and fire; he used 17 kites to suspend a camera 2,000 feet in the air to record the image. “Exposures were made by electric current carried through the insulated core of the steel cable kiteline; the moment the shutter snapped, a small parachute was released,” explained Beaumont Newhall, the Museum of Modern Art’s first photography curator, in Airborne Camera: The World from the Air and Outer Space. “At this signal the picture was taken, the kites were pulled down and the camera reloaded.” Syndicated in newspapers nationwide, Lawrence’s images were “at the least, a very early example of an aerial news shot — and perhaps the first,” says William L. Fox, director of The Nevada Museum of Art’s Center for Art + Environment and co-author of Photography and Flight.
Around the same time, aerial photography pioneers elsewhere in the world were experimenting with other methods. In 1903, German engineer Alfred Maul demonstrated a gunpowder rocket that, after reaching 2,600 feet in just eight seconds, jettisoned a parachute-equipped camera that made photos during its descent. That same year, German apothecary Julius Neubronner, curious about his prescription-delivering pigeons’ whereabouts, strapped cameras to his birds to track their routes. (Neubronner also used his birds to take photos of the 1909 Dresden International Photographic Exhibition, turning them into postcards and foreshadowing modern drone marketing stunts by over a century.)
It was just a few years after the Wright Brothers’ first flight at Kitty Hawk in 1903 that piloted, powered aircraft were first used for aerial imagery. Cinematographer L.P. Bonvillain took the first known such photo in 1908, shooting from an airplane over Le Mans, France, that was piloted by none other than Wilbur Wright himself.
World War I consumed the world shortly thereafter, and military commanders soon saw the potential advantage offered by up-to-date aerial imagery of the battlefield. Cameras were equipped on all manner of aircraft, and the wartime practice of aerial reconnaissance was born. Later advancements in both aviation and photography meant flight crews could go farther and come back with more useful images, which were often used to reveal enemy movements or plan future attacks.
It was during World War II that wartime aerial images and video became commonplace in newspapers, magazines and movie theater newsreels on the homefront. Famed LIFE photographer Margaret Bourke-White became “the first woman ever to fly with a U.S. combat crew over enemy soil” when she covered the U.S. attack on Tunis, as the magazine declared in its Mar. 1, 1943, issue. It was also during this conflict that the U.S. began to experiment with rudimentary drone aircraft, like the TDR-1, though that was an attack aircraft rather than an imaging platform.
The end of World War II and the beginning of the Cold War brought even further advancements to aerial photography, particularly thanks to the Space Race. The first known photo from space, depicting a glimpse of Earth, was taken on Oct. 24, 1946, by a captured Nazi rocket launched from New Mexico. The United States and the Soviet Union’s efforts to outpace one another’s aerospace achievements led directly to the development of satellite imagery, the ultimate in unmanned aerial photography. The power of such technology to spy on adversaries or help warn of incoming nuclear attack was not lost on the leaders of the era. “If we got nothing else from the space program but the photographic satellite, it is worth ten times over the money we’ve spent,” once said President Lyndon B. Johnson. Today, there are more than 1,700 satellites orbiting Earth used for surveillance, weather forecasting and more, according to the Union of Concerned Scientists.
The first modern-style drones began to appear in the 1980s, as Israeli engineers developed models equipped with video cameras to monitor persons of interest for hours at a time. The U.S. soon adopted similar technology — a remote-controlled Pioneer drone famously filmed Iraqi soldiers surrendering to it during the first Gulf War. The Predator drone, invented by Israeli aerospace engineer Abraham “Abe” Karem, rose in popularity during the Afghanistan and Iraq wars for its ability to loiter over areas for an extended period of time, making it useful for monitoring the daily routine of potential targets. (A similar, larger variant called the “Reaper” also became widely used during these conflicts.) The U.S. military has also used smaller, hand-launched drones like the RQ-11 Raven to give soldiers an overhead look at potential dangers ahead without jeopardizing their safety. (The use of armed drones is among the most controversial modern military subjects — proponents say they are effective military tools that put fewer pilots at risk, while detractors argue they dehumanize killing, contribute to civilian casualties, and have been used without proper oversight in places like Yemen, Somalia and more.)
Any given technology, as a rule, tends to get cheaper and more accessible over time. The same has been true of drone equipment, and by the early 2000s, a do-it-yourself drone-builder culture started to emerge out of the longstanding remote-controlled aircraft community. Online forums like DIY Drones helped hobbyists share tips and tricks with one another. New hardware and software like stabilizers, autopilot and collision detection systems have since given rise to store-bought drones from companies like Parrot and DJI with high-resolution cameras, making aerial photography more accessible than it’s ever been before.
That is precisely what makes today’s everyday drones so remarkable. Until just a few years ago, the pursuit of aerial photography was mostly limited to the military, dedicated hobbyists, and people with access to full-size aircraft. Today’s store-bought drones are comparatively cheap, take high-quality images and video, and are easy to learn to fly. That combination has led to an explosion in aerial photography, ranging from commercial uses, like real estate brokers getting eye-catching photos of houses they’re trying to sell, to artistic expression, like taking beautiful images of forests and cities to post on Instagram — no kites or pigeons required. While the technology has changed dramatically over time, the human desire to see the world from above has been a constant.