Computation of PDFs on Big Spatial Data: Problem & Architecture
Abstract
Big spatial data can be produced by observation or by numerical simulation programs and correspond to points that represent a 3D soil cube area. However, errors in signal processing and modeling introduce uncertainty, and thus a lack of accuracy in identifying geological or seismic phenomena. To analyze this uncertainty, the main solution is to compute a Probability Density Function (PDF) for each point in the spatial cube, which can be very time consuming. In this paper, we analyze the problem and discuss the use of Spark to efficiently compute PDFs.
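As a rough illustration of the per-point computation the abstract describes (not the paper's actual architecture), the following PySpark sketch groups simulation values by spatial point and estimates each point's PDF as a normalized histogram. The input schema `(x, y, z, value)`, the bin count, and the value range are all illustrative assumptions.

```python
# Hypothetical sketch: per-point PDF estimation over a 3D spatial cube with Spark.
# Assumes one input record (x, y, z, value) per point per simulation realization;
# the schema, bin count, and value range are assumptions, not from the paper.
import numpy as np
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("pdf-per-point").getOrCreate()

NUM_BINS = 50
VALUE_RANGE = (0.0, 1.0)  # assumed known global range of the measured attribute

def histogram_pdf(values):
    """Estimate a discrete PDF for one spatial point from its realizations."""
    counts, _ = np.histogram(list(values), bins=NUM_BINS, range=VALUE_RANGE)
    total = counts.sum()
    return (counts / total).tolist() if total > 0 else counts.tolist()

# Toy input: a handful of realizations for two points of the cube.
points = spark.sparkContext.parallelize([
    (0, 0, 0, 0.42), (0, 0, 0, 0.47), (0, 0, 0, 0.40),
    (1, 0, 0, 0.90), (1, 0, 0, 0.88),
])

pdfs = (points
        .map(lambda r: ((r[0], r[1], r[2]), [r[3]]))
        .reduceByKey(lambda a, b: a + b)   # gather all realizations per point
        .mapValues(histogram_pdf))         # estimate the PDF of each point

for point, pdf in pdfs.collect():
    print(point, pdf[:5], "...")
```

The point coordinates serve as the partitioning key, so PDFs for different points are computed in parallel across the cluster; merging raw value lists keeps the sketch simple, whereas a production pipeline would more likely merge partial histograms to bound shuffle size.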