Modeling Concurrent Data Rendering and Uploading for Graphics Hardware

Speaker: Markus Wiedemann (LRZ/LMU)
Graphics rendering hardware often contains special-purpose components that maximize the utilization of its Graphics Processing Unit (GPU). Examples are dedicated memory and copy engines that access graphics memory directly without blocking GPU processing. Since graphics memory is always limited while today's datasets keep growing, real-time visualization requires specialized methods that optimize the fast exchange of data between host memory and graphics memory. Understanding the complex interactions involved in concurrent rendering and uploading requires a sophisticated model describing the hardware and the respective data flow. Inputs for such models include the size and structure of the dataset under consideration, the number of parallel threads processing it, and modular, exchangeable functions performing the data transfers, together with the parameters of these functions for optimal data access. The models also include additional system parameters, such as the concrete hardware used, the various settable clock rates, the operating system, and the programming interface used to communicate with and on the hardware. In this work we describe a methodical approach to derive two such models in a systematic fashion. This results in the Model for Asynchronous Rendering and K-time Uploading (MARKU) for rendering and for uploading, respectively; that is, for processing data on the graphics hardware and for moving data from host memory to graphics memory. Our methodical approach relies on measuring the effects on both components, the host and the graphics hardware, individually, while data movement and processing are executed concurrently. A broad set of experiments is performed, decoupling inter-process and data dependencies as the basis for a statistical evaluation. This makes it possible to identify individual as well as combined influences of the evaluated control variables.
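The benefit of a dedicated copy engine, namely overlapping the upload of the next data block with rendering of the current one, can be illustrated with a toy throughput model. All numbers and function names below are illustrative assumptions, not measurements or definitions from the talk:

```python
# Toy model: per-frame time with and without overlap of rendering and
# uploading. Timings are hypothetical, chosen only for illustration.

def frame_time_serial(t_render, t_upload):
    """Upload blocks GPU processing: the two times simply add up."""
    return t_render + t_upload

def frame_time_overlapped(t_render, t_upload):
    """A dedicated copy engine uploads while the GPU renders:
    the slower of the two operations dominates the frame time."""
    return max(t_render, t_upload)

t_render = 12.0  # ms to render one frame (assumed)
t_upload = 9.0   # ms to upload one data block (assumed)

print(frame_time_serial(t_render, t_upload))      # → 21.0 ms
print(frame_time_overlapped(t_render, t_upload))  # → 12.0 ms
```

Once the upload time exceeds the render time, the copy engine can no longer hide the transfer completely, which is exactly the regime where a predictive model of the kind described above becomes useful.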
Using design-of-experiments approaches reduces the number of necessary measurements and allows a mathematical description of the underlying processes to be derived quickly. This in turn permits predicting the expected performance for specific use cases. Finally, we evaluate our approach in a multi-step process to gain a broad understanding of the accuracy and precision of the two MARKUs for rendering and uploading. Based on this work, future development of memory-transfer optimizations becomes possible that balances the predicted impact on rendering against the performance of the data transfer, leading to improved real-time visualization even for large datasets.
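The idea behind a design-of-experiments evaluation can be sketched with a minimal two-level full-factorial design that estimates the main effect of each control variable on a measured response. The control variables and the response values here are hypothetical stand-ins, not quantities from the talk:

```python
# Minimal two-level full-factorial design: each factor is measured at a
# low (-1) and high (+1) level, and the main effect of each factor is the
# average change in the response when that factor switches low -> high.
# Factor names and response values are hypothetical.
from itertools import product

def factorial_design(factors):
    """All low/high combinations: 2^k runs for k factors."""
    return list(product((-1, 1), repeat=len(factors)))

def main_effects(runs, responses):
    """Main effect per factor: mean(high responses) - mean(low responses)."""
    n = len(runs)
    effects = []
    for j in range(len(runs[0])):
        high = sum(r for run, r in zip(runs, responses) if run[j] == 1)
        low = sum(r for run, r in zip(runs, responses) if run[j] == -1)
        effects.append((high - low) / (n / 2))
    return effects

factors = ["block_size", "num_threads"]   # hypothetical control variables
runs = factorial_design(factors)          # 2^2 = 4 measurement settings
responses = [20.0, 12.0, 18.0, 10.0]      # hypothetical upload times (ms)
print(main_effects(runs, responses))      # → [-2.0, -8.0]
```

In this sketch, four measurements suffice to rank the two factors: the second (hypothetical) factor lowers the response by 8 ms on average, the first by only 2 ms. Fractional designs shrink the number of runs further when higher-order interactions can be neglected.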
by Marcus Mohr last modified 03. Dec 2020 09:23