VG-Lite General GPU
This is a generic VG-Lite rendering backend that uses VeriSilicon's standard API to drive the GPU hardware wherever possible.
As long as a chip uses the same version of the VG-Lite API as the rendering backend, regardless of manufacturer, LVGL rendering acceleration is supported without any additional LVGL adaptation work.
Configuration
Set `LV_USE_DRAW_VG_LITE` to 1 in `lv_conf.h` to enable the VG-Lite rendering backend. Make sure that your hardware has been adapted to the VG-Lite API and that the absolute path to `vg_lite.h` has been exposed so that LVGL can reference it directly.

Confirm the GPU initialization method; there are two options:

- The SDK calls the GPU initialization function itself during system startup, so the GPU is already available when LVGL starts. In this case, set `LV_VG_LITE_USE_GPU_INIT` to 0.
- LVGL actively calls the GPU initialization function. The SDK must implement the public function `gpu_init()`; LVGL calls it during startup to complete the GPU hardware initialization. In this case, set `LV_VG_LITE_USE_GPU_INIT` to 1.
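When `LV_VG_LITE_USE_GPU_INIT` is 1, the SDK side might look like the sketch below. `board_gpu_power_on()` is a hypothetical placeholder for the vendor's clock/power bring-up, and the tessellation buffer size passed to `vg_lite_init()` (a standard VG-Lite driver call) is application dependent:

```c
#include "vg_lite.h"

/* Hypothetical vendor bring-up routine; replace with your SDK's call. */
extern void board_gpu_power_on(void);

/* Called by LVGL during startup when LV_VG_LITE_USE_GPU_INIT is 1. */
void gpu_init(void)
{
    board_gpu_power_on();   /* enable clocks/power for the GPU (placeholder) */

    /* Initialize the VG-Lite driver; the tessellation window size here
       is only an example value. */
    vg_lite_init(64, 64);
}
```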
Set the `LV_VG_LITE_USE_ASSERT` configuration to enable GPU call parameter checking. The parameters used in GPU calls are complex, and incorrect parameters can cause abnormal GPU hardware behavior, for example forgetting to append the end symbol to a path, or not meeting the alignment requirement for the buffer stride. To resolve such issues quickly, strict parameter checks are performed before each VG-Lite call, including buffer stride validation and a matrix invertibility check. When an invalid parameter is detected, an assertion fires and prints the offending parameter, allowing the user to correct it promptly and reducing time wasted debugging on hardware. Note that enabling this check decreases runtime performance; it is recommended to enable it in Debug builds and disable it in Release builds.

Set the `LV_VG_LITE_FLUSH_MAX_COUNT` configuration to specify the flush policy. VG-Lite uses two command buffers for rendering instructions, and using this mechanism well can greatly improve drawing efficiency. Two policies are currently supported:

- Set `LV_VG_LITE_FLUSH_MAX_COUNT` to zero (recommended). The rendering backend queries the GPU's working status every time it writes rendering instructions to the command buffer. When the GPU is idle, it immediately calls `vg_lite_flush` to start rendering and swap the command buffers. When the GPU is busy, it keeps filling the command buffer with rendering instructions; the underlying driver tracks how full the buffer is, and when it is about to fill up, the driver waits for the outstanding drawing tasks to finish and then swaps the command buffers. This policy effectively improves GPU utilization, especially when rendering text, where the GPU's drawing time and the CPU's data preparation time are very close, allowing the CPU and GPU to run in parallel.
- Set `LV_VG_LITE_FLUSH_MAX_COUNT` to a value greater than zero, for example 8. After writing 8 rendering instructions to the command buffer, the rendering backend calls `vg_lite_flush` to start rendering and swap the command buffers.
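The two policies can be modeled schematically as follows. This is an illustrative sketch only, not the actual driver code; `gpu_is_idle` stands in for the real GPU status query:

```c
#include <stdbool.h>

/* Schematic model of the two LV_VG_LITE_FLUSH_MAX_COUNT policies. */
static int pending_cmds = 0;

bool should_flush(int flush_max_count, bool gpu_is_idle)
{
    pending_cmds++;

    if(flush_max_count == 0) {
        /* Policy 1 (recommended): flush as soon as the GPU is idle, so the
           CPU keeps preparing commands while the GPU renders. */
        if(gpu_is_idle) {
            pending_cmds = 0;
            return true;    /* the backend would call vg_lite_flush() here */
        }
        return false;       /* keep batching; the driver swaps the command
                               buffer itself when it is nearly full */
    }

    /* Policy 2: flush after a fixed number of commands. */
    if(pending_cmds >= flush_max_count) {
        pending_cmds = 0;
        return true;
    }
    return false;
}
```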
Set the `LV_VG_LITE_USE_BOX_SHADOW` configuration to render shadows on the GPU. The GPU hardware does not actually support shadow rendering, but experimentation has shown that a similar shadow effect can be achieved by drawing multiple layers of borders with different transparency levels. Enabling this configuration is recommended when shadow quality requirements are not high, as it can significantly improve rendering efficiency.

Set the `LV_VG_LITE_GRAD_CACHE_CNT` configuration to specify the number of gradient cache entries. Gradient drawing includes linear and radial gradients. Using a cache effectively reduces how often the gradient image is created and improves drawing efficiency. Each gradient consumes around 4 KB of the GPU memory pool. If the interface uses many gradients, try increasing the number of gradient cache entries. If the VG-Lite API returns the `VG_LITE_OUT_OF_RESOURCES` error, try increasing the size of the GPU memory pool or reducing the number of gradient cache entries.

Set the `LV_VG_LITE_STROKE_CACHE_CNT` configuration to specify the number of stroke path cache entries. When the stroke parameters do not change, the previously generated stroke data is retrieved from the cache automatically to improve rendering performance. The memory occupied by a stroke is strongly related to the path length. If the VG-Lite API returns the `VG_LITE_OUT_OF_RESOURCES` error, try increasing the size of the GPU memory pool or reducing the number of stroke cache entries.
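Putting the options above together, an `lv_conf.h` excerpt might look like this. The values shown are illustrative, not universal recommendations:

```c
/* lv_conf.h (excerpt) -- illustrative values */
#define LV_USE_DRAW_VG_LITE          1
#define LV_VG_LITE_USE_GPU_INIT      0   /* SDK initializes the GPU itself */
#define LV_VG_LITE_USE_ASSERT        0   /* enable in Debug builds only */
#define LV_VG_LITE_FLUSH_MAX_COUNT   0   /* 0 = flush when the GPU is idle (recommended) */
#define LV_VG_LITE_USE_BOX_SHADOW    1
#define LV_VG_LITE_GRAD_CACHE_CNT    32
#define LV_VG_LITE_STROKE_CACHE_CNT  32
```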
NOTE: The VG-Lite rendering backend does not support multi-threaded calls; make sure `LV_USE_OS` is always configured as `LV_OS_NONE`.
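To illustrate the kind of stride requirement that `LV_VG_LITE_USE_ASSERT` validates, here is a minimal helper. A 16-byte alignment is assumed purely for the example; the real requirement depends on the color format and the GPU:

```c
#include <stdbool.h>
#include <stdint.h>

/* Example alignment only; check your GPU's actual requirement. */
#define EXAMPLE_STRIDE_ALIGN 16

/* True if the buffer stride satisfies the example alignment. */
static bool stride_is_aligned(uint32_t stride)
{
    return (stride % EXAMPLE_STRIDE_ALIGN) == 0;
}

/* Round a row size in bytes up to the next aligned stride. */
static uint32_t align_stride(uint32_t width_bytes)
{
    return (width_bytes + EXAMPLE_STRIDE_ALIGN - 1)
           & ~(uint32_t)(EXAMPLE_STRIDE_ALIGN - 1);
}
```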
VG-Lite Simulator
LVGL integrates a VG-Lite simulator based on ThorVG. Its purpose is to simplify debugging of the VG-Lite adaptation and reduce the time spent debugging and locating problems on hardware devices. For detailed instructions, see VG-Lite GPU Simulator.
Image Decoder Color Format Conversion
The VG-Lite image decoder automatically converts certain color formats that are not natively supported by the GPU hardware into compatible formats. This conversion happens transparently during the image decoding process.
The following table shows the color format mapping:
| Source Format | Target Format | Description |
|---|---|---|
| `LV_COLOR_FORMAT_I1` | `LV_COLOR_FORMAT_I8` | VG-Lite index formats require endian + bit flipping; converted to I8 for simplicity |
| `LV_COLOR_FORMAT_I2` | `LV_COLOR_FORMAT_I8` | Same as above |
| `LV_COLOR_FORMAT_I4` | `LV_COLOR_FORMAT_I8` | Same as above |
| `LV_COLOR_FORMAT_A1` | `LV_COLOR_FORMAT_A8` | Alpha format expanded to 8-bit |
| `LV_COLOR_FORMAT_A2` | `LV_COLOR_FORMAT_A8` | Alpha format expanded to 8-bit |
| `LV_COLOR_FORMAT_RGB888` | `LV_COLOR_FORMAT_XRGB8888` | Converted when the GPU doesn't support 24-bit formats |
| `LV_COLOR_FORMAT_ARGB8565` | `LV_COLOR_FORMAT_ARGB8888` | Converted when the GPU doesn't support 24-bit formats |
| `LV_COLOR_FORMAT_RGB565A8` | `LV_COLOR_FORMAT_ARGB8888` | Separate RGB + Alpha planes merged into ARGB8888 |
| `LV_COLOR_FORMAT_AL88` | `LV_COLOR_FORMAT_ARGB8888` | Alpha + Luminance converted to ARGB8888 |
| `LV_COLOR_FORMAT_RGB565_SWAPPED` | `LV_COLOR_FORMAT_RGB565` | Byte order swapped |
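As an example of the plane merge in the table, one RGB565 pixel and its separate A8 byte can be combined into an ARGB8888 value like this. Bit-replication is assumed for the channel expansion; the decoder's exact rounding may differ:

```c
#include <stdint.h>

/* Merge one RGB565 pixel with its A8 alpha byte into 0xAARRGGBB. */
static uint32_t rgb565a8_to_argb8888(uint16_t rgb565, uint8_t a8)
{
    uint8_t r5 = (rgb565 >> 11) & 0x1F;
    uint8_t g6 = (rgb565 >> 5)  & 0x3F;
    uint8_t b5 =  rgb565        & 0x1F;

    /* Expand 5/6-bit channels to 8 bits by replicating the high bits. */
    uint8_t r = (uint8_t)((r5 << 3) | (r5 >> 2));
    uint8_t g = (uint8_t)((g6 << 2) | (g6 >> 4));
    uint8_t b = (uint8_t)((b5 << 3) | (b5 >> 2));

    return ((uint32_t)a8 << 24) | ((uint32_t)r << 16) | ((uint32_t)g << 8) | b;
}
```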
Notes:

- Formats not listed above will return `LV_COLOR_FORMAT_UNKNOWN` and be passed to other decoders in the chain.
- The 24-bit format conversion (`RGB888`, `ARGB8565`) depends on GPU capability, queried via `vg_lite_query_feature(gcFEATURE_BIT_VG_24BIT)`. If the GPU supports 24-bit formats, the decoder skips these formats and lets the default binary decoder handle them.
- Index formats (`I1`, `I2`, `I4`) keep their palette but expand index values to 8 bits for GPU compatibility.
- Alpha formats (`A1`, `A2`) are linearly scaled to 8 bits (e.g., A1: 0→0, 1→255; A2: 0→0, 1→85, 2→170, 3→255).
- Compressed formats are not supported. Images with the `LV_IMAGE_FLAGS_COMPRESSED` flag are rejected by this decoder and passed to other decoders that support decompression.
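The alpha and index expansions in the notes can be written out directly. The A1/A2 scaling below matches the values listed above; the I4 unpacking is a sketch that assumes LVGL's layout, where the high nibble holds the leftmost pixel:

```c
#include <stdint.h>

/* A1/A2 -> A8: linear scaling to 0..255, per the note above. */
static uint8_t a1_to_a8(uint8_t v) { return v ? 255 : 0; }            /* 0->0, 1->255 */
static uint8_t a2_to_a8(uint8_t v) { return (uint8_t)(v * 255 / 3); } /* 0, 85, 170, 255 */

/* I4 -> I8: expand two 4-bit palette indices per byte into one byte each.
   The palette itself is carried over unchanged. */
static void i4_to_i8(const uint8_t *src, uint8_t *dst, uint32_t px_cnt)
{
    for(uint32_t i = 0; i < px_cnt; i++) {
        uint8_t b = src[i / 2];
        dst[i] = (i & 1) ? (b & 0x0F) : (b >> 4);
    }
}
```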