
Appendix 2 Introduction to the Principles of Image Drawing

Images and pixels

Images fall into two categories, vector and bitmap; all images mentioned in this article refer specifically to bitmaps.

Bitmaps, also known as dot images or raster images, are composed of individual dots called pixels (picture elements). These dots can be arranged and colored in different ways to form a picture. When you zoom in on a bitmap, you can see the myriad of individual squares that make up the entire image.


RGB color space and color depth

Each pixel in an image is given a corresponding color, which is what allows us to see a rich picture. The color of a pixel is usually described by the three color components R, G, B (red, green, blue); different values of R, G, B represent different colors, and together they constitute the RGB color space. For a description of the RGB color space, please refer to [https://baike.baidu.com/item/RGB颜色空间](https://baike.baidu.com/item/RGB颜色空间).

Color depth refers to the number of bits used to describe the color value of a pixel. The larger the color depth, the more colors a pixel can display, but also the larger the image file becomes:

| Format | Color depth | Bit distribution | Representable colors | Description |
| --- | --- | --- | --- | --- |
| RGB565 | 16 bits | R(5 bits) G(6 bits) B(5 bits) | 2^16 | i.e. RGB565 |
| RGB888 | 24 bits | R(8 bits) G(8 bits) B(8 bits) | 2^24 | i.e. RGB888 |
| RGBA8888 | 32 bits | A(8 bits) R(8 bits) G(8 bits) B(8 bits) | 2^24 | Compared to RGB888, an extra 8-bit A (alpha, transparency) channel is added for image blending |
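
To make the file-size impact concrete, the short sketch below (illustrative only; the 800x480 resolution is an assumed example, not taken from this document) computes the raw, uncompressed data size of an image at each of the three color depths in the table.

```c
#include <stdio.h>

/* Illustrative only: raw (uncompressed) data size of a W x H image at a
 * given color depth, showing why a larger depth means a larger image. */
int main(void) {
    const int w = 800, h = 480;            /* assumed example resolution */
    const int depths[] = {16, 24, 32};     /* RGB565, RGB888, RGBA8888   */

    for (int i = 0; i < 3; i++) {
        long bytes = (long)w * h * depths[i] / 8;
        printf("%2d bits/pixel -> %ld bytes (%.0f KiB)\n",
               depths[i], bytes, bytes / 1024.0);
    }
    return 0;
}
```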

Currently, most of the screens we use for display have a 24-bit color depth, and our GPU draws with a uniform 24-bit color depth; in addition, an 8-bit transparency component (A, alpha) is added to support image blending. When an input image has a color depth of less than 24 bits, we first expand its color depth to 24 bits before blending; for example, for RGB565 the original R (5 bits), G (6 bits) and B (5 bits) channels are each expanded to 8 bits by a certain algorithm.
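
The exact expansion algorithm is not specified here; a common approach is bit replication, sketched below as an assumption of how a 16-bit RGB565 value might be expanded to 8 bits per channel.

```c
#include <stdint.h>

/* One common way (an assumption, not necessarily the algorithm used by the
 * GPU or tool) to expand RGB565 to 8 bits per channel: replicate the high
 * bits of each channel into the low bits, so 0 maps to 0 and the channel
 * maximum maps to 255. */
static void rgb565_to_rgb888(uint16_t in, uint8_t *r, uint8_t *g, uint8_t *b) {
    uint8_t r5 = (in >> 11) & 0x1F;
    uint8_t g6 = (in >> 5)  & 0x3F;
    uint8_t b5 = in & 0x1F;

    *r = (uint8_t)((r5 << 3) | (r5 >> 2));   /* 5 bits -> 8 bits */
    *g = (uint8_t)((g6 << 2) | (g6 >> 4));   /* 6 bits -> 8 bits */
    *b = (uint8_t)((b5 << 3) | (b5 >> 2));   /* 5 bits -> 8 bits */
}
```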

Image Blending and Transparency

1. Image blending formula

The transparency of an image (alpha, A) is used in image blending operations. Mathematically, A is a decimal between 0 and 1. The formula for blending two images with transparency (the standard "over" operation) is as follows:

out_RGB = FG_A × FG_RGB + (1 − FG_A) × BG_RGB

out_A = FG_A + (1 − FG_A) × BG_A

where FG denotes the foreground image, BG the background image and out the output image; the subscript A denotes the alpha component and the subscript RGB denotes the corresponding R, G, B components.
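
As an illustration of the formula (a minimal sketch, not the GPU's actual implementation), the function below blends one RGBA8888 foreground pixel over a background pixel, treating the 8-bit alpha as A = a / 255.

```c
#include <stdint.h>

typedef struct { uint8_t r, g, b, a; } pixel_t;   /* RGBA8888 pixel (illustrative) */

/* Blend foreground fg over background bg using the formula above. */
static pixel_t blend_over(pixel_t fg, pixel_t bg) {
    float a = fg.a / 255.0f;
    pixel_t out;
    out.r = (uint8_t)(a * fg.r + (1.0f - a) * bg.r + 0.5f);
    out.g = (uint8_t)(a * fg.g + (1.0f - a) * bg.g + 0.5f);
    out.b = (uint8_t)(a * fg.b + (1.0f - a) * bg.b + 0.5f);
    out.a = (uint8_t)((a + (1.0f - a) * (bg.a / 255.0f)) * 255.0f + 0.5f);
    return out;
}
```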

2. Image blending process and transparency

Each interface is the result of many images blended on top of each other, with a hierarchical (layer) relationship between the images.

Transparency affects the result of the final blend. The following figure demonstrates the result of blending images 1, 2, 3 and 4 in order; a small code sketch after the list below reproduces these steps.

(1) To simplify the demonstration, the transparency (alpha, A) of every pixel in each picture (layers 1, 2, 3, 4) is set to a single value per layer: 1, 1, 0.5 and 0 respectively;

(2) In figure a, because the transparency of every pixel in layer 2 (FG) is 1, layer 2 completely obscures layer 1 (BG) in the blending result (out2), showing a completely opaque effect;

(3) In figure b, out2, the result of blending layers 1 and 2, is used as the background image (BG) and blended with layer 3 (FG); because the transparency of layer 3 is 0.5, layer 3 appears semi-transparent in out3;

(4) In figure c, out3 is used as the background image (BG) and blended with layer 4 (FG); because the transparency of layer 4 is 0, the layer is completely transparent in out4, i.e. after blending there is no difference between out4 and out3;

(5) The background image is equivalent to the paper on which the layers are placed, and must either have its transparency set to 1 or use an image format without transparency.
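
The self-contained sketch below reproduces steps (2) to (4) for a single color channel; the channel values are arbitrary examples, and only the per-layer alphas 1, 1, 0.5 and 0 come from the demonstration above.

```c
#include <stdio.h>

/* Illustration of steps (2)-(4) of the blending demonstration for one
 * color channel. */
static float blend(float fg, float fg_a, float bg) {
    return fg_a * fg + (1.0f - fg_a) * bg;     /* out = A*FG + (1-A)*BG */
}

int main(void) {
    float layer1 = 200.0f, layer2 = 50.0f, layer3 = 120.0f, layer4 = 255.0f;

    float out2 = blend(layer2, 1.0f, layer1);  /* A = 1:   layer2 hides layer1        */
    float out3 = blend(layer3, 0.5f, out2);    /* A = 0.5: layer3 is semi-transparent */
    float out4 = blend(layer4, 0.0f, out3);    /* A = 0:   out4 equals out3           */

    printf("out2=%.1f out3=%.1f out4=%.1f\n", out2, out3, out4);
    return 0;
}
```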


Demonstration of image blending effect

Image formats and effects

In the process of designing the interface, the flow of image processing is shown below:


The original design defines the interface's display effect. The original design is layered and cut to output the material images needed for interface design, and our interface design tool further processes these materials to generate the image data the GPU needs to draw the interface.


1. The format and impact of material images

At present, our company supports four material formats: bmp, jpg, jpeg and png; that is, the material images output when the original design is layered and cut should be in one of these four formats. The following table briefly introduces the differences between the material image formats and their possible impact.


jpg and jpeg images are lossy: the sharper the changes in color values in the picture, the greater the loss, which makes the edges of elements such as text and digits blurry and the distortion more obvious. In such cases, the use of jpg/jpeg material should be avoided.

For information about the bmp format, please refer to:

https://blog.csdn.net/BrookIcv/article/details/52713685

For information about the jpg/jpeg formats, please refer to:

https://blog.csdn.net/carson2005/article/details/7753499

2. Impact of interface design tool options on image formats

Our interface design tool has two modes when generating the files needed for GPU drawing: high quality mode and high frame rate mode. The following describes the impact of the two modes on the generated image format, i.e., the display effect.

High quality mode:

In high quality mode, most material images are converted to 32-bit color depth bitmaps (RGBA8888); the boot animation is converted to 16-bit color depth bitmaps (RGB565); and the textures of some special controls are converted to one of these two formats according to the material format: if the material is bmp it is converted to RGB565, otherwise it is converted to RGBA8888. In this mode, the quality of the image is preserved as much as possible, ensuring that the interface drawn by the GPU is clear and consistent with the original design.

High frame rate mode:

High frame rate mode compresses the image data of high quality mode: the RGBA8888 format is compressed to DXT5, and the RGB565 format is compressed to DXT1. DXT compression can greatly reduce file size, but it is lossy and blurs the image to a certain degree. For the DXT format, please refer to:

https://blog.csdn.net/lhc717/article/details/6802951

By reducing the size of the image data with DXT compression, high frame rate mode can greatly improve the speed at which the GPU draws the interface and increase the display frame rate.
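
For a rough sense of the savings (illustrative only; the 800x480 resolution is an assumed example): DXT1 stores each 4x4 pixel block in 8 bytes and DXT5 in 16 bytes, so both give roughly a 4:1 reduction over RGB565 and RGBA8888 respectively.

```c
#include <stdio.h>

/* Illustrative size comparison only (not the tool's actual packing):
 * DXT1 stores each 4x4 block in 8 bytes, DXT5 in 16 bytes. */
int main(void) {
    const int w = 800, h = 480;                 /* assumed example resolution */
    long blocks = (long)((w + 3) / 4) * ((h + 3) / 4);

    long rgba8888 = (long)w * h * 4;            /* 32 bits/pixel    */
    long rgb565   = (long)w * h * 2;            /* 16 bits/pixel    */
    long dxt5     = blocks * 16;                /* ~4:1 vs RGBA8888 */
    long dxt1     = blocks * 8;                 /* ~4:1 vs RGB565   */

    printf("RGBA8888: %ld bytes, DXT5: %ld bytes\n", rgba8888, dxt5);
    printf("RGB565:   %ld bytes, DXT1: %ld bytes\n", rgb565, dxt1);
    return 0;
}
```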

Dithering image processing techniques:

Startup animation control textures with 24-bit or 32-bit color depth, and bmp-format special control texture materials, are converted by our interface design tool to the 16-bit color depth RGB565 format, which inevitably reduces image quality. To obtain a better picture, the original material image is first processed with dithering, and the RGB565 image is then generated from the processed result (a small dithering sketch follows the tip below). This is equivalent to using alternating patterns of the available, more widely spaced colors to approximate colors that cannot be represented directly.

See Wikipedia for the principle: https://en.wikipedia.org/wiki/Dither

Tips: material images with 24-bit color depth that are converted to 16-bit color depth RGB565 may show a fused-border visual effect at borders.
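
The document does not say which dithering algorithm the tool uses; the sketch below shows one common possibility, a 4x4 ordered (Bayer) dither, applied while quantizing 8-bit channels down to the 5/6/5 bits of RGB565.

```c
#include <stdint.h>

/* One possible dithering scheme (an assumption, not necessarily the tool's
 * algorithm): a 4x4 ordered (Bayer) dither applied during RGB565 quantization. */
static const int bayer4[4][4] = {
    { 0,  8,  2, 10},
    {12,  4, 14,  6},
    { 3, 11,  1,  9},
    {15,  7, 13,  5},
};

/* Quantize an 8-bit channel value to `bits` bits, nudged by a per-pixel
 * dither offset expressed in quantization-level units. */
static uint16_t quantize(uint8_t v, int bits, float offset) {
    int levels = (1 << bits) - 1;
    float x = v / 255.0f * levels + offset;
    int q = (int)(x + 0.5f);
    if (q < 0) q = 0;
    if (q > levels) q = levels;
    return (uint16_t)q;
}

static uint16_t rgb888_to_rgb565_dithered(uint8_t r, uint8_t g, uint8_t b,
                                          int x, int y) {
    /* Offset in [-0.5, +0.5), varying with the pixel position. */
    float offset = (bayer4[y & 3][x & 3] + 0.5f) / 16.0f - 0.5f;
    uint16_t r5 = quantize(r, 5, offset);
    uint16_t g6 = quantize(g, 6, offset);
    uint16_t b5 = quantize(b, 5, offset);
    return (uint16_t)((r5 << 11) | (g6 << 5) | b5);
}
```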

Correction for non-square pixels

Usually, the pixels on a screen are square, i.e. their aspect ratio is w:h = 1:1, but some screens have non-square pixels; in that case the image needs to be corrected in some way. Two cases are discussed below: textures without rotation and textures with rotation. Textures with rotation refer to the dashboard pointer textures and rotation-map textures; all others are textures without rotation.

1. Correction of texture without rotation

The correction of textures without rotation is done by correcting the material. Normally, a 100x100-resolution image is displayed as a square (a collection of 100x100 square pixels) on a screen with square pixels. On a screen with non-square pixels, assuming a pixel aspect ratio of w:h = 1:0.9, the same image is displayed as a rectangle (a collection of 100x100 pixel rectangles), which is inconsistent with the original 100x100 image. To make the image display correctly on a screen with non-square pixels, the resolution of the original image can be extended to 100 x 100/0.9, i.e. approximately 100x111; on a screen with a 1:0.9 pixel aspect ratio this approximates the original image (the displayed aspect ratio is then 100 : (111 × 0.9), approximately equal to the original 100:100).

In summary, for a screen with non-square pixels, assuming a pixel aspect ratio of w:h = 1:ratio, a W x H resolution material image should be corrected to a W x (H/ratio) resolution image in order to display correctly. This correction is done by the user.
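
A minimal sketch of that rule (rounding to the nearest pixel is an assumption):

```c
#include <stdio.h>

/* Correct a material image's resolution for a screen whose pixel aspect
 * ratio is w:h = 1:ratio, following the rule above. */
static void correct_resolution(int w, int h, float ratio, int *out_w, int *out_h) {
    *out_w = w;                          /* width stays the same  */
    *out_h = (int)(h / ratio + 0.5f);    /* e.g. 100 / 0.9 -> 111 */
}

int main(void) {
    int cw, ch;
    correct_resolution(100, 100, 0.9f, &cw, &ch);
    printf("corrected: %dx%d\n", cw, ch);   /* prints 100x111 */
    return 0;
}
```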

2. Correction of texture with rotation

When an image is displayed with a rotation effect, correcting the image resolution alone can no longer make it display correctly on a screen with non-square pixels. Instead, the embedded software corrects the image's transformation matrix so that the image is displayed correctly on the screen with non-square pixels. The user does not need to correct these texture images; just provide the original image.
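
The document does not give the actual matrix correction. As a sketch only, one common way to express such a correction is to perform the rotation in square-pixel space and wrap it with scales along the non-square axis, M = S(1, 1/ratio) · R(theta) · S(1, ratio); the code below simply writes that product out for a 2x2 matrix.

```c
#include <math.h>
#include <stdio.h>

/* Sketch only: one possible corrected rotation matrix for a screen with
 * pixel aspect ratio w:h = 1:ratio,
 *     M = S(1, 1/ratio) * R(theta) * S(1, ratio).
 * The exact matrix used by the embedded software is not specified here. */
typedef struct { float m[2][2]; } mat2_t;

static mat2_t corrected_rotation(float theta, float ratio) {
    float c = cosf(theta), s = sinf(theta);
    mat2_t out = {{
        { c,         -s * ratio },
        { s / ratio,  c         },
    }};
    return out;
}

int main(void) {
    mat2_t m = corrected_rotation(3.14159265f / 2.0f, 0.9f);  /* 90 deg, ratio 0.9 */
    printf("[%6.3f %6.3f]\n[%6.3f %6.3f]\n",
           m.m[0][0], m.m[0][1], m.m[1][0], m.m[1][1]);
    return 0;
}
```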