
The golden rules for Amazon Redshift optimization

Customize Workload Management

Workload management in Redshift means optimizing the query queues: selecting the right queues, assigning queries to them, and allocating memory to each of them.

Although automatic WLM and concurrency scaling have improved workload management considerably, it remains a critical point for a scalable system (and one of Redshift's weak points).

Since Redshift workload management is primarily a queuing system for queries, an incorrect configuration leads to very unstable runtimes. Consider switching from manual WLM to automatic WLM, where queues and their queries can be prioritized.

Consider whether concurrency scaling (for read-only workloads) or short query acceleration makes sense in your environment.
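To see whether your queues are configured sensibly, it helps to measure how long queries actually wait in them. A minimal sketch, assuming the standard STL_WLM_QUERY system table (times are reported in microseconds):

-- Average queue wait and execution time per WLM queue (service class)
SELECT service_class,
       COUNT(*)                          AS queries,
       AVG(total_queue_time) / 1000000.0 AS avg_queue_seconds,
       AVG(total_exec_time)  / 1000000.0 AS avg_exec_seconds
FROM stl_wlm_query
WHERE service_class > 5   -- skip the system and superuser queues
GROUP BY service_class
ORDER BY avg_queue_seconds DESC;

Queues with long average wait times are the first candidates for more slots, more memory, or a higher priority under automatic WLM.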

Use Column Encoding for compression

Consider compressing columns. This reduces the storage space required and also reduces the I/Os required by queries.
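For existing tables, Redshift can recommend encodings itself; for new tables you can declare them per column. A short sketch (table and column names are purely illustrative):

-- Let Redshift recommend compression encodings for an existing table
ANALYZE COMPRESSION sales;

-- Or declare encodings explicitly when creating a table
CREATE TABLE sales_compressed (
    sale_id     BIGINT       ENCODE az64,
    customer_id INTEGER      ENCODE az64,
    sale_date   DATE         ENCODE az64,
    status      VARCHAR(20)  ENCODE bytedict,
    comment     VARCHAR(500) ENCODE zstd
);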

Do not use Redshift for OLTP queries

Redshift is completely unsuitable for this kind of workload, not only because it is a column store, but also because there are no indexes to fall back on (as there are in Teradata columnar).

Choose the right method to distribute the data

Use a DISTKEY to help Amazon Redshift join large tables.
Use EVEN distribution if no suitable DISTKEY exists, to avoid skew.
Copy small lookup tables to all nodes with DISTSTYLE ALL (see the sketch after this list).
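These three rules map directly onto the DISTSTYLE options at table creation time. A sketch with hypothetical tables:

-- Large fact table: distribute on the join column so matching rows are co-located
CREATE TABLE fact_sales (
    customer_id BIGINT,
    sale_date   DATE,
    amount      DECIMAL(12,2)
)
DISTSTYLE KEY DISTKEY (customer_id);

-- No suitable distribution key: spread rows evenly to avoid skew
CREATE TABLE stage_events (
    event_ts TIMESTAMP,
    payload  VARCHAR(1000)
)
DISTSTYLE EVEN;

-- Small lookup table: keep a full copy on every node
CREATE TABLE dim_country (
    country_code CHAR(2),
    country_name VARCHAR(100)
)
DISTSTYLE ALL;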

Maintain your Amazon Redshift statistics

Only if the statistics are correct will the right amount of memory be reserved for the generated query plan.
If too much memory is reserved, it is unavailable to the other queries in the same queue, and they are delayed.
If too little memory is reserved, intermediate results may have to spill to disk, and the query becomes much slower.
Execute the ANALYZE command regularly (see the example below).
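A minimal sketch: SVV_TABLE_INFO shows how stale the statistics are (stats_off is the percentage by which they are off), and ANALYZE refreshes them. The table name and threshold are illustrative.

-- Find tables with stale statistics (stats_off = 0 means up to date)
SELECT "table", stats_off
FROM svv_table_info
WHERE stats_off > 10
ORDER BY stats_off DESC;

-- Refresh statistics, here limited to the columns used in predicates
ANALYZE fact_sales PREDICATE COLUMNS;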

Release space regularly

Since Amazon Redshift does not automatically free the space of rows after a DELETE or UPDATE, this must be done manually with the VACUUM FULL command. It releases the space and sorts the rows by SORTKEY. Running it regularly is necessary to maintain performance.
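A sketch of a typical maintenance check, assuming the unsorted percentage in SVV_TABLE_INFO as the trigger (table name and threshold are illustrative):

-- Find tables with a large unsorted portion
SELECT "table", unsorted, tbl_rows
FROM svv_table_info
WHERE unsorted > 20   -- more than 20 percent of rows unsorted
ORDER BY unsorted DESC;

-- Reclaim deleted space and re-sort the rows by SORTKEY
VACUUM FULL fact_sales TO 99 PERCENT;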

Write better queries

Here are two examples:
Since we are dealing with a column store, always select only the required columns to minimize I/O.
Also, whenever possible, restrict your queries with a WHERE condition to minimize the amount of data scanned.
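Both rules in one sketch (the fact_sales table and its columns are made up): only the needed columns are read, and the filter on the sort key column lets Redshift skip blocks via its zone maps.

-- Avoid SELECT *: only two columns are read from disk,
-- and the range predicate on the (assumed) SORTKEY column prunes whole blocks
SELECT customer_id,
       SUM(amount) AS total_amount
FROM fact_sales
WHERE sale_date >= '2024-01-01'
  AND sale_date <  '2024-04-01'
GROUP BY customer_id;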


DWH Pro Admin

Roland Wenzlofsky is an experienced freelance Teradata Consultant & Performance Trainer. Born in Austria's capital, Vienna, he has been building and tuning some of the largest Teradata data warehouses in the European financial and telecommunication sectors for more than 20 years. He has played all the roles of developer, designer, business analyst, and project manager, so he knows like no other the problems and obstacles that make many data warehouse projects fail, and all the tricks and tips that will help you succeed.

  • The advice in this post is substantially out of date.
    1. Use AutoWLM with query priorities
    2. Compression is applied automatically
    3. (still correct)
    4. Use DISTKEY if possible, use DISTSTYLE AUTO otherwise
    5. Redshift automatically runs ANALYZE in the background
    6. Redshift automatically runs VACUUM DELETE.
    7. (still correct)

  • {"email":"Email address invalid","url":"Website address invalid","required":"Required field missing"}

    You might also like

    >