Wednesday, December 27, 2023

ROW LEVEL SECURITY - CREATE POLICY or DROP POLICY

Introduction
The explanation is as follows
Row Level Security (RLS) lets you define access control policies at the row level within tables. This means that you can specify which users can SELECT, INSERT, UPDATE, or DELETE rows based on the actual data within the row.
Example
Suppose we have a table like this
CREATE TABLE client (
    client_id INT PRIMARY KEY,
    client_name VARCHAR(255) NOT NULL,
    email VARCHAR(255),
    registration_date DATE DEFAULT CURRENT_DATE
);
To enable ROW LEVEL SECURITY we do the following
ALTER TABLE client ENABLE ROW LEVEL SECURITY;
To drop the policy if it already exists we do the following
DROP POLICY IF EXISTS client_isolation_policy ON client;
Then we do the following
CREATE POLICY client_isolation_policy ON client
  USING (current_user = CONCAT('appuser_', client_id::text));
If 'appuser_' plus the client_id column value matches current_user, the row is shown to that user.

Let's create a new user
CREATE USER appuser_1001 WITH ENCRYPTED PASSWORD 'password';

GRANT CONNECT ON DATABASE sample_db TO appuser_1001;

GRANT USAGE ON SCHEMA public TO appuser_1001;

GRANT SELECT ON ALL TABLES IN SCHEMA public TO appuser_1001;

GRANT INSERT, UPDATE ON ALL TABLES IN SCHEMA public TO appuser_1001;
The root or postgres user is not affected; it can see everything
select * from client c ;

client_id|client_name|email          |registration_date|
---------+-----------+---------------+-----------------+
     1001|Client 1001|abc@example.com|       2023-10-17|
     1002|Client 1002|abc@example.com|       2023-10-17|
The appuser_1001 user can only see rows whose client_id is 1001.
select * from client c ;

client_id|client_name|email          |registration_date|
---------+-----------+---------------+-----------------+
     1001|Client 1001|abc@example.com|       2023-10-17|
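Note that superusers and roles with the BYPASSRLS attribute always bypass row level security, and by default so does the table owner. To apply the policy to the table owner as well, we can force it:
ALTER TABLE client FORCE ROW LEVEL SECURITY;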







Column Types - ULID - Universally Unique Lexicographically Sortable Identifier

I first saw this type in the article Performance of ULID and UUID in Postgres Database. The explanation is as follows
ULIDs contain a 48-bit timestamp followed by 80 bits of random data. The timestamp allows ULIDs to be sorted, which works much better with the b-tree data structure databases use for indexing.
UUID causes a performance penalty. The other options are listed below; a sketch of the ULID layout follows the list
ULIDs
Snowflake
Composite keys
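As a minimal sketch of that layout (assuming the pgcrypto extension for gen_random_bytes()), a 128-bit ULID-style value can be assembled by hand from a 48-bit millisecond timestamp plus 80 random bits:
CREATE EXTENSION IF NOT EXISTS pgcrypto;

-- 48-bit ms timestamp (12 hex chars) followed by 80 random bits (10 bytes)
SELECT decode(
         lpad(to_hex((extract(epoch FROM clock_timestamp()) * 1000)::bigint), 12, '0'),
         'hex'
       ) || gen_random_bytes(10) AS ulid_bytes;
Because the timestamp comes first, values generated later sort after earlier ones, which keeps b-tree inserts append-mostly.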

Monday, December 25, 2023

Checkpointing

Introduction
In short, checkpointing means that the changes in the WAL file are actually applied to the data page files.

The explanation is as follows
The question here is: can I control how data pages are flushed to disk? And the answer is yes, this is called checkpointing, where a new WAL record called a checkpoint record is written after the entire set of data pages in memory is flushed to disk. This is a very delicate process; frequent checkpointing makes start up faster, but it takes a toll on your I/O, since checkpointing is an I/O-heavy operation and, as we mentioned before, data pages are very large.
The explanation is as follows. Pages are written to disk and that moment is recorded
Technically speaking we are consistent in memory because we do write to pages as we write the WAL and because we always read and write from and to memory first we get the latest and greatest. So data files and indexes pages are up to date in memory but they are out of date on disk.

So in a happy path, I issue a commit and my WAL is flushed successfully to disk and the database returns OK to the client (me) who issued the commit. So the WAL is updated. Then moments later the database starts flushing my full pages to disk and now both data files and WAL are in sync on disk. We call this a checkpoint and the process that creates this checkpoint is checkpointing. The database will actually create a checkpoint record that says: I hereby declare that at this moment in time, everything written to the WAL is completely in sync with what we have in the data files on disk.
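To see the knobs that control checkpoint frequency, and to trigger a checkpoint manually, we can do the following (CHECKPOINT requires superuser or, on newer versions, the pg_checkpoint role):
SELECT name, setting
FROM pg_settings
WHERE name IN ('checkpoint_timeout', 'checkpoint_completion_target', 'max_wal_size');

CHECKPOINT;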
Undo logs
The explanation is as follows. If the database crashes during checkpointing, some of the dirty pages in memory will have been written to disk and some will not.
Ok. Here is another edge case for you, I flush my WAL and commit, a checkpointer process kicks in and starts flushing dirty pages to disk (dirty pages are pages in memory that have been written to) and midway through checkpointing the database crashes!

You now have half of the data pages updated and half of them out of date with respect to the WAL. The WAL only has changes, it has no clue what the original rows looked like. So for that we need another log to record the original state of the row before we made the change and that is the undo log.

Now with the undo log, the database first checks for transactions that started but didn’t commit and roll those back, any changes made by those transactions must be discarded from the WAL. Any changes that made it to the data files MUST be undone so we go back to consistent state where all rows in the data pages are trusted and committed. And finally we redo the WAL changes on those original pages to bring the pages back in sync with the WAL.

The undo logs are also used by transactions that need MVCC visibility to the old version of the row. This means having a long running transaction that does massive changes may slow down repeatable read/serializable isolation level read transactions as those have to go to the undo log and restore the original row before use.

Postgres's clever architecture can get away entirely without undo logs; I explain it here.




Thursday, December 14, 2023

FOR LOOP

Example - Batch Deletion
We do the following
BEGIN; -- BEGIN A TRANSACTION

-- A FOR loop is PL/pgSQL, not plain SQL, so it must run inside a DO block
DO $$
DECLARE
  batch RECORD;
BEGIN
  FOR batch IN
    SELECT id FROM large_table
    ORDER BY id ASC
    LIMIT 10000 -- 10k records in a batch
  LOOP
    DELETE FROM large_table
    WHERE id = batch.id;
  END LOOP;
END $$;

COMMIT; -- COMMITTING THE TRANSACTION
The explanation is as follows
With batch deletion, we’re gently removing a bunch of records in each go. This not only keeps our database humming but also reduces the risk of causing chaos. ALWAYS RUN THIS IN A TRANSACTION AND COMMIT IT TOGETHER
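A variant that keeps deleting batches until no matching row remains might look like this (a sketch; large_table and the archived filter are made-up names):
DO $$
DECLARE
  rows_deleted integer;
BEGIN
  LOOP
    DELETE FROM large_table
    WHERE id IN (
      SELECT id FROM large_table
      WHERE archived = true -- hypothetical filter for rows to purge
      ORDER BY id
      LIMIT 10000
    );
    GET DIAGNOSTICS rows_deleted = ROW_COUNT;
    EXIT WHEN rows_deleted = 0; -- stop once nothing is left to delete
  END LOOP;
END $$;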

Thursday, December 7, 2023

pg_partman

Introduction
The explanation is as follows
pg_partman is an extension for PostgreSQL that provides automatic partitioning of tables. 
...
pg_partman simplifies the process of managing partitioned tables by automating the creation and maintenance of partitions. It includes features to handle both time-based and serial-based partitioning. This extension is particularly useful for tables that grow over time and need to be partitioned based on date or other criteria.
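A minimal sketch of time-based partitioning (the events table is made up; the create_parent arguments follow the pg_partman 4.x docs and vary by version):
CREATE SCHEMA partman;
CREATE EXTENSION pg_partman SCHEMA partman;

CREATE TABLE public.events (
  id bigint,
  created_at timestamptz NOT NULL
) PARTITION BY RANGE (created_at);

SELECT partman.create_parent(
  p_parent_table => 'public.events',
  p_control      => 'created_at',
  p_type         => 'native',
  p_interval     => 'daily'
);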


Parallel Sequential Scan

Introduction
It is like a Sequential Scan; it is simply carried out by more than one worker

Example
We do the following. Since the table has no index, a full table scan is performed
CREATE TABLE t1 (
    a   int,
    b   int
);
test=# explain analyze select count(*) from t1 where a=9999;
                                                          QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------
 Finalize Aggregate  (cost=10633.65..10633.66 rows=1 width=8) (actual time=43.930..47.172 rows=1 loops=1)
   ->  Gather  (cost=10633.44..10633.65 rows=2 width=8) (actual time=43.766..47.164 rows=3 loops=1)
         Workers Planned: 2
         Workers Launched: 2
         ->  Partial Aggregate  (cost=9633.44..9633.45 rows=1 width=8) (actual time=34.394..34.395 rows=1 loops=3)
               ->  Parallel Seq Scan on t1  (cost=0.00..9633.33 rows=42 width=0) (actual time=34.373..34.385 rows=33 loops=3)
                     Filter: (a = 9999)
                     Rows Removed by Filter: 333300
 Planning Time: 0.313 ms
 Execution Time: 47.250 ms
(10 rows)
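How many workers the planner may use is governed by settings such as the following:
SHOW max_parallel_workers_per_gather;
SET max_parallel_workers_per_gather = 4;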

Pghero

Introduction
The explanation is as follows
Pghero is a performance dashboard and monitoring tool for PostgreSQL. It provides an easy-to-use web interface to track database metrics, query performance, and index usage. Pghero helps you identify slow queries, analyze database trends, and optimize your PostgreSQL database for better performance.

Autovacuum Daemon

Introduction
The explanation is as follows
Postgres automates the VACUUM processing using the autovacuum daemon. By default, it runs every 1 minute. When the autovacuum wakes up, it invokes three workers. These workers do the VACUUM processing on the target tables.

You can query pg_settings to check various configurations for the autovacuum process in Postgres:
We do the following
SELECT
  name,
  setting,
  category,
  short_desc
FROM pg_settings
WHERE name LIKE '%autovacuum%'
The autovacuum_vacuum_scale_factor Field
The explanation is as follows
The default value is 0.2. When 20% of the rows in a table become dead because of MVCC, autovacuum starts. For tables with heavier write traffic, autovacuum_vacuum_scale_factor can be set to a lower value
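Together with autovacuum_vacuum_threshold, the trigger condition for a table is:
dead tuples > autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples
which with the defaults (50 and 0.2) means 50 plus 20% of the table's rows.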
Example
We do the following
ALTER TABLE SampleTable2 SET
(autovacuum_vacuum_scale_factor = 0, autovacuum_vacuum_threshold = 100)
The explanation is as follows
 ... we set autovacuum to run for SampleTable2 once it has more than 100 dead tuples.
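To see which tables are accumulating dead tuples, and therefore how close they are to triggering autovacuum, we can query pg_stat_user_tables:
SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC;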

Monday, December 4, 2023

Soft Delete

Introduction
The explanation is as follows
When using the soft delete mechanism on the database, you might run into a situation where a record with a unique constraint was deleted and you want to reuse the unique value.
Example
Suppose we have a table like this
CREATE TABLE users (
  user_id UUID PRIMARY KEY DEFAULT uuid_generate_v7(),
  email TEXT NOT NULL,
  created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP NOT NULL,
  updated_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP NOT NULL,
  deleted_at TIMESTAMP WITH TIME ZONE,
  created_by UUID NOT NULL,
  updated_by UUID NOT NULL,
  FOREIGN KEY (created_by) REFERENCES users(user_id),
  FOREIGN KEY (updated_by) REFERENCES users(user_id),
  is_system SMALLINT DEFAULT 0 NOT NULL CHECK (is_system BETWEEN 0 AND 1)
);
The explanation is as follows
Whether a row should be created that has the same natural key as a ‘deleted’ row depends on the table. If this is desirable, unique indexes should be written with a WHERE clause, for example:
We do the following. This way another user with the same email value can be created once the old row is soft deleted
CREATE UNIQUE INDEX users_email_unique
  ON users(email, deleted_at)
  WHERE deleted_at IS NULL;
or we do the following. Since the email field is considered case insensitive here, a case-insensitive collation is used
CREATE UNIQUE INDEX users_email_unique
  ON users(email COLLATE case_insensitive)
  WHERE deleted_at IS NULL;
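For completeness, the soft delete itself is just an UPDATE that stamps deleted_at rather than a DELETE (a sketch; the email filter is only an illustration):
UPDATE users
SET deleted_at = CURRENT_TIMESTAMP,
    updated_at = CURRENT_TIMESTAMP
WHERE email = 'abc@example.com'
  AND deleted_at IS NULL;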


Sunday, December 3, 2023

CREATE TEMPORARY TABLE

Introduction
A TEMPORARY TABLE can be emptied in two ways
1. DELETE (all rows)
2. TRUNCATE

DELETE (all rows) is not a good option anyway.
For TRUNCATE the explanation is as follows
TRUNCATE simply creates a new file on disk and does an UPDATE of the pg_class table.
In other words
1. It causes the pg_class system table to fill up
2. Disk I/O is required

Solution 1
The explanation is as follows
In Linux, there is one approach that can significantly reduce disk usage of temporary tables. It is to allocate a separate RAM disk for temporary tables.
The temp_tablespaces option is used for this
Example
We do the following
mkdir /mnt/dbtemp
chmod 700 /mnt/dbtemp
chown postgres:postgres /mnt/dbtemp
Then from SQL we do the following. A tablespace is created that uses the specified directory. A new directory such as /mnt/dbtemp/PG_13_202007201/ is created under /mnt/dbtemp
CREATE TABLESPACE temp LOCATION '/mnt/dbtemp';
Then we do the following. This creates a new directory such as /mnt/dbtemp/PG_13_202007201/936082
CREATE TEMPORARY TABLE mytemptable (key0 int8) TABLESPACE temp;
We add the following to the /etc/fstab file. On the next restart this directory is mounted automatically.
tmpfs /mnt/dbtemp/PG_13_202007201/936082 tmpfs \ 
  rw,nodev,nosuid,noatime,nodiratime,size=1G 0 0
But in order not to wait for the next restart, we mount the directory into RAM now
mount /mnt/dbtemp/PG_13_202007201/936082
Then we set the following in the postgresql.conf file
temp_tablespaces = 'temp'
and to apply it we do the following
SELECT pg_reload_conf();
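To confirm that the new setting is active:
SHOW temp_tablespaces;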






Thursday, November 23, 2023

PG_DATABASE System Table

Introduction
All database names are in this table

Example
We do the following
SELECT datname FROM pg_database WHERE datname LIKE '${inpDBPattern}' ORDER BY 1 OFFSET ${inpOffset} LIMIT ${inpLimit}

PG_STAT_DATABASE System Table

Introduction
The explanation is as follows
... you can use the pg_stat_database view to retrieve information on database-level activity, such as the number of connections, transactions, and queries per second
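For example, backends, commits, and rollbacks per database can be read like this:
SELECT datname, numbackends, xact_commit, xact_rollback, blks_read, blks_hit
FROM pg_stat_database;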
To reset these statistics we do the following
SELECT pg_stat_reset(); 

Wednesday, November 15, 2023

PG_SETTINGS System Table

Example
We do the following
SELECT name, setting
FROM pg_settings WHERE name = 'effective_io_concurrency';
Example
We do the following
SELECT
  name,
  setting,
  category,
  short_desc
FROM pg_settings
WHERE name LIKE '%autovacuum%'
Example
We do the following
SELECT name, setting, boot_val FROM pg_settings WHERE name LIKE 'autovacuum%';

PG_STAT_BGWRITER System Table - View To Monitor The Overall I/O Activity Of The Database

Example
We do the following
SELECT checkpoints_timed, checkpoints_req, buffers_checkpoint, buffers_clean, maxwritten_clean, buffers_backend, buffers_alloc
FROM pg_stat_bgwriter;
The explanation is as follows
This query returns a single row with several columns that provide information about the background writer process. The columns show the number of checkpoints that have been performed, the number of buffers written during checkpoints, the number of buffers cleaned by the background writer, the maximum number of buffers written in a single clean pass, and the number of buffers allocated and used by backends.

PG_STATIO_USER_TABLES System Table - View To Monitor Disk I/O Activity For Individual Tables

Example
We do the following
SELECT relname, heap_blks_read, heap_blks_hit, idx_blks_read, idx_blks_hit
FROM pg_statio_user_tables;
The explanation is as follows
This query returns a row for each table in the database, with columns for the number of blocks read and hit for both the table and its indexes. A high ratio of “hit” to “read” indicates that data is being retrieved from the cache, which is typically a good sign of good performance.
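That hit-to-read ratio can be computed directly, for example:
SELECT relname,
       round(100.0 * heap_blks_hit / NULLIF(heap_blks_hit + heap_blks_read, 0), 2) AS heap_hit_pct
FROM pg_statio_user_tables
ORDER BY heap_hit_pct;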

LOWER - Can Be Used For Case-Insensitive Operations

Introduction
The explanation is as follows
To perform case-insensitive queries, use ILIKE, regular expressions, or the lower() function on both sides of the comparison operator. If lower() is used frequently, the column should be indexed using the case_insensitive collation since otherwise, the index will not be used. 
The Relationship Between LOWER and Indexes
The explanation is as follows
When the Lower function is used, the indexes on the relevant column become unusable. Each search will be done as a “sequential scan.” This will cause performance problems as the number of rows increases. “Expressional Index” is recommended as a solution. For details, see https://www.postgresql.org/docs/9.1/indexes-expressional.html.
Example - Unique Index With LOWER
The explanation is as follows
For example, a common way to do case-insensitive comparisons is to use the lower function:

SELECT * FROM test1 WHERE lower(col1) = 'value';

This query can use an index if one has been defined on the result of the lower(col1) function:

CREATE INDEX test1_lower_col1_idx ON test1 (lower(col1));

If we were to declare this index UNIQUE, it would prevent creation of rows whose col1 values differ only in case, as well as rows whose col1 values are actually identical. Thus, indexes on expressions can be used to enforce constraints that are not definable as simple unique constraints.
Example - Case-Insensitive Unique Index With Collation
The explanation is as follows
If lower() is used frequently, the column should be indexed using the case_insensitive collation since otherwise, the index will not be used. 
We do the following
/*See https://www.postgresql.org/docs/current/collation.html#COLLATION-NONDETERMINISTIC */
CREATE COLLATION case_insensitive
(provider = icu, locale='und-u-ks-level2', deterministic=false);
Then we do the following
CREATE UNIQUE INDEX users_email_unique ON users(email COLLATE case_insensitive);
To see that the index is used we do the following
EXPLAIN
SELECT * FROM users
WHERE lower(email) = lower('Seed-system-user')
AND deleted_at IS NULL;
The output should look like this
-> Bitmap Index Scan on users_email_unique

Usage
Example
We do the following
SELECT * FROM test1 WHERE lower(col1) = lower('value');

Monday, November 13, 2023

Bitmap Index Scan ve Bitmap Heap Scan

Introduction
The explanation is as follows
... a Bitmap Heap Scan always works in tandem with Bitmap Index Scan to speed things up.
The explanation is as follows
Bitmap Index Scan scans the index first to create a bitmap of which pages need to be fetched, and Bitmap Heap Scan then uses the bitmap to fetch the data from the pages.

Why Is It Needed
The explanation is as follows. That is, it is used when we have an index but there is no ordering to the rows, for example when the index is on text.
.... index scans cause random I/O if there is no ordering to the rows (name is text content). This is costly in rotational hard drives. To solve this, the Planner takes a two-stage approach. The Bitmap Index Scan constructs a data structure called a bitmap from the index present and represents the approximate location of pages in the disk and feeds it to the parent node, i.e., Bitmap Heap Scan, which then uses that to fetch the pages/rows.
Example
We do the following. Here an index is created for the name field
CREATE INDEX name_str_idx ON fake_data USING BTREE(name);
For the query we do the following
EXPLAIN ANALYZE SELECT * FROM fake_data WHERE 
 fake_data.name = 'David';
Example
We do the following
CREATE INDEX idx_weather_type ON weather(event_type);

Bitmap Heap Scan on weather  (cost=4.36..37.56 rows=11 width=622) (actual time=0.500..2.300 rows=11 loops=1)
  Recheck Cond: ((event_type)::text = 'Winter Storm'::text)
  Heap Blocks: exact=6
  ->  Bitmap Index Scan on idx_weather_type  (cost=0.00..4.36 rows=11 width=0) (actual time=0.400..0.400 rows=11 loops=1)
        Index Cond: ((event_type)::text = 'Winter Storm'::text)
Planning Time: 23.300 ms
Execution Time: 7.200 ms
(7 rows)

Example - Bitmap Or
It is used when there are two indexes. We currently have two indexes, one on id and one on name.
For the query we do the following
EXPLAIN ANALYZE SELECT * FROM fake_data WHERE 
 fake_data.id = 1000 OR fake_data.name = 'David';
Example - Bitmap And
It is used when there are two indexes. We currently have two indexes, one on id and one on name.
For the query we do the following
EXPLAIN ANALYZE SELECT * FROM fake_data WHERE 
 fake_data.id <= 1000 AND fake_data.name = 'David';
Parallel Scans
The explanation is as follows
Sequential Scans are the slowest of all the plans we have seen so far because there is nothing to optimize there. The Planner goes over the data sequentially and tries to find the result. PostgreSQL optimizes this further by adding parallelism in the queries.







NESTED LOOP Scan

Introduction
The scan types are used when there is a single table. If an operation, for example a JOIN, involves two tables, a NESTED LOOP is used on the outside and some of the scan types on the inside

Example
Suppose we have the following SQL. Here the orders of the customer with customer_id = 5 are counted
EXPLAIN ANALYZE SELECT count(*)
FROM orders AS o
 JOIN order_items as oi
  ON o.order_id = oi.order_item_order_id
WHERE o.order_customer_id = 5;
The output is as follows. Here a Bitmap Index Scan and a Bitmap Heap Scan are used for the outer loop, and an Index Only Scan for the inner loop
|QUERY PLAN                                                                                                                                                         |
|-------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|Aggregate  (cost=53.71..53.72 rows=1 width=8) (actual time=2.434..2.435 rows=1 loops=1)                                                                            |
|  ->  Nested Loop  (cost=4.76..53.67 rows=15 width=0) (actual time=1.872..2.428 rows=7 loops=1)                                                                    |
|        ->  Bitmap Heap Scan on orders o  (cost=4.34..26.49 rows=6 width=4) (actual time=1.034..1.315 rows=4 loops=1)                                              |
|              Recheck Cond: (order_customer_id = 5)                                                                                                                |
|              Heap Blocks: exact=4                                                                                                                                 |
|              ->  Bitmap Index Scan on orders_order_customer_id_idx  (cost=0.00..4.34 rows=6 width=0) (actual time=0.696..0.696 rows=4 loops=1)                    |
|                    Index Cond: (order_customer_id = 5)                                                                                                            |
|        ->  Index Only Scan using order_items_order_item_order_id_idx on order_items oi  (cost=0.42..4.49 rows=4 width=4) (actual time=0.275..0.276 rows=2 loops=4)|
|              Index Cond: (order_item_order_id = o.order_id)                                                                                                       |
|              Heap Fetches: 0                                                                                                                                      |
|Planning Time: 5.706 ms                                                                                                                                            |
|Execution Time: 2.565 ms                                                                                                                                           |
Example - GroupAggregate
It is used when there is a GROUP BY. Suppose we have the following SQL. It groups the orders and returns the total revenue.
EXPLAIN SELECT o.*, round(sum(oi.order_item_subtotal)::numeric, 2) AS revenue
FROM orders AS o
  JOIN order_items AS oi
    ON o.order_id = oi.order_item_order_id
WHERE o.order_id = 2
GROUP BY o.order_id, o.order_date, o.order_customer_id, o.order_status;
The output is as follows
|order_id|order_date             |order_customer_id|order_status   |revenue|
|--------|-----------------------|-----------------|---------------|-------|
|2       |2013-07-25 00:00:00.000|256              |PENDING_PAYMENT|579.98 |
The plan is as follows. This time, besides the Nested Loop, there is also a GroupAggregate
|QUERY PLAN                                                                             |
|---------------------------------------------------------------------------------------|
|GroupAggregate  (cost=0.29..3427.86 rows=1 width=58)                                   |
|  Group Key: o.order_id                                                                |
|  ->  Nested Loop  (cost=0.29..3427.82 rows=4 width=34)                                |
|        ->  Index Scan using orders_pkey on orders o  (cost=0.29..8.31 rows=1 width=26)|
|              Index Cond: (order_id = 2)                                               |
|        ->  Seq Scan on order_items oi  (cost=0.00..3419.47 rows=4 width=12)           |
|              Filter: (order_item_order_id = 2)            




Sunday, November 12, 2023

The pg_ctl Command

Introduction
The explanation is as follows
Ubuntu/Debian packages for Postgres have their own layer on top of initdb and pg_ctl to control multiple instances and the integration with systemd.
restart
Example
We do the following
pg_ctl -D /pgsql/data/replica/ restart

Wednesday, October 25, 2023

Logical Replication

Introduction
The explanation is as follows
- PostgreSQL 16 introduces the ability to perform logical replication from a standby instance, enabling the distribution of workloads by using standby to replicate changes to downstream systems.

- Performance enhancements in this release for logical replication include parallel processing for large transactions, the use of B-tree indexes for tables without primary keys, and faster initial table synchronization in binary format.

- Access control improvements in this release encompass the addition of the predefined role “pg_create_subscription” granting users the capability to create logical subscriptions, and the beginning of bidirectional logical replication support, enabling data replication between tables from different publishers.
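As a reminder of the basic building blocks, a minimal publication/subscription pair looks like this (host, database, and role names are made up):
-- On the publisher:
CREATE PUBLICATION my_pub FOR TABLE client;

-- On the subscriber:
CREATE SUBSCRIPTION my_sub
  CONNECTION 'host=pub-host dbname=sample_db user=repuser password=secret'
  PUBLICATION my_pub;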

Monday, October 23, 2023

pgBadger - Log Analyzer

Introduction
It analyzes PostgreSQL log files and produces a report for performance tuning. The explanation is as follows
pgBadger is a powerful log analyzer specifically designed for PostgreSQL logs. It parses the log files and generates detailed reports, including query execution time, slowest queries, and usage patterns. This tool simplifies the process of identifying performance issues and optimizing queries.
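Typical usage is to point it at the log files and write an HTML report (the log path is made up; see the pgBadger docs for the full option list):
pgbadger /var/log/postgresql/postgresql-*.log -o report.html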



HypoPG - Hypothetical Indexes

Introduction
The explanation is as follows
We all know how indexes can optimize query plans if used carefully. Since index creation requires resources, it would have been nice to have a way to know beforehand whether postgres will use those indexes or not.

HypoPG, a postgres extension, lets you do just that. It can be used to create a hypothetical index without costing you resources and enables you to analyse the index performance.
Installation
We do the following
CREATE EXTENSION hypopg;
The hypopg_create_index Method
We do the following. The EXPLAIN output then shows whether a Sequential Scan or an Index Scan would be used
SELECT * FROM hypopg_create_index('CREATE INDEX ON mytable (id)');
EXPLAIN SELECT * FROM mytable WHERE id = 1;
Others
The explanation is as follows
hypopg_reset() — to clear the previous effects of other indexes
hypopg_drop_index(indexrelid) — to remove your backend’s hypothetical indexes
hypopg_hide_index(oid) — to hide an index
hypopg_unhide_index(oid) — to restore the index