We do it like this
pgloader mysql://user:pass@10.10.10.10:3306/zabbix \
  postgresql://user:pass@10.10.10.1:31320/zabbix
The explanation is as follows: pg_ivm is an extension that transforms materialized views into self-updating, always-fresh data structures that update instantly as your base data changes.
Under the Hood: How pg_ivm Works
When you create an incremental materialized view, pg_ivm automatically:
1. Installs triggers on all referenced base tables
2. Captures changes through INSERT/UPDATE/DELETE monitoring
3. Calculates deltas to determine the minimal impact
4. Updates the view with only the necessary changes
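A minimal sketch of using it, assuming the pg_ivm extension is available; create_immv() is the function pg_ivm exposes for defining an incrementally maintained view, and the sales table and its columns here are hypothetical:

CREATE EXTENSION IF NOT EXISTS pg_ivm;

-- Define an incrementally maintained view; pg_ivm installs the
-- base-table triggers described above behind the scenes.
SELECT create_immv(
  'sales_summary_immv',
  'SELECT region, sum(amount) AS total FROM sales GROUP BY region'
);

-- Any change to sales is reflected immediately, no REFRESH needed:
INSERT INTO sales (region, amount) VALUES ('EMEA', 100);
SELECT * FROM sales_summary_immv;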
The Good: Allows queries during refresh (requires a unique index)
The Bad: Still recomputes the entire view from scratch
The Reality: Better UX, but still expensive and slow for large datasets.
REFRESH MATERIALIZED VIEW CONCURRENTLY sales_summary;
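The unique-index prerequisite looks like this; the region column as a key for sales_summary is hypothetical:

-- CONCURRENTLY needs a unique index that covers all rows of the view:
CREATE UNIQUE INDEX sales_summary_region_idx ON sales_summary (region);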
The Good: Simple and straightforward
The Bad: Locks the entire view during refresh, blocking all queries
The Reality: Your users stare at loading spinners while the view rebuilds
REFRESH MATERIALIZED VIEW sales_summary;
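For context, a plain materialized view like the one refreshed above could be defined as follows; the sales table and its columns are hypothetical:

CREATE MATERIALIZED VIEW sales_summary AS
SELECT region, sum(amount) AS total
FROM sales
GROUP BY region;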
CREATE OR REPLACE FUNCTION create_journal_table()
RETURNS event_trigger
LANGUAGE plpgsql
AS $$
DECLARE
  obj record;
  schema_name text;
  table_name text;
  journal_table_name text;
BEGIN
  FOR obj IN
    SELECT * FROM pg_event_trigger_ddl_commands()
  LOOP
    IF obj.command_tag = 'CREATE TABLE' AND obj.object_type = 'table' THEN
      schema_name := split_part(obj.object_identity, '.', 1);
      table_name := split_part(obj.object_identity, '.', 2);
      schema_name := trim(both '"' from schema_name);
      table_name := trim(both '"' from table_name);

      -- Skip journal tables to avoid recursion
      IF right(table_name, 8) = '_journal' THEN
        CONTINUE;
      END IF;

      -- Quote the full journal name so quoted identifiers stay valid
      journal_table_name := format('%I.%I', schema_name, table_name || '_journal');

      EXECUTE format('CREATE TABLE IF NOT EXISTS %s (
        id bigserial PRIMARY KEY,
        action text NOT NULL,
        changed_at timestamptz DEFAULT now(),
        old_data jsonb,
        new_data jsonb
      )', journal_table_name);

      EXECUTE format('CREATE TRIGGER %I
        AFTER INSERT OR UPDATE OR DELETE ON %I.%I
        FOR EACH ROW
        EXECUTE FUNCTION log_to_journal()',
        table_name || '_to_journal', schema_name, table_name);
    END IF;
  END LOOP;
END;
$$;
CREATE EVENT TRIGGER create_journal_after_table
ON ddl_command_end
WHEN TAG IN ('CREATE TABLE')
EXECUTE FUNCTION create_journal_table();

CREATE OR REPLACE FUNCTION log_to_journal()
RETURNS trigger
LANGUAGE plpgsql
AS $$
DECLARE
  journal_table text;
BEGIN
  journal_table := format('%I.%I', TG_TABLE_SCHEMA, TG_TABLE_NAME || '_journal');

  IF TG_OP = 'INSERT' THEN
    EXECUTE format(
      'INSERT INTO %s (action, old_data, new_data) VALUES ($1, $2::jsonb, $3::jsonb)',
      journal_table)
      USING TG_OP, NULL, to_jsonb(NEW);
  ELSIF TG_OP = 'UPDATE' THEN
    EXECUTE format(
      'INSERT INTO %s (action, old_data, new_data) VALUES ($1, $2::jsonb, $3::jsonb)',
      journal_table)
      USING TG_OP, to_jsonb(OLD), to_jsonb(NEW);
  ELSIF TG_OP = 'DELETE' THEN
    EXECUTE format(
      'INSERT INTO %s (action, old_data, new_data) VALUES ($1, $2::jsonb, $3::jsonb)',
      journal_table)
      USING TG_OP, to_jsonb(OLD), NULL;
  END IF;

  RETURN NULL; -- AFTER trigger
END;
$$;
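A quick end-to-end check, assuming both functions and the event trigger above are installed; the customers table is hypothetical:

CREATE TABLE customers (id int PRIMARY KEY, name text);
-- customers_journal was just created by the event trigger
INSERT INTO customers VALUES (1, 'Alice');
UPDATE customers SET name = 'Alicia' WHERE id = 1;
DELETE FROM customers WHERE id = 1;
SELECT action, old_data, new_data FROM customers_journal ORDER BY id;
-- Expect three rows: INSERT, UPDATE, DELETE with before/after images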
Use JSON_TABLE() to transform JSON data into tabular format.
SELECT *
FROM JSON_TABLE(
  '[{"name":"Alice","age":30},{"name":"Bob","age":25}]',
  '$[*]'
  COLUMNS (name TEXT PATH '$.name', age INT PATH '$.age')
) AS jt;
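For reference, the query above returns:

 name  | age
-------+-----
 Alice |  30
 Bob   |  25
(2 rows)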
You can now use MERGE ... RETURNING to retrieve rows that were inserted, updated, or deleted—all in one go.
MERGE INTO employees AS e
USING new_data AS d
ON e.id = d.id
WHEN MATCHED THEN
  UPDATE SET salary = d.salary
WHEN NOT MATCHED THEN
  INSERT (id, salary) VALUES (d.id, d.salary)
RETURNING *;
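PostgreSQL 17 also lets the RETURNING list call merge_action() to report which clause fired for each row; a sketch against the same tables:

MERGE INTO employees AS e
USING new_data AS d
ON e.id = d.id
WHEN MATCHED THEN
  UPDATE SET salary = d.salary
WHEN NOT MATCHED THEN
  INSERT (id, salary) VALUES (d.id, d.salary)
RETURNING merge_action() AS action, e.id, e.salary;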
log_min_duration_statement: logs only the statements whose duration exceeds this value (in milliseconds). This is very useful for detecting slow queries.
It sets the minimum execution time in milliseconds (ms) above which all statements will be logged. The default value for the log_min_duration_statement parameter is -1, which disables statement logging. Setting the PostgreSQL parameter log_min_duration_statement to 0 will print the duration of every statement.
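To make it permanent server-wide, the parameter can also go in postgresql.conf; the 250ms threshold below is just an illustrative value:

# postgresql.conf
log_min_duration_statement = 250ms   # log every statement slower than 250 ms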
# set log_min_duration_statement=1;
SET
# show log_min_duration_statement;
 log_min_duration_statement
----------------------------
 1ms
(1 row)
LOG:  duration: 5.477 ms  statement: insert into abc values(1);
LOG:  duration: 8.656 ms  statement: insert into abc values(2);
Most fintech companies, which are required to log all user and application activities, set the most verbose option, log_statement=all. This is the easiest way to bloat the log files and invite storage issues if storage consumption is not monitored. pgAudit can be a smarter way of logging user activities: you can specify which classes of activity you want to log, such as READ, WRITE, DDL, or FUNCTION.
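A minimal pgAudit sketch, assuming the extension is installed and preloaded via shared_preload_libraries = 'pgaudit'; the class list here is illustrative:

CREATE EXTENSION pgaudit;
-- Log only WRITE and DDL activity; the noisy READ class stays off
ALTER SYSTEM SET pgaudit.log = 'write, ddl';
SELECT pg_reload_conf();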
PostgreSQL supports several methods for logging server messages, including stderr (the default), csvlog, and syslog. With this release, jsonlog is also supported, which is convenient for exporting logs and for debugging. To enable it, add jsonlog to the log_destination setting in the postgresql.conf file.
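The relevant lines, assuming PostgreSQL 15 or later:

# postgresql.conf
logging_collector = on         # jsonlog requires the logging collector
log_destination = 'jsonlog'    # or 'stderr,jsonlog' to keep plain-text logs too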
This is a nice feature to have when exporting logs to logging and monitoring tools such as Honeycomb and Datadog.
> create table main_table_historical_data
  as select * from main_table where create_date < '01-Jan-2020';
> delete from main_table where create_date < '01-Jan-2020';
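A common single-statement variant that moves the rows atomically, assuming the archive table already exists:

with moved as (
  delete from main_table
  where create_date < '01-Jan-2020'
  returning *
)
insert into main_table_historical_data
select * from moved;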