= Postgres Notes =
== Wide output ==

Good for looking at a FEW records in detail:

<pre>
\x
select * from users limit 1;
</pre>
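\x toggles expanded (one field per line) output; psql also supports \x auto, which expands only rows too wide for the screen.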

== An app user ==

Adding a secure, least-privilege user for your app, so the app doesn't run as a superuser.
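First create the role itself (a minimal sketch; myuser and the password are placeholders):

<pre>
-- a plain login role: no SUPERUSER, CREATEDB, or CREATEROLE
CREATE ROLE myuser LOGIN PASSWORD 'change-me';
</pre>

Then grant it read access: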


<pre>
GRANT USAGE ON SCHEMA public TO myuser; -- more schemas?
GRANT SELECT ON ALL TABLES IN SCHEMA public TO myuser;
</pre>

This ensures that all subsequently created tables are accessible by that user (role):

<pre>
ALTER DEFAULT PRIVILEGES FOR ROLE mycreating_user IN SCHEMA public
GRANT SELECT ON TABLES TO myuser;
</pre>

ref: https://dba.stackexchange.com/questions/91953/grant-access-to-all-tables-of-a-database

full treatment: https://stackoverflow.com/questions/41537825/restricted-postgresql-permissions-for-web-app/47955583#47955583

== RDS gotchas ==

The RDS master user is not a true superuser: before it can create objects owned by another role, grant it membership in that role (RDS only).

https://stackoverflow.com/questions/26684643/error-must-be-member-of-role-when-creating-schema-in-postgresql
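A minimal sketch (app_owner and rds_master_user are placeholder names; run as the master user):

<pre>
GRANT app_owner TO rds_master_user;        -- make the master user a member of app_owner
CREATE SCHEMA app AUTHORIZATION app_owner; -- no longer fails with "must be member of role"
</pre>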

== per table cache hit ratios ==

<pre>
SELECT
  relname AS table,
  heap_blks_read AS heap_read,
  heap_blks_hit AS heap_hit,
  to_char( ( heap_blks_hit::decimal / ( heap_blks_hit::decimal + heap_blks_read::decimal ) ) * 100, '99.99' ) AS hit_pct
FROM
  pg_statio_user_tables
WHERE
  heap_blks_hit + heap_blks_read > 0   -- avoid division by zero
  AND relid > 10000                    -- skip low-OID (system) objects
ORDER BY relname;
</pre>

Don't forget to reset the statistics counters now and then with select pg_stat_reset();

== percent of time index used ==

<pre>
SELECT
  relname,
  100 * idx_scan / (seq_scan + idx_scan) AS percent_of_times_index_used,
  n_live_tup AS rows_in_table
FROM
  pg_stat_user_tables
WHERE
  seq_scan + idx_scan > 0
ORDER BY
  percent_of_times_index_used ASC;
</pre>

== vacuum ==

=== Global settings ===

<pre>
select * from pg_settings where category like 'Autovacuum';
</pre>

=== per table settings ===

<pre>
select relname, reloptions, pg_namespace.nspname
from pg_class
join pg_namespace on pg_namespace.oid = pg_class.relnamespace
where relname like 'data%' and pg_namespace.nspname = 'public';
</pre>
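To override autovacuum for a single table, set storage parameters on it (a sketch; the table name and thresholds are placeholders, not recommendations):

<pre>
ALTER TABLE data_events SET (
  autovacuum_vacuum_scale_factor = 0.05, -- vacuum once 5% of rows are dead
  autovacuum_vacuum_threshold    = 1000  -- plus this base row count
);
</pre>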


=== disk usage ===

<pre>
SELECT
       relname AS "table_name",
       pg_size_pretty(pg_table_size(C.oid)) AS "table_size"
FROM
       pg_class C
LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
WHERE nspname NOT IN ('pg_catalog', 'information_schema') AND nspname !~ '^pg_toast' AND relkind IN ('r')
ORDER BY pg_table_size(C.oid)
DESC LIMIT 20;
</pre>

=== stats ===

<pre>
SELECT
  relname,
  last_vacuum,
  last_autovacuum,
  vacuum_count,
  autovacuum_count
FROM
  pg_stat_user_tables
ORDER BY
  last_autovacuum ASC;
</pre>

also see: https://www.datadoghq.com/blog/postgresql-vacuum-monitoring/

== Dead tuples report ==

<pre>
SELECT relname, n_dead_tup FROM pg_stat_user_tables ORDER BY n_dead_tup DESC LIMIT 30;
</pre>

== Disk usage ==

https://wiki.postgresql.org/wiki/Disk_Usage

<pre>
SELECT *, pg_size_pretty(total_bytes) AS total
    , pg_size_pretty(index_bytes) AS INDEX
    , pg_size_pretty(toast_bytes) AS toast
    , pg_size_pretty(table_bytes) AS TABLE
  FROM (
  SELECT *, total_bytes-index_bytes-COALESCE(toast_bytes,0) AS table_bytes FROM (
      SELECT c.oid,nspname AS table_schema, relname AS TABLE_NAME
              , c.reltuples AS row_estimate
              , pg_total_relation_size(c.oid) AS total_bytes
              , pg_indexes_size(c.oid) AS index_bytes
              , pg_total_relation_size(reltoastrelid) AS toast_bytes
          FROM pg_class c
          LEFT JOIN pg_namespace n ON n.oid = c.relnamespace
          WHERE relkind = 'r'
  ) a
) a;
</pre>

== cache hit ratio summary ==

<pre>
SELECT
  sum(heap_blks_read) as heap_read,
  sum(heap_blks_hit)  as heap_hit,
  sum(heap_blks_hit) / (sum(heap_blks_hit) + sum(heap_blks_read)) as ratio
FROM
  pg_statio_user_tables;
</pre>

It's best to reset the counters somewhere in there:

<pre>
select pg_stat_reset();
</pre>

== Enable extensions ==

In AWS RDS Postgres, check that the extension you want is already in the parameter group.

Name: rds.extensions Values: ... big long string...

Then check the system variable:

<pre>
SHOW rds.extensions;
</pre>

Then enable it:

<pre>
CREATE EXTENSION IF NOT EXISTS tablefunc;
</pre>

Tablefunc gives you the awesome crosstabN() function.

References:
* Pivot table in postgres? (use crosstabN ...)
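A minimal crosstab sketch (the sales table and its columns are made up for illustration):

<pre>
-- source query must return (row_name, category, value), ordered by 1, 2
SELECT *
FROM crosstab(
  'SELECT name, quarter, revenue FROM sales ORDER BY 1, 2'
) AS ct(name text, q1 numeric, q2 numeric, q3 numeric);
</pre>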

== Random data ==

=== Generate ===


Random date:

<pre>
SELECT date_trunc('day', date '2014-01-10' + random() * interval '3 days' )::timestamp::date AS day, seq, u.id AS user_id
  FROM generate_series(1, 50) seq, users u;

SELECT TO_CHAR(day, 'YYYY-MM-DD'), random() FROM generate_series
        ( '2017-02-01'::date
        , '2017-04-01'::date
        , '1 day'::interval) day;
</pre>


<pre>
SELECT generate_series(1,10) AS id, md5(random()::text) AS descr;
</pre>
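To materialize rows like these as a throwaway test table (a sketch; test_data is a placeholder name):

<pre>
CREATE TABLE test_data AS
SELECT generate_series(1, 10000) AS id,
       md5(random()::text)       AS descr;
</pre>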

=== Select ===

Examine the shape of your id column:

<pre>
SELECT count(*) AS ct              -- optional
     , min(id)  AS min_id
     , max(id)  AS max_id
     , max(id) - min(id) AS id_span
FROM mytable;
</pre>

Let us suppose that your span is 5100000 and that your distribution of ids is dense:

<pre>
WITH params AS (
    SELECT 1       AS min_id           -- minimum id <= current min id
         , 5100000 AS id_span          -- rounded up. (max_id - min_id + buffer)
    )
SELECT *
FROM  (
    SELECT p.min_id + trunc(random() * p.id_span)::integer AS id
    FROM   params p
          ,generate_series(1, 1100) g  -- 1000 + buffer
    GROUP  BY 1                        -- trim duplicates
    ) r
JOIN   big USING (id)
LIMIT  1000;
</pre>

== Also See ==
* [[Sql of note]]

[[Category:DATA]]
[[Category:SQL]]