Sql of note
Example of Time bucket
select from_unixtime(timestamp - timestamp % 3600) as time, count(*)
from table
where data like 'Destination directory %does not exist at%'
  and from_unixtime(timestamp) > subdate(now(), interval 4 day)
group by timestamp - timestamp % 3600
order by time;
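The same hourly bucketing in Postgres can use date_trunc; a sketch, assuming an epoch-seconds column named ts and a table named events (both names are illustrative):

select date_trunc('hour', to_timestamp(ts)) as time, count(*)
from events
where ts > extract(epoch from now() - interval '4 days')
group by 1
order by 1;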
sqlite
list tables:
.tables
show the schema:
.schema
epoch (this example value is in milliseconds, so divide by 1000 before using 'unixepoch'):
SELECT datetime(1319017136629 / 1000, 'unixepoch', 'localtime');
Date / Time
These are SQLite date/time functions; they all return dates and times.
Last day of this month:
SELECT date('now','start of month','+1 month','-1 day');
first day of this month:
SELECT date('now','start of month');
first day of last month:
SELECT date('now','start of month','-1 month');
last day of last month:
SELECT date('now','start of month','-1 day');
get epoch:
select strftime('%s', '2012-09-30');
convert from epoch to date:
SELECT datetime(1092941466, 'unixepoch');
convert back and forth:
SELECT datetime(strftime('%s', '2012-09-30'), 'unixepoch');
X not in Y
Rows from one table whose hash does not appear in another:
SELECT filename FROM source WHERE hash NOT IN (SELECT hash FROM destination);
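Note that NOT IN returns no rows at all if the subquery produces any NULL hash; an anti-join avoids that. A sketch using the same source and destination tables:

SELECT s.filename
FROM source s
LEFT JOIN destination d ON d.hash = s.hash
WHERE d.hash IS NULL;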
Also see
* [[/Pivoting in sql]]
* [[Postgres Notes]]
transpose wide tables
https://www.periscopedata.com/blog/using-row-numbering-and-full-joins-to-transpose-wide-tables
grouping
Redshift:
https://docs.aws.amazon.com/redshift/latest/dg/r_Examples_of_avg_WF.html
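The Redshift doc linked above covers AVG as a window function; a minimal sketch of the pattern, assuming a hypothetical table sales(salesperson, qty):

select salesperson, qty,
       avg(qty) over (partition by salesperson) as avg_qty_per_person
from sales
order by salesperson, qty;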
multiple aggregates
https://blog.jooq.org/2017/04/20/how-to-calculate-multiple-aggregate-functions-in-a-single-query/
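The gist of the linked article: compute several conditional aggregates in one table scan instead of running several queries. A sketch, assuming a hypothetical table orders(status, amount):

select count(*)                                          as total_orders,
       count(case when status = 'shipped' then 1 end)    as shipped_orders,
       sum(case when status = 'shipped' then amount end) as shipped_amount
from orders;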
proportion of rows with values
given table:
id | started_at | completed_at
------------------------------
 1 | 2018 01 01 | 2018 01 03
 2 | 2018 01 02 |
 3 | 2018 01 03 | 2018 01 10
 4 | 2018 01 04 | 2018 01 06
 5 | 2018 01 05 |
 6 | 2018 01 06 | 2018 01 13
What percent were completed?
this gives plain integer counts; count(completed_at) counts the rows with a non-NULL completion date (distinct would undercount if two rows shared a date):
select count(g.id), count(g.completed_at) from g;
cast and round to get a two-decimal fraction:
select cast(cast(count(g.completed_at) as decimal) / cast(count(g.id) as decimal) as decimal(9,2)) from g;
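A shorter way to get the same fraction in Postgres, a sketch using a FILTER clause against the same table g:

select round(count(*) filter (where completed_at is not null)::decimal / count(*), 2) as completed_fraction
from g;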
generate a sequence
select a.a as a, b.a as b, c.a as c,
       CAST(a.a AS INTEGER) as aint,
       CAST(b.a AS INTEGER) as bint,
       CAST(c.a AS INTEGER) as cint,
       CAST(a.a AS INTEGER) * 100 + CAST(b.a AS INTEGER) * 10 + CAST(c.a AS INTEGER) as sum
from (select 0 as a union all select 1 union all select 2 union all select 3 union all select 4
      union all select 5 union all select 6 union all select 7 union all select 8 union all select 9) as a
cross join (select 0 as a union all select 1 union all select 2 union all select 3 union all select 4
      union all select 5 union all select 6 union all select 7 union all select 8 union all select 9) as b
cross join (select 0 as a union all select 1 union all select 2 union all select 3 union all select 4
      union all select 5 union all select 6 union all select 7 union all select 8 union all select 9) as c;
wow that's complicated; we're on Postgres, so let's try something simpler:
SELECT a.n from generate_series(1, 100) as a(n), generate_series(1, 3);
(the cross join with generate_series(1, 3) repeats each value of n three times)
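generate_series also works directly for dates, which pairs nicely with the time-bucket trick above; a sketch:

select generate_series(date '2018-01-01', date '2018-01-10', interval '1 day') as day;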