about

universal database and analytics

why: 100 times faster than spark, redshift, bigquery and snowflake.
who: arthur whitney, janet lustgarten, fintan quill, abby gruen, anton nikishaev
inspiration:
 e.l. whitney [1920-1966] multiple putnam winner
 k.e. iverson [1920-2004] turing award winner

benchmark

k-sql is consistently 100 times faster (or more) than redshift, bigquery, snowflake, spark, mongodb, postgres, ..
same data. same queries. same hardware. anyone can run the scripts.

benchmarks:
 taq  1.1 trillion trades and quotes
 taxi 1.1 billion nyc taxi rides
 stac ..

taq 1.1T
 q1: select max price by sym,ex from trade where sym in S
 q2: select sum size by sym,time.hour from trade where sym in S
 q3: do(100)select last bid by sym from quote where sym in S  / point select
 q4: select from trade[s],quote[s] where price<bid            / asof join
 S is top 100 (10%)

time(ms) 16 core, 100 days
             q1     q2   q3   q4
 k           44     72   63   20
 spark    80000  70000  DNF  DNF  / can't do it
 postgres 20000  80000  DNF  DNF  / can't do it
 ..

taxi 1.1B
 q1: select count by type from trips
 q2: select avg amount by pcount from trips
 q3: select count by year,pcount from trips
 q4: select count by year,pcount,_ distance from trips

           cpu    cost  core/ram  elapsed       machines
 k           4   .0004      4/16  1             1*i3.2xlarge(8v/32/$.62+$.93)
 redshift  864   .0900  108/1464  8(1 2 2 3)    6*ds2.8xlarge(36v/244/$6.80)
 bigquery 1600   .3200  200/3200  8(2 2 1 3)
 db/spark 1260   .0900    42/336  30(2 4 4 20)  21*m5.xlarge(4v/16/$.20+$.30)

stac ..

taq.k / nyse taq: 1 trillion rows (50 million trades and 1 billion quotes per day)
e:{x%+/x:exp|6*(!x)%x}
N:1|_.5+50e6*x:e m:6000
t:{[[]t:09:30:00+x?06:30:00;e:x?"ABCD";p:10+x?90.;z:100+x?900]}
q:{[[]t:09:30:00+x?06:30:00;e:x?"ABCD";b:10+x?90.]}
A:100#S:m?`4
T:S!t'N
Q:S!q'N
d:16
\t:d select sum p by e from T A
\t:d select sum z by t.h from T A
\t:d*100 {select last b from x}'Q A
\
a:*A
\t select from T a,Q a where p<b
..
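The weight function e in taq.k above models the skew of market volume: a few symbols get most of the rows. A small-scale sketch (the argument 5 is illustrative, not from the script):

```k
e:{x%+/x:exp|6*(!x)%x}   / exp of a reversed 0..6 ramp, normalized
w:e 5                    / 5 weights, most-active symbol first
+/w                      / sums to 1 by construction
```

In the script, N:1|_.5+50e6*e m:6000 scales these weights into per-symbol row counts totaling roughly 50 million trades per day.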
/SAVE csv
\t `t.csv 2:t
\t `q.csv 2:q

/LOAD csv, SAVE taq
\t t:`s=2:`t.csv
\t q:`s=2:`q.csv
\t `taq 2:(t;q)

taxi.k
d:2017.01.01+m*!n
g:{[[]v:x?2;p:x?9;m:x?100;a:x?2.3e]}
t:d!g'n#380000
m*n*.38e6
\t:m select count by v from t
\t:m select avg a by p from t
\t:m select count by d.year,p from t
\t:m select count by d.year,p,m from t
\

/data curl -s > 2017.01 ..
import`csv
\t x:1:`2017.01
\t t:+`v`d`p`m`a!+csv["bd ii 2";x]
\t t:`d grp t
\t "t/"2:t

1.1 billion taxi rides
apples to apples (same data. same hardware. same queries.)
k is 100 times faster than spark, redshift, snowflake, bigquery ..

select v,count(*)as n from t group by v
select p,avg(a)as a from t group by p
select extract(year from d)as year,p,count(*)as n from t group by year,p
select extract(year from d)as year,p,m,count(*)as n from t group by year,p,m

timings (aws i3.4xlarge)
   k  sparky  shifty  flaky
  .0      12      19    20+
  .3      18      15    30+
  .1      20      33    50+
  .5     103      36    60+
 ---------------------------
  .9     153     103   160+

bottomline: sparky/shifty/flaky/googly are expensive and slow.

document

man.d

customer: hedgefund bank formula1 manufacturing ..
database:
 hdb/trillion (read: billion/sec/core)
 rdb/billion (write: million/sec, latency: us)
2.0: hdb rdb+log distributed-db
iff: python nodejs
ffi: csv json lz4 zstd
bench: taxi taq
type: ncif[dt]

select a by b from t where c
 a: count sum min max avg var
 b: cbgh[1 or 2] nidt
 t: t D!T
 c: t
rdb/oltp/operational[log] T
hdb/olap/analytical D!T

proc: 96TB (3000 days of 32GB per day), 8TB log, 24TB scratch
db can be distributed over procs and machines, but don't bother unless you have >10GB per day.
bigdata: date!.. or date!sym!..
write one file per day in a directory. instant load. run production as 2 instances.

express is extremely fast:
 sql query 10GB/sec (10 times faster than bigquery, snowflake, etc.)
 csv read/write 1GB/sec (10 times faster than bigquery, snowflake, etc.)
enterprise is 10 times faster (multi-thread)

taxi: 10-100  1.1 billion  query $cost  shakti vs spark/postgres/redshift/bigquery/azure/..
taq: 100-inf  1.1 trillion  query $cost  shakti vs spark/postgres

2.1: n256/decimal stac/tpc-ds tickerplant notes
mach: 8 core per TB hot (e.g. aws i3en.4x)
Abug (prod/run): no [eval err]
Bbug (test/dev)

sql.d

shakti universal database includes:
 ansi-sql [1992..2011] ok for row/col select.
 real-sql [1974..2021] atw@ipsa does it better.

join: real-easy, ansi-ok
 real: select from T,U
 ansi: select from T left outer join U

group: real-easy, ansi-annoying
 real: select A by B from T
 ansi: select B, A from T group by B order by B

simple: real-easy, ansi-easy
 real: select A from T where C or D, E
 ansi: select A from T where (C or D) and E

complex: real-easy, ansi-awful
 asof joins    select from t,q where price<bid
 first/last    select last bid from quote where sym=`A
 deltas/sums   select from t where 0<deltas price
 foreign keys  select order.cust.nation.region ..
 arithmetic    x+y, e.g. combine markets through time

example: TPC-H national market share (query 8)
what market share does supplier.nation BRAZIL have by order.year
for order.customer.nation.region AMERICA and part.type STEEL?

real:
 select revenue avg supplier.nation=`BRAZIL by order.year from t
  where order.customer.nation.region=`AMERICA, part.type=`STEEL

ansi:
 select o_year,
  sum(case when nation = 'BRAZIL' then revenue else 0 end)/sum(revenue) as mkt_share
 from (select extract(year from o_orderdate) as o_year, revenue, n2.n_name as nation
  from t,part,supplier,orders,customer,nation n1,nation n2,region
  where p_partkey = l_partkey and s_suppkey = l_suppkey
   and l_orderkey = o_orderkey and o_custkey = c_custkey
   and c_nationkey = n1.n_nationkey and n1.n_regionkey = r_regionkey
   and r_name = 'AMERICA' and s_nationkey = n2.n_nationkey
   and o_orderdate between date '1995-01-01' and date '1996-12-31'
   and p_type = 'STEEL') as all_nations
 group by o_year order by o_year;

comparison: real vs ansi (sqlserver/oracle/db2/sap/teradata/..)
           real               ansi
install    1 second           100,000 second
hardware   1 milliwatt        100,000 milliwatt
software   160 kilobyte       8,000,000 kilobyte (+ 10,000,000 kilobyte O/S)
mediandb   1,000,000 megarow  10 megarow

shakti is essential for analyzing big (trillion row+) and/or complex data.

download

express license
 li2.0 06.09  you are accepting terms of eula by downloading li2.0
 mi2.0 06.07  you are accepting terms of eula by downloading mi2.0
enterprise license
 Li2.0 06.09  you are accepting terms of eula by downloading Li2.0
 Mi2.0 06.07  you are accepting terms of eula by downloading Mi2.0

ffi

import`csv
csv["|tcnif";1:":ffi/t.csv"]

import`json
json"[3.14]"

t.csv:
 t|e|s|v|p
 071609644084992|P|AAPL|17|115
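Combining the ffi calls above with the column-naming pattern from taxi.k, a hedged sketch of loading t.csv into a named table — the column names `t`e`s`v`p are assumptions read off the sample header, and the format string "|tcnif" is taken to mean "|"-delimited with one type character per column:

```k
import`csv
x:1:":ffi/t.csv"                / read the raw file
t:+`t`e`s`v`p!+csv["|tcnif";x]  / parse columns, name them, flip to a table
```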