about

universal database + language

for: hedge funds, banks, manufacturers, iot, formula 1, ..
why: a hundred times faster than bigquery, redshift, databricks, snowflake, ..
who: arthur whitney (+ janet lustgarten, fintan quill, abby gruen, anton nikishaev)

thanks to:
 e.l. whitney [1920-1966] multiple putnam winner
 k.e. iverson [1920-2004] APL, turing award '79 [advisor]
 john cocke   [1925-2002] RISC, turing award '87 [advisor]

compare vs biggie shifty sparky flaky ..
 taxi: 1.1 billion rows   10-100 times faster/QP$
 taq:  1.1 trillion rows  100-INF times faster/QP$

benchmark

k-sql is consistently 100 times faster (or more) than redshift, bigquery, snowflake, spark, mongodb, postgres, ..
same data. same queries. same hardware. anyone can run the scripts.

benchmarks:
 taq  1.1 trillion trades and quotes
 taxi 1.1 billion nyc taxi rides
 stac ..

Taq 1.1T
 q1: select max price by sym,ex        from trade where sym in S
 q2: select sum size  by sym,time.hour from trade where sym in S
 q3: do(100)select last bid by sym from quote where sym in S    / point select
 q4: select from trade[s],quote[s] where price<bid              / asof join
 S is top 100 (10%)

time(ms) 16core 100 days
            q1     q2     q3   q4
 k          44     72     63   20
 spark      80000  70000  DNF  DNF  - can't do it
 postgres   20000  80000  DNF  DNF  - can't do it
..
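q4 above is an asof join: each trade is matched with the prevailing quote (the last quote at or before the trade's time) before the price<bid filter is applied. A minimal pure-Python sketch of that semantics on made-up single-symbol data (times, prices, and names here are illustrative, not from the benchmark); k does this natively across billions of rows:

```python
from bisect import bisect_right

# made-up toy data for one symbol, sorted ascending by time (ms)
quotes = [(100, 10.0), (200, 10.5), (300, 11.0)]   # (time, bid)
trades = [(150, 10.2), (250, 10.4), (350, 12.0)]   # (time, price)

def asof_join(trades, quotes):
    # for each trade, take the last quote at or before the trade's time
    qtimes = [t for t, _ in quotes]
    out = []
    for t, price in trades:
        i = bisect_right(qtimes, t) - 1
        if i >= 0:
            out.append((t, price, quotes[i][1]))   # (time, price, prevailing bid)
    return out

# where price < bid: keep trades printing below the prevailing bid
hits = [r for r in asof_join(trades, quotes) if r[1] < r[2]]
```

The binary search makes each lookup O(log n); a production engine would instead merge the two time-sorted streams in one linear pass.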
Taxi 1.1B
 q1: select count by type from trips
 q2: select avg amount by pcount from trips
 q3: select count by year,pcount from trips
 q4: select count by year,pcount,_ distance from trips

            cpu   cost    core/ram   elapsed       machines
 k             4  .0004       4/16   1             1*i3.2xlarge(8v/32/$.62+$.93)
 redshift    864  .0900   108/1464   8(1 2 2 3)    6*ds2.8xlarge(36v/244/$6.80)
 bigquery   1600  .3200   200/3200   8(2 2 1 3)
 db/spark   1260  .0900     42/336   30(2 4 4 20)  21*m5.xlarge(4v/16/$.20+$.30)

Stac ..

taq.k / nyse taq: 1 trillion rows (50 million trades, 1 billion quotes per day)
 e:{x%+/x:exp|6*(!x)%x}
 N:1|_.5+50e6*x:e m:6000
 t:{[[]t:09:30:00+x?06:30:00;e:x?"ABCD";p:10+x?90.;z:100+x?900]}
 q:{[[]t:09:30:00+x?06:30:00;e:x?"ABCD";b:10+x?90.]}
 A:100#S:m?`4
 T:S!t'N
 Q:S!q'N
 d:16
 \t:d select sum p by e from T A
 \t:d select sum z by t.h from T A
 \t:d*100 {select last b from x}'Q A
 \
 a:*A
 \t select from T a,Q a where p<b

time(ms) 16core 100 days
            q1     q2     q3   q4
 k          44     72     63   20
 spark      80000  70000  DNF  DNF
 postgres   20000  80000  DNF  DNF
..

/SAVE csv
 \t `t.csv 2:t
 \t `q.csv 2:q
/LOAD csv
 \t t:`s=2:`t.csv
 \t q:`s=2:`q.csv
 \t `taq 2:(t;q)

 d:2017.01.01+m*!n
 g:{[[]v:x?2;p:x?9;m:x?100;a:x?2.3e]}
 t:d!g'n#380000
 m*n*.38e6
 \t:m select count by v from t
 \t:m select avg a by p from t
 \t:m select count by d.year,p from t
 \t:m select count by d.year,p,m from t
 \
/data curl -s > 2017.01 ..
 import`csv
 \t x:1:`2017.01
 \t t:+`v`d`p`m`a!+csv["bd ii 2";x]
 \t t:`d grp t
 \t "t/"2:t

1.1 billion taxi rides, apples to apples (same data. same hardware. same queries.)
k is 100 times faster than spark, redshift, snowflake, bigquery ..
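The taq.k generator above sizes each symbol with the weight function e:{x%+/x:exp|6*(!x)%x} -- reversed exponential weights normalized to sum to 1, so a few symbols carry most of the volume. A rough stdlib-Python transcription of those two lines (function and variable names are mine, not from the script):

```python
import math

def e(n):
    # k: e:{x%+/x:exp|6*(!x)%x}
    # exp of 6*(0..n-1)/n, reversed, then normalized to sum to 1
    w = [math.exp(6 * (n - 1 - i) / n) for i in range(n)]
    s = sum(w)
    return [v / s for v in w]

# k: N:1|_.5+50e6*x:e m:6000
# per-symbol row counts for 6000 symbols, totalling ~50 million trades/day
m = 6000
N = [max(1, math.floor(0.5 + 50e6 * p)) for p in e(m)]
```

The heaviest symbol gets roughly 400 times the rows of the lightest, mimicking real market concentration.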
select v,count(*)as n from t group by v
select p,avg(a)as a from t group by p
select extract(year from d)as year,p,count(*)as n from t group by year,p
select extract(year from d)as year,p,m,count(*)as n from t group by year,p,m

timings (aws i3.4xlarge)
       k    sparky  shifty  flaky
 q1    .0   12      19      20+
 q2    .3   18      15      30+
 q3    .1   20      33      50+
 q4    .5   103     36      60+
 -------------------------------
 sum   .9   153     103     160+

bottom line: sparky/shifty/flaky/googly are expensive-slow.

document

k.d

python: import k;k.k('2+3')
nodejs: require('k').k('2+3')

$k [-n4 -p1024] a.k

t:[[]t:09:30:00.000 09:30:00.001;e:"b";s:`aa`aa;v:2 3;p:2.3 3.4]
x:"t,e,s,v,p\n09:30:00.000,b,aa,2,2.3\n09:30:00.001,b,aa,3,3.4\n"
`csv?`csv t /also `json?`json t
`lz4?`lz4 x /also `zstd?`zstd x
\\

verb               adverb                   noun                    system
: x y              f' each                  char  " ab"             \l a.k
+ flip plus        [x]f/ over   c/ join     name  ``ab              \t:n x
- minus minus      [x]f\ scan   c\ split    int   2 3               \u:n x
* first times      [y]f':eachprior          flt   2 3.4             \v
% divide           f/:eachright g/:over     date  2021.06.28 .z.d   \w
& where min/and    f\:eachleft  g\:scan     time  12:34:56.789 .z.t
| reverse max/or
< asc less         i/o (*enterprise)        class
> desc more         0: r/w line             list   (2;3.4;`c)
= group equal       1: r/w char             dict   [n:`b;i:2]
~ not match        *2: r/w data             func   {[a;b]a+b}
! key key          *3: k-ipc set            expr   :a+b
, enlist cat       *4: https get
^ sort [f]cut       5: ffi/import
# count [f]take
_ floor [f]drop
$ string parse     $[b;t;f] cond
? unique find/rand
@ type [f]at       @[x;i;f[;y]] amend       table  [[]n:`b`c;i:2 3]
. value [f]dot     .[x;i;f[;y]] dmend       utable [[n:`b`c]i:2 3]

count first last min max sum avg var dev [med ..]
select A by B from T where C; update A from T; delete from T where C
sqrt sqr exp log sin cos div mod bar in bin
/comment \trace [:return 'signal if do while]

sql.d

shakti universal database includes:
 ansi-sql [1992..2011] ok for row/col select.
 real-sql [1974..2021] atw@ipsa does it better.
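The four ANSI taxi queries above are exactly the "ok for row/col select" case: plain group-by aggregation that any engine can express. They run unchanged on a toy table with stdlib sqlite3 (rows below are made up; sqlite spells extract(year from d) as strftime('%Y',d)):

```python
import sqlite3

# made-up miniature of the taxi trips table:
# v(endor), d(ate), p(count), m(iles), a(mount)
con = sqlite3.connect(":memory:")
con.execute("create table t(v int, d text, p int, m int, a real)")
con.executemany("insert into t values(?,?,?,?,?)", [
    (1, "2017-01-03", 1, 5, 9.5),
    (0, "2017-01-04", 2, 12, 14.0),
    (1, "2016-06-01", 1, 3, 7.25),
])

# q1: select v,count(*)as n from t group by v
q1 = sorted(con.execute("select v,count(*)as n from t group by v"))

# q3: strftime('%Y',d) stands in for extract(year from d)
q3 = sorted(con.execute(
    "select strftime('%Y',d)as year,p,count(*)as n from t group by year,p"))
```

Compare with the k forms (select count by v from t; select count by d.year,p from t): the grouping columns never have to be repeated in the select list or an order by.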
join: real-easy ansi-ok
 real: select from T,U
 ansi: select from T left outer join U

group: real-easy ansi-annoy
 real: select A by B from T
 ansi: select B,A from T group by B order by B

simple: real-easy ansi-easy
 real: select A from T where C or D,E
 ansi: select A from T where (C or D)and E

complex: real-easy ansi-awful
 asof/joins    select from t,q where price<bid
 first/last    select last bid from quote where sym=`A
 deltas/sums   select from t where 0<deltas price
 foreignkeys   select order.cust.nation.region ..
 arithmetic    x+y e.g. combine markets through time

example: TPC-H National Market Share Query 8
what market share does supplier.nation BRAZIL have by order.year
for order.customer.nation.region AMERICA and part.type STEEL?

real:
 select revenue avg supplier.nation=`BRAZIL by order.year from t
  where order.customer.nation.region=`AMERICA, part.type=`STEEL

ansi:
 select o_year,
  sum(case when nation = 'BRAZIL' then revenue else 0 end) / sum(revenue) as mkt_share
 from (
  select extract(year from o_orderdate) as o_year, revenue, n2.n_name as nation
  from t,part,supplier,orders,customer,nation n1,nation n2,region
  where p_partkey = l_partkey
   and s_suppkey = l_suppkey
   and l_orderkey = o_orderkey
   and o_custkey = c_custkey
   and c_nationkey = n1.n_nationkey
   and n1.n_regionkey = r_regionkey
   and r_name = 'AMERICA'
   and s_nationkey = n2.n_nationkey
   and o_orderdate between date '1995-01-01' and date '1996-12-31'
   and p_type = 'STEEL') as all_nations
 group by o_year
 order by o_year;

Comparison: real vs ansi (sqlserver/oracle/db2/sap/teradata/..)
             real                ansi
 install     1 second            100,000 second
 hardware    1 milliwatt         100,000 milliwatt
 software    160 kilobyte        8,000,000 kilobyte (+ 10,000,000 kilobyte O/S)
 mediandb    1,000,000 megarow   10 megarow

shakti is essential for analyzing big (trillion row+) and/or complex data.

download

express license
 li2.0  10.13  you are accepting terms of eula by downloading li2.0
 mi2.0  10.07  you are accepting terms of eula by downloading mi2.0

enterprise
 Li2.0  10.13  you are accepting terms of eula by downloading Li2.0
 Mi2.0  10.07  you are accepting terms of eula by downloading Mi2.0

json.dylib  use simdjson
lz4.dylib   use lz4
zstd.dylib  use zstd