shakti

about

who -   hedgefunds banks formula1 manufacturing retail genomics
what -  universal database: relationalDB timeseriesDB arrayDB documentDB objectDB graphDB.
        oltp/rdb/operational  +log non-stop multi-thread  8TB log 24TB scratch per process.
        olap/hdb/analytical   date non-stop multi-thread distributed  96TB per process.
why -   100 times faster (& more analysis) than redshift snowflake spark bigquery ..
when -  now
where - everywhere  linux/macos/..  intel/amd/arm/..

arthur whitney, janet lustgarten, fintan quill, abby gruen, anton nikishaev, ..
inspired by e.l. whitney [1920-1966] multiple putnam winner and k.e. iverson [1920-2004] apl/turing award winner

compare vs shifty flaky sparky cloudy ..
li2.0 (express)    10 times faster
Li2.0 (enterprise) 100 times faster
taxi: 1.1 billion   10-100 times faster / queries per $
taq:  1.1 trillion  100-INF times faster / queries per $

2.0   sql: select A by B from T where C   ffi: csv json lz4 zstd   iff: python nodejs k('select ..')
next
2.1   n256/decimal  tickerplant
2.2   sql2011: tpc-ds ..   python4: python + pandas + numpy

benchmark

k-sql is consistently 100 times faster (or more) than redshift, bigquery, snowflake, spark, mongodb, postgres, ..
same data. same queries. same hardware. anyone can run the scripts.
benchmarks: taq 1.1 trillion trades and quotes; taxi 1.1 billion nyc taxi rides; stac ..

Taq 1.1T  https://www.nyse.com/publicdocs/nyse/data/Daily_TAQ_Client_Spec_v2.2a.pdf
q1: select max price by sym,ex from trade where sym in S
q2: select sum size by sym,time.hour from trade where sym in S
q3: do(100) select last bid by sym from quote where sym in S   / point select
q4: select from trade[s],quote[s] where price<bid              / asof join
S is top 100 (10%)

time(ms) 16core 100 days      q1      q2      q3      q4
k                             44      72      63      20
spark                      80000   70000     DNF     DNF   - can't do it
postgres                   20000   80000     DNF     DNF   - can't do it
..

Taxi 1.1B  https://tech.marksblogg.com/benchmarks.html
q1: select count by type from trips
q2: select avg amount by pcount from trips
q3: select count by year,pcount from trips
q4: select count by year,pcount,_ distance from trips

           cpu    cost    core/ram   elapsed        machines
k            4   .0004        4/16   1              1*i3.2xlarge (8v/32/$.62+$.93)
redshift   864   .0900    108/1464   8 (1 2 2 3)    6*ds2.8xlarge (36v/244/$6.80)
bigquery  1600   .3200    200/3200   8 (2 2 1 3)
db/spark  1260   .0900      42/336   30 (2 4 4 20)  21*m5.xlarge (4v/16/$.20+$.30)

Stac  https://www.stacresearch.com/ ..

taq.k
/ nyse taq: 1 trillion rows (50 million trades, 1 billion quotes per day)
e:{x%+/x:exp|6*(!x)%x}
N:1|_.5+50e6*x:e m:6000
t:{[[]t:09:30:00+x?06:30:00;e:x?"ABCD";p:10+x?90.;z:100+x?900]}
q:{[[]t:09:30:00+x?06:30:00;e:x?"ABCD";b:10+x?90.]}
A:100#S:m?`4
T:S!t'N
Q:S!q'N
d:16
\t:d select sum p by e from T A
\t:d select sum z by t.h from T A
\t:d*100 {select last b from x}'Q A
\
a:*A
\t select from T a,Q a where p<b

time(ms) 16core 100 days      q1      q2      q3      q4
k                             44      72      63      20
spark                      80000   70000     DNF     DNF
postgres                   20000   80000     DNF     DNF
..

/SAVE csv
\t `t.csv 2:t
\t `q.csv 2:q
/SAVE LOAD csv
\t t:`s=2:`t.csv
\t q:`s=2:`q.csv
\t `taq 2:(t;q)

taxi.k
m:_2922%n:16
d:2017.01.01+m*!n
g:{[[]v:x?2;p:x?9;m:x?100;a:x?2.3e]}
t:d!g'n#380000
m*n*.38e6
\t:m select count by v from t
\t:m select avg a by p from t
\t:m select count by d.year,p from t
\t:m select count by d.year,p,m from t
\
/data curl -s https://s3.amazonaws.com/nyc-tlc/trip+data/yellow_tripdata_2017-01.csv > 2017.01 ..
import`csv
\t x:1:`2017.01
\t t:+`v`d`p`m`a!+csv["bd ii 2";x]
\t t:`d grp t
\t "t/"2:t
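the q1-q4 numbers above come from the \t lines in taq.k and taxi.k. a minimal sketch of the same pattern on a toy table, assuming \t:n x (reference card below) times n runs of x and reports elapsed ms, as the time(ms) tables suggest; the table t and the run count here are illustrative, built from the same forms as taq.k's t:

t:[[]e:4?"ABCD";p:10+4?90.]      / toy 4-row table: exchange code e, price p (forms from taq.k)
\t:100 select sum p by e from t  / time 100 runs of the q1-style aggregation; prints elapsed ms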
1.1 billion taxi rides  https://tech.marksblogg.com/benchmarks.html
apples to apples (same data. same hardware. same queries.)
k is 100 times faster than spark redshift snowflake bigquery ..

select v,count(*) as n from t group by v
select p,avg(a) as a from t group by p
select extract(year from d) as year,p,count(*) as n from t group by year,p
select extract(year from d) as year,p,m,count(*) as n from t group by year,p,m

timings (aws i3.4xlarge)
            k   sparky   shifty   flaky
q1         .0       12       19     20+
q2         .3       18       15     30+
q3         .1       20       33     50+
q4         .5      103       36     60+
        --------------------------------
total      .9      153      103    160+

https://tech.marksblogg.com/billion-nyc-taxi-rides-spark-2-4-versus-presto-214.html
https://tech.marksblogg.com/billion-nyc-taxi-rides-redshift-large-cluster.htm
https://blog.tropos.io/analyzing-2-billion-taxi-rides-in-snowflake-a1fbed8b7ba3

bottomline: sparky/shifty/flaky/googly are expensive-slow.

document

k.d
$ curl -o k.so shakti.com/python/k.so
python: import k;k.k('2+3')
nodejs: require('k').k('2+3')
$ k [-p 1024] [a.k]
t:[[]t:09:30:00.000+!2;e:"b";s:`aa;v:2;p:2.3e]
`csv?`csv t;`json?`json t
`lz4?`lz4 t;`zstd?`zstd t

verb                   adverb                     noun                      \l a.k
:  x       y           f'     each                char  " ab"               \t:n x
+  flip    plus        [x]f/  over    c/ join     name  ``ab                \u:n x
-  minus   minus       [x]f\  scan    c\ split    int   2 3                 \v
*  first   times       [y]f': eachprior           flt   2 3.4               \w
%          divide      f/: eachright  g/: over    date  2021.06.28   .z.d
&  where   min/and     f\: eachleft   g\: scan    time  12:34:56.789 .z.t
|  reverse max/or
<  asc     less        i/o (*enterprise)          class
>  desc    more        0:  r/w line               list   (2;3.4;`c)
=  group   equal       1:  r/w char               dict   [n:`b;i:2]
~  not     match       *2: r/w data               func   {[a;b]a+b}
!  key     key         *3: k-ipc set              expr   :a+b
,  enlist  cat         *4: https get
^  sort    [f]cut      5:  ffi/import
#  count   [f]take
_  floor   [f]drop
$  string  parse       $[b;t;f] cond
?  unique  find/rand   @ type [f]at               @[x;i;f[;y]] amend        table  [[]n:`b`c;i:2 3]
.  value   [f]dot                                 .[x;i;f[;y]] dmend        utable [[n:`b`c]i:2 3]

count first last min max sum avg var dev [med ..]
select A by B from T where C; update A from T; delete from T where C
sqrt sqr exp log sin cos div mod bar in bin
/comment \trace [:return 'signal if do while]

sql.d

shakti universal database includes:
ansi-sql [1992..2011]  ok for row/col select.
real-sql [1974..2021]  atw@ipsa does it better.

join:    real-easy  ansi-ok
  real: select from T,U
  ansi: select from T left outer join U
group:   real-easy  ansi-annoy
  real: select A by B from T
  ansi: select B, A from T group by B order by B
simple:  real-easy  ansi-easy
  real: select A from T where C or D, E
  ansi: select A from T where (C or D) and E
complex: real-easy  ansi-awful
  asof/joins   select from t,q where price<bid
  first/last   select last bid from quote where sym=`A
  deltas/sums  select from t where 0<deltas price
  foreignkeys  select order.cust.nation.region ..
  arithmetic   x+y  e.g. combine markets through time
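a minimal sketch of the first/last and deltas forms on a toy table; the sym/price column names follow the examples above, the data is illustrative. the asof-join form is exercised in taq.k above (select from T a,Q a where p<b):

t:[[]sym:`A`A`B;price:10 12 11.]     / toy trade table (illustrative)
select last price by sym from t      / first/last: latest price per symbol
select from t where 0<deltas price   / deltas: rows where price increased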
example: TPC-H National Market Share Query 8  http://www.qdpma.com/tpch/TPCH100_Query_plans.html
what market share does supplier.nation BRAZIL have by order.year
for order.customer.nation.region AMERICA and part.type STEEL?

real:
select revenue avg supplier.nation=`BRAZIL by order.year from t
 where order.customer.nation.region=`AMERICA, part.type=`STEEL

ansi:
select o_year,
 sum(case when nation = 'BRAZIL' then revenue else 0 end) / sum(revenue) as mkt_share
from (
 select extract(year from o_orderdate) as o_year, revenue, n2.n_name as nation
 from t,part,supplier,orders,customer,nation n1,nation n2,region
 where p_partkey = l_partkey and s_suppkey = l_suppkey and l_orderkey = o_orderkey
  and o_custkey = c_custkey and c_nationkey = n1.n_nationkey
  and n1.n_regionkey = r_regionkey and r_name = 'AMERICA'
  and s_nationkey = n2.n_nationkey
  and o_orderdate between date '1995-01-01' and date '1996-12-31'
  and p_type = 'STEEL') as all_nations
group by o_year order by o_year;

comparison   real                 ansi (sqlserver/oracle/db2/sap/teradata/..)
install      1 second             100,000 second
hardware     1 milliwatt          100,000 milliwatt
software     160 kilobyte         8,000,000 kilobyte (+ 10,000,000 kilobyte O/S)
mediandb     1,000,000 megarow    10 megarow
https://docs.microsoft.com/en-us/sql/database-engine/install-windows/install-sql-server?view=sql-server-ver15

shakti is essential for analyzing big (trillion row+) and/or complex data.

download

express license
li2.0  07.26   you are accepting terms of eula by downloading li2.0
mi2.0  07.26   you are accepting terms of eula by downloading mi2.0
enterprise
Li2.0  07.26   you are accepting terms of eula by downloading Li2.0
Mi2.0  07.26   you are accepting terms of eula by downloading Mi2.0

ffi

json.so json.dylib  use simdjson  https://github.com/simdjson/simdjson/blob/master/LICENSE
lz4.so  lz4.dylib   use lz4       https://github.com/lz4/lz4/blob/dev/lib/LICENSE
zstd.so zstd.dylib  use zstd      https://github.com/facebook/zstd/blob/dev/LICENSE

b.csv
t|e|s|v|p
071609644|P|AAPL|17|115.5

files:  csv.dylib csv.so  json.dylib json.so  lz4.dylib lz4.so  zstd.dylib zstd.so
nodejs: k.node m.node
python: k.so m.so
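a minimal sketch tying the codecs above to the serialize/parse pairs from the document section: `csv t serializes, `csv?x parses, and ~ (match, from the reference card) can check a round trip. whether csv/json reproduce t exactly depends on the codec; lz4/zstd are lossless. t is the sample table from the document section:

t:[[]t:09:30:00.000+!2;e:"b";s:`aa;v:2;p:2.3e]
`csv?`csv t      / csv: serialize, parse back
`json?`json t    / json: serialize, parse back
t~`lz4?`lz4 t    / lz4: compress, decompress, compare with ~ (match)
t~`zstd?`zstd t  / zstd: compress, decompress, compare with ~ (match)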