PostgreSQL simple query optimization
PostgreSQL 8.4; three tables: store (~100k rows, PK id, FKs supplier_id and item_id), supplier (~10 rows, PK supplier_id), item (~1000 rows, PK item_id).
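For reference, the schema described above might look like the following sketch; the column types and name lengths are assumptions, only the table names, keys, and approximate row counts come from the question:

```sql
-- Assumed DDL matching the description above; types are guesses.
create table supplier (
    supplier_id   integer primary key,
    supplier_name varchar(255)          -- ~10 rows
);

create table item (
    item_id   integer primary key,
    item_name varchar(255)              -- ~1000 rows
);

create table store (
    id          integer primary key,    -- ~100k rows
    supplier_id integer references supplier,
    item_id     integer references item,
    quantity    integer,
    price       numeric
);
```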
I created the following query for the data I need:

select store.quantity, store.price, x.supplier_name
from store natural join (select * from item natural join supplier) x
where store.price > 500
  and store.quantity > 0
  and store.quantity < 100
  and x.item_name = 'somename';
The query plan:

Nested Loop  (cost=20.76..6513.55 rows=8 width=229)
  ->  Hash Join  (cost=20.76..6511.30 rows=8 width=15)
        Hash Cond: (store.item_id = item.item_id)
        ->  Seq Scan on store  (cost=0.00..6459.00 rows=8388 width=23)
              Filter: ((price > 500::numeric) AND (quantity > 0) AND (quantity < 100))
        ->  Hash  (cost=20.75..20.75 rows=1 width=8)
              ->  Seq Scan on item  (cost=0.00..20.75 rows=1 width=8)
                    Filter: ((item_name)::text = 'somename'::text)
  ->  Index Scan using supplier_pkey on supplier  (cost=0.00..0.27 rows=1 width=222)
        Index Cond: (supplier.supplier_id = store.supplier_id)
Now the aim is to reduce the cost by more than 30% by optimizing the query itself. The instances of this problem I found were solved by modifying the tables or the server settings, but I'm looking at modifying nothing other than the query, and that's where I fell short in my research.
Clearly the issue to be solved is the seq scan, which brings me to thinking I need to arrange for the scanning/filtering to be applied to a subset of the store table. IIRC you need to scan the table in such a case, but maybe use something other than a seq scan? An index scan isn't going to do it, since we wouldn't be filtering on an index... I'm puzzled here, because this seems to be a choice the PostgreSQL optimizer makes, and not something I can change at will...
(If you're wondering: this is part of an assignment, and I'm asking here because I have spent quite a few hours researching the problem, failed to find anything relevant, and gave up on it. But I'm still curious...)
You can fix this with indexes. It is a little hard to tell the keys because of the "natural join"s. (I recommend using explicit join ... on syntax instead of natural join, so you can at least see the keys being used, and so that if one of the tables is modified later, it won't silently change the join.)
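As a sketch, the same query rewritten with explicit joins might look like this; the join keys are taken from the query plan above (store.item_id = item.item_id and store.supplier_id = supplier.supplier_id), not stated in the original query:

```sql
-- Equivalent query with the join conditions spelled out.
select s.quantity, s.price, sup.supplier_name
from store s
join item i      on i.item_id = s.item_id
join supplier sup on sup.supplier_id = s.supplier_id
where s.price > 500
  and s.quantity > 0
  and s.quantity < 100
  and i.item_name = 'somename';
```

With explicit joins, adding or renaming a column in any of the three tables cannot accidentally change which columns are matched, which is exactly the hazard natural join carries.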
I think an index on item(item_name, item_id) will improve the query plan.
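A minimal sketch of creating that index (the index name is made up):

```sql
-- Lets the lookup of item_name = 'somename' use an index scan instead of
-- a seq scan on item, with item_id available directly from the index entry.
create index item_name_id_idx on item (item_name, item_id);
```

After creating it, rerun EXPLAIN on the query to check whether the planner picks it up and how the estimated cost changes.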