sql - What is the correct way to structure a high volume log record?
I have a requirement to log application events in a SQL Server 2012 database. The basic record structure required is pretty simple:
create table [dbo].[eventlog] (
    [processid] int not null,
    [applicationid] int not null,
    [created] datetime not null,
    constraint [pk_eventlog] primary key clustered ([processid], [applicationid], [created] asc)
)
The problem I am having is one of performance. Up to 1 million events per day can be generated, and as the number of rows increases, insert performance diminishes to the point where the logger is not able to keep up with the events.
I am currently writing batches of logs out to intermediary plain text files and processing these files using a service that runs separately from the main application logger.
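If that separate service replays the batch files as single-row inserts, a set-based load is usually cheaper. A minimal sketch of what the bulk load step could look like, assuming a comma-delimited file at a hypothetical path:

    -- hypothetical file path and delimiters; adjust to match the actual batch files
    bulk insert [dbo].[eventlog]
    from 'c:\logs\eventlog_batch_001.txt'
    with (fieldterminator = ',', rowterminator = '\n', tablock);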
I suspect the culprit may be the cost of maintaining the index, and I would appreciate advice on how I can approach this problem more efficiently/effectively.
Any advice is appreciated.
The main cause of the performance problem is the choice of columns forming the clustered index.
In a clustered index, the data is stored in the leaf-level pages of the index, in the order defined by the index key columns. Hence, in this table, the data is stored in order of processid, applicationid, created.
Without seeing the data, I assume log entries are being created as time passes for a variety of processids and applicationids. If that is the case, then on every insert SQL Server is inserting each log entry at the appropriate point in the middle of the log table. This is more time-consuming than SQL Server inserting records at the end of the table. Also, when an inserted record cannot fit on the appropriate page, a page split will occur, which results in the clustered index becoming fragmented and decreases performance further.
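As a rough check of whether this is happening, fragmentation on the existing clustered index can be inspected with the sys.dm_db_index_physical_stats DMV. A sketch, assuming the table lives in the current database:

    -- high avg_fragmentation_in_percent on pk_eventlog suggests page splits from mid-table inserts
    select i.name as index_name,
           ps.avg_fragmentation_in_percent,
           ps.page_count
    from sys.dm_db_index_physical_stats(db_id(), object_id('dbo.eventlog'), null, null, 'LIMITED') as ps
    join sys.indexes as i
        on i.object_id = ps.object_id
       and i.index_id = ps.index_id;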
Ideally, you should aim to have a clustering key that is as small as possible while still being unique. Therefore, one approach would be to create a new id column as an identity and create the clustered index on that. For example:
create table [dbo].[eventlog] (
    [eventlogid] int identity(1,1),
    [processid] int not null,
    [applicationid] int not null,
    [created] datetime not null,
    constraint [pk_eventlog] primary key clustered ([eventlogid])
)
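Because the identity value is ever-increasing, new rows always land at the end of the clustered index, which avoids the mid-table inserts and page splits described above. If your queries still filter on processid, applicationid and created, those lookups can be supported with a nonclustered index on the original columns. A sketch (the index name is just illustrative):

    -- supports lookups by the original key columns without affecting insert order
    create nonclustered index [ix_eventlog_process_application_created]
        on [dbo].[eventlog] ([processid], [applicationid], [created]);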