Hi All,
One of my clients has a table with nearly 1 million records.
I defined the table as below:
1) Created the table with one column (we can call it "ID") having the IDENTITY property
2) Using ALTER TABLE, I created a CLUSTERED PRIMARY KEY constraint on the same column (ID)
3) The primary key is referenced by foreign keys in two other tables
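The three steps above could be sketched roughly as follows. The table name Document and the referencing table DocumentDetail are assumptions for illustration (the constraint name PK_Document_id is taken from later in the thread):

```sql
-- Step 1: table with an IDENTITY column (table name "Document" is hypothetical)
CREATE TABLE Document (
    id INT IDENTITY(1,1) NOT NULL
    -- ... other data columns ...
);

-- Step 2: clustered primary key added afterwards via ALTER TABLE
ALTER TABLE Document
    ADD CONSTRAINT PK_Document_id PRIMARY KEY CLUSTERED (id);

-- Step 3: two other tables reference the key (one shown; names illustrative)
ALTER TABLE DocumentDetail
    ADD CONSTRAINT FK_DocumentDetail_Document
    FOREIGN KEY (document_id) REFERENCES Document (id);
```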
Now the issue: when we define a primary key with the CLUSTERED option, a clustered index is automatically created on the table.
Since the table holds a lot of data, any update or insert against it takes a huge amount of time, because the clustered index seems to re-page the whole data. Because of this re-paging, the transaction log is also growing hugely (the database is in full recovery mode, and the client wants to keep it that way).
Data partitioning is not possible because all the data is related and is current live data.
I tried the following options, in vain:
1) To keep the transaction log under control, I suggested taking regular log backups
2) I tried to drop the clustered index and implement a nonclustered index instead
Dropping and re-creating the index takes a huge amount of time
And in the process I would also have to rebuild the remaining indexes
Please give me any other solution or suggestion in this regard.
With thanks & regards,
Bhaskara|||First suggestion: pick a column name more descriptive than "ID"....
Second suggestion: one million rows is not that much data. Could there be some other reason why inserts are taking so long?
Third suggestion: you can reduce the number of initial page splits by setting the fill factor lower when you create the index.|||If the clustered index is an identity then you are not getting page splits unless you are making changes to data attribute columns and increasing the size of this data. On inserts you are certainly not getting page splits - monotonically increasing clustered indexes like this will not create splits and are the fastest possible index for inserts (faster even than no clustered index).
Hang on - I just reread - are you changing the clustered index constantly?|||There appears to be more than one problem here, so it may take several steps to correct all of the problems.
First and foremost, you specified in your ALTER TABLE that you wanted the index to be clustered, so SQL Server dutifully did what you asked. That is not part of your problem at all.
A million rows is not a large table for SQL Server. A billion rows might be large, but a million definitely is not.
SQL Server disk I/O is what drives log usage. The log file is essentially a record of what changes were made to the database at the disk (binary) level.
Dropping a clustered index shouldn't be expensive in terms of time, but the server should not permit you to drop an index that is placed by the server to protect DRI (Declarative Referential Integrity). It shouldn't take a long time, but the DROP INDEX ought to fail.
When you create a new clustered index, it can cause massive amounts of I/O operations because a clustered index effectively rearranges the data store for the entire table. That means that every row in the table can (and probably will) move, and that every existing index on the table will need to be rebuilt.
When you change a table with a clustered index (using INSERT, UPDATE, or DELETE) the amount of I/O should be nearly the same as updating the same table without the clustered index. It is possible that you might have a page split that a heap insert would not incur, but the grand total of the I/O shouldn't be significantly different.
Whatever is causing the change in disk I/O and log file usage is almost certainly not the clustered index alone. Something else is either part or all of this problem.
To answer your questions as you posted them:
1) Take regular transaction log backups to allow log space to be reused. This should not affect the time needed, but it will reduce the log file growth.
2) Dropping the clustered index should be impossible using DROP INDEX. I think you'll need to use ALTER TABLE to make this happen. Dropping the index ought to be quick, although creating a new index may take a while.
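For step 1, a scheduled log backup could look something like this (the database name and backup path are illustrative, not from the original post):

```sql
-- Back up the transaction log so inactive log space can be reused.
-- Database name and file path are placeholders; schedule this via
-- SQL Server Agent at a regular interval.
BACKUP LOG MyDatabase
    TO DISK = N'D:\Backups\MyDatabase_log.trn';
```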
-PatP|||Dropping a clustered index shouldn't be expensive in terms of time
Unless there are other indexes on the table.|||Are there any triggers on this table?|||If the clustered index is an identity then you are not getting page splits unless you are making changes to data attribute columns and increasing the size of this data. On inserts you are certainly not getting page splits - monotonically increasing clustered indexes like this will not create splits and are the fastest possible index for inserts (faster even than no clustered index).
Hang on - I just reread - are you changing the clustered index constantly?
Thanks for your interest in my problem.
When I watched in SQL Profiler, any insert or update against the table appears to re-page (re-index the pages of) the whole table.
Coming to the last point you made:
Initially I declared it as a clustered index; now, because of the slow performance, I want to drop it and create a NONCLUSTERED index instead.|||Hi Pat Phelan,
Thanks for your interest in my problem
I understood exactly what you meant.
Still, I need some clarification:
1) I drop the clustered index using ALTER TABLE, after removing the DRI,
and then re-create the primary key as NONCLUSTERED using
"ALTER Table Document ADD CONSTRAINT PK_Document_id PRIMARY KEY NONCLUSTERED (id) ON [PRIMARY] "
Will this have any impact on performance?|||OK, you're dropping the constraint, not the index.
Do you know what clustering means?