We have a few tables in our application. Basically they are used as temporary
tables. Each day they start with no records in them (empty), and as the day
goes on they are filled with data; at the end of the day they are truncated
to get ready for the next day. We have a high transaction rate, about
4000/sec. When I watched the Page Splits/sec counter it showed about 130-160
per second. This is driving the checkpoint to take a longer time.
How can I reduce these high page splits?
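For reference, the same counter can be sampled from T-SQL as well; a minimal
sketch, assuming SQL Server 2000's sysperfinfo view (the raw value is
cumulative, so you difference two samples to get a rate):
-- Sample the cumulative page-split counter twice, 10 seconds apart,
-- and compute an approximate per-second rate. sysperfinfo exposes the
-- same counters that Performance Monitor reads.
DECLARE @before int, @after int
SELECT @before = cntr_value
FROM master.dbo.sysperfinfo
WHERE object_name LIKE '%Access Methods%'
  AND counter_name = 'Page Splits/sec'
WAITFOR DELAY '00:00:10'
SELECT @after = cntr_value
FROM master.dbo.sysperfinfo
WHERE object_name LIKE '%Access Methods%'
  AND counter_name = 'Page Splits/sec'
SELECT (@after - @before) / 10 AS [page splits per sec]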
Another question: we have a char(15) column in those tables, and that column
is indexed. It is an Id column but it is not unique; each record also has a
unique numeric Id (generated by our app), but we need to search on the CHAR
Id column, so we indexed it. This index is making a lot of pages dirty, which
is another reason the checkpoint takes longer. How can I change the index so
that it does not dirty so many pages? I tried changing it to VARCHAR and
there is not much difference.
The checkpoint takes about 10-15 seconds and repeats every 60 seconds.
Your suggestions are greatly appreciated.
Thanks.
|||
Just give a try with the following info:
1. Check the recovery interval option on the system (see the sketch below).
2. Make sure that the temporary tables have fixed-size rows by using char
rather than varchar; that way you can reduce the page splits.
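A minimal sketch for point 1; 'recovery interval (min)' is an advanced
option, so those need to be made visible first:
-- Show advanced options so 'recovery interval (min)' is listed
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
-- Current value: 0 is the default and targets roughly a one-minute recovery
EXEC sp_configure 'recovery interval (min)'
GO
-- Example only, not a recommendation: stretch the target to ~5 minutes
-- EXEC sp_configure 'recovery interval (min)', 5
-- RECONFIGURE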
HTH
Regards
Rajesh Peddireddy.
"Srini" wrote:
> We have few tables in our application. Basically they are used as temporar
y
> tables. Each day they start with no records in it(empty) and as the day go
es
> they are filled with the data and at the end of the day they will be
> truncated to get ready for the next day. We have a high transaction rate
> about 4000/sec. When I noticed the page splits /sec counter it is showing
> about 130-160 per second. This is driving the checkpoint to take longer ti
me.
> How Can I reduce this high page splits.
> Another question is, we have a char(15) column in those tables and that
> column is indexed. It is an Id column but is not unique. Each record has a
n
> unique number Id(generated by our app). But we need to seacrh on the CHAR
Id
> column so indexed on it. This index is creating/making lot of dirty pages.
> This also is a contributing reason for the checkpoint to take longer. How
can
> I make changes to the index so that it would not create/make many pages
> dirty? I tried to change it to VARCHAR and there is not much difference.
> The check point is taking about 10-15 seconds and it repeats every 60 seco
nds.
> Your suggestion is greatly appreciated.
> Thanks.
> Thanks.|||It would really help to show the entire DDL for the table including the
It would really help to show the entire DDL for the table, including the
indexes. It sounds like your disk subsystem isn't up to the task. If you
are going to have that many transactions you need a fast disk I/O subsystem,
especially for the transaction logs. Is the log file on its own RAID 1 or
RAID 10, and is the data on a RAID 10?
Andrew J. Kelly SQL MVP
"Srini" <Srini@.discussions.microsoft.com> wrote in message
news:1F6FF490-C19A-4DFE-BDBC-3AD397CC9D5A@.microsoft.com...
> <snip>
|||
We have a SAN disk system. Probably the HW is good enough; I am just trying
to see if I can rearrange some things on the database front to make some
improvement.
Coming to the DDL: the tables are not temporary tables; the data is temporary
in the sense that it is kept only for the current day, and at the end of the
day the tables are truncated. Each table has about 20 columns: some are
decimal fields, some are datetime columns, and the others are integer and
char types, including many char(1)'s and two or three columns of char(15 to
20). One of the char(15) columns is indexed; it is a kind of Id but is not
unique. There are no relationships between these tables and other tables, no
triggers, no views, nothing like that. The data comes into the system and
gets inserted into these standalone tables using some stored procedures, and
the tables are queried using some other stored procedures. The major problem
to me looks like the page splits it is generating and the dirty pages it is
generating (about 10,000 dirty pages). Is there anything that can be done on
the table, or anything else, to make things perform better?
Thanks in advance for your suggestions.
"Andrew J. Kelly" wrote:
> <snip>
|||
Srini wrote:
> <snip>
The reason it would help to see the DDL is that page splits are a result of a
clustered index and inserts that are not in clustered index order. You can
eliminate the page splitting by changing the clustered index to nonclustered,
or by inserting the data in clustered index key order (if that's possible).
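A minimal sketch of the first option; the table and constraint names here are
illustrative, so adjust to your actual names:
-- Drop the clustered primary key and re-create it as nonclustered.
-- The table becomes a heap, so inserts no longer split clustered-index
-- pages. (Names are illustrative.)
ALTER TABLE [dbo].[DAILY_DATA1] DROP CONSTRAINT [PK_DD_1_Evt_ID]
GO
ALTER TABLE [dbo].[DAILY_DATA1]
ADD CONSTRAINT [PK_DD_1_Evt_ID] PRIMARY KEY NONCLUSTERED ([Event_id]) ON [PRIMARY]
GO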
David Gugick
Quest Software
www.imceda.com
www.quest.com
|||
Just because it is a SAN does not mean it is sufficient or configured
properly for your application. I run across SAN-related issues more than most
simply because people tend to ignore the configuration, thinking the SAN can
handle whatever they need. The DDL was to see what we are dealing with and
leave nothing to the imagination. It only takes a second to script the table
and indexes, but it goes a long way toward letting us see what is actually
there, not just what you may think is relevant. This is especially true for
the indexes.
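If scripting it through Enterprise Manager is a hassle, sp_help and
sp_helpindex give most of the same picture (the table name here is
illustrative):
-- Column definitions, constraints, and index composition for one table
EXEC sp_help 'dbo.DAILY_DATA1'
EXEC sp_helpindex 'dbo.DAILY_DATA1'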
Andrew J. Kelly SQL MVP
"Srini" <Srini@.discussions.microsoft.com> wrote in message
news:623BAACD-7754-4BF4-B9ED-5AA2EDC16F8B@.microsoft.com...
> <snip>
|||
David is correct, but I just want to caution that changing the clustered
index to a nonclustered one will not remove page splits. It may reduce them,
but a nonclustered index is implemented just like a clustered index and can
page split as well.
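If you want to see how much splitting an index is actually suffering, DBCC
SHOWCONTIG shows the fragmentation the splits leave behind. A minimal sketch,
using SQL Server 2000 syntax and illustrative table/index names:
-- High 'Logical Scan Fragmentation' and low 'Avg. Page Density' in the
-- output are the usual signatures of heavy page splitting.
DBCC SHOWCONTIG ('dbo.DAILY_DATA1', 'IDX_DD_1_Data_Id_tx') WITH TABLERESULTS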
Andrew J. Kelly SQL MVP
"David Gugick" <david.gugick-nospam@.quest.com> wrote in message
news:eC3cUzslFHA.3316@.TK2MSFTNGP14.phx.gbl...
> <snip>
|||
If we try to insert the data in clustered index order, which sounds to me
like a monotonically increasing clustered index, wouldn't it create hot spots
on the disk, thereby reducing throughput? Currently we can't control the data
insert order, but I think I can change things so that the clustered index is
on the serial number, which is sequential and generated by us, so the data
insertions will be in clustered index order.
How about the dirty pages created by the other nonclustered indexes? When I
run DBCC MEMUSAGE it shows a lot of dirty pages belonging to the nonclustered
indexes. How can I reduce the dirty pages on those? I understand FILLFACTOR
will not help here, as that option is useful only when there is already data
in the table and we are creating or rebuilding indexes on it.
Thanks.
"David Gugick" wrote:
> <snip>
|||
Is there any limit on the number of replies that one can post in a time
frame... This thing is not letting me post mine. Trying to POST again...
I agree the SAN may have some issues; we are trying to check on that. But
just looking at the SQL Server side, 10,000 dirty pages per checkpoint, using
the default recovery interval (which is 60 seconds), suggests something can
be done there to reduce that huge number of dirty pages.
The DDL looks like this:
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[DAILY_DATA1] (
[Event_id] [char] (18) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
-- Unique identifier for the data; the clustered index is currently on this column
[Category_cd] [char] (1) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
[Type_cd] [char] (1) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
[Session_id] [char] (1) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
[Serial_nb] [int] NOT NULL ,
-- App-generated serial number, which I could use for the clustered index
[Order_ts] [datetime] NOT NULL ,
[Data_Id_tx] [char] (14) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
-- This is the column that we have the index on
...
[Description_tx] [char] (10) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
...
[Receipt_ts] [datetime] NOT NULL ,
CONSTRAINT [PK_DD_1_Evt_ID] PRIMARY KEY CLUSTERED
(
[Event_id]
) ON [PRIMARY]
) ON [PRIMARY]
GO
CREATE INDEX [IDX_DD_1_Data_Id_tx] ON [dbo].[Daily_Data1] ([Data_Id_tx])
WITH FILLFACTOR = 90 ON [PRIMARY]
GO
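For what it's worth, here is a sketch of what moving the clustered index onto
the sequential Serial_nb (as discussed above) might look like; the index name
IX_DD_1_Serial_nb is made up, and I have not run this against these tables:
-- Re-create the primary key as nonclustered so Event_id stays unique,
-- then cluster on the app-generated, monotonically increasing serial
-- number so inserts arrive in clustered index key order.
ALTER TABLE [dbo].[DAILY_DATA1] DROP CONSTRAINT [PK_DD_1_Evt_ID]
GO
CREATE CLUSTERED INDEX [IX_DD_1_Serial_nb] ON [dbo].[DAILY_DATA1] ([Serial_nb])
GO
ALTER TABLE [dbo].[DAILY_DATA1]
ADD CONSTRAINT [PK_DD_1_Evt_ID] PRIMARY KEY NONCLUSTERED ([Event_id]) ON [PRIMARY]
GO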
Thanks.
"Andrew J. Kelly" wrote:
> <snip>
|||
I replied to this in the morning but it did not get posted...
The recovery interval option is set to the default, not changed; the default
targets 60 seconds. If I increase it, the checkpoint takes way too long; if I
decrease it, CHECKPOINT occurs too frequently. Both are problematic.
The tables are not temporary but the data is: data gets inserted as part of
the daily operations and is truncated in the evening. All the columns are
fixed-length CHAR fields (at their maximum possible lengths). There are only
two or three columns with CHAR(15), CHAR(18), and CHAR(10); all the other
columns are integer, decimal, datetime, or CHAR(1) types.
I need to find a way to reduce the number of dirty pages and the number of
page splits. How can I do that?
Thanks.
"Rajesh" wrote:
> <snip>