Saturday, February 25, 2012

Pageheight in PDF

Hello
I would like to export my report to PDF with a page height of 21cm. The report itself has a page height of 11cm (set in Report Properties --> Layout). I don't want to use URL parameters; how do I do this?
I have tried to change the render extension in RSReportServer.config, but to no avail. I changed it like this: (<Extension Name="PDF"
Type="Microsoft.ReportingServices.Rendering.ImageRenderer.PdfReport,Microsoft.ReportingServices.ImageRendering"
page-height="21cm"/>)
Does anyone know if this is feasible, perhaps with another syntax?
Thanks
/Martin

You can change the page height in the report design itself. Go to Report Properties. (Right-click the report background.)
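For reference, the same setting lives in the report's RDL source, so it can also be changed there. A sketch of the relevant fragment, assuming the RDL 2005 schema (element placement can differ between schema versions):

```xml
<Report xmlns="http://schemas.microsoft.com/sqlserver/reporting/2005/01/reportdefinition">
  <!-- data sources, body, etc. omitted -->
  <PageHeight>21cm</PageHeight>
  <PageWidth>14.8cm</PageWidth>
</Report>
```

Editing the RDL is equivalent to changing Report Properties in the designer. The PDF renderer takes the page size from the report definition (or from device info passed at render time), which would explain why editing RSReportServer.config had no effect.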
--
Cheers,
'(' Jeff A. Stucker
\
Business Intelligence
www.criadvantage.com
---
"Martin" <martin.ekeblad@.acandofrontec.com> wrote in message
news:92af98b9.0502150132.8615bc2@.posting.google.com...
> Hello
> I would like to export my report to PDF with a pageheight of 21cm.
> The report itself has a pageheight of 11cm (Set in
> Reportproperties-->Layout)
> I dont want to use URL parameters, how do I do.
> I have tried to change the Render extension in RSReportServer.Config
> but to no avail. Changed it like this (<Extension Name="PDF"
> Type="Microsoft.ReportingServices.Rendering.ImageRenderer.PdfReport,Microsoft.ReportingServices.ImageRendering"
> page-height="21cm"/>)
> Anyone knows if this is feasible, but with another syntax?
> Thanks
> /Martin

Pagefile for SQL Server

How important is it to configure the page file for SQL Server? We have never bothered to look into it, but I was just wondering if there are any performance enhancements one could make by working on the page file settings?

Hi Hassan
For a server dedicated to SQL Server you should not see a great deal of paging to the page file, but that does not mean that you should not still follow best practices, such as putting the page file on its own dedicated fast disk or making it large enough to hold a memory dump.
John
"Hassan" wrote:
> How important it is to work or configure pagefile for SQL Server ? We have
> never bothered to look into it, but was just wondering if there are any
> performance enhancements one could make by working on pagefile settings?
>
>

> best practices such as putting the page file on it's own dedicated
> fast disc

I'm not sure this is really best practice from a practical viewpoint. I wonder how many people do this; that's not the practice I have encountered. Page files are typically put on the system drives (customarily C and D in most cases), which are typically a two-disk mirrored set.
Linchi
"John Bell" wrote:
> Hi Hassan
> For a server dedicated to SQL Server you should not see a great deal of
> paging to the page file, but that does not mean that you should not still
> follow best practices such as putting the page file on it's own dedicated
> fast disc or making it large enough to hold the memory dump.
> John
> "Hassan" wrote:
> > How important it is to work or configure pagefile for SQL Server ? We have
> > never bothered to look into it, but was just wondering if there are any
> > performance enhancements one could make by working on pagefile settings?
> >
> >
> >

Hi Linchi
In my experience it does tend to be the first compromise, but that may be because it isn't even considered!
John
"Linchi Shea" wrote:
> > best practices such as putting the page file on it's own dedicated
> > fast disc
> I'm not sure this is really best practice from a practical viewpoint. I
> wonder how many people do this. That's not the practice I have encountered.
> Pagefiles are typically put on the system drives (customarily C and D in most
> cases), which are typical a two-disk mirrored set.
> Linchi
> "John Bell" wrote:
> > Hi Hassan
> >
> > For a server dedicated to SQL Server you should not see a great deal of
> > paging to the page file, but that does not mean that you should not still
> > follow best practices such as putting the page file on it's own dedicated
> > fast disc or making it large enough to hold the memory dump.
> >
> > John
> >
> > "Hassan" wrote:
> >
> > > How important it is to work or configure pagefile for SQL Server ? We have
> > > never bothered to look into it, but was just wondering if there are any
> > > performance enhancements one could make by working on pagefile settings?
> > >
> > >
> > >

Pagefile and Paging

My DBA sees paging (high pages/sec) and thinks of increasing the page file as a solution. He believes the page file should be 1.5 times the amount of physical RAM, which I agree with, but I find it hard to correlate paging with a page file increase.
Also, under what conditions would one need to consider increasing the size of the page file if it's not set to 1.5 * physical RAM?
We are using SQL 2000/2005.
Thank you.
"F" <f@.hotmail.com> wrote in message
news:uBNVAg3lIHA.5684@.TK2MSFTNGP03.phx.gbl...
> My DBA sees paging ( high pages/sec) and thinks of increasing pagefile as
> a solution. He believes the pagefile settings should be 1.5 times the
> amount of physical RAM which I agree but I seem to find it hard to
> correlate paging with pagefile increase.
>
He's half right: you need more memory. But increasing the page file won't make a difference.
Basically, SQL Server can page itself, or it can ask the OS to page to disk. Both involve disk I/O.
Get more memory or rewrite your queries.

> Also under what conditions would one need ot consider increasing the size
> of the pagefile if its not set to 1.5 * Physical RAM ?
> We are using SQL 2000/2005
> Thank you.
Greg Moore
SQL Server DBA Consulting Remote and Onsite available!
Email: sql (at) greenms.com http://www.greenms.com/sqlserver.html
Increasing the page file size generally would not be the solution to hard paging. I'd first try to determine whether the hard paging comes from the SQL Server process (i.e. whether Windows is paging out the working set of the SQL Server process). If that's the case, try to find whether there are any other processes that are consuming memory and causing the paging.
It's also possible that you are running into a SQL Server bug; http://support.microsoft.com/kb/884593 and http://support.microsoft.com/kb/918483 are examples. I'm not saying that they apply to your case, but I want to point this out as a possibility.
Linchi
"F" wrote:

> My DBA sees paging ( high pages/sec) and thinks of increasing pagefile as a
> solution. He believes the pagefile settings should be 1.5 times the amount
> of physical RAM which I agree but I seem to find it hard to correlate paging
> with pagefile increase.
> Also under what conditions would one need ot consider increasing the size of
> the pagefile if its not set to 1.5 * Physical RAM ?
> We are using SQL 2000/2005
> Thank you.
>
I too agree that increasing the page file size will not help at all. Your DBA's first thought should have been "what is causing the paging", not how to get around it. SQL Server is designed to do as much as possible to avoid paging in the first place. It is likely you have applications other than SQL Server on the server that require some memory, and SQL Server is set to use most of it. If that is the case, you may be able to avoid the paging by setting the max memory setting in SQL Server to leave room for the other apps. Adding more memory may also be an option, but you still have to consider how all the apps play together.
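Capping SQL Server's memory as suggested here is done with sp_configure. A minimal sketch; the 2560 MB value is purely illustrative and should be sized to leave room for the OS and the other applications:

```sql
-- 'max server memory' is an advanced option, so expose it first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Cap the buffer pool; 2560 MB is an illustrative value, not a recommendation.
EXEC sp_configure 'max server memory (MB)', 2560;
RECONFIGURE;
```

The change takes effect without a restart; SQL Server will gradually release memory down to the new cap.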
Andrew J. Kelly SQL MVP
Solid Quality Mentors
"F" <f@.hotmail.com> wrote in message
news:uBNVAg3lIHA.5684@.TK2MSFTNGP03.phx.gbl...
> My DBA sees paging ( high pages/sec) and thinks of increasing pagefile as
> a solution. He believes the pagefile settings should be 1.5 times the
> amount of physical RAM which I agree but I seem to find it hard to
> correlate paging with pagefile increase.
> Also under what conditions would one need ot consider increasing the size
> of the pagefile if its not set to 1.5 * Physical RAM ?
> We are using SQL 2000/2005
> Thank you.


pagefile and memory advice?

Hello, I've been reading about this topic, and I've gotten myself more
confused, not less.

We have a single-processor license SQL Server Standard 2005 (xeon 2.8
ghz) with 4 GB RAM in Windows Server 2003 SP1 Standard.

I turned on the /3GB switch in boot.ini but not PAE or AWE; would it be good to have either one? From what I have seen so far, these are advanced settings for bigger servers or databases than we presently have.

The SQL server is for Great Plains accounting, the largest db is around
3 GB now, and probably grows a GB per year.

We have Abra Suite running as well, someday this will get changed over
to a SQL database, it is FoxPro for now.

I know not to have IIS or other apps running on this box, other than a
couple Access files that are accessed by client apps.

We expect to make more use of this server in the coming years, to
contain intranet database files and web app files and maybe even
SharePoint, maybe someday we need a bigger server..

I was advised to remove or reduce the pagefile size; how should I determine how big the pagefile should be? I am thinking 1 GB, down from 4 GB.

Thank you for reading this, Tom

On 19.12.2006 15:48, tlyczko wrote:

Quote:

Originally Posted by

Hello, I've been reading about this topic, and I've gotten myself more
confused, not less.
>
We have a single-processor license SQL Server Standard 2005 (xeon 2.8
ghz) with 4 GB RAM in Windows Server 2003 SP1 Standard.
>
I turned on the /3GB switch in boot.ini but not PAE or AWE, would it be
good to have either one??
>

From what I have seen so far, these are advanced settings for bigger servers or databases than we presently have.
>
The SQL server is for Great Plains accounting, the largest db is around
3 GB now, and probably grows a GB per year.
>
We have Abra Suite running as well, someday this will get changed over
to a SQL database, it is FoxPro for now.
>
I know not to have IIS or other apps running on this box, other than a
couple Access files that are accessed by client apps.
>
We expect to make more use of this server in the coming years, to
contain intranet database files and web app files and maybe even
SharePoint, maybe someday we need a bigger server..
>
I was advised to remove or reduce the pagefile size, how should I
determine how big the pagefile should be?? I am thinking 1 GB down from
4 GB.


That's not exactly an MS SQL Server question. Generally, you should set max memory *in* SQL Server to no more than what is physically available. Other than that I have no advice to offer.

Regards

robert

In my experience with ~5GB Great Plains, 2GB of memory is more than enough. I
would recommend against sharing an accounting server with any other
non-accounting applications for security reasons.

In advising on pagefile size we would need your disk layout, RAID level,
etc.

I prefer multiple fixed size, unfragmented, paging files spread across all
disk sets except the ones containing the transaction logs and tempdb. I also
size my server so that the paging file is rarely, if ever, used.

The pagefile is by default sized to do a memory dump if the server fails. I have never met anyone who has actually used this memory dump for server post-mortem, but I'm sure someone has... However, if space is so short that 3 GB would matter, then you likely need to get more and/or bigger disks. I prefer 50% free space on all disk sets.
"tlyczko" <tlyczko@.gmail.comwrote in message
news:1166539739.038505.10810@.80g2000cwy.googlegrou ps.com...

Quote:

Originally Posted by

Hello, I've been reading about this topic, and I've gotten myself more
confused, not less.
>
We have a single-processor license SQL Server Standard 2005 (xeon 2.8
ghz) with 4 GB RAM in Windows Server 2003 SP1 Standard.
>
I turned on the /3GB switch in boot.ini but not PAE or AWE, would it be
good to have either one??
>

From what I have seen so far, these are advanced settings for bigger servers or databases than we presently have.
>
The SQL server is for Great Plains accounting, the largest db is around
3 GB now, and probably grows a GB per year.
>
We have Abra Suite running as well, someday this will get changed over
to a SQL database, it is FoxPro for now.
>
I know not to have IIS or other apps running on this box, other than a
couple Access files that are accessed by client apps.
>
We expect to make more use of this server in the coming years, to
contain intranet database files and web app files and maybe even
SharePoint, maybe someday we need a bigger server..
>
I was advised to remove or reduce the pagefile size, how should I
determine how big the pagefile should be?? I am thinking 1 GB down from
4 GB.
>
Thank you for reading this, Tom
>

Russ Rose wrote:

Quote:

Originally Posted by

In my experience with ~5GB Great Plains, 2GB memory is more than enough. I
would recommend against sharing an accounting server with any other
non-accounting applications for security reasons.
>
In advising on pagefile size we would need your disk layout, RAID level,
etc.
>
I prefer multiple fixed size, unfragmented, paging files spread across all
disk sets except the ones containing the transaction logs and tempdb. I also
size my server so that the paging file is rarely, if ever, used.
>
The pagefile is by default sized to do a memory dump if the server fails. I
have never met anyone who has actually used this memory dump for server post
mortem, but I'm sure someone has... However, if space is so short that 3GB
would matter then you likely need to get more and/or bigger disks. I prefer
50% free space on all disk sets.


Hello, thank you for replying.

Disk space is not critical per se; I am inquiring about pagefile size because someone in another NG suggested I should eliminate or reduce the pagefile. It's currently 4 GB, now that the memory has been increased to 4 GB. SQL is doing well at managing itself for us.

Thanks, Tom

Paged results from SQL Query?

I have been searching this topic on and off for quite some time and can't seem to find a decent answer. Is it feasible to do your paging strictly from a SQL query, passing the query the page size, which page to return, and what to sort by?

Hi,

If you are talking about DataGrid custom paging, here is a solution I read a few weeks ago.

It is done through a stored procedure with a dynamic SQL statement. Following that article, a user gave another approach without dynamic SQL for a known table. Here is the link to that article at eggheadcafe.com:

"Custom DataGrid Paging at the Server" by Dr. Bromberg

http://www.eggheadcafe.com/articles/20060109.asp

FYI

If that is not what you are looking for, please post back. (I don't know whether I understand your question correctly about "paging strictly from a SQL query".)

Regards,

Limno

|||

If you are using 2005 you can: a new ROW_NUMBER() function has been added, so you could potentially use it to get, say, rows 10 to 20. I have to say I have never used it, but after a quick Google:

SELECT Description, Date
FROM (SELECT ROW_NUMBER() OVER (ORDER BY Date DESC)
          AS Row, Description, Date
      FROM LOG) AS LogWithRowNumbers
WHERE Row >= 1 AND Row <= 10

I don't know how flexible it is, I have to say, but I hope that helps!

W

That was pretty much exactly what I was looking for. I want to avoid pulling thousands of results when the user may only be looking at 20 or 30 of them. The other discussions I have read involve building a sorted temporary table with an autonumber column and then pulling the page of data from there using < and > on the numbered column. I am concerned that this approach would be bad for performance. The link you have given seems like a very good solution!

It would be even better if someone could find out how to parameterize the ORDER BY clause, so you can enable sorting on a multi-column table. I've been trying to figure it out for a couple of days, but the CASE syntax breaks down if the columns have different data types...
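One common workaround for the mixed-type CASE problem is to use a separate CASE expression per sortable column inside the OVER clause, so each CASE keeps a single data type. A sketch, assuming a LOG table like the one above and a hypothetical @SortColumn parameter:

```sql
DECLARE @SortColumn varchar(20);
SET @SortColumn = 'Date';   -- or 'Description'

SELECT Description, Date
FROM (SELECT ROW_NUMBER() OVER (ORDER BY
              -- One CASE per column: each expression has a single data type,
              -- and the unused CASE evaluates to NULL for every row.
              CASE WHEN @SortColumn = 'Description' THEN Description END ASC,
              CASE WHEN @SortColumn = 'Date' THEN Date END DESC) AS Row,
          Description, Date
      FROM LOG) AS LogWithRowNumbers
WHERE Row >= 1 AND Row <= 10;
```

The cost is that every sortable column needs its own CASE line; sort direction can be parameterized the same way by adding ASC and DESC variants of each CASE.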

Paged Result Sets

What is the recommended mechanism for selecting paged results from SQL?

Presently I pass various params including the request Max Items Per Page and the requested page.

Then I execute the query as a count with the search params.

Then comes the paging logic, which validates the page number against the request page and number of hits etc.

Then a temp table and record variables are created for the results.

Then I run the query again with a cursor and select the appropriate Items into the temp table based on the paging values (First Item and Last Item).

Then I return the temp table & some additional return params with the Total Hits etc.

The stored procedure is accessed via an ADO.NET client, and a System.Data.IDataReader populates a .NET strongly typed collection; it is for read-only display.

Thanks for any input,

Martin.

hi martin,

i think this is a front-end issue, not SQL's. VB.NET objects like grids, DetailsView and FormView support paging built in to them, without the need to rely on SQL Server. You can read the data into a DataSet and present it with VB using objects that support paging.

regards,

joey

|||

Hi Joey,

I am more than capable with ADO.NET. This is not a front-end issue.

I have, say, 200,000 records and I want 15 items: page 30 of a result set displaying 15 items per page. I do not wish to return 450 items to a DataSet, or to a DataReader that has to make multiple round trips to the server to get to the records I require.

I want the SPROC to return only the 15 items that I am requesting. Do you understand what I am saying, taking this in context with what I have said above?

I can presently do this. Though, I am looking for one of you SQL pros to tell me how I should be doing it.

|||

hi martin,

sorry, just clarifying. Anyway, try this:

use northwind
go

create proc pagemynorthwindorders (@page int)
as
declare @pagesize int
select @pagesize = 15

select IDENTITY(int, 1, 1) as id_num, str(orderid) as orderid
into #dummyorders from orders

select * from orders where orderid in
(
    select orderid from #dummyorders
    where id_num between (@page - 1) * @pagesize + 1 and @page * @pagesize
)
go

exec pagemynorthwindorders 1

regards,

joey

|||

Hello again!

Looking at this, does it mean that I am selecting all the records (say, using TOP 1000, one thousand records) into a temp table? I am unfamiliar with the "into #dummyorders" syntax.

Inner queries and stuff! Is this more effective than the cursor approach? Is it possible you could break out the SPROC above a little with some comments?

It is much appreciated.

|||

no problem

use northwind
go

create proc pagemynorthwindorders (@page int)
as
declare @pagesize int
select @pagesize = 15  -- in case you want to change your page size

-- Create a temp table with its own identity column starting from 1.
-- Since orderid is itself an identity column, I need to translate it to a
-- string using str(): there can only be one identity column in a table.
-- I need only the pk (orderid), since it identifies the records I need.
-- #dummyorders holds my own id_num and the orderid: id_num is used for
-- paging, and orderid identifies which records belong to the page.
select IDENTITY(int, 1, 1) as id_num, str(orderid) as orderid
into #dummyorders from orders

-- This gets the records whose orderid falls in the required page.
select * from orders where orderid in
(
    select orderid from #dummyorders
    where id_num between (@page - 1) * @pagesize + 1 and @page * @pagesize
)
go

-- Say you want page 2: it gets the records between (2 - 1) * 15 + 1 and
-- 2 * 15, i.e. between 16 and 30, which is indeed page 2.
-- At the end of the procedure, #dummyorders destroys itself.

exec pagemynorthwindorders 1

|||

final query; just improve it to suit your needs

use northwind
go

alter proc pagemynorthwindorders (@page int)
as
declare @pagesize int
select @pagesize = 15
select IDENTITY(int, 1, 1) as id_num, str(orderid) as orderid
into #dummyorders from orders
select * from orders where orderid in
(
    select orderid from #dummyorders
    where id_num between (@page - 1) * @pagesize + 1 and @page * @pagesize
)
go

exec pagemynorthwindorders 2

|||

This is a very common question and there are actually many good answers.

If you are using SQL 2005, there are two really cool features you could use:

- TOP now accepts a variable: SELECT TOP (@variable)

- the ROW_NUMBER function, which creates an additional column with the row's number

Try this:

declare @ipagesize as int,
        @ipage as int

set @ipagesize = 5
set @ipage = 0

select top (@ipagesize) *
from
(
    select ROW_NUMBER() over (order by orderID) as row_order, *
    from orders
) ordered_set
where ordered_set.row_order > @ipagesize * @ipage

|||

Thanks, people.

Both are cleaner than my current solution.

Though, is there a way to get an estimate of the total hits from the same statement, or must I use an additional count?

Also, I know there is a new Table or .Table statement for SQL 2005 which I thought may have popped up in the answers. The non-variable limitation of the TOP parameter was annoying, so that is also valuable info.
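On the total-hits question: in SQL 2005 an aggregate window function can return the full row count alongside the page in the same statement, so no second COUNT query is needed. A sketch against the same Northwind orders table as the examples above (a variation, not the posted procedure):

```sql
declare @ipagesize int, @ipage int
set @ipagesize = 5
set @ipage = 0

select top (@ipagesize) *
from
(
    select ROW_NUMBER() over (order by orderID) as row_order,
           COUNT(*) over () as total_hits,  -- total rows, repeated on every row
           *
    from orders
) ordered_set
where ordered_set.row_order > @ipagesize * @ipage
```

Every returned row carries the same total_hits value, which the client can read once to compute the number of pages.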


PageBreak only before odd pages

Hi all,

I have a report with a few subreports, and after each subreport I've added a page break. But I want to skip a page and leave it blank if the previous subreport ends on an odd page, so that the next subreport starts on the next odd page instead of the even page.

I use a transparent rectangle to add the page breaks after each subreport, but because I don't have access to the Globals!PageNumber variable in the body section of the report, I don't know when the page number is even, so a second rectangle with a custom expression for its Visibility property is of no use.

Does anyone know how to fix this issue?

Thank you, and I look forward to seeing some suggestions.

Radu.

Have you found any solution? If yes, please let me know.


Pagebreak for long texts in a textbox

Hi
I have a report that can grow depending on what information it gets. My problem occurs when I have a large text in a textbox.

Example of the problem: I have a few text groups that get printed out to a PDF. On page 1 we start off with some addresses and such, and in the middle of the page it prints out a description. The problem is that the description in this particular case takes up roughly 75% of an A4 page. This means that on page 1 the header 'description' prints in the middle of the page, then it jumps to page 2 and prints out the text for 'description'. This leaves half of page 1 blank, which isn't really good looking. How do I get RS to start filling page 1 with the text, then put in a page break in the middle and move the rest of the text that didn't fit onto page 2? And if it can sense so it doesn't break in the middle of a word, that would be nice too :)

It's an ordinary textbox that gets its values from a SQL Server.

Hope anyone understood my problem and has a good solution.
Regards
Peter Berglund

Forgot to mention... it's RS2000 we use :)

Pagebreak before Table and Expression on value Hidden

Hi Newsgroup,
In a Reporting Services report I have a table that is made visible/hidden by a specific parameter, and "page break before table" is set to true.
Now, if the table is shown in the report (the expression for the Hidden value returns false), the "page break before table" flag does not work. If I remove the expression for the Hidden value, then the "page break before table" flag works without any problems.
This seems to be a bug. Is there a fix or workaround for it? Does SP1 for SQL Server 2005 help?
Best regards
Daniel

Hi,
I have the same problem here. If anyone knows something, please share.
Best Regards,
Rui

I had similar problems with page breaks not working when a table
visibility was being set by an expression using a report parameter value (using RS2000 SP2). This appears to be a known issue: a quoted Microsoft response on another forum states, "Any conditional visibility disables page breaks. We hope to make expression-based conditional visibility (but not toggle-based conditional visibility) compatible with page breaks in some future version".
I don't think this has been fixed in RS2005. I managed to implement a (not ideal) workaround by nesting tables inside rectangles and applying the visibility expression to the rectangle, not the table. I then applied page breaks to other items within the rectangle to separate the page-break attributes from the object visibility setting. It doesn't work well with export to Excel, though, since it affects layout spacing and prevents separate pages from being displayed on different worksheets.
Rui wrote:
> Hi,
> I have the same problem here. If anyone knows something please share
> Best Regards,
> Rui

I now have the same problem on a Reporting Services 2005 report. I have placed the table in a rectangle: on the rectangle I have placed the visible/hidden expression, and on the table I have placed the "page break before" setting. Now in the report I get some blank pages; I have worked on this report for many hours.
Is there no solution for using "page break before" and a visible/hidden expression on one and the same table? The blank pages appear on the export to PDF.
Thanks
Daniel
"AMasson" wrote:
> I had similar problems with page breaks not working when a table
> visibility was being set by an expression using a report parameter
> value (using RS2000 SP2). This appears to be a known issue - a quoted
> Microsoft response on another forum states "Any conditional visibility
> disables page breaks. We hope to make expression-based conditional
> visibility (but not toggle-based conditional visibility) compatible
> with page breaks in some future version".
> I don't think this has been fixed in RS2005. I managed to implement a
> (not ideal) work around by nesting tables inside rectangles and
> applying the visibility expression to the rectangle, not the table. I
> then applied page breaks to other items within the rectangle to
> separate the page break attributes and the object visibility setting.
> It doesn't work well with export to Excel though, since it affects
> layout spacing and prevents separate pages being displayed on different
> worksheets
> Rui wrote:
> > Hi,
> >
> > I have the same problem here. If anyone knows something please share
> >
> > Best Regards,
> > Rui
>

PageBreak after x items for group, different on first page

I am trying to add a page break in the table details after x items, with x being different for the first page of the report.

I am able to add a page break by adding a group to the table with group expression =Int((RowNumber(Nothing)-1)/25). But I can't figure out how to make the group expression =Int((RowNumber(Nothing)-1)/15) on page 1 of the report and =Int((RowNumber(Nothing)-1)/25) on all other pages.

I tried

=IIF(Globals!PageNumber=1, Int((RowNumber(Nothing)-1)/15), Int((RowNumber(Nothing)-1)/25))

but Globals cannot be used in a group expression. I also tried putting the group expression in a textbox on the report, but it will not let me refer to the textbox.

Any assistance is appreciated!
Chris

references: http://msdn2.microsoft.com/en-us/library/ms157328.aspx , http://msdn2.microsoft.com/en-us/library/ms157328(en-US,SQL.90).aspx

Try a group expression like this:

=IIF(RowNumber(Nothing) <= 15, 0, Int((RowNumber(Nothing) - 16)/25) + 1)

This should result in 15 rows on the first page and 25 rows on subsequent pages. (Rows 1-15 are always group 0; the +1 keeps the later groups from colliding with it, so rows 16-40 fall in group 1, rows 41-65 in group 2, and so on.)

-- Robert

PageAudit Property Incorrect redux - with a twist

I am getting this message: "The header for file myfile.mdf is not a valid database file header. The PageAudit property is incorrect." However, this is happening on a full overwrite restore! That is, the restore goes through successfully, but then I get the message when the restore attempts to start the database.
I created a fresh backup and the same thing happens! If the database were corrupt, I would have thought the backup process would fail, not the restore.
Any clues as to what to look at?
Thanks for any tips.

> If the database was
> corrupt, I would have thought the backup process would fail, not the
> restore.
BACKUP does not validate database integrity. Run DBCC CHECKDB on your
source database.
Hope this helps.
Dan Guzman
SQL Server MVP
"Neil W." <neilw@.netlib.com> wrote in message
news:%23UfZF9L9EHA.1452@.TK2MSFTNGP11.phx.gbl...
>I am getting this message: "The header for file myfile.mdf is not a valid
> database file header. The PageauditProperty is incorrect." However, this
> is
> happening on a full overwrite restore! That is, the Restore goes through
> successfully, but then I get the message when the Restore attempts to
> start
> the database.
> I created a fresh backup and the same thing happens! If the database was
> corrupt, I would have thought the backup process would fail, not the
> restore.
> Any clues as what to look at?
> Thanks for any tips.
>|||Thanks for the suggestion. I was afraid that running CHECKDB would send the
main database into suspect mode if there was some header corruption. Is
that possible? So far it is up and running and I am afraid to do the checkdb
without a good backup.
_______________________________

> If the database was
> corrupt, I would have thought the backup process would fail, not the
> restore.
BACKUP does not validate database integrity. Run DBCC CHECKDB on your
source database.
Hope this helps.
Dan Guzman
SQL Server MVP
"Neil W." <neilw@.netlib.com> wrote in message
news:%23UfZF9L9EHA.1452@.TK2MSFTNGP11.phx.gbl...
>I am getting this message: "The header for file myfile.mdf is not a valid
> database file header. The PageauditProperty is incorrect." However, this
> is
> happening on a full overwrite restore! That is, the Restore goes through
> successfully, but then I get the message when the Restore attempts to
> start
> the database.
> I created a fresh backup and the same thing happens! If the database was
> corrupt, I would have thought the backup process would fail, not the
> restore.
> Any clues as what to look at?
> Thanks for any tips.
>|||Hi
Backup will backup the corruption, if it exists. That is as good a "good
backup" as you can get.
Look at DTS'ing the data out into another DB as a safety measure.
Regards
Mike
"Neil W." wrote:

> Thanks for the suggestion. I was afraid that running CHECKDB would send the
> main database into suspect mode if there was some header corruption. Is
> that possible? So far it is up and running and I am afraid to do the checkdb
> without a good backup.
> _______________________________
>
> BACKUP does not validate database integrity. Run DBCC CHECKDB on your
> source database.
> --
> Hope this helps.
> Dan Guzman
> SQL Server MVP
> "Neil W." <neilw@.netlib.com> wrote in message
> news:%23UfZF9L9EHA.1452@.TK2MSFTNGP11.phx.gbl...
>
>|||Thanks for the reply. Will the database copy wizard be sufficient, or will
that also copy over the corruption?
"Mike Epprecht (SQL MVP)" <mike@.epprecht.net> wrote in message
Hi
Backup will backup the corruption, if it exists. That is as good a "good
backup" as you can get.
Look at DTS'ing the data out into another DB as a safety measure.
Regards
Mike
"Neil W." wrote:

> Thanks for the suggestion. I was afraid that running CHECKDB would send the
> main database into suspect mode if there was some header corruption. Is
> that possible? So far it is up and running and I am afraid to do the checkdb
> without a good backup.
> _______________________________
>
> BACKUP does not validate database integrity. Run DBCC CHECKDB on your
> source database.
> --
> Hope this helps.
> Dan Guzman
> SQL Server MVP
> "Neil W." <neilw@.netlib.com> wrote in message
> news:%23UfZF9L9EHA.1452@.TK2MSFTNGP11.phx.gbl...
>
>|||Hi Neil,
I wanted to post a quick note to see if you would like additional
assistance or information regarding this particular issue.
Based on my experience, you are encouraged to check your hardware to see
whether there is any data corruption on your HD or RAID controller. For
example:
PRB: SQL Server Backup Performed on Computer Configured with DPT RAID
Controller Card May Be Invalid
http://support.microsoft.com/kb/268481
For your question, DTS won't copy the corrupted data to the destination.
We appreciate your patience and look forward to hearing from you!
Sincerely yours,
Michael Cheng
Online Partner Support Specialist
Partner Support Group
Microsoft Global Technical Support Center
---
Get Secure! - http://www.microsoft.com/security
This posting is provided "as is" with no warranties and confers no rights.
Please reply to newsgroups only, many thanks!

PAGE_VERIFY setting CHECKSUM - how to detect?

Does anyone know how to detect the CHECKSUM setting of the PAGE_VERIFY database option (2005 only)?

BOL (ALTER DATABASE) includes the following statement:


PAGE_VERIFY { CHECKSUM | TORN_PAGE_DETECTION | NONE }

The current setting of this option can be determined by examining the page_verify_option column in the sys.databases catalog view or the IsTornPageDetectionEnabled property of the DATABASEPROPERTYEX function.


However, there is no column named page_verify_option in the view sys.databases, and DATABASEPROPERTYEX('IsTornPageDetectionEnabled') does not discriminate between the settings CHECKSUM and NONE (it returns 0 for both)!

Actually, I was looking in sys.sysdatabases! sys.databases is just fine....
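
For anyone landing here later, the catalog query is straightforward; a minimal sketch (the database name is hypothetical):

```sql
-- Per-database page verification setting, SQL Server 2005 and later
SELECT name, page_verify_option_desc   -- NONE, TORN_PAGE_DETECTION, or CHECKSUM
FROM sys.databases;

-- And to switch a database over:
ALTER DATABASE MyDb SET PAGE_VERIFY CHECKSUM;
```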

Page Width view AS default

Hello all,

When viewing a report, there is an option in the left corner for the zoom size, i.e. 100%, 150%, Page Width, Whole Page, etc., from the dropdown list.

I was wondering, is there a way to make the "Page Width" option the default view for a particular report or site?

There is no way to set the default zoom within a report. However, if you can call your reports from a URL, you can use the Zoom parameter and specify Page Width as the value. See Using URL Access Parameters for more information:

http://technet.microsoft.com/en-us/library/ms152835.aspx

-Jessica
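
For example, a URL along these lines (the server name and report path are hypothetical) renders the report zoomed to page width:

```
http://myserver/reportserver?/SalesReports/Quarterly&rs:Command=Render&rc:Zoom=Page%20Width
```

rc:Zoom accepts an integer percentage or the strings Page Width and Whole Page.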

Page Width and or Margin problems using the new print from web

When I print directly from the web (report manager) I get an extra page for
each report page. The behavior is similar to what you'd get if you had the
page width and margins adjusted incorrectly. However, the reports export to
PDF correctly and print from within VS correctly in the print preview. Is
there a known issue with the new print from web functionality?
I have the page size set to 8.5" by 11" and all margins set to .25". The
body size is set to 8" (so that if you add .5" worth of margin to get 8.5"
overall)
Any ideas?

We're experiencing the same problem too, with reports displaying an extra
page, or data appearing on next page, when using client-side printing.
We've found that the default margins (using the client-side print button) is
.500 on all sides (left, right, top & bottom). We have to change the
margins each time, before printing the report, which is too much hassle.
Is there any way to change the default margins of the client-side printing
control?
TIA.
Marlon
"john" <john@.discussions.microsoft.com> wrote in message
news:99A04411-1EF1-49C8-BBBD-E6869DC552B1@.microsoft.com...
> When I print directly from the web (report manager) I get an extra page
> for
> each report page. The behavior is similar to what you'd get if you had
> the
> page width and margins adjusted incorrectly. However, the reports export
> to
> PDF correctly and print from within VS correctly in the print preview. Is
> there a known issue with the new print from web functionality?
> I have the page size set to 8.5" by 11" and all margins set to .25". The
> body size is set to 8" (so that if you add .5" worth of margin to get 8.5"
> overall)
> Any ideas?

Page width and height setting problem

We have a problem in page layout. From the report properties, we set the page width and length. But previewing the page shows that it is much longer than the setting in the properties. Exporting the page to PDF gives a different page size. That totally messes up our page numbers. Has anyone had this problem before and know why?

I appreciate it if anybody can help. Thanks.

Make sure you account for the margins. The size of the report body (not the actual paper size) is added to the margin size, which in some cases can exceed the PDF (paper) size. Extra page breaks will then occur, leading to broken documents or empty pages.

HTH, Jens K. Suessmeyer.

http://www.sqlserver2005.de

|||

Thanks for the reply, Jens. I set the page margins on all four sides to 0.25, page width = 8.5in and length = 11in. When I view the page, I have two pages displayed on screen, and the page width and length do not look like what I set in the page properties. Exporting to PDF gives five pages. And when I open the PDF, I get an "A drawing error occurred" message.

The width is OK. The problem is length. How can I fix page length?

Thanks.

|||Hi,

just to be sure, the sum of margins and width does not exceed 8.5 / 11 inches, right? You have to distinguish between the margins of the report and the actual paper size. If the report does not fit on the paper, you will probably get extra pages. For the mentioned error: which SQL Server (Reporting Services) version are you using? If you are not sure, you can navigate to the report server at http://server/Reportserver . The version information is written at the bottom of the listing. There might be a rendering error which was fixed in a later version (assuming that you have an old one).

HTH, Jens K. Suessmeyer.


http://www.sqlserver2005.de

|||

Jens, you are right. After I switched to VS 2005, the PDF file error went away. But the page size problem still remains.

I believe I have set the page layout carefully. There should not be a width-exceeding problem, and I did not see blank pages or partial pages. But the page layouts in report view and printing view are different. In report view, the first page has 7 items; in printing view or the exported PDF, the first page only has 5 items. The other items are moved onto the subsequent pages.

This should not be a problem for most situations. But we have a special requirement needing to use the page number to determine the display of body information. Since the footer page number cannot be directly referenced (I do not know why MS designed it this way), I tried to group items using ceiling(rownumber(nothing)/5). But this function seems to group only by the innermost group. For example, for three layers of grouping - address, city, state - that expression groups by address only, while I need it to group by city (I don't know whether I have missed something).

So I had to switch to another way. Run the report twice: the first time, get the items and page numbers into the database; the second time, read the stored information back from the database to display it correctly. But if the printed page layout and the run-time page layout are different, I cannot do it this way. That is why I need to figure this out.

Thanks for the suggestions.

|||The pagination system is different between the different renderers.

HTML is a "soft" page layout -- in other words, it will try to use the interactive page size, but will expand as needed to accommodate objects on that page and to avoid splitting objects.

PDF is a physical page layout; the page size is fixed and we cannot expand it, so objects get broken and moved around on the physical pages.

Page numbers will not be the same between different pagination systems. Unfortunately you really can't rely on the number of pages or the page number to determine the data on the page.

Page vs Row locking...

Hello all,

Somewhere on these newsgroups I recall reading that SQL Server 6 and prior
(when they were married with Sybase) used page locking and not row-level
locking. Hence you could be locking a lot more records than what you think
when doing an UPDATE or INSERT SQL.

Now I notice that SQL Server 7 and 2000 claim to use row level locking. (As
you can see, I have been out of the SQL arena for some time). So what I'd
like to know if this is all true? Or marketing mumbo-jumbo? Has Microsoft
made changes at the core of their engine to lock rows? I know that other
RDBMSs like Interbase have a versioning engine, so it was built from the
ground up for concurrency. And I've read that MS's row-level locking is a
band-aid on its unchanged core engine, although the author of that message
did not expand further on this.

so I figured I'd ask the experts what the truth is. Any help is much
appreciated.

If indeed it has changed, what sort of test can I run to prove this to my boss?

regards,
-randall sell

Randall Sell (randall@.bytewise.nospam.com.au) writes:
> Somewhere on these newsgroups I recall reading that SQL Server 6 and prior
> (when they were married with Sybase) used page locking and not row level
> locking. Hence you could be locking a lot more records then what you think
> when doing an UPDATE or INSERT SQL.
> Now I notice that SQL Server 7 and 2000 claim to use row level locking.
> (As you can see, I have been out of the SQL arena for some time). So
> what I'd like to know if this is all true? Or marketing mumbo-jumbo? Has
> Microsoft made changes at the core of their engine to lock rows? I know
> that other RDMSs like Interbase have a versioning engine so it was built
> from the ground up for concurrence. And I've read that MSs row level
> locking is a band-aid on its unchanged core engine, although the author
> of that message did not expand further on this.

That poster may have been thinking of SQL 6.5, which had a basically
unchanged engine from 4.x days, but where you could set a table option
to get "insert row locks", which was good for tables with heavy insert
frequency at a hot spot.

But that was very long ago. MS shipped SQL7 at the end of 1998, and SQL7
was an almost complete rewrite; very little of the original Sybase code
survived. There are still page locks in SQL Server, but I have to confess
that I don't know when they are used. Row locks and table locks are what
you usually see. (Table locks when there is no suitable index, or the
query affects the entire table anyway.)

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp

|||thanx for the info Erland
-rs

Erland Sommarskog wrote:

> Randall Sell (randall@.bytewise.nospam.com.au) writes:
>>Somewhere on these newsgroups I recall reading that SQL Server 6 and prior
>>(when they were married with Sybase) used page locking and not row level
>>locking. Hence you could be locking a lot more records then what you think
>>when doing an UPDATE or INSERT SQL.
>>
>>Now I notice that SQL Server 7 and 2000 claim to use row level locking.
>>(As you can see, I have been out of the SQL arena for some time). So
>>what I'd like to know if this is all true? Or marketing mumbo-jumbo? Has
>>Microsoft made changes at the core of their engine to lock rows? I know
>>that other RDMSs like Interbase have a versioning engine so it was built
>>from the ground up for concurrence. And I've read that MSs row level
>>locking is a band-aid on its unchanged core engine, although the author
>>of that message did not expand further on this.
>
> That poster may have been thinking of SQL 6.5, which had a bascially
> unchanged engine from 4.x days, but where you could set a table option
> to get "insert row locks" which was good for tables with heavy insert
> frequency at a hot spot.
> But that was very long ago. MS shipped SQL7 in the end of 1998, and SQL7
> was almost complete rewrite, and very little of the original Sybase code
> survived. There are still page locks in SQL Server, but I have to confess
> that I don't know when they are used. Row locks and table locks is what
> you usually see. (Table locks when there is no suitable index, or the
> query affects the entire table anyway.)
>
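
For anyone wanting to prove the row-level locking to themselves, lock granularity can be observed directly; a hedged T-SQL sketch (the table and key values are made up):

```sql
-- Session 1: update a single row and hold the transaction open
BEGIN TRAN;
UPDATE dbo.Orders SET Status = 'X' WHERE OrderID = 42;

-- Still in session 1: list the locks this connection holds.
-- KEY/RID resources are row locks, PAG are page locks, TAB are table locks.
EXEC sp_lock @@SPID;
-- On SQL Server 2005 and later, you can query sys.dm_tran_locks instead:
-- SELECT resource_type, request_mode FROM sys.dm_tran_locks
-- WHERE request_session_id = @@SPID;

ROLLBACK;
```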

Page totals in last page

Friends,
I am working on a report where I have to print all the page totals on the last page.
Could you please give us some sort of solution.
Regards
JC

You need three formulas.

Formula1 (Reset), with the code:

WhilePrintingRecords;
NumberVar x := 0;

Formula2 (Sum), with the code:

WhilePrintingRecords;
NumberVar x := x + {Database.Field};

Formula3 (Display), with the code:

WhilePrintingRecords;
NumberVar x;

Place Formula1 in the Page Header
Place Formula2 in the Details section
Place Formula3 in the Page Footer

Now suppress Formula1 and Formula2

page totals in crystal report

hello everybody,
i am a new user to Crystal Reports. i am using Crystal
Reports with VB. i have a report with some groups and
sub-groups. i want to print page-wise totals on each page.
please show me the way.
thanks
shrikant
shrikant_smart@.rediffmail.com

In Crystal Reports 8.5, click Insert, Field Object. This will bring up a list of field objects. Expand 'Special Fields' and you can see your options of 'Page Number', 'Total Page Count', 'Page N of M', etc. Just drag and drop onto your report and position where you want it.

|||I don't know if this is the proper way or not. Anyway, this will display page-wise totals.

Create Formula named tot1

code:

WhilePrintingRecords;
NumberVar total;
total := 0

formula 2 : tot2

WhilePrintingRecords;
NumberVar total;
total := total + {amount};

formula 3 : tot3

WhilePrintingRecords;
NumberVar total;

Use the same variable name.

Place tot1 in the Page Header section. This will reset the total value to 0 for each and every page.

Place tot2 in details section. This will add the values.

Format both as suppressed.

Place tot3 in the Page Footer section. This will display the page-wise total.

Page Totals in Crystal Report

I want to have page totals on a field in my report. Can anybody help me out?

There's a section called 'Special Fields' located in the same area where you would add a Field or a Formula to your report. The Special Fields contain fields such as 'Print Date', 'Print Time', 'Page N of M', 'Page Number', 'Total Page Count', etc. In Crystal Reports 8.5, this can be found under the Insert, Field Object menu item.

Page Termination Crystal 8.5

I am using Crystal Reports 8.5 and connecting to a SQL Server 7.0 database through an OLE DB connection. I am running a stored procedure that returns close to 8 million records. This part works fine. My problem is that during the process where the status bar states "Determining Page Count", the report will terminate prior to reaching the end of the report. For example, this morning the report terminated after 18+ hours on page 1755. This is a monthly report, and last month it had about 2000+ pages. The report has terminated anywhere from 1350 pages to 1760. There are no error messages. If you click on the button to go to the end of the report, it will go to page 1. The page count remains "Page 1 of 4,294,967,295" because it never completes pagination. Has anyone else experienced this problem?

I found the solution to the problem. There were simply too many calculations going on for Crystal to handle. The report was summing/grouping over 8 million records. I was able to move this to the stored procedure and eliminate the summing in Crystal. The report runs much faster!!

Page Subtotals (Not Cumulative)

Hello,

I want to display subtotals for a column only for that page. Like;

Index Value

--

1 4

2 5

Subtotal 9

-

3 1

4 2

Subtotal 3

Total 12

RunningValue gives cumulative totals. I need subtotals for each visible page only. Is there a way to do it?

Constraints:

I'm using a table. And I shouldn't use page breaks on my report.

Thanks in advance

Insert/add the Page Footer to the report. Now insert another text box; the expression for it will be

=Sum(ReportItems!textbox6.Value)

TextBox6 is the field which contains the value. More information about this can be found at

http://technet.microsoft.com/en-us/library/ms159677.aspx

|||Thank you for your reply, Techquest.

It is solution for what I asked. But I need something different. I shouldn't use page footer or header either. I'm limited with the page body.

Page splits / fragmentation

Hi,
I am inserting data into a table in what I believe is the correct way. The
table is clustered on the date, and I am inserting data that 90% of the time
comes in the correct order, i.e. today I am inserting 2006/04/27, but 10% may be
from 2005 or 2004. I think I need a clustered index on the date because I am
comparing series of dates together.
From the below...
DBCC SHOWCONTIG scanning 'staging_daily_edf_history' table...
Table: 'staging_daily_edf_history' (199671759); index ID: 1, database ID: 5
TABLE level scan performed.
- Pages Scanned........................: 96488
- Extents Scanned.......................: 12157
- Extent Switches.......................: 12707
- Avg. Pages per Extent..................: 7.9
- Scan Density [Best Count:Actual Count]......: 94.91% [12061:12708]
- Logical Scan Fragmentation ..............: 1.87%
- Extent Scan Fragmentation ...............: 1.49%
- Avg. Bytes Free per Page................: 134.5
- Avg. Page Density (full)................: 98.34%
DBCC execution completed. If DBCC printed error messages, contact your
system administrator.
I believe that my table is not really fragmented. Does anyone else think the
opposite? It worries me, though, that the insertions become slower and slower as
the table increases in size, maybe because of page splits. Any ideas what's the
best strategy? Should I have my table non-clustered instead? Thanks.
Panos.

The results show that it is not very fragmented at all, and I doubt that page
splits on the clustered index are the issue here. You can also try setting
your fill factor to 90% to remove most of any page splits you may get. How
many non-clustered indexes do you have on this table? Can you give more
details on how you are inserting and how you determine they get slower?
Andrew J. Kelly SQL MVP
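
On SQL Server 2000, the fill-factor change Andrew suggests can be sketched with DBCC DBREINDEX; this is illustrative only, and the empty index name asks for all indexes on the table to be rebuilt:

```sql
-- Rebuild all indexes on the table with a 90% fill factor, leaving about
-- 10% free space on each leaf page to absorb the out-of-order inserts.
DBCC DBREINDEX ('staging_daily_edf_history', '', 90)
GO
-- Re-check fragmentation and page density afterwards.
DBCC SHOWCONTIG ('staging_daily_edf_history')
GO
```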
"Panos Stavroulis." <PanosStavroulis@.discussions.microsoft.com> wrote in
message news:18D591DA-BCA2-44D8-984A-DE89A4CD5374@.microsoft.com...
> Hi,
> I am inserting data into a table in what I believe is the correct way. The
> table is clustered on the date and I am inserting data that 90% of the
> time
> comes in the correct order, ie today I am inserting 2006/04/27 but 10% may
> be
> from 2005 or 2004. I think I need a clustered index on the date cause I am
> comparing series of dates together.
> From the below...
> DBCC SHOWCONTIG scanning 'staging_daily_edf_history' table...
> Table: 'staging_daily_edf_history' (199671759); index ID: 1, database ID:
> 5
> TABLE level scan performed.
> - Pages Scanned........................: 96488
> - Extents Scanned.......................: 12157
> - Extent Switches.......................: 12707
> - Avg. Pages per Extent..................: 7.9
> - Scan Density [Best Count:Actual Count]......: 94.91% [12061:12708]
> - Logical Scan Fragmentation ..............: 1.87%
> - Extent Scan Fragmentation ...............: 1.49%
> - Avg. Bytes Free per Page................: 134.5
> - Avg. Page Density (full)................: 98.34%
> DBCC execution completed. If DBCC printed error messages, contact your
> system administrator.
> I believe that my table is not really fragmented. Does anyone else think
> the
> opposite. It worries me though that the insertion become slower and slower
> as
> the table icreases in size maybe because of page splits. Any ideas what's
> the
> best strategy? should I have my table non clustered instead? Thanks.
> Panos.|||Hi,
Well, there are 2 more indexes. I guess that must be the problem: updating the
non-clustered indexes. The way I can see it's slow is because I have a record
of the time the file was processed (insertion via iiss) and can see that it
just takes longer and longer. I guess this is what I should expect when I
have 8m rows and 3 indexes.
Cheers,
Panos.
"Andrew J. Kelly" wrote:

> The results show that it is not very fragmented at all and I doubt that page
> splits on the clustered index are the issue here. You can also try setting
> your fill factor to 90% to remove most of any page splits you may get. How
> many non-clustered indexes on this table do you have? Can you give more
> details on how you are inserting and how you determine they get slower?
> --
> Andrew J. Kelly SQL MVP
>
> "Panos Stavroulis." <PanosStavroulis@.discussions.microsoft.com> wrote in
> message news:18D591DA-BCA2-44D8-984A-DE89A4CD5374@.microsoft.com...
>
>|||8M rows and 3 indexes should not slow down the inserts to a degree that
should be easily detectable. Maybe it is the way in which you are doing the
inserts. Can you post the actual code used for the inserts?
Andrew J. Kelly SQL MVP
"Panos Stavroulis." <PanosStavroulis@.discussions.microsoft.com> wrote in
message news:1BA642CC-5545-4B7F-B44A-056CEEB17004@.microsoft.com...
> Hi,
> Well there are 2 more indexes. I guess that must be the problem updating
> the
> non clustered indexes. The way I can see it's slow is because I have a
> record
> on the time the file was processed (insertion via iiss) and can see that
> it
> just takes longer and longer. I guess this is what I should expect when I
> have 8m rows and 3 indexes.
> Cheers,
> Panos.
>
> "Andrew J. Kelly" wrote:
>|||it is a straight forward insert into table, no conditions or anything, can't
be simpler. It doesn't slow down too much, but you can still notice it. Well, the
actual stats are: avg insertion 10 secs for the whole process at the beginning,
and then it goes to avg 26 secs at the end.
Thanks.
"Andrew J. Kelly" wrote:

> 8M rows and 3 indexes should not slow down the inserts to a degree that
> should be easily detectable. Maybe it is the way in which you are doing the
> inserts. Can you post the actual code used for the inserts?
> --
> Andrew J. Kelly SQL MVP
>
> "Panos Stavroulis." <PanosStavroulis@.discussions.microsoft.com> wrote in
> message news:1BA642CC-5545-4B7F-B44A-056CEEB17004@.microsoft.com...
>
>|||10 Seconds for an insert? So this isn't a simple insert then, there must be
more to it. Something you might not think is important may in fact be, so
the more details you provide, the better chance you will get a proper answer.
Are you sure there are no triggers on the table?
Andrew J. Kelly SQL MVP
"Panos Stavroulis." <PanosStavroulis@.discussions.microsoft.com> wrote in
message news:4F99553F-EEEA-4D19-88E0-CC34B121176B@.microsoft.com...
> it is a straight forward insert into table, no conditions or anything,
> can't
> be simpler. It doesn't slow down too much but you can still notice. Well
> the
> actual stats is avg insertion 10secs for the whole process at the
> beginning
> and then it goes to avg 26 secs in the end.
> Thanks.
> "Andrew J. Kelly" wrote:
>|||I am talking about 20-30,000 rows insertions. Although now it seems to take
longer, more than 2 min. I think it's probably fine. There are no triggers, by
the way, on the table. You think that a few odd rows which don't come in order
will not cause a massive problem? I'll also change the fill factor and see if
that's better. Appreciate your help.
Panos.
"Andrew J. Kelly" wrote:

> 10 Seconds for an insert? So this isn't a simple insert then, there must be
> more to it. Something you might not think is important may in fact be so
> the more details you provide the better chance you will get a proper answer.
> Are you sure there are no triggers on the table?
> Are you sure there are no triggers on the table?
> --
> Andrew J. Kelly SQL MVP
>
> "Panos Stavroulis." <PanosStavroulis@.discussions.microsoft.com> wrote in
> message news:4F99553F-EEEA-4D19-88E0-CC34B121176B@.microsoft.com...
>
>|||You still are not providing any real details as to exactly how you do these
inserts and that makes it very hard to give any solid advice. In my opinion
that is still too long for just 20 or 30K rows. Have you checked for
blocking? What about your disk queues? Is your log file on a separate Raid
1 from any of the data files?
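
A quick sketch of the checks Andrew mentions (standard SQL Server 2000 commands; run them while a slow insert batch is active):

```sql
-- A non-blank BlkBy column in the output means that spid is blocked.
EXEC sp_who2
GO
-- Page splits show up over time as falling scan density / page density.
DBCC SHOWCONTIG ('staging_daily_edf_history')
GO
-- Disk queues themselves are watched from System Monitor
-- (PhysicalDisk: Avg. Disk Queue Length), not from T-SQL.
```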
Andrew J. Kelly SQL MVP
"Panos Stavroulis." <PanosStavroulis@.discussions.microsoft.com> wrote in
message news:CE4272A5-1F5B-4CB4-89A0-38CA5BDCC3F0@.microsoft.com...
>I am talking about 20-30,000 rows insertions. Although now it seems to take
> longer, more than 2 min. I think it's probably fine. There are no triggers
> by
> the way on the table. You think that few odd rows which don't come in
> order
> will not cause a massive problem. Ill also change the fill factor and see
> if
> that's better. Appreciate your help.
> Panos.
> "Andrew J. Kelly" wrote:
>

Page splits/ Dirty pages/ Checkpoint

We have a few tables in our application. Basically they are used as temporary
tables. Each day they start with no records (empty), and as the day goes on
they are filled with data; at the end of the day they are
truncated, ready for the next day. We have a high transaction rate,
about 4000/sec. When I watch the Page Splits/sec counter, it shows
about 130-160 per second. This is driving the checkpoint to take a longer time.
How can I reduce these high page splits?
Another question: we have a char(15) column in those tables, and that
column is indexed. It is an ID column but is not unique. Each record has a
unique numeric ID (generated by our app), but we need to search on the CHAR ID
column, so we indexed it. This index is making a lot of dirty pages.
This is also a contributing reason for the checkpoint taking longer. How can
I make changes to the index so that it does not make so many pages
dirty? I tried changing it to VARCHAR and there is not much difference.
The checkpoint takes about 10-15 seconds and repeats every 60 seconds.
Your suggestion is greatly appreciated.
Thanks.

Just give a try with the following info:
1. Check the recovery interval option on the system.
2. Make sure that the temporary tables have fixed-size columns by using char
rather than varchar; that way you can reduce the page splits.
HTH
Regards
Rajesh Peddireddy.
"Srini" wrote:

> We have few tables in our application. Basically they are used as temporary
> tables. Each day they start with no records in it(empty) and as the day goes
> they are filled with the data and at the end of the day they will be
> truncated to get ready for the next day. We have a high transaction rate
> about 4000/sec. When I noticed the page splits /sec counter it is showing
> about 130-160 per second. This is driving the checkpoint to take longer time.
> How Can I reduce this high page splits.
> Another question is, we have a char(15) column in those tables and that
> column is indexed. It is an Id column but is not unique. Each record has an
> unique number Id(generated by our app). But we need to seacrh on the CHAR Id
> column so indexed on it. This index is creating/making lot of dirty pages.
> This also is a contributing reason for the checkpoint to take longer. How can
> I make changes to the index so that it would not create/make many pages
> dirty? I tried to change it to VARCHAR and there is not much difference.
> The check point is taking about 10-15 seconds and it repeats every 60 seconds.
> Your suggestion is greatly appreciated.
> Thanks.
> Thanks.|||It would really help to show the entire DDL for the table including the
indexes. It sounds like your disk subsystem isn't up to the task. If you
are going to have that many transactions, you need a fast disk I/O subsystem,
especially for the transaction logs. Is the log file on its own RAID 1 or
RAID 10, and is the data on a RAID 10?
Andrew J. Kelly SQL MVP
"Srini" <Srini@.discussions.microsoft.com> wrote in message
news:1F6FF490-C19A-4DFE-BDBC-3AD397CC9D5A@.microsoft.com...
> We have few tables in our application. Basically they are used as
> temporary
> tables. Each day they start with no records in it(empty) and as the day
> goes
> they are filled with the data and at the end of the day they will be
> truncated to get ready for the next day. We have a high transaction rate
> about 4000/sec. When I noticed the page splits /sec counter it is showing
> about 130-160 per second. This is driving the checkpoint to take longer
> time.
> How Can I reduce this high page splits.
> Another question is, we have a char(15) column in those tables and that
> column is indexed. It is an Id column but is not unique. Each record has
> an
> unique number Id(generated by our app). But we need to seacrh on the CHAR
> Id
> column so indexed on it. This index is creating/making lot of dirty pages.
> This also is a contributing reason for the checkpoint to take longer. How
> can
> I make changes to the index so that it would not create/make many pages
> dirty? I tried to change it to VARCHAR and there is not much difference.
> The check point is taking about 10-15 seconds and it repeats every 60
> seconds.
> Your suggestion is greatly appreciated.
> Thanks.
> Thanks.|||We have SAN disk system. Probably the HW is good enough, just trying to see
if I can rearrange some things on the database front to make some improvement.
Coming to the DDL: the tables are not temporary tables; the data is temporary
in the sense that it is kept only for the current day, and at the end of
the day the tables are truncated. Each table has about 20 columns. Some are decimal
fields, some are datetime columns, and others are integer and char types, which
include many char(1)'s and two/three columns of char(15 to 20). One of the
char(15) columns is indexed; it is a kind of ID but is not unique. There are no
relationships between these tables and other tables. No triggers, no views, nor
anything as such. The data comes into the system and gets inserted into these
standalone tables using some stored procedures. And these tables are queried
using some other stored procedures. The major problem, as it looks to me, is
the page splits it is generating and the dirty pages it
is generating (about 10000 dirty pages). Is there anything that can be done
on the table or anything else to make things perform better?
Thanks in advance for your suggestion.
"Andrew J. Kelly" wrote:

> It would really help to show the entire DDL for the table including the
> indexes. It sounds like your disk subsystem isn't up to the task. If you
> are going to have that many transactions you need a fast disk I/O subsystem,
> especially for the transaction logs. Is the log file on it's own RAID 1 or
> RAID 10 and is the data on a RAID 10?
> RAID 10 and is the data on a RAID 10?
> --
> Andrew J. Kelly SQL MVP
>
> "Srini" <Srini@.discussions.microsoft.com> wrote in message
> news:1F6FF490-C19A-4DFE-BDBC-3AD397CC9D5A@.microsoft.com...
>
>|||Srini wrote:
> We have SAN disk system. Probably the HW is good enough, just trying
> to see if I can rearrange some things on the database front to make
> some improvement.
> Coming to the DDL, the tables are not temporary tables the data is
> temporary in the sense that the data is kept only for the current day
> and at the end of the day they are truncated. Each table has about 20
> columns. some are decimal fields some are datatime columns and others
> are integer and char type which includes many char(1)'s and two/three
> columns char(15 to 20)). One of the char(15) is indexed which is a
> kind of Id but is not unique there are no relationships between these
> tables and other tables. No triggers no views and anything as such.
> The data comes into to the system gets inserted to these standalone
> tables using some stored procedures. And these tables are queried
> using some other stored procedures. The major problem to me looks
> like is because of the page splits that it is generating and the
> dirty pages that it is generating(about 10000 dirty pages). Is there
> any thing that can be done on the table or anything else to make
> things perform better?
>
The reason it would help to see the DDL is because page splits are a
result of a clustered index and inserts that are not in clustered index
order. You can eliminate the page splitting by changing the clustered
index to non-clustered, or by inserting the data in clustered index key
order (if that's possible).
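
The two alternatives David describes can be sketched like this; the table and index names here are made up purely for illustration:

```sql
-- Option 1: keep the random key, but make the PK nonclustered so inserts
-- go to a heap and no clustered-index pages have to split.
CREATE TABLE dbo.Sample (
    RandomKey uniqueidentifier NOT NULL
        CONSTRAINT PK_Sample PRIMARY KEY NONCLUSTERED,
    LoadDate datetime NOT NULL
)
GO
-- Option 2: cluster on an ever-increasing key instead, so new rows always
-- append to the last page and splits are limited to end-of-table growth.
CREATE CLUSTERED INDEX IX_Sample_LoadDate ON dbo.Sample (LoadDate)
GO
```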
David Gugick
Quest Software
www.imceda.com
www.quest.com|||Just because it is a SAN does not mean it is sufficient or configured
properly for your application. I run across more SAN-related issues simply
because people tend to ignore the configuration, thinking it can handle
whatever they need. The DDL was to see what we are dealing with and leave
nothing to the imagination. It only takes a second to script the table and
indexes, but it goes a long way towards letting us see what is actually
there, not just what you may think is relevant. This is especially true for
the indexes.
Andrew J. Kelly SQL MVP
"Srini" <Srini@.discussions.microsoft.com> wrote in message
news:623BAACD-7754-4BF4-B9ED-5AA2EDC16F8B@.microsoft.com...
> We have SAN disk system. Probably the HW is good enough, just trying to
> see
> if I can rearrange some things on the database front to make some
> improvement.
> Coming to the DDL, the tables are not temporary tables the data is
> temporary
> in the sense that the data is kept only for the current day and at the end
> of
> the day they are truncated. Each table has about 20 columns. some are
> decimal
> fields some are datatime columns and others are integer and char type
> which
> includes many char(1)'s and two/three columns char(15 to 20)). One of the
> char(15) is indexed which is a kind of Id but is not unique there are no
> relationships between these tables and other tables. No triggers no views
> and
> anything as such. The data comes into to the system gets inserted to these
> standalone tables using some stored procedures. And these tables are
> queried
> using some other stored procedures. The major problem to me looks like is
> because of the page splits that it is generating and the dirty pages that
> it
> is generating(about 10000 dirty pages). Is there any thing that can be
> done
> on the table or anything else to make things perform better?
> Thanks in advance for your suggestion.
> "Andrew J. Kelly" wrote:
>|||David is correct but I just want to caution that changing the clustered
index to a nonclustered will not remove page splits. It may reduce them but
a nonclustered index is implemented just like a clustered index and can page
split as well.
Andrew J. Kelly SQL MVP
"David Gugick" <david.gugick-nospam@.quest.com> wrote in message
news:eC3cUzslFHA.3316@.TK2MSFTNGP14.phx.gbl...
> Srini wrote:
> The reason it would help to see the DDL is because page splits are a
> result of a clustered index and inserts that are not in clustered index
> order. You can eliminate the page splitting by chaning the clustered index
> to non-clustered or inserting the data in clustered index key order (if
> that's possible).
> --
> David Gugick
> Quest Software
> www.imceda.com
> www.quest.com|||If we try to insert the data in clustered index order, which sounds to me
like a monotonically increasing clustered index, wouldn't it create hot
spots on the disk, thereby reducing the throughput? Currently we can't
control the data insert order. But I think I can change it so that I can
create the clustered index on the serial number, which is sequential and
generated by me, so the data insertions will be in clustered index order.
How about the dirty pages created by the other non-clustered indexes? When I
run DBCC MEMUSAGE it shows a lot of dirty pages on the pages related to
the non-clustered indexes. How can I reduce the dirty pages on those? I
understand FILLFACTOR will not help here, as that option is useful when there
is some data in the table and we are creating indexes on that table.
Thanks.
"David Gugick" wrote:

> Srini wrote:
> The reason it would help to see the DDL is because page splits are a
> result of a clustered index and inserts that are not in clustered index
> order. You can eliminate the page splitting by chaning the clustered
> index to non-clustered or inserting the data in clustered index key
> order (if that's possible).
> --
> David Gugick
> Quest Software
> www.imceda.com
> www.quest.com
Is there any limit on the number of replies that one can post in a time
frame? This thing is not letting me post mine. Trying to POST again...
I agree the SAN may have some issues; we are trying to check on that. But just
looking at the SQL Server front, 10000 dirty pages per checkpoint, using the
default recovery interval (which is 60 seconds), it looks like something can be
done there to reduce that huge number of dirty pages.
The DDL looks like this:
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[DAILY_DATA1](
[Event_id] [char] (18) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
-- This is the unique identifier for the data; currently the clustered index
-- is created on this field
[Category_cd] [char] (1) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
[Type_cd] [char] (1) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
[Session_id] [char] (1) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
[Serial_nb] [int] NOT NULL , -- App-generated serial number, which I can use
-- to create the clustered index
[Order_ts] [datetime] NOT NULL ,
[Data_Id_tx] [char] (14) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
-- This is the column that we have the index on
.
.
.
[Description_tx] [char] (10) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
.
.
.
[Receipt_ts] [datetime] NOT NULL
CONSTRAINT [PK_DD_1_Evt_ID] PRIMARY KEY CLUSTERED
(
[Event_id]
) ON [PRIMARY]
) ON [PRIMARY]
GO
CREATE INDEX [IDX_DD_1_Data_Id_tx] ON [dbo].[Daily_Data1]([Data_Id_tx])
WITH FILLFACTOR = 90 ON [PRIMARY]
GO
Thanks.
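
If Srini takes the route discussed in this thread (clustering on the sequential Serial_nb instead of Event_id), a sketch against the DDL above could look like this; it assumes Serial_nb is unique for the day's rows, which the posts imply but do not state:

```sql
-- Recreate the PK nonclustered, then cluster on the app-generated,
-- ever-increasing serial number so inserts append rather than split pages.
ALTER TABLE dbo.DAILY_DATA1 DROP CONSTRAINT PK_DD_1_Evt_ID
GO
ALTER TABLE dbo.DAILY_DATA1 ADD CONSTRAINT PK_DD_1_Evt_ID
    PRIMARY KEY NONCLUSTERED (Event_id)
GO
CREATE UNIQUE CLUSTERED INDEX IDX_DD_1_Serial_nb
    ON dbo.DAILY_DATA1 (Serial_nb)
GO
```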
"Andrew J. Kelly" wrote:

> Just because it is a SAN does not mean it is sufficient or configured
> properly for your application. I run across more issues SAN related simply
> because people tend to ignore the configuration in thinking it can handle
> what ever they need. The DDL was to see what we are dealing with and leave
> nothing to imagination. It only takes a second to script the table and
> indexes but it goes a long way towards letting us see what is actually
> there. Not just what you may think is relevant. This is especially true for
> the indexes.
> the indexes.
> --
> Andrew J. Kelly SQL MVP
>
> "Srini" <Srini@.discussions.microsoft.com> wrote in message
> news:623BAACD-7754-4BF4-B9ED-5AA2EDC16F8B@.microsoft.com...
>
I replied to this in the morning but it did not get posted...
The interval option is set to the default - not changed. The default is 60 seconds. If I
increase it, the checkpoint takes way too long. If I decrease it,
CHECKPOINT occurs too frequently. Both are problematic.
The tables are not temporary but the data is. Data gets inserted as part of
the daily operations and will be truncated in the evening. All the columns
are set to fixed-length CHAR fields (to their maximum possible lengths). There
are only two/three columns with CHAR(15), CHAR(18) and CHAR(10); all other
columns are integer, decimal, datetime, or CHAR(1) types.
I need to find a way to reduce the number of dirty pages and the number of
page splits. How can I do that?
Thanks.
"Rajesh" wrote:
> Just give a try with the following info
> 1. Check the recovery interval option on the system.
> 2. make sure that the temporary tables have fixed size by using char
> rather than
> varchar therefore you can reduce the page spilts.
> HTH
> Regards
> Rajesh Peddireddy.
> "Srini" wrote:
>

Page splits on a clustered identity column?

Hi all,
we can read in many articles that page splits don't occur with a cluster on
a monotonically increasing column (like an identity column).
With this script, we don't see that: with a clustered identity column,
the performance monitor shows page splits ("SQLServer:AccessMethods";
"Page Splits/sec"), and their level stays quite stable even if the index is
rebuilt with a lower fillfactor for new insertions.
With a clustered varchar column, the page splits are higher at the beginning
but decrease with a lower fillfactor (page splits are avoided when fillfactor =
40): normal behavior.
How to explain the page splits with the clustered identity column?
How to measure the number of page splits (and not a ratio per second)
occurring during an execution?
Thanks for reading my poor English and your replies,
R.Fauchatre
PS : tests on SQL Server 2000 SP4 Developer Edition (one user only)
====================================
Script
====================================
-- =====================================================================
-- database creation
-- =====================================================================
USE master
IF (SELECT name FROM sysdatabases WHERE name='TEST_INDEX' ) IS NULL CREATE
DATABASE TEST_INDEX
go
USE TEST_INDEX
-- ========================================================
-- fill procedures
-- ========================================================
-- ----
-- string generation
-- ----
print 'procedure creation'
if OBJECT_ID('generate_string') is not null DROP PROC generate_string
go
CREATE PROCEDURE generate_string
@string varchar(20) OUTPUT
AS
BEGIN
DECLARE @limit int
DECLARE @curr_iteration int
SELECT @limit = round((rand() * 20) + 3, 0)
SELECT @curr_iteration = 0
SELECT @string = ''
WHILE @curr_iteration < @limit
BEGIN
SELECT @string = @string + char(round((rand() * 25) + 1, 0) + 64)
SELECT @curr_iteration = @curr_iteration + 1
END
IF SUBSTRING(@string,1,1) = ' '
BEGIN
SELECT @string = SUBSTRING(@string,2,16)
END
END
go
-- ----
-- filling the table (10000 rows)
-- ----
if OBJECT_ID('FillTable') is not null DROP PROC FillTable
go
CREATE PROC FillTable (@Chaine bit = 0)
AS
BEGIN
SET NOCOUNT ON
DECLARE @string varchar(20)
DECLARE @Compteur int
SET @compteur = 0
WHILE @compteur < 10000
BEGIN
SET @compteur = @compteur + 1
IF @chaine = 1
BEGIN
EXEC generate_string @string output
SET @string = @string + '_' + CAST(@compteur as varchar(5))
INSERT dbo.DemoCluster (col2) values (@string)
END
ELSE
BEGIN
INSERT dbo.DemoCluster DEFAULT VALUES
END
END
END
go
-- ========================================================
-- Test1 : clustering on an identity column
-- ========================================================
print 'table creation : test with clustered identity column'
IF OBJECTPROPERTY(object_id('dbo.DemoCluster'), 'IsUserTable') = 1
DROP TABLE dbo.DemoCluster
go
CREATE TABLE dbo.DemoCluster
(
col1 int identity CONSTRAINT PK_DemoClusterCol1 PRIMARY KEY ,
col2 varchar(20) CONSTRAINT DemoClusterCol2Default
DEFAULT current_timestamp,
col3 datetime CONSTRAINT DemoClusterCol3Default
DEFAULT getdate(),
col4 char(30) CONSTRAINT DemoClusterCol4Default
DEFAULT suser_name(),
col5 char(30) CONSTRAINT DemoClusterCol5Default
DEFAULT user_name(),
col6 char(100) CONSTRAINT DemoClusterCol6Default
DEFAULT 'valeur longue longue longue longue longue longue longue
longue longue longue longue longue longue ',
col7 varchar(200) CONSTRAINT DemoClusterCol7Default
DEFAULT 'valeur compacte'
)
go
EXEC sp_helpindex 'dbo.DemoCluster'
dbcc dropcleanbuffers
go
-- first filling
print 'first filling'
EXEC FillTable
EXEC sp_spaceused 'dbo.DemoCluster', true
go
-- index recreated with FillFactor = 100 and PAD_INDEX
print 'fillfactor = 100'
CREATE UNIQUE CLUSTERED Index PK_DemoCLusterCol1 ON DemoCLuster(Col1) WITH
FILLFACTOR = 100,PAD_INDEX, DROP_EXISTING
select indid, OrigFillFactor, rowcnt, used from sysindexes WHERE name =
'PK_DemoClustercol1'
go
EXEC FillTable
EXEC sp_spaceused 'dbo.DemoCluster', true
go
-- index recreated with FillFactor = 80 and PAD_INDEX
print 'fillfactor = 80'
CREATE UNIQUE CLUSTERED Index PK_DemoCLusterCol1 ON DemoCLuster(Col1) WITH
FILLFACTOR = 80,PAD_INDEX, DROP_EXISTING
select indid, OrigFillFactor, rowcnt, used from sysindexes WHERE name =
'PK_DemoClustercol1'
go
EXEC FillTable
EXEC sp_spaceused 'dbo.DemoCluster', true
go
-- index recreated with FillFactor = 60 and PAD_INDEX
print 'fillfactor = 60'
CREATE UNIQUE CLUSTERED Index PK_DemoCLusterCol1 ON DemoCLuster(Col1) WITH
FILLFACTOR = 60,PAD_INDEX, DROP_EXISTING
select indid, OrigFillFactor, rowcnt, used from sysindexes WHERE name =
'PK_DemoClustercol1'
go
EXEC FillTable
EXEC sp_spaceused 'dbo.DemoCluster', true
go
-- index recreated with FillFactor = 40 and PAD_INDEX
print 'fillfactor = 40'
CREATE UNIQUE CLUSTERED Index PK_DemoCLusterCol1 ON DemoCLuster(Col1) WITH
FILLFACTOR = 40,PAD_INDEX, DROP_EXISTING
select indid, OrigFillFactor, rowcnt, used from sysindexes WHERE name =
'PK_DemoClustercol1'
go
EXEC FillTable
EXEC sp_spaceused 'dbo.DemoCluster', true
go
-- index recreated with FillFactor = 20 and PAD_INDEX
print 'fillfactor = 20'
CREATE UNIQUE CLUSTERED Index PK_DemoCLusterCol1 ON DemoCLuster(Col1) WITH
FILLFACTOR = 20,PAD_INDEX, DROP_EXISTING
select indid, OrigFillFactor, rowcnt, used from sysindexes WHERE name =
'PK_DemoClustercol1'
go
EXEC FillTable
EXEC sp_spaceused 'dbo.DemoCluster', true
go
-- ========================================================
-- Test2 : clustering on a varchar column
-- ========================================================
print 'table creation : test with clustered varchar column'
IF OBJECTPROPERTY(object_id('dbo.DemoCluster'), 'IsUserTable') = 1
DROP TABLE dbo.DemoCluster
go
CREATE TABLE dbo.DemoCluster
(
col1 int identity CONSTRAINT PK_DemoClusterCol1 PRIMARY KEY NONCLUSTERED
,
col2 varchar(20) CONSTRAINT UK_DemoClusterCol2 UNIQUE CLUSTERED,
col3 datetime CONSTRAINT DemoClusterCol3Default
DEFAULT getdate(),
col4 char(30) CONSTRAINT DemoClusterCol4Default
DEFAULT suser_name(),
col5 char(30) CONSTRAINT DemoClusterCol5Default
DEFAULT user_name(),
col6 char(100) CONSTRAINT DemoClusterCol6Default
DEFAULT 'valeur longue longue longue longue longue longue longue
longue longue longue longue longue longue ',
col7 varchar(200) CONSTRAINT DemoClusterCol7Default
DEFAULT 'valeur compacte'
)
go
EXEC sp_helpindex 'dbo.DemoCluster'
dbcc dropcleanbuffers
go
-- first filling
print 'first filling'
EXEC FillTable 1
EXEC sp_spaceused 'dbo.DemoCluster', true
go
-- index recreated with FillFactor = 100 and PAD_INDEX
print 'fillfactor = 100'
CREATE UNIQUE CLUSTERED Index UK_DemoClusterCol2 ON DemoCLuster(Col2) WITH
FILLFACTOR = 100,PAD_INDEX, DROP_EXISTING
select indid, OrigFillFactor, rowcnt, used from sysindexes WHERE name =
'UK_DemoClusterCol2'
go
EXEC FillTable 1
EXEC sp_spaceused 'dbo.DemoCluster', true
go
-- index recreated with FillFactor = 80 and PAD_INDEX
print 'fillfactor = 80'
CREATE UNIQUE CLUSTERED Index UK_DemoClusterCol2 ON DemoCLuster(Col2) WITH
FILLFACTOR = 80,PAD_INDEX, DROP_EXISTING
select indid, OrigFillFactor, rowcnt, used from sysindexes WHERE name =
'UK_DemoClusterCol2'
go
EXEC FillTable 1
EXEC sp_spaceused 'dbo.DemoCluster', true
go
-- index recreated with FillFactor = 60 and PAD_INDEX
print 'fillfactor = 60'
CREATE UNIQUE CLUSTERED Index UK_DemoClusterCol2 ON DemoCLuster(Col2) WITH
FILLFACTOR = 60,PAD_INDEX, DROP_EXISTING
select indid, OrigFillFactor, rowcnt, used from sysindexes WHERE name =
'UK_DemoClusterCol2'
go
EXEC FillTable 1
EXEC sp_spaceused 'dbo.DemoCluster', true
go
-- index recreated with FillFactor = 40 and PAD_INDEX
print 'fillfactor = 40'
CREATE UNIQUE CLUSTERED Index UK_DemoClusterCol2 ON DemoCLuster(Col2) WITH
FILLFACTOR = 40,PAD_INDEX, DROP_EXISTING
select indid, OrigFillFactor, rowcnt, used from sysindexes WHERE name =
'UK_DemoClusterCol2'
go
EXEC FillTable 1
EXEC sp_spaceused 'dbo.DemoCluster', true
go
-- index recreated with FillFactor = 20 and PAD_INDEX
print 'fillfactor = 20'
CREATE UNIQUE CLUSTERED Index UK_DemoClusterCol2 ON DemoCLuster(Col2) WITH
FILLFACTOR = 20,PAD_INDEX, DROP_EXISTING
select indid, OrigFillFactor, rowcnt, used from sysindexes WHERE name =
'UK_DemoClusterCol2'
go
EXEC FillTable 1
EXEC sp_spaceused 'dbo.DemoCluster', true
go
-- =====================================================================
-- drop the database
-- =====================================================================
USE master
IF (SELECT name FROM sysdatabases WHERE name='TEST_INDEX' ) IS NOT NULL DROP
DATABASE TEST_INDEX
go

> how to explain the page splits with the clustered identity column?
With an increasing column value based on IDENTITY or GETDATE(), a clustered
index page split occurs during inserts only when the last page in the table
becomes full. The number of page splits during each insert test is
approximately equal to the number of new pages.
The number of new pages is constant because FILLFACTOR applies only when the
index is created or rebuilt; SQL Server does not maintain the specified
percentage afterward. In this situation the FILLFACTOR simply wastes space,
unless you later increase row lengths with an UPDATE.
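
For example (an untested sketch, SQL Server 2000 syntax, not from the
original script; it assumes the DemoCluster table and FillTable procedure
defined in the script below), you can watch the "Avg. Page Density" that
DBCC SHOWCONTIG reports:

-- the rebuild leaves every leaf page about 80% full
CREATE UNIQUE CLUSTERED INDEX PK_DemoClusterCol1 ON DemoCluster(Col1)
WITH FILLFACTOR = 80, PAD_INDEX, DROP_EXISTING
DBCC SHOWCONTIG ('DemoCluster')   -- Avg. Page Density near 80%
go
EXEC FillTable                    -- append rows with increasing identity values
DBCC SHOWCONTIG ('DemoCluster')   -- pages added by the inserts are packed full
go

Nothing ever re-applies the 80% target to existing pages, so for an
append-only table the space reserved by the fill factor is simply lost.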

> how to measure the number of page splits (and not a ratio per second)
> occurring during an execution ?
This is the difference between the number of pages before/after each test.
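
For example (an untested sketch, SQL Server 2000 syntax; dpages in
sysindexes is the data-page count of the clustered index, and FillTable is
the procedure from the script below):

DECLARE @PagesBefore int, @PagesAfter int
SELECT @PagesBefore = dpages FROM sysindexes
WHERE id = OBJECT_ID('dbo.DemoCluster') AND indid IN (0, 1)
EXEC FillTable 1                 -- one insert test (10000 rows)
SELECT @PagesAfter = dpages FROM sysindexes
WHERE id = OBJECT_ID('dbo.DemoCluster') AND indid IN (0, 1)
SELECT @PagesAfter - @PagesBefore AS NewPages

Note this counts all new pages, whether they came from a split in the
middle of the index or from filling the last page, so it is an upper bound
on "real" page splits.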
Hope this helps.
Dan Guzman
SQL Server MVP
"r.fauchatre" <rfauchatre@.discussions.microsoft.com> wrote in message
news:D690ADCC-9D6E-4570-BB8B-E6948F2E8FCA@.microsoft.com...
> Hi all,
> we can read in many articles that page splits don't occur with a clustered
> index on a monotonically increasing column (like an identity column).
> With this script, we don't see that: with a clustered identity column,
> the performance monitor shows page splits ("SQLServer:Access Methods";
> "Page Splits/sec"), and their level stays quite stable even if the index
> is rebuilt with a lower fillfactor before new insertions.
> With a clustered varchar column, the number of page splits is higher at
> the beginning but decreases with a lower fillfactor (page splits are
> avoided when fillfactor = 40): the expected behavior.
> how to explain the page splits with the clustered identity column ?
> how to measure the number of page splits (and not a ratio per second)
> occurring during an execution ?
> thanks for reading my poor English, and for your replies,
> R.Fauchatre
> PS : tests on SQL Server 2000 SP4 Developer Edition (single user only)
> ====================================
> Script
> ====================================
> -- ===========================================================================
> -- database creation
> -- ===========================================================================
> USE master
> IF (SELECT name FROM sysdatabases WHERE name='TEST_INDEX' ) IS NULL CREATE
> DATABASE TEST_INDEX
> go
> USE TEST_INDEX
> -- ========================================================
> -- fill procedures
> -- ========================================================
> -- ----
> -- string generation
> -- ----
> print 'procedures creation'
> if OBJECT_ID('generate_string') is not null DROP PROC generate_string
> go
> CREATE PROCEDURE generate_string
> @string varchar(20) OUTPUT
> AS
> BEGIN
> DECLARE @limit int
> DECLARE @curr_iteration int
> SELECT @limit = round((rand() * 20) + 3, 0)
> SELECT @curr_iteration = 0
> SELECT @string = ''
> WHILE @curr_iteration < @limit
> BEGIN
> SELECT @string = @string + char(round((rand() * 25) + 1, 0) + 64)
> SELECT @curr_iteration = @curr_iteration + 1
> END
> IF SUBSTRING(@string,1,1) = ' '
> BEGIN
> SELECT @string = SUBSTRING(@string,2,16)
> END
> END
> go
> -- ----
> -- filling the table (10000 rows)
> -- ----
> if OBJECT_ID('FillTable') is not null DROP PROC FillTable
> go
> CREATE PROC FillTable (@Chaine bit = 0)
> AS
> BEGIN
> SET NOCOUNT ON
> DECLARE @string varchar(20)
> DECLARE @Compteur int
> SET @compteur = 0
> WHILE @compteur < 10000
> BEGIN
> SET @compteur = @compteur + 1
> IF @chaine = 1
> BEGIN
> EXEC generate_string @string output
> SET @string = @string + '_' + CAST(@compteur as varchar(5))
> INSERT dbo.DemoCluster (col2) values (@string)
> END
> ELSE
> BEGIN
> INSERT dbo.DemoCluster DEFAULT VALUES
> END
> END
> END
> go
> -- ========================================================
> -- Test1 : clustering on an identity column
> -- ========================================================
> print 'table creation : test with clustered identity column'
> IF OBJECTPROPERTY(object_id('dbo.DemoCluster'), 'IsUserTable') = 1
> DROP TABLE dbo.DemoCluster
> go
> CREATE TABLE dbo.DemoCluster
> (
> col1 int identity CONSTRAINT PK_DemoClusterCol1 PRIMARY KEY ,
> col2 varchar(20) CONSTRAINT DemoClusterCol2Default
> DEFAULT current_timestamp,
> col3 datetime CONSTRAINT DemoClusterCol3Default
> DEFAULT getdate(),
> col4 char(30) CONSTRAINT DemoClusterCol4Default
> DEFAULT suser_name(),
> col5 char(30) CONSTRAINT DemoClusterCol5Default
> DEFAULT user_name(),
> col6 char(100) CONSTRAINT DemoClusterCol6Default
> DEFAULT 'valeur longue longue longue longue longue longue longue
> longue longue longue longue longue longue ',
> col7 varchar(200) CONSTRAINT DemoClusterCol7Default
> DEFAULT 'valeur compacte'
> )
> go
> EXEC sp_helpindex 'dbo.DemoCluster'
> dbcc dropcleanbuffers
> go
> -- first filling
> print 'first filling'
> EXEC FillTable
> EXEC sp_spaceused 'dbo.DemoCluster', true
> go
> -- index recreated with FillFactor = 100 and PAD_INDEX
> print 'fillfactor = 100'
> CREATE UNIQUE CLUSTERED Index PK_DemoCLusterCol1 ON DemoCLuster(Col1) WITH
> FILLFACTOR = 100,PAD_INDEX, DROP_EXISTING
> select indid, OrigFillFactor, rowcnt, used from sysindexes WHERE name =
> 'PK_DemoClustercol1'
> go
> EXEC FillTable
> EXEC sp_spaceused 'dbo.DemoCluster', true
> go
> -- index recreated with FillFactor = 80 and PAD_INDEX
> print 'fillfactor = 80'
> CREATE UNIQUE CLUSTERED Index PK_DemoCLusterCol1 ON DemoCLuster(Col1) WITH
> FILLFACTOR = 80,PAD_INDEX, DROP_EXISTING
> select indid, OrigFillFactor, rowcnt, used from sysindexes WHERE name =
> 'PK_DemoClustercol1'
> go
> EXEC FillTable
> EXEC sp_spaceused 'dbo.DemoCluster', true
> go
> -- index recreated with FillFactor = 60 and PAD_INDEX
> print 'fillfactor = 60'
> CREATE UNIQUE CLUSTERED Index PK_DemoCLusterCol1 ON DemoCLuster(Col1) WITH
> FILLFACTOR = 60,PAD_INDEX, DROP_EXISTING
> select indid, OrigFillFactor, rowcnt, used from sysindexes WHERE name =
> 'PK_DemoClustercol1'
> go
> EXEC FillTable
> EXEC sp_spaceused 'dbo.DemoCluster', true
> go
> -- index recreated with FillFactor = 40 and PAD_INDEX
> print 'fillfactor = 40'
> CREATE UNIQUE CLUSTERED Index PK_DemoCLusterCol1 ON DemoCLuster(Col1) WITH
> FILLFACTOR = 40,PAD_INDEX, DROP_EXISTING
> select indid, OrigFillFactor, rowcnt, used from sysindexes WHERE name =
> 'PK_DemoClustercol1'
> go
> EXEC FillTable
> EXEC sp_spaceused 'dbo.DemoCluster', true
> go
> -- index recreated with FillFactor = 20 and PAD_INDEX
> print 'fillfactor = 20'
> CREATE UNIQUE CLUSTERED Index PK_DemoCLusterCol1 ON DemoCLuster(Col1) WITH
> FILLFACTOR = 20,PAD_INDEX, DROP_EXISTING
> select indid, OrigFillFactor, rowcnt, used from sysindexes WHERE name =
> 'PK_DemoClustercol1'
> go
> EXEC FillTable
> EXEC sp_spaceused 'dbo.DemoCluster', true
> go
> -- ========================================================
> -- Test2 : clustering on a varchar column
> -- ========================================================
> print 'table creation : test with clustered varchar column'
> IF OBJECTPROPERTY(object_id('dbo.DemoCluster'), 'IsUserTable') = 1
> DROP TABLE dbo.DemoCluster
> go
> CREATE TABLE dbo.DemoCluster
> (
> col1 int identity CONSTRAINT PK_DemoClusterCol1 PRIMARY KEY
> NONCLUSTERED,
> col2 varchar(20) CONSTRAINT UK_DemoClusterCol2 UNIQUE CLUSTERED,
> col3 datetime CONSTRAINT DemoClusterCol3Default
> DEFAULT getdate(),
> col4 char(30) CONSTRAINT DemoClusterCol4Default
> DEFAULT suser_name(),
> col5 char(30) CONSTRAINT DemoClusterCol5Default
> DEFAULT user_name(),
> col6 char(100) CONSTRAINT DemoClusterCol6Default
> DEFAULT 'valeur longue longue longue longue longue longue longue
> longue longue longue longue longue longue ',
> col7 varchar(200) CONSTRAINT DemoClusterCol7Default
> DEFAULT 'valeur compacte'
> )
> go
> EXEC sp_helpindex 'dbo.DemoCluster'
> dbcc dropcleanbuffers
> go
> -- first filling
> print 'first filling'
> EXEC FillTable 1
> EXEC sp_spaceused 'dbo.DemoCluster', true
> go
> -- index recreated with FillFactor = 100 and PAD_INDEX
> print 'fillfactor = 100'
> CREATE UNIQUE CLUSTERED Index UK_DemoClusterCol2 ON DemoCLuster(Col2) WITH
> FILLFACTOR = 100,PAD_INDEX, DROP_EXISTING
> select indid, OrigFillFactor, rowcnt, used from sysindexes WHERE name =
> 'UK_DemoClusterCol2'
> go
> EXEC FillTable 1
> EXEC sp_spaceused 'dbo.DemoCluster', true
> go
> -- index recreated with FillFactor = 80 and PAD_INDEX
> print 'fillfactor = 80'
> CREATE UNIQUE CLUSTERED Index UK_DemoClusterCol2 ON DemoCLuster(Col2) WITH
> FILLFACTOR = 80,PAD_INDEX, DROP_EXISTING
> select indid, OrigFillFactor, rowcnt, used from sysindexes WHERE name =
> 'UK_DemoClusterCol2'
> go
> EXEC FillTable 1
> EXEC sp_spaceused 'dbo.DemoCluster', true
> go
> -- index recreated with FillFactor = 60 and PAD_INDEX
> print 'fillfactor = 60'
> CREATE UNIQUE CLUSTERED Index UK_DemoClusterCol2 ON DemoCLuster(Col2) WITH
> FILLFACTOR = 60,PAD_INDEX, DROP_EXISTING
> select indid, OrigFillFactor, rowcnt, used from sysindexes WHERE name =
> 'UK_DemoClusterCol2'
> go
> EXEC FillTable 1
> EXEC sp_spaceused 'dbo.DemoCluster', true
> go
> -- index recreated with FillFactor = 40 and PAD_INDEX
> print 'fillfactor = 40'
> CREATE UNIQUE CLUSTERED Index UK_DemoClusterCol2 ON DemoCLuster(Col2) WITH
> FILLFACTOR = 40,PAD_INDEX, DROP_EXISTING
> select indid, OrigFillFactor, rowcnt, used from sysindexes WHERE name =
> 'UK_DemoClusterCol2'
> go
> EXEC FillTable 1
> EXEC sp_spaceused 'dbo.DemoCluster', true
> go
> -- index recreated with FillFactor = 20 and PAD_INDEX
> print 'fillfactor = 20'
> CREATE UNIQUE CLUSTERED Index UK_DemoClusterCol2 ON DemoCLuster(Col2) WITH
> FILLFACTOR = 20,PAD_INDEX, DROP_EXISTING
> select indid, OrigFillFactor, rowcnt, used from sysindexes WHERE name =
> 'UK_DemoClusterCol2'
> go
> EXEC FillTable 1
> EXEC sp_spaceused 'dbo.DemoCluster', true
> go
> -- ===========================================================================
> -- drop the database
> -- ===========================================================================
> USE master
> IF (SELECT name FROM sysdatabases WHERE name='TEST_INDEX' ) IS NOT NULL
> DROP
> DATABASE TEST_INDEX
> go
>|||Thanks a lot Dan,
In this situation, is the page split a "normal" page split (with half the
data of the full page moving to the new page, which would be useless in
this case) or a particular page split that only allocates the new page (and
references it at the upper level), without moving data off the full page?
R. Fauchatre
"Dan Guzman" wrote:
[vbcol=seagreen]
> With an increasing column value based on IDENTITY or GETDATE(), a clustered
> index page split occurs during inserts only when the last page in the table
> becomes full. The number of page splits during each insert test is
> approximately equal to the number of new pages.
> The number of new pages is constant because FILLFACTOR only applies when the
> index is created. SQL Server does not maintain the specified percentage
> afterward. The FILLFACTOR will waste space in this situation unless you
> later increase row length with an UPDATE.
>
> This is the difference between the number of pages before/after each test.
> --
> Hope this helps.
> Dan Guzman
> SQL Server MVP
> "r.fauchatre" <rfauchatre@.discussions.microsoft.com> wrote in message
> news:D690ADCC-9D6E-4570-BB8B-E6948F2E8FCA@.microsoft.com...|||Official word is that they will consider it for the next release. Feel free
to cast your vote:
https://connect.microsoft.com/SQLSe...=126148
Tom
----
Thomas A. Moreau, BSc, PhD, MCSE, MCDBA
SQL Server MVP
Toronto, ON Canada
.
"r.fauchatre" <rfauchatre@.discussions.microsoft.com> wrote in message
news:BE11EA63-33B7-441D-9176-73DCEF73EF89@.microsoft.com...
Thanks a lot Dan,
In this situation, is the page split a "normal" page split (with half the
data of the full page moving to the new page, which would be useless in
this case) or a particular page split that only allocates the new page (and
references it at the upper level), without moving data off the full page?
R. Fauchatre
"Dan Guzman" wrote:
[vbcol=seagreen]
> With an increasing column value based on IDENTITY or GETDATE(), a
> clustered
> index page split occurs during inserts only when the last page in the
> table
> becomes full. The number of page splits during each insert test is
> approximately equal to the number of new pages.
> The number of new pages is constant because FILLFACTOR only applies when
> the
> index is created. SQL Server does not maintain the specified percentage
> afterward. The FILLFACTOR will waste space in this situation unless you
> later increase row length with an UPDATE.
>
> This is the difference between the number of pages before/after each test.
> --
> Hope this helps.
> Dan Guzman
> SQL Server MVP
> "r.fauchatre" <rfauchatre@.discussions.microsoft.com> wrote in message
> news:D690ADCC-9D6E-4570-BB8B-E6948F2E8FCA@.microsoft.com...