
Friday, March 30, 2012

Parameter problem

Hi, I have a report where users select a parameter value from a dropdown box.

I want to be able to limit the number of parameter options depending on who is logged in.

User A can only see values 'ABC', 'DSC', 'BHT',

whereas user B can see values 'MKP', 'NHJ', 'BLP'.

Is this possible? If so, how?

TIA

Reporting Services provides the "User!UserID" global value in all its reports. This is populated with the Windows user name, if I'm not mistaken. You could pass this value as a parameter to the dataset that you're using to populate the dropdown box.|||If the data source is driven by a view, a stored procedure, or a query, you can implement row-level security using a mapping table which locks down the appropriate values that should not be seen by some people. If you have a static list, this would be too hard to implement / impossible (nothing is impossible, impossible just takes longer :-) )
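As a sketch of that mapping-table approach (the table and column names here are invented for illustration), the dataset query feeding the dropdown could look like this, with the report parameter @UserID mapped to User!UserID:

SELECT m.ParameterValue
FROM dbo.UserParameterMap AS m
WHERE m.UserName = @UserID
ORDER BY m.ParameterValue

User A's rows in UserParameterMap would then carry 'ABC', 'DSC', and 'BHT', and user B's rows 'MKP', 'NHJ', and 'BLP'.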

Jens K. Suessmeyer.

http://www.sqlserver2005.de
|||

I was hoping to be able to use custom code to populate the parameter dropdown list / set the parameter values.

Is this possible? Can I reference datasets in my code?

|||

I created a view, then set a filter on the dataset to return values based on User!UserID = the UserName field from the dataset,

then set the parameter to 'From Query', chose the dataset and the appropriate field from the dataset, and everything is A-OK.

|||Welcome to row-level security :-)|||

I am trying to do something similar. For my report I created a stored procedure, and that stored procedure is passed a parameter, CaseNumber. (I am using the Report Viewer control in remote mode.) In Report Designer, I created two datasets: the first dataset populates the dropdown list box and is displayed in the toolbar; the second dataset takes this parameter (CaseNumber) and brings back the report. Everything works fine in Report Manager; however, when I run it from the frmReport.aspx page the report does not show.

If I run the report without any parameters from the frmReport.aspx page (still using the stored procedure without parameters), the report shows. Can anybody help me with this?

Zachary

|||

Fixed the problem - thanks

Zachary

Wednesday, March 28, 2012

Parameter Number is Invalid

I am using Crystal Reports 8.0, with SQL Server.
I am trying to create a Report on a Stored Procedure.
My stored procedure has 3 parameters (2 datetime parameters and 1 integer parameter).
When I select the stored procedure in the initial stage, it asks for the parameters. I tried entering 2050-01-01 00:00:00.000, etc.

But I am getting the error "Parameter Number 1 is invalid" on that same datetime field.

When I run the stored procedure in Query Analyzer, it runs fine:
exec spTravelRpt_Person_chargeno '01-01-1990 00:00:00.000','01-01-2050 00:00:00.000',-1.

Can anyone please help me... because I am stuck... I cannot move forward...

Your help will be appreciated.|||Give only 2050-01-01 to the datetime field and, when calling the SP from your front end, pass the parameter in the format you want|||Hi Madhi,

I tried giving that too ... just
2050-01-01 ...
but it didn't help me...
It's giving the same error:
Parameter number 1 is invalid.
What do I do?|||Change the parameter's datatype to varchar(20) instead of datetime and check.
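Sketching that last suggestion out (the procedure name comes from the thread; the conversion style and the placeholder body are assumptions): accept the dates as varchar and convert them explicitly inside the procedure, so the client never has to bind a datetime parameter:

ALTER PROCEDURE spTravelRpt_Person_chargeno
    @StartDate varchar(23),
    @EndDate   varchar(23),
    @ChargeNo  int
AS
BEGIN
    -- Convert explicitly; style 121 = yyyy-mm-dd hh:mi:ss.mmm
    DECLARE @dtStart datetime, @dtEnd datetime
    SET @dtStart = CONVERT(datetime, @StartDate, 121)
    SET @dtEnd   = CONVERT(datetime, @EndDate, 121)

    -- The original report query would go here, filtered on @dtStart/@dtEnd;
    -- a stand-in SELECT keeps the sketch runnable:
    SELECT @dtStart AS StartDate, @dtEnd AS EndDate, @ChargeNo AS ChargeNo
END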

Monday, March 26, 2012

Parameter Information cannot be derived from SQL Statements with sub-select queries

I have the following query that finds the serial number of the most
recent part manufactured at point 105 (near the end of the process),
then uses that serial number to find out when it started production at
point 39 (the front of the process). Then, it gives a listing,
summarized by part type (CCode), of everything in process from the time
that part entered the line until it passes point 105. This gives an
accurate snapshot of everything that is on the production line.
However, I need to modify this query so that the user can enter a start
date and time, and the query will find a part near that time (either
the next part, or the part just before), and produce the same results.
Here is my original query (it works with no issue):
SELECT CCode, COUNT(CCode) AS CCount
FROM [Broadcast] A
WHERE (ReportingPoint = '39') AND
      (ProcessDate >= (SELECT W.ProcessDate
                       FROM [Broadcast] AS W
                       JOIN (SELECT TOP 1 ProcessDate, SerialNumber
                             FROM [Broadcast]
                             WHERE ReportingPoint = '105'
                             ORDER BY ProcessDate DESC) AS X
                         ON W.SerialNumber = X.SerialNumber
                        AND W.ProcessDate < X.ProcessDate
                       WHERE W.ReportingPoint = '39'))
GROUP BY CCode
ORDER BY CCode
I changed the subquery to get user entry as follows:
(SELECT SerialNumber
 FROM [Broadcast]
 WHERE ReportingPoint = '105' AND
       ProcessDate >= @GetDateFromUser
 ORDER BY ProcessDate DESC) AS X
   ON W.SerialNumber = X.SerialNumber
  AND W.ProcessDate < X.ProcessDate
But I get the following error:
Parameter Information cannot be derived from SQL Statements with
sub-select queries. Set parameter information before preparing
command.
How can I get this information from the user before the sub-select query?|||Bump
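For what it's worth, one standard workaround for this error (a sketch, not from the thread) is to stop asking the provider to derive the parameter type at all: declare it yourself and run the statement through sp_executesql:

DECLARE @GetDateFromUser datetime
SET @GetDateFromUser = '2006-01-01'

EXEC sp_executesql
    N'SELECT CCode, COUNT(CCode) AS CCount
      FROM [Broadcast] A
      WHERE (ReportingPoint = ''39'') AND
            (ProcessDate >= (SELECT W.ProcessDate
                             FROM [Broadcast] AS W
                             JOIN (SELECT TOP 1 ProcessDate, SerialNumber
                                   FROM [Broadcast]
                                   WHERE ReportingPoint = ''105''
                                     AND ProcessDate >= @GetDateFromUser
                                   ORDER BY ProcessDate DESC) AS X
                               ON W.SerialNumber = X.SerialNumber
                              AND W.ProcessDate < X.ProcessDate
                             WHERE W.ReportingPoint = ''39''))
      GROUP BY CCode
      ORDER BY CCode',
    N'@GetDateFromUser datetime',
    @GetDateFromUser = @GetDateFromUser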

Parameter error

Why do I keep getting this error:

"A number is required here"

I am trying to prompt the user of the report for a Billing ID, which will refresh the contents of the report.

I am using exactly the same technique and method which is illustrated in the manual:

{VW_BillingItemsConsolidated.BillingAddressID}={?billingID}

Where {VW_BillingItemsConsolidated.BillingAddressID} is the table field and {?billingID} is the formula I have created.

Regards|||Now that I have changed the "Parameter Field" "Value Type" to take numerical data, the error is no longer showing. Still, though, when I enter a number the information on the report doesn't change to reflect that customer. Where am I going wrong?|||I just keep getting "False" in the field which is inserted onto the report.|||What is the datatype of VW_BillingItemsConsolidated.BillingAddressID?
It should work correctly.

Parameter Either Or

Hi

Background

I have a report that can be either date based (start and end dates, shows multiple jobs) or job number based (shows one job). The parameters are all set up and it's working; it uses one SP. I have Allow Null selected for each of the three possible entry boxes. The SP says IF JobNumber IS NULL then do this ELSE do that.
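For reference, a minimal sketch of that SP pattern (the procedure and column names are invented; only the IF/ELSE shape comes from the post):

CREATE PROCEDURE dbo.usp_JobReport
    @StartDate datetime = NULL,
    @EndDate   datetime = NULL,
    @JobNumber int      = NULL
AS
BEGIN
    IF @JobNumber IS NULL
        -- Date based: every job in the range
        SELECT JobNumber, JobDate, JobName
        FROM dbo.Jobs
        WHERE JobDate BETWEEN @StartDate AND @EndDate
    ELSE
        -- Job number based: just the one job
        SELECT JobNumber, JobDate, JobName
        FROM dbo.Jobs
        WHERE JobNumber = @JobNumber
END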

Problem

It looks messy. I have a label on the start date parameter, "EITHER Please select a start date:", and a label on the job number parameter, "OR Please select a job number:".


The user can then tick/untick NULL and enter either a date or a job number. Is there any way of making this work a little better? For example, an initial parameter of report type, Job Number or Date, and then just show the appropriate parameter. To do this I'd need to (I think) dynamically hide a parameter; can this be done? Any other ideas?

Cheers

Just have one parameter (string)

You can always use a code block to check formatting and provide message boxes.

Then just convert the string to the required datatype|||

You can have three parameters. Use the first one to decide whether you want to run the report with parameter 1 or parameter 2.

So, out of the three, the user will pick two parameters, and in the back end you will use the first one to decide which one to use and which one to discard.....

|||

Thanks for the replies. Adolf - I have 3 parameters, so 1 free-entry text box isn't really going to work for me.

TechQuest - I have no problem making it work in the back end; I want the front end to display either the start & end date parameters or the job number parameter. Say I have the initial parameter with options of Date or Job Number, and they select Date - how do I make just the date parameters appear?

I may not understand exactly what you mean, but you say the user will pick 2 out of the 3 parameters and the back end will sort it out - that's what I have now.

|||

Unfortunately there is no way for you to achieve a much better user experience. RS does not allow you to show/hide parameters dynamically at run time.

What I suggest is maybe setting the value of the unused parameter to something meaningful, to show that it should not be used. A lot depends on your current setup of the parameters, i.e. is Job ID a dropdown or a text box? Are the from and to dates date-picker controls or text boxes?

So, based on the previous suggestions, I would say have the following:

Parameter 1 = Filter Type {Job ID, Date Range}

Parameter 2 = Job ID. I suggest you make this a dropdown of valid job IDs. When parameter 1 = Date Range, populate this dropdown with just 1 value that says "N/A - Please select a date range" (see the dataset sketch just after this list).

Parameters 3 & 4 = You have a couple of options here. If you make them date pickers then you can't set their values to informative text. You could if you make them text boxes, but then you wouldn't get the nice functionality of a date picker.

Either way it's not going to be as nice as coding your own UI.
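A sketch of the dataset query behind the Parameter 2 dropdown above (a Jobs table is assumed; only the "N/A" text comes from the suggestion), with @FilterType mapped to parameter 1:

SELECT CAST(JobID AS varchar(50)) AS JobID
FROM dbo.Jobs
WHERE @FilterType = 'Job ID'
UNION ALL
SELECT 'N/A - Please select a date range'
WHERE @FilterType = 'Date Range'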

|||

Thanks for that, Adam. At least I can stop searching for a 'nice' answer now, as I know it's not possible. I'll go down the route you suggest.

Cheers

Tuesday, March 20, 2012

Parallel Development of SSIS packages

I have seen a number of posts regarding parallel development of SSIS packages and need some further information.

So far we have been developing SSIS packages along a single development stream and therefore have managed to avoid parallel development of our packages.

However, due to business pressures we will soon have multiple project streams running in parallel, and therefore multiple code branches; as part of that we will definitely need to redevelop the same SSIS packages in parallel. Judging from your post above and some testing we have done, this is going to be a nightmare, as we cannot merge the code. We can put processes in place to try and mitigate this, but there are bound to be issues along the way.

Do you know whether this problem is going to be fixed? We are now using Team Foundation Server, but presumably the merge algorithm used is the same as/similar to that of VSS and therefore very flaky?

Not only are we having problems with the merging of the XML files, but we also use script tasks within the packages which are precompiled, as the DTSX files contain the binary objects associated with the script source code. If two developers change the same script task in isolated branches, the binary is not recompiled because the merge software does not recognise this object.

Do you know whether these issues have been identified and are going to be fixed to be in line with the rest of Microsoft's Configuration Management principles of parallel development?

Many thanks.

[Microsoft follow-up] A question to be addressed here.|||

A. As long as no more than 1 developer works on the same dtsx package at a time there is no problem (lock it in VSS).

B. Try to make the packages light; Microsoft does not recommend heavy packages.

C. If you would like to merge code from packages, just copy all the elements from one package to the other (use a container).

|||

In response to your points, I'm still not sure this is a satisfactory situation for what is supposed to be an enterprise-standard tool.

A - The problem we have is that we will have more than 1 developer working on the same package at the same time, and those versions of packages will have to be deployed at different intervals. This is because of business priorities and is the normal development scenario on other platforms such as .Net (C#), where you can have multiple branches with multiple developers working on the code. In these scenarios the Config Mgmt tool will, wherever possible, merge the source code automatically and raise conflicts wherever the code changes are in the same place, to enable the Developer/Config Mgmt analyst to determine what course of action to take. The problem we have here is that the capability of VSS or TFS to merge the DTSX packages is not accurate, and our testing has proven that this sometimes works and more often doesn't.

Also, say we deploy a package and it is running in production. As soon as it has gone live, work on the next stream of development starts and the development team crack on making changes to the package. However, after a month in live a bug is identified in the package and an emergency fix is required to the DTSX package; the support team make this change to the live version to ensure that we fulfil the business needs, as they cannot wait for the next development implementation for the fix. How do we merge that support change into the version of the package the development team are working on? Is this a manual process, to physically open the development package as well and make the same changes in two places? Obviously this is very risky and introduces the potential for developers knocking out support changes.

B - Wherever possible we have tried to keep the packages light, but specific ETL processes are always going to be quite intensive and include a lot of code, especially if working with a set of data within a pipeline.

C - When you mention merging the elements from one container to the other, how does this work if both versions of the packages also create new variables, add new connection managers, logging, checkpointing, or error handlers, or use the binary compiled objects from script tasks, or even change the same script task?

I think the answer at the moment is that this cannot be done consistently and accurately, but I wanted to know whether Microsoft recognised this limitation of the product and how/if it will be addressed in future, as the uptake and usage of SSIS increases and this problem becomes more of an issue, since development and support of ETL systems will be done in parallel.

Thanks.

|||

Hi Tom,

The SSIS team is aware of the issue, and your assessment is accurate. Right now SSIS packages essentially have to be treated as blobs, allowing only one person to work on them at a time because merging the XML is difficult. We have something in the works that should ease some of the problem, but we’re not sure if it will make it into the initial Katmai release at this point.

One alternative I’ve seen for projects/packages which change frequently is to switch to building the packages dynamically with code, instead of graphically in the UI. This makes the merging of changes much easier. However, the downside is that you lose the ease of use of the designer, and there can be a bit of a learning curve for the object model.

Thanks,

~Matt

|||

Nice one Matt. Good to know that you guys are thinking about this. I'm interested to know what your "something in the works" is.

-Jamie

Friday, March 9, 2012

Pagination of data

Hi
I am developing a VB2005/SQL Server 2005 WinForms app which involves
displaying records in a list, one page at a time. The total number of
records is large. I am wondering if there is a way, either in VB/ADO or SQL
Server, to automatically page a certain number of records at a time and,
when the user scrolls down (or up), page in the next set of records? I guess I can
possibly program it manually, but it may be complicated, especially when the
records in the next/previous set are different due to the different sort
orders. Ideally I am looking to give a select statement including all
records as the data source and then expect the system to handle any pagination,
bringing only one page of records from the server at any one time.
Thanks
Regards|||Take a look at this article.
http://www.aspfaq.com/show.asp?id=2120
David Portas
SQL Server MVP
--|||I am doing it for a winform app and ASP may not be relevant, but I will have
a look.
Thanks
Regards
"David Portas" <REMOVE_BEFORE_REPLYING_dportas@.acm.org> wrote in message
news:IZidne6mqKZJLJreRVn-ow@.giganews.com...
> Take a look at this article.
> http://www.aspfaq.com/show.asp?id=2120
> --
> David Portas
> SQL Server MVP
> --
>|||Of the various solutions given, most of them are not ASP-specific. Mostly
they use TSQL.
David Portas
SQL Server MVP
--|||Hi John,
I am not a DBA or even a half-experienced DB developer, but I guess you could
consider achieving your goal by using views. You will still, as you state,
return the whole recordset and possibly 'store it' as a dataset. You can
then create the required views as needed. If there are any DBAs reading,
PLEASE don't lecture on the bad practice of returning more records than
required...IT'S NOT MY IDEA! :-) :-)
I know you asked if there was a way to do this 'automatically', but I don't
know of one, other than the built-in methods within the ASP datagrid. Sorry
if this is not helpful.
Good luck.
Phil
"John" <John@.nospam.infovis.co.uk> wrote in message
news:eWQ3DddpFHA.2976@.TK2MSFTNGP12.phx.gbl...
> Hi
> I am developing a vb2005/sql server 2005 winform app which involves
> displaying records in a list, one page at a time. The total number of
> records is large. I am wondering if there is a way either in vb/ado or sql
> server that automatically pages a certain number of records at a time and
> when user scrolls down (or up) pages the next set of records? I guess I
> can possibly program it manually but it may be complicated specially when
> the records in the next/previous set are different due to the different
> sort orders. Ideally I am looking for giving a select statement to include
> all records as data source and then expect system to handle any pagination
> and bringing only one page of record from server at any one time.
> Thanks
> Regards
>|||Check out:
http://www.aspfaq.com/show.asp?id=2120
Tom
----
Thomas A. Moreau, BSc, PhD, MCSE, MCDBA
SQL Server MVP
Columnist, SQL Server Professional
Toronto, ON Canada
www.pinpub.com
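The linked article aside, on SQL Server 2005 specifically the usual server-side approach is ROW_NUMBER(); here is a minimal sketch with invented table and column names:

DECLARE @PageSize int, @PageNumber int
SET @PageSize = 50
SET @PageNumber = 3

SELECT CustomerID, CustomerName
FROM (SELECT CustomerID, CustomerName,
             ROW_NUMBER() OVER (ORDER BY CustomerName) AS RowNum
      FROM dbo.Customers) AS Paged
WHERE RowNum BETWEEN @PageSize * (@PageNumber - 1) + 1
                 AND @PageSize * @PageNumber
ORDER BY RowNum

The ORDER BY inside OVER() is what defines the page boundaries, so it must match whatever sort order the user has picked.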

Wednesday, March 7, 2012

PageNumber in XML output??

Hello,
Is there a way to get information (page number) in the Page Footer to appear
as an Element in an XML export?
Thanks
David Bows|||I tried to set these properties, but they seem not to render when they are in
the Page Footer. They work fine in the body, but I cannot use
Globals!PageNumber in the body of a report, only in the Page Footer. What I
want is to be able to save the report to PDF, then export to XML, and then
write a quick app that will go through the XML and build a table of
contents. Seems straightforward, but I cannot get any XML from the Page
Footer to export.
Any help would be appreciated.
David
"TrussworksLeo" <Leo@.noemail.noemail> wrote in message
news:E1A812C9-C9A0-47A1-BC25-F01AC7CE7C08@.microsoft.com...
> In the advanced properties for the textbox item, under Data Output, you can
> render it as an Element... You can also name the element.
> Is this what you're looking for?
> Leo
> "David Bows" wrote:
> > Hello,
> >
> > Is there a way to get information (page number) in the Page Footer to
appear
> > as an Element in an XML export?
> >
> > Thanks
> > David Bows
> >
> >
> >

PageNumber in the body

There is a Globals.PageNumber variable that can be used only in the page header
and footer.
How can I get the page number in the page body?
Ola A.|||Try creating an object in the header or footer (this can be invisible)
and assign the Globals.PageNumber value to it.
Then create an object on the page body, and reference the value of the
object in the header to provide its value.
This is similar to other workarounds, and I think it will work - should
be easy enough to try, anyway...
|||I tried but I get this error:
The hidden expression for the rectangle 'rectangle4' refers to the report
item 'rectangle2'. Report item expressions can only refer to other report
items within the same grouping scope or a containing grouping scope.
Rectangle2 is the one in the header and rectangle4 is in the
body.
Ola A.
"Parker" wrote:
> Try creating an object in the header or footer (this can be invisible)
> and assign the Globals.PageNumber value to it.
> Then create an object on the page body, and reference the value of the
> object in the header to provide its value.
> This is similar to other workarounds, and I think it will work - should
> be easy enough to try, anyway...
>
> Ola A. wrote:
> > There is Globals.PageNumber variable that can be used only in page
> header
> > and footer.
> > How can I get page number in page body?
> >
> > Ola A.
>

pagenumber in body

Headers are more sophisticated these days. They require client information,
report execution date, page number, and report-specific details like account
numbers.
It seems that I cannot put a page number in the body, and I cannot have
dataset info in the header. If I wanted to do this... could I? I would
like a 'Power Header' with a mix of global variables like page number mixed
with dataset fields... Can this be done?|||For this special case you could refer to the data source on some report items
rather than the fields collection.
HTH; Jens Suessmeyer.
--
http://www.sqlserver2005.de
--
"d pak" <dipakb@.exchnage.ml.com> schrieb im Newsbeitrag
news:B80D6CE0-CCA3-481C-B1CA-580642863773@.microsoft.com...
> Headers are more sophistacated these days. They require client
> information,
> report execution date, page number, report spacific details like account
> numbers.
> It seems that I cannot put a pagenumber in the body, and I cannot have
> dataset info in the header. If I wanted to do this ... could I ? I would
> like a 'Power Header' with a mix of global variables like pagenumber mixed
> with dataset fields... Can this be done?

PAGEIOLATCH_SH wait time is 80%

Hi,
One of my SQL jobs contains 80% reads & 20% inserts/updates.
This job also contains a number of cursor operations. It used to run for 2 hrs
or less and now it runs for 8 hrs.
When I check the process wait times, PAGEIOLATCH_SH is 80% and
PAGEIOLATCH_EX is 10%.
I am not sure what is causing this high PAGEIOLATCH_SH wait time.
Any thoughts on what could be the issue here?
Regards,
Sarav.
Maybe http://sqldev.net/articles/WaitTypes.htm can help you.
GertD@.SQLDev.Net
Please reply only to the newsgroups.
This posting is provided "AS IS" with no warranties, and confers no rights.
You assume all risk for your use.
Copyright SQLDev.Net 1991-2005 All rights reserved.
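For reference, the raw wait numbers behind that article can be sampled directly; this is SQL Server 2000 syntax (the CLEAR option is undocumented but widely used):

DBCC SQLPERF(WAITSTATS)          -- accumulated wait time by wait type
DBCC SQLPERF(WAITSTATS, CLEAR)   -- reset the counters to take a fresh sample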
"sarav" <orumurai@.yahoo.cmom> wrote in message
news:ugiqcPlAFHA.2584@.TK2MSFTNGP09.phx.gbl...
> Hi,
>
> One of my sql job contain 80% read & 20% insert /updates.
> This job also contains number of cursor operations. It use to run for 2Hrs
> or less and now it runs for 8hrs.
>
> When I check the process wait time. The PAGEIOLATCH_SH is 80% and
> PAGEIOLATCH_EX is 10%.
>
> I am not sure what is causing this high PAGEIOLATCH_SH wait time?
>
> Any thoughts on what could be the issue here?
>
> Regards,
> Sarav.
>
|||I'm going to venture a guess here, but I'd bet that your use of cursors is
at the root of your problem. Resource utilization by cursors has a way of
growing in non-linear ways. Some of the gurus here might be able to offer
set-based solutions if you can succinctly describe the process.
Bob Castleman
SuccessWare Software
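To make the cursor point concrete, here is the shape of such a rewrite (the table, column, and status values are invented); the loop and the single UPDATE do the same work:

-- Row-by-row (cursor) version
DECLARE @OrderID int
DECLARE cur CURSOR FOR
    SELECT OrderID FROM dbo.Orders WHERE Status = 'Pending'
OPEN cur
FETCH NEXT FROM cur INTO @OrderID
WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE dbo.Orders SET Status = 'Closed' WHERE OrderID = @OrderID
    FETCH NEXT FROM cur INTO @OrderID
END
CLOSE cur
DEALLOCATE cur

-- Equivalent set-based version: one statement, one pass over the table
UPDATE dbo.Orders SET Status = 'Closed' WHERE Status = 'Pending'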
"sarav" <orumurai@.yahoo.cmom> wrote in message
news:ugiqcPlAFHA.2584@.TK2MSFTNGP09.phx.gbl...
> Hi,
>
> One of my sql job contain 80% read & 20% insert /updates.
> This job also contains number of cursor operations. It use to run for 2Hrs
> or less and now it runs for 8hrs.
>
> When I check the process wait time. The PAGEIOLATCH_SH is 80% and
> PAGEIOLATCH_EX is 10%.
>
> I am not sure what is causing this high PAGEIOLATCH_SH wait time?
>
> Any thoughts on what could be the issue here?
>
> Regards,
> Sarav.
>
|||Hi Sarav
The process wait time % split seems normal to me given your estimate of 80%
read / 20% insert / updates. The PAGEIOLATCH_SH represents the shared latches
taken under the 80% read operations and the PAGEIOLATCH_EX represents the
exclusive latches required to perform the inserts / updates.
Perhaps the post re cursors has some merit. I'd also be looking at general
resource availability (memory / cache stats / lock blocks) and whether the
sql job is being blocked by other processes.
HTH
Regards,
Greg Linwood
SQL Server MVP
"sarav" <orumurai@.yahoo.cmom> wrote in message
news:ugiqcPlAFHA.2584@.TK2MSFTNGP09.phx.gbl...
> Hi,
>
> One of my sql job contain 80% read & 20% insert /updates.
> This job also contains number of cursor operations. It use to run for 2Hrs
> or less and now it runs for 8hrs.
>
> When I check the process wait time. The PAGEIOLATCH_SH is 80% and
> PAGEIOLATCH_EX is 10%.
>
> I am not sure what is causing this high PAGEIOLATCH_SH wait time?
>
> Any thoughts on what could be the issue here?
>
> Regards,
> Sarav.
>
|||That means you have a heck of a lot of disk access going on. I would ensure
the operations are well tuned first.
Andrew J. Kelly SQL MVP
"sarav" <orumurai@.yahoo.cmom> wrote in message
news:ugiqcPlAFHA.2584@.TK2MSFTNGP09.phx.gbl...
> Hi,
>
> One of my sql job contain 80% read & 20% insert /updates.
> This job also contains number of cursor operations. It use to run for 2Hrs
> or less and now it runs for 8hrs.
>
> When I check the process wait time. The PAGEIOLATCH_SH is 80% and
> PAGEIOLATCH_EX is 10%.
>
> I am not sure what is causing this high PAGEIOLATCH_SH wait time?
>
> Any thoughts on what could be the issue here?
>
> Regards,
> Sarav.
>

Saturday, February 25, 2012

Page Splits \ sec

What is a "Bad" Number for Page Splits \sec.
I know ideally, that we would want Zero.
but when she I start being concerned ?
Thanks as usual
Greg Jackson
PDX, ORGreg
I can't find a figure anywhere for what is a bad number. However, a number
could be misleading anyway. On a large database with lots of tables, you would
get a lot more page splits than on a small database with a few tables.
As it just reports how many page splits, they may be mostly on one table or
they could be spread evenly over your database. I think the number of page
splits just on its own is not very useful.
Do you regularly reindex all or some of the tables in your database? If you
are worried about the effects that page splits are having on your database,
use DBCC SHOWCONTIG to see how fragmented they are. If some tables become
fragmented much quicker than
others (all tables except static tables will become fragmented to a degree
over time), it might be worth lowering the fill factor. This will increase
the table size but will slow down the page splits and the fragmentation.
Hope this helps
John|||Yeah I totally am with you.
Here is the deal.
I am our Senior DB Architect/Engineer. I am NOT the DBA and, as such, I don't
have direct access to prod.
However, our prod DB is totally sucking wind. (Sprocs taking in excess of
100,000 ms)
WE NEVER EVER EVER defrag as our DBA insists that it is impossible without
downtime. I have pointed him to a million articles on INDEXDEFRAG, etc. He
says the articles are "Wrong". He SWEARS that he must turn off logging to
run INDEXDEFRAG because "the log fills up too fast". (NUTS)
ALL of our tables are fragged bigger than crud. They ALL have 100% fill
factor in a highly transactional system (averaging 44 transactions/sec).
Most tables are currently 87% fragged. Not sure why, but that seems to be
where fragmentation bottoms out.
I am writing scripts to drop indexes and PKEYS in some cases so we can
recreate them with fill factor settings (Anywhere from 70% to 100% depending
on the table).
Our DBA says I'm nuts and that fill factor will REDUCE performance.
He tells his boss that we have ZERO page splits in production.
I say..."How can that be possible ? Think about what you're saying" We have
100% fill factors, we have 3.8 million transactions per day, Our Database is
growing at a rate of 5GB per month, but we have ZERO Page splits ?
I run perf mon against the server and I am seeing page splits all over the
freaking place.
I need to send numbers to his boss and say "Look, this guy is not right; we
have to implement my scripts or we are going to continue to tank as data and
volume grow."
Here is data from some random samples I obtained.
Date         Average Page Splits/sec
1/27/2004    1.15282921
2/1/2004     0.606016587
2/5/2004     1.344524812
2/6/2004     1.589833663
2/20/2004    0.9
2/21/2004    0.59
2/22/2004    0.78
2/27/2004    1.82
So my question is...
based on these numbers, are these figures high enough to prove my point?
1.5 page splits per second does not sound like a huge number to me. So it
appears to NOT prove what I "KNOW" to be happening in production.
Wouldn't I expect to see thousands and thousands of splits per second if the
problem were as prolific as I know it to be?
GAJ|||Those figures are well within acceptable limits and will not cause the type
of slowness you are indicating. While your DBA's claims are in general not
true, fragmentation in an OLTP system may not be as much of an issue as most
people think. If these are random I/Os (not range queries), it takes as much
time and effort to read a row from the beginning of the file as from the end.
There are certainly things the DBA can do to reduce fragmentation, but it
sounds like your schema / queries just need to be optimized. I can take the
best-designed db on the best hardware available and write a poor query that
will bring it to its knees. If you have an sp that is taking a long time you
should profile it and see what it is doing. Then you can see where the
issues lie. Your dba should be doing this without anyone screaming at
him <g>.
Andrew J. Kelly SQL MVP
"Jaxon" <GregoryAJackson@.hotmail.com> wrote in message
news:%23A%23wrcDCEHA.3928@.TK2MSFTNGP11.phx.gbl...
> [snip]
|||Hey Andrew,
I agree with you on one thing:
1. We have sprocs that definitely suck rocks. I will be rewriting many of
them soon, but this is a very large task (some will take upwards of 2 weeks
to rewrite; they are nasty).
BUT see below:
1. High frag levels require SQL Server to read more pages than a well
maintained DB would, so IO is excessive for reads. Is that not a true
statement (whether sequential or not)?
2. Also, establishing fill factor settings will reduce page splits, hence
reducing IO load. Is that not a true statement?
I definitely am aware that the existence of HEAPs and huge frag levels is
not our ONLY problem. I just see that issue as "low-hanging fruit" (easy and
quick to fix).
The next topic I have to tackle is that MOST of our FKeys are not indexed.
According to our DBA, that is NOT a problem either, even when we look at the
query plan and see massive table scans (we're talking tables with millions
of records). Again, he says we have "No Table Scans In Production" and
"Indexing the FKeys will slow us down and I will not allow them".
Here are the perfmon stats for Full Scans:
SQLServer:Access Methods\Full Scans/sec
(Average over the past few weeks = 21.38)
My plan to improve the performance:
1. Remove Heaps from production by ensuring clustered indexes (placed
wisely) on our Important tables
2. Establish Fill Factor Settings for WRITE Intensive tables
3. Perform periodic defrag maintenance
4. Add Indexes to FKeys that are used in Joins (see the sketch after this list)
5. Rewrite problem sprocs (remove temp tables, cursors, etc)
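To make items 2 through 4 concrete, a rough sketch (table, column and index
names are hypothetical):

-- 2: rebuild a write-intensive table's indexes with a fill factor
DBCC DBREINDEX ('dbo.OrderDetail', '', 80)
-- 3: periodic defrag that stays online and is fully logged
DBCC INDEXDEFRAG (0, 'dbo.OrderDetail', 1)
-- 4: nonclustered index on a foreign key used in joins
CREATE INDEX IX_OrderDetail_OrderID ON dbo.OrderDetail (OrderID)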
I know you've been around a while and I respect your opinion, so light me
up!
Thanks in advance
GAJ|||Hi,
Here are just my 2 cents worth ;-)

> 1. High Frag levels require SQL Server to read more pages than well
> maintained DBs so IO is excessive for reads. Is that not a true statement
> (Whether sequential or not) ?
A high frag level doesn't mean that SQL Server will read more pages. However,
when doing table scans SQL Server will be bouncing all over the disk to grab
the right pages, resulting in more IO. Therefore, I consider your statement
to be partially true.

> 2. Also establishing fill factor settings will reduce Page Splits, hence
> reducing IO Load. Is that not a true statement ?
Reducing the fill factor will indeed reduce page splits and therefore
increase write performance. However, because the pages are now partly filled
with air, SQL Server will need more logical IO when reading them. That said,
on my databases I've seen lots of examples where lower fill factors
increased performance significantly without any effect on read performance.
With regard to your plan, I would add clustered and non-clustered indexes as
soon as possible, making point 4 on your list priority 2.
Good luck with your DBA ;-)
HTH
Karl Gram
"Jaxon" <GregoryAJackson@.hotmail.com> wrote in message
news:uuEfqxDCEHA.1812@.TK2MSFTNGP12.phx.gbl...
> [snip]
|||nicely said Karl.
I agree with your semantics.
#1, 2 and 3 are all being done in one swoop, so really they are all the same
step as far as script building and execution are concerned.
The FKey index thing is definitely killing us. Any argument to the contrary
is really ludicrous. We have about half a dozen very large sprocs that are
called thousands and thousands of times a day and that access these tables.
You can see in the query plan that they are doing scans. If I fix this
problem, ALL the sprocs improve dramatically.
I'll keep swingin....
thanks
GAJ|||Jaxon,
Your recommendations sound wise to me. Just watch out for log size and
available free space in the database when you defrag.
It seems that you are in a "fight" with the DBA, and this will end up as a
political or even prestige issue. Not something I envy, because you will
never win. If you are right (and I bet a beer that you are), the DBA will
not like you, as you will make him/her look foolish/incompetent. And if you
are wrong, well... you are wrong. ;-)
A first step can be to implement your suggestions on a test database and see
how much it improves performance. How you then let the DBA know about your
results, I don't know - you have to think about that for a while.
Ideally, the DBA would think that he/she came up with the solution (possibly
with some assistance from you).
Another note, btw: having indexes on FKs will also help when you update or
delete from the referenced table, since SQL Server has to do an "internal"
join to verify the reference constraint.
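(A hypothetical two-table illustration of what I mean:)

CREATE INDEX IX_Child_ParentID ON dbo.Child (ParentID)
-- Without the index above, this DELETE would scan all of Child to
-- verify that no rows still reference the parent row; with it, the
-- RI check is an index seek:
DELETE FROM dbo.Parent WHERE ParentID = 42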
If you know how SQL Server works, these things aren't really that difficult,
and there is no "magic" involved. Perhaps there's only a communication
problem with the DBA?
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
"Jaxon" <GregoryAJackson@.hotmail.com> wrote in message
news:uuEfqxDCEHA.1812@.TK2MSFTNGP12.phx.gbl...
> [snip]
|||yeah Tibor,
you are right.
He really "IS" a nice guy. Just politics.
He has been around for a long time and kinda walks on water around here.
Im the new guy on the block and definately am stirring up a bunch of crud
(Kinda like a rooster in his hen house).
My only goal is to improve performance. We are currently dropping anywhere
from 3% up to 30% of our requests as we cant complete them in time due to DB
Bottlenecks.
Your suggestions are wise. I am trying really hard to be positive with him,
but there comes a point where the rubber hits the road too. We are
increasing volume dramatically in production so if we dont get this fixed
fast, we are literally gonna be outta business.
Thanks again, always good to hear from you.
gaj|||Since I got in late I see you have some good answers but I just wanted to
make some of my comments clear.
Fragmentation does not mean more reads unless you are doing range-type
queries. While it can mean more reads, it's usually not at the level you
would expect when doing index seeks. Fill factors can affect the number of
pages read but should not be confused with fragmentation. Proper fill
factors can also avoid or minimize page splits, but a few page splits are to
be expected. As Tibor points out, an index on a FK can be a real benefit if
you have RI declared. If so, and you don't have an appropriate index, you
can get scans related to the RI and not the actual WHERE clause. In general
I think your approach is solid and worthwhile. But since time is limited
(for most of us) you should prioritize the tasks by finding the worst
performing queries that are used most often. That will give you the biggest
bang for the buck.
Andrew J. Kelly SQL MVP
"Jaxon" <GregoryAJackson@.hotmail.com> wrote in message
news:uuEfqxDCEHA.1812@.TK2MSFTNGP12.phx.gbl...
> [snip]

Page Splits - What tables?

Hi everyone,
I've been looking at page splits today for the first time, after noticing
our work server's count was quite high. The number is increasing every
minute, so I thought some action needed taking.
I've done quite a bit of reading up on the subject this morning and
understand much of the theory, but I haven't found a way to track down which
of my tables are actually page splitting so regularly.
Does anyone know a method of finding this out?
Thanks in advance,
Lloyd
Use DBCC SHOWCONTIG... Tables with a high % full and lots of inserts will be
splitting... Tables with a lower % full may have already split, or may not
need splitting... Rebuild the index to set the fill factor to allow space
for new inserts.
Wayne Snyder, MCDBA, SQL Server MVP
Mariner, Charlotte, NC
www.mariner-usa.com
(Please respond only to the newsgroups.)
I support the Professional Association of SQL Server (PASS) and it's
community of SQL Server professionals.
www.sqlpass.org
"Lloyd" <Lloyd@.discussions.microsoft.com> wrote in message
news:E3C26251-CFAA-40AD-86DF-44E9298F68D6@.microsoft.com...
> [snip]
|||Cheers Wayne
Lloyd
"Wayne Snyder" wrote:

> Use DBCC showcontig... Tables with a high % full and inserts will be
> splitting... Pages with lower may have already split, or do not need
> splitting... Rebuild the index to set the fill factor to allow space for
> new inserts.
> --
> Wayne Snyder, MCDBA, SQL Server MVP
> Mariner, Charlotte, NC
> www.mariner-usa.com
> (Please respond only to the newsgroups.)
> I support the Professional Association of SQL Server (PASS) and it's
> community of SQL Server professionals.
> www.sqlpass.org
> "Lloyd" <Lloyd@.discussions.microsoft.com> wrote in message
> news:E3C26251-CFAA-40AD-86DF-44E9298F68D6@.microsoft.com...
>
>
|||If a table has a clustered index on a "monotonically increasing" value
(identity) it will probably not be a major culprit.
The splits are most likely caused by tables where the clustered index is on
some other column (GUIDs, for example, are great fun).
If you run a SHOWCONTIG as Wayne pointed out, you'll see which tables are
highly fragmented (a Scan Density below 80% is a sign of issues).
Fragmentation is largely caused by page splits, so those are your culprits.
Note: if a table is highly fragmented but has fewer than 1,000 pages of
data, don't focus on it. Focus on the guys with thousands and thousands of
data pages.
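(For example, a minimal sketch; on SQL 2000 you can run SHOWCONTIG against
every table in the current database in one go. If I remember right, the
TABLERESULTS output includes ScanDensity and Pages columns to filter on, but
verify the column names on your build:)

DBCC SHOWCONTIG WITH TABLERESULTS
-- look for rows where ScanDensity < 80 and Pages > 1000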
You will see performance improve leaps and bounds when you get this fixed.
If you have questions, feel free to email me directly
cheers
Greg Jackson
PDX, Oregon
