
Wednesday, March 28, 2012

Parameter Optimization

Hi guys,
I have a rather complex query which accepts 2 date parameters. When the
dates are hard-coded the query runs in 2 seconds; however, when a parameter
is passed with the same values the query runs in excess of 2 minutes.
I got around this by building the entire query into a varchar variable, and
then using exec(strexpression).
Is there a problem in the way the SQL Server optimizer handles parameterized
queries? And what are the alternative ways to get around this?
Thanks,
Justin

Maybe post some DDL on this, specifying what the datatypes are.
"Justin" wrote:

> Hi guys,
>
> I have a rather complex query which accepts 2 date parameters. When the
> dates are hard-coded the query runs in 2 seconds; however when a parameter
> is passed with the same values the query runs in excess of 2 minutes.
>
> I got around this by building the entire query into a varchar variable, and
> then using the exec(strexpression).
>
> Is there a problem in the way the SQL server optimizer handles parameterized
> queries? And what are the alternate ways to get around this?
>
> Thanks,
> Justin
>
>|||I noticed that if you have many records in the tables, an implicit conversion from varchar
to nvarchar (or back) can cause the behavior you describe. Maybe when you
hardcode the dates you use a different type than when you pass them as parameters?
Peter
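For anyone hitting the same slowdown: this is the classic parameter-sniffing pattern, and besides the exec(string) workaround Justin used, a couple of query-level hints usually help. Below is a minimal sketch only; the table, columns, and procedure name (dbo.Orders, OrderDate, dbo.GetOrders) are hypothetical stand-ins rather than anything from the original post, and the date literals in OPTIMIZE FOR are placeholders.

-- Hypothetical repro: a proc whose cached plan was built for unrepresentative
-- parameter values. Declaring @StartDate/@EndDate with the column's exact type
-- (here datetime) also avoids the varchar/nvarchar conversion Peter mentions.
create procedure dbo.GetOrders
    @StartDate datetime,
    @EndDate   datetime
as
begin
    -- Workaround 1: recompile this statement on every call so the plan is
    -- optimized for the actual values passed in (costs a compile per call).
    select OrderId, CustomerId, OrderDate
    from dbo.Orders
    where OrderDate >= @StartDate
      and OrderDate <  @EndDate
    option (recompile);

    -- Workaround 2 (instead of the above): tell the optimizer which values
    -- to assume when it builds the cached plan, regardless of what is passed:
    --   select OrderId, CustomerId, OrderDate
    --   from dbo.Orders
    --   where OrderDate >= @StartDate and OrderDate < @EndDate
    --   option (optimize for (@StartDate = '20120101', @EndDate = '20120201'));
end
go

Both hints exist from SQL Server 2005 onward, so either is an alternative to building the query as a string and calling exec().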

Wednesday, March 21, 2012

Parallelism in SSIS

Hi,

I would just like to confirm something with you guys...

Am I correct in saying that you don't need multiple connections to the same DB in an SSIS package in order to achieve parallel processing across multiple SQL tasks? In other words, I have 2 SQL tasks executing different stored procedures on the same DB that I want to run in parallel. They should be able to share one connection and still process in parallel, correct?

With that in mind, would the processing be faster if they each had their own connection?

Thanks in advance.

GBez wrote:

Hi,

I would just like to confirm something with you guys...

Am I correct in saying that you don't need multiple connections to the same DB in an SSIS package in order to achieve parallel processing across multiple SQL tasks? In other words, I have 2 SQL tasks executing different stored procedures on the same DB that I want to run in parallel. They should be able to share one connection and still process in parallel, correct?

They can share a connection manager, but they still maintain separate connections to the database. This is a good thing, by the way.

GBez wrote:

With that in mind, would the processing be faster if they each had their own connection?

No, it wouldn't make any difference.

-Jamie

Parallel Processing of Partition

Guys,
I would like to thank Elizabeth for a great document. I appreciate it.
Under " Maximize parallelism during processing", I see the following bench mark.


# of Processors     # of Partitions to be processed in parallel
4                   2 – 4
8                   4 – 8
16                  6 – 16

In our scenario, we have a huge dataset, which means processing about 20 partitions in parallel. The box we have is a beefy DL585 with 4 dual-core CPUs and 32 GB of memory, and we are using the 64-bit architecture.
However, when monitoring the trace I see the following:

Event                                 Description                               Start      End
1 - Process                           Processing of Fact 2001 Partition        8:05:43    10:11:04
25 - Execute SQL                      SQL Statement                            9:38:06    9:38:06
17 - Read Data                        Reading Data from Fact 2001 Partition    9:38:06    10:06:20
16 - Write Data                       Writing Data to Fact 2001 Partition      8:05:43    10:10:59
20 - Build Indexes                    Build Indexes                            10:11:00   10:11:03
28 - Build Aggregations and Indexes   Build Aggregations and Indexes           10:11:00   10:11:03

Based on the above, the thread for processing the fact partition kicks off at 8:05 AM; however, the actual SQL against the underlying database does not get kicked off until 9:38 AM.

Does this mean it is waiting for CPUs? When I look at the PerfMon stats on the server side, the CPU usage is pretty flat, and I do not understand how to correlate the two.

Any suggestions are much appreciated.

Rgds

Hari

Running processing of 20 partitions in parallel on a 4-processor box is a bit of overkill. The fact that you have lots of data to process doesn't mean you should start processing many partitions in parallel.

As the Perf Guide indicates, the number of processors should determine the number of parallel operations. If you try to start too many processing operations, you will get the behavior you observe. Say you told Analysis Server to process 20 partitions in parallel but there are not enough resources. In this case the operations running in parallel start fighting for resources and some of them become blocked. Analysis Server also starts many operations within a single partition-processing job in parallel. What you see is that Execute SQL and Read Data started in parallel, but Write Data didn't start until some data became available.

You should also take a look at optimizing your SQL Server database for the type of queries Analysis Server sends, to speed up delivering data to it.

Edward.
--
This posting is provided "AS IS" with no warranties, and confers no rights.

|||

Ahh... thanks, Edward.
Regarding this: "The fact that you have lots of data to process doesn't mean you should start processing many partitions in parallel."

At this point, we are doing a full process on the database, even though it is partitioned. So by default it first starts the threads for the cubes in parallel, then spawns new threads for the measure groups, and finally for the partitions in parallel, and this is done by default. Is there any way I can control this?

We are using the biggest possible box that is available now.
Quad-core CPUs do not come out until June of this year.
As you say, we are also working with the DB2 team to optimize the SQL by building indexes on the partition columns.

Just

|||

Hi Edward,

I am the Windows Server SA who supports Hari's machine. We had a similar issue on a 32-bit Windows 2003 OLAP machine that was acting the same way when building cubes. Analysis Services was running out of memory but we weren't seeing any memory or processor spikes. I opened a case and the engineer explained this was due to the fact that there was a 2 GB limitation, regardless of how much RAM the server has. We tried adding the /3GB and /USERVA switches. Neither worked and the solution to get past this was to upgrade to 64-bit.

How much memory is allocated to an application in the 64-bit version of Windows 2003? Is there a way to tell how much memory it's utilizing?

In addition the OS is seeing all 32 GB of RAM, but I can't tell if it's utilizing all 32 GB. Are the /PAE, /USERVA, and/or /AWE switches relevant to Win2K3 64-bit? Do I need to add anything to the boot.ini to ensure applications are able to use all available RAM? Do you have any suggestions for optimizing performance in SQL and or Analysis Services?

Thanks - Gib

|||

On a 64-bit machine there is no worry about a single process being able to access as much memory as your machine can give it. No modifying flags or setting special options is needed to access more memory.

The amount of memory your Analysis Server will use during processing depends on the size of the objects you will be processing and on whether you will be processing them in parallel.

I'd say if you moved to a 64-bit machine with 32 GB of RAM, you should be able to handle quite a few things in parallel, so just go ahead and run your processing and see how high memory utilization goes.

Edward.
--
This posting is provided "AS IS" with no warranties, and confers no rights.

|||

Thanks, Edward. I'm going to shift my question to Hari's problem, specifically in relation to your reply yesterday.

"Running processing of 20 partitions in parallel on 4 processor box is bit of the overkill. The fact you have lots of data to process doesnt mean you should start processing many partitions in parallel.

As Perf Guide indicates, number of processors should determine number of parallel operations. If you try to start too many processing operations, you will get into behavior you observe. Say you told Analysis Server to process 20 partitions in parallel but there not enought resources. In this case operations running in parallel are going to start fighting for resoures and some of the become blocked. Analysis Server also starts many operations within sigle partition processing job in parallel. what you see is that Execute SQL and Read data started in parallel, but Write data didnt start till some data became avaliable.

You should also take a look at optimizing your SQL server database for the type of queries Analysis Server sends to speed up sending data to it."

Hari added a chart that has a Processor to Parallel Partition correlation.


# of Processors     # of Partitions to be processed in parallel
4                   2 – 4
8                   4 – 8
16                  6 – 16

We are running on an HP DL585; it has 4 dual-core AMD processors. That having been said, can OLAP utilize them as 8 processors, or will it only see them as 4?

The other thing you mentioned is that 20 partitions is overkill. Does that mean this type of lag is to be expected? Is there any kind of tweaking that can be done, either on the OS or in SQL, to improve performance?

As it stands now, Hari isn't seeing any CPU or memory spikes when the delay is occurring. If operations are blocked and fighting for resources, will we see 100% memory and CPU utilization?

Thanks - Gib

|||

Yes, you should count a single core as a processor for the purposes of the chart you mention. The DL585 is a great machine, and you should count it as having 8 processors.
Therefore the Perf Guide recommends you process 4 to 8 partitions in parallel.

The bottleneck might not be CPU or memory; it could also be your relational database. You should take a look at your relational database and see how hard it is working to deliver data to Analysis Server. You might want to run the Database Tuning Advisor to build more indexes supporting the queries sent to it.

Edward.
--
This posting is provided "AS IS" with no warranties, and confers no rights.

|||

Edward, do you know if tuning the relational database will enable Hari to process 20 partitions at the same time? Do you know of anyone who is doing this on a quad dual-core server?

According to the chart it doesn't look like this is possible, even with 16 processors. I'm wondering if there is any kind of tweaking that can be done so that it's possible.

Thanks - Gib

|||

That is not going to help. The number of partitions you process in parallel depends on how many operations your hardware can perform in parallel. If you tune your system, optimize your relational database, and are still not satisfied with processing performance, you might start looking at getting newer machines with a higher number of processors.

Edward.
--
This posting is provided "AS IS" with no warranties, and confers no rights.

Tuesday, March 20, 2012

Parallel activations with GET CONV GROUP

Hey guys. I'll try to keep this as short as possible.

I was testing out some thresholds with a simple Service Broker application and ran into something interesting that I'm trying to find a workaround for. I set up a simple service on a single instance, and am transmitting messages to/from the same instance. I assigned a procedure to be activated on the queue, with a max of 20 parallel activations.

The procedure originally was reading messages off the queue by following a process with major steps outlined here:

1. Start a loop until there are no more messages to process.
2. Call GET CONVERSATION GROUP with a 3000 ms timeout into variable x.
3. If a group was retrieved, RECEIVE TOP 1 from the queue WHERE conversation_group = x; if no group was retrieved, exit.
4. Process the message(s) received for the given group (simply insert some of the msg data into a user table, then WAITFOR 1 second to simulate doing some work).
5. Loop to get the next group and messages.

So, with this type of configuration, I sent off a few thousand messages to the queue and monitored the queue count, user table count, and activated tasks count. To my surprise, despite the fact that there were thousands of messages in the queue waiting to be processed, only a single task was processing data, pulling about 1 per second.

I tried over and over with different values in the waitfor, and no additional tasks got spun up until I had a waitfor of 6 seconds or more...then only about 3-4 would get spun up at any time in parallel.

Given the way activation occurs, I thought I'd try things without the call to get conv group, and instead simply use a receive without a where clause...so, I rewrote the activation procedure to skip the call to GET CONV GROUP, and simply perform a receive without any where clause. I also removed the waitfor delay and simply left it to process messages as fast as it could...

This time, I sent over the same number of messages and checked the same counts, this time noticing that within 20 seconds or so, all 20 parallel instances of the activation procedure were processing messages off the queue, and doing so very quickly. The same occurred if I put the waitfor delay back in...

The problem is that this type of coding doesn't allow for processing related messages, even though it seems to be much better at allowing parallel activations. I tried to rewrite again using 2 types of receive statements (the first one without a where clause and the next one with a where clause on the conversation group from the first receive, after processing the message) to see if that would allow for better parallel activation; however, that worked about the same as using the GET CONV GROUP call.

So, if I have an application that mostly does not make use of grouping and want to ensure optimal parallel activation when needed, but also handle the times when related messages arrive and need to be processed together, what can I do? Any ideas?

I have test code scripts that can reproduce this behavior if needed. Thanks up front,

Can you send your scripts to Rushi (dot) Desai (at) microsoft (dot) com so that I can take a look at what is going on? Activation should not start tasks more frequently than once every 5 seconds whether you use GET CONVERSATION GROUP or RECEIVE directly. Given that, starting 20 concurrent queue readers should take 100 or more seconds.|||

Hi Rushi...you're correct, it appears that the tasks are getting started about 1 every 5 seconds at best. Not sure how I got the original 20 seconds I mentioned above, but I can't seem to reproduce that no matter what I do...must have been a brain mix-up on my part, sorry.

I can still forward you the scripts, but I'm mostly curious as to whether there's anything that can be done to 'force' the same type of activation when using the GET CONV GROUP call followed by a RECEIVE with a filter on the given group. If my queue is getting filled with about 50 messages a second, 99.9% of which will contain unique conversation_group values, and each message takes about 1/2 second to process, it seems that the best I can get for parallel activation is about 3, and that doesn't come close to keeping up with the arrival rate...

Thanks,

|||

You should not notice any difference in the behavior of activation if you do:

WAITFOR (GET CONVERSATION GROUP ...)
... something that takes less than a second or two to complete ...
RECEIVE ... WHERE conversation_group_id = ...
... process message for long time...

and

WAITFOR (RECEIVE ... )
... process message for long time...

50 msgs on unique conversation groups with about 1/2 second per message would perform as follows (roughly):

t = 0: Start first task
5 < t < 10: Consumed 10 messages, start second task
10 < t < 15: Consumed 30 messages; start third task
15 < t < 20: Consumed 50 messages

Unfortunately, in SQL Server 2005, the 5-second timeout cannot be adjusted by the user, and therefore the user cannot adjust the "responsiveness" of activation. There is a tradeoff to be made between how quickly activation should launch additional tasks and how much time it should let existing tasks take to drain the queue. In future releases it would be nice to allow the user to control responsiveness. I would recommend that you write to product feedback (http://lab.msdn.microsoft.com/productfeedback/) so that we can open an issue to resolve this in the next release.

A current workaround may be to use multiple identical services (and queues) and partition your dialogs among those services. Each of those queues will be activated independently, and you could get a larger number of tasks started. (We know of at least one internal customer who is doing this.)

Thanks,
Rushi
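A minimal sketch of that multiple-queue workaround, patterned on the SCRIPT 1 and SCRIPT 2 code posted further down in this thread (not from the original posts): the names qu_test1_a/qu_test1_b, sv_test1_a/sv_test1_b, and cp_test1_a/cp_test1_b are hypothetical, and each queue needs its own copy of the activation procedure because the procedure names the queue it reads from.

-- Two identical queues, each with its own activation budget.
create queue dbo.qu_test1_a with
    status = on, retention = off,
    activation (status = on, procedure_name = dbo.cp_test1_a,
                max_queue_readers = 20, execute as 'dbo')
    on [default];

create queue dbo.qu_test1_b with
    status = on, retention = off,
    activation (status = on, procedure_name = dbo.cp_test1_b,
                max_queue_readers = 20, execute as 'dbo')
    on [default];

create service sv_test1_a on queue dbo.qu_test1_a (ct_test1);
create service sv_test1_b on queue dbo.qu_test1_b (ct_test1);
go

-- On the sending side, spread new dialogs across the target services,
-- e.g. by alternating or by hashing a business key.
declare @h uniqueidentifier;

if abs(checksum(newid())) % 2 = 0
begin
    begin dialog conversation @h
        from service sv_test1 to service 'sv_test1_a'
        on contract ct_test1 with encryption = off;
end
else
begin
    begin dialog conversation @h
        from service sv_test1 to service 'sv_test1_b'
        on contract ct_test1 with encryption = off;
end

send on conversation @h message type mt_test1
    (N'<message>Test Message to Target</message>');

Each queue then gets its own activation clock, so with two queues the ramp-up of activated tasks is roughly twice as fast as with one.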

|||

Hey again Rushi.

Well, I notice a HUGE difference in activation between those 2 exact scenarios. Using a GET CONV GROUP followed by a RECEIVE with a where clause, I can't get more than a single activation. However, using just a RECEIVE without a where clause and no call to GET CONV GROUP, I get max activations, 1 new activation every 5 seconds or so.

I'm not so concerned about the 5 second activation threshold, as that would be plenty fast enough an activation under load, and once the tasks are activated, they remain until they exit (i.e. queue is empty).

Since I can't attach scripts, I'll post another message immediately following this one with the full-text of the scripts that can easily reproduce this different behavior. To use the scripts:

1) SCRIPT 1 - will create the sample db, broker objects, endpoint, queue, service, etc., including the activation procedure (cp_test1). Notice that in the initial creation, the procedure makes use of method #1 from above (i.e. a call to GET CONV GROUP, followed by a RECEIVE with a where clause), and simply pulls the top 1 message from the queue with a group filter, inserts it into another logging table, then pauses for 1/2 second before continuing.

2) SCRIPT 2 - will simply send @c messages (2000 in the post) into the queue, pausing for @delay (2/100ths of a second in the post) between each send onto the queue.

Run SCRIPT 1, then set up another script to simply show the count of activated tasks, then start SCRIPT 2. You should notice that despite the fact that messages are arriving much faster than they are being processed, only a single task is activated no matter how long you wait.

3) SCRIPT 3 - will alter the cp_test1 procedure to use method #2 from above (i.e. no call to GET CONV GROUP, but instead just a call to RECEIVE without where clause). Run this to modify the proc, then start the tests over after all activated tasks have either exited or been killed.

After running script 3, you'll notice that tasks get activated every 5 seconds up to max activations...

|||

--

SCRIPT 1

--

/*
use master
drop database sbTest
drop endpoint ep_ServiceBroker_tcp
*/
execute as login = 'sa'
go

create database sbTest
go

use sbTest
go


create master key encryption by password = 'jpaiweqf2q938hq2-3980nhf9piunfwpojf';
alter master key add encryption by service master key;
go

create endpoint ep_ServiceBroker_tcp
state = stopped
as tcp (listener_port = 4022, listener_ip = all)
for service_broker (authentication = windows, encryption = disabled,
message_forwarding = disabled);
go

alter database sbTest set enable_broker;
alter database sbTest set error_broker_conversations;
go

create message type mt_test1 validation = well_formed_xml;
go

create contract ct_test1 (mt_test1 sent by initiator);
go

create procedure dbo.cp_test1
as
declare @conversation_group_id uniqueidentifier,
@conversation_handle uniqueidentifier,
@msg_type_name nvarchar(128)

declare @messages table (status tinyint, priority int, queing_order bigint, conversation_group_id uniqueidentifier,
conversation_handle uniqueidentifier, message_sequence_number bigint, service_name nvarchar(512),
service_id int, service_contract_name nvarchar(256), service_contract_id int, message_type_name nvarchar(256),
message_type_id int, validation nchar(2), message_body varbinary(max))

-- Loop until we manually break
while 1 = 1 begin
-- Start a transaction to receive the appropriate group
begin transaction;

-- Get the next available conversation group...wait for a few seconds then loop...
waitfor (
get conversation group @conversation_group_id from dbo.qu_test1),
timeout 3000;

-- If we didn't get anything, break since we're all done and will be re-awoken automatically if needed
if @conversation_group_id is null begin
if @@trancount > 0
rollback transaction;

break;
end

-- Got a group, so process all the messages for said group...
while 1 = 1 begin
-- Receive the next message for the conversation group...notice the where clause in the
-- receive statement, which ensures we only get messages for this group...no need for a waitfor here,
-- as calling get conversation group will lock that group (so no one else gets any messages for it), and
-- we wouldn't even get to this point if there wasn't a message for this group...
receive top (1) *
from dbo.qu_test1
into @messages
where conversation_group_id = @conversation_group_id;

if @@rowcount = 0 or @@error <> 0
-- If an error occurred, there are probably no more messages, or something happened in the receive
break;

-- Get the information that we need to process the message
select @conversation_handle = conversation_handle,
@msg_type_name = message_type_name
from @messages

-- If an error occurred (on the other end), or the message is an end-conversation message,
-- end the conversation from our end
if @msg_type_name in('http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog',
'http://schemas.microsoft.com/SQL/ServiceBroker/Error') begin
end conversation @conversation_handle;
end

-- Stick the message data into our logging table
begin try
insert dbo.zztemp_messages
select status, priority, queing_order, conversation_group_id, conversation_handle,
message_sequence_number, service_name, service_id, service_contract_name,
service_contract_id, message_type_name, message_type_id, validation, message_body, getdate()
from @messages;

-- Simulate think time for doing some work for half a second or so
waitfor delay '000:00:00.5'
end try
begin catch
select error_message() as err_msg, error_number() as err_number,
error_line() as err_line, error_procedure() as err_proc;

print 'Caught error ' + quotename(error_message());

rollback transaction;
break;
end catch

delete @messages
end -- inner - while 1 = 1

-- All done processing this conversation group, commit now...
commit transaction
end -- while 1 = 1
go

if isnull(object_id('dbo.zztemp_messages'), 0) = 0
create table dbo.zztemp_messages (status tinyint, priority int, queing_order bigint, conversation_group_id uniqueidentifier,
conversation_handle uniqueidentifier, message_sequence_number bigint, service_name nvarchar(512),
service_id int, service_contract_name nvarchar(256), service_contract_id int, message_type_name nvarchar(256),
message_type_id int, validation nchar(2), message_body varbinary(max), dt datetime default getdate());
go

-- Create the queue for the source server
create queue dbo.qu_test1 with
status = on,
retention = off,
activation (
status = on,
procedure_name = dbo.cp_test1,
max_queue_readers = 20,
execute as 'dbo'
)
on [default];
go

-- Create the source service
create service sv_test1
on queue dbo.qu_test1
(ct_test1);
go

alter endpoint ep_ServiceBroker_tcp state = started
go

|||

--

SCRIPT 2

--

use sbTest
go

declare @i int, @c int, @delay varchar(25)

select @i = 0, -- Incremental count...
@c = 2000, -- Set this to the number of messages to send to the queue
@delay = '000:00:00.02' -- waitfor delay value to pause on each iteration before sending the next message

while @i <= @c begin

-- Send a test message to the target service
declare @msg xml, @hConversation uniqueidentifier;
set @msg = N'<message>Test Message to Target</message>';

begin try
begin transaction;

-- Start the conversation
begin dialog conversation @hConversation
from service sv_test1
to service 'sv_test1'
on contract ct_test1
with encryption = off;

-- Send the message on the dialog
send on conversation @hConversation
message type mt_test1
(@msg);

-- Commit the send...
commit transaction;

-- Not going to code for receiving back, as the auto-activated stored procedure handles that
-- NOTE that it also handles ending the conversation that was begun here...
end try
begin catch
-- Show the error information
select error_number() as err_num, error_message() as err_msg,
error_line() as err_line, error_procedure() as err_proc;

if @hConversation is not null
end conversation @hConversation

if @@trancount > 0
rollback transaction;

end catch

if @@trancount > 0
commit transaction;

set @i = @i + 1
waitfor delay @delay
end

|||

--

SCRIPT 3

--

use sbTest
go

alter procedure dbo.cp_test1
as
declare @conversation_group_id uniqueidentifier,
@conversation_handle uniqueidentifier,
@msg_type_name nvarchar(128),
@error int

declare @messages table (status tinyint, priority int, queing_order bigint, conversation_group_id uniqueidentifier,
conversation_handle uniqueidentifier, message_sequence_number bigint, service_name nvarchar(512),
service_id int, service_contract_name nvarchar(256), service_contract_id int, message_type_name nvarchar(256),
message_type_id int, validation nchar(2), message_body varbinary(max))

-- Loop, receiving the next available message (from any group) until the queue is empty...
while 1 = 1 begin
begin transaction;

-- Notice we are not performing a call to GET CONVERSATION GROUP here...
waitfor (
receive top (1) *
from dbo.qu_test1
into @messages), timeout 3000;

if @@error <> 0 begin
rollback tran;
break;
end

if (select count(*) from @messages) = 0 begin
rollback tran;
break;
end

-- Get the information that we need to process the message
select @conversation_handle = conversation_handle,
@msg_type_name = message_type_name,
@conversation_group_id = conversation_group_id
from @messages

-- If an error occurred (on the other end), or the message is an end-conversation message,
-- end the conversation from our end
if @msg_type_name in('http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog',
'http://schemas.microsoft.com/SQL/ServiceBroker/Error') begin
end conversation @conversation_handle;
end

-- Stick the message data into our logging table
begin try
insert dbo.zztemp_messages
select status, priority, queing_order, conversation_group_id, conversation_handle,
message_sequence_number, service_name, service_id, service_contract_name,
service_contract_id, message_type_name, message_type_id, validation, message_body, getdate()
from @messages;

-- Simulate think time for doing some work for half a second or so
waitfor delay '000:00:00.5'
end try
begin catch
select error_message() as err_msg, error_number() as err_number,
error_line() as err_line, error_procedure() as err_proc;

print 'Caught error ' + quotename(error_message());

rollback transaction;
break;
end catch

-- All done processing this message, commit now...
commit transaction;

delete @messages
end -- while 1 = 1
go

|||

Yes, you can.

The problem is the inner loop that iterates over messages for a given group until RECEIVE returns an empty result set. This kind of processing fools the activation machinery into believing that the activated proc is keeping up with the incoming rate of messages, because of the empty result set.

The solution is to receive ALL available messages for a group (no TOP clause), and then process the result set using a cursor:

use sbTest
go

alter procedure dbo.cp_test1
as
declare @conversation_group_id uniqueidentifier,
@conversation_handle uniqueidentifier,
@msg_type_name nvarchar(128),
@queuing_order int;

declare @messages table (status tinyint, priority int, queing_order bigint, conversation_group_id uniqueidentifier,
conversation_handle uniqueidentifier, message_sequence_number bigint, service_name nvarchar(512),
service_id int, service_contract_name nvarchar(256), service_contract_id int, message_type_name nvarchar(256),
message_type_id int, validation nchar(2), message_body varbinary(max))

-- Declare a cursor for the @messages table variable
DECLARE crsMessages CURSOR
forward_only read_only
for
SELECT conversation_handle,
message_type_name,
queing_order
FROM @messages;

-- Loop until we manually break
while 1 = 1 begin
-- Start a transaction to receive the appropriate group
begin transaction;

-- Get the next available conversation group...wait for a few seconds then loop...
waitfor (
get conversation group @conversation_group_id from dbo.qu_test1),
timeout 3000;

-- If we didn't get anything, break since we're all done and will be re-awoken automatically if needed
if @conversation_group_id is null begin
if @@trancount > 0
rollback transaction;

break;
end;

-- Got a group, so receive ALL available messages for it (no TOP clause)...notice the where clause in the
-- receive statement, which ensures we only get messages for this group...no need for a waitfor here,
-- as calling get conversation group will lock that group (so no one else gets any messages for it), and
-- we wouldn't even get to this point if there wasn't a message for this group...
receive *
from dbo.qu_test1
into @messages
where conversation_group_id = @conversation_group_id;

if @@rowcount = 0 or @@error <> 0
-- If an error occurred, there are probably no more messages, or something happened in the receive
break;

-- Open the cursor over @messages and process each received message in turn
OPEN crsMessages;

FETCH NEXT FROM crsMessages
INTO @conversation_handle, @msg_type_name, @queuing_order;

WHILE (@@FETCH_STATUS = 0)
BEGIN
-- If an error occurred (on the other end), or the message is an end-conversation message,
-- end the conversation from our end
if @msg_type_name in('http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog',
'http://schemas.microsoft.com/SQL/ServiceBroker/Error') begin
end conversation @conversation_handle;
end

-- Stick the message data into our logging table
begin try
insert dbo.zztemp_messages
select status, priority, queing_order, conversation_group_id, conversation_handle,
message_sequence_number, service_name, service_id, service_contract_name,
service_contract_id, message_type_name, message_type_id, validation, message_body, getdate()
from @messages
WHERE queing_order = @queuing_order;

-- Simulate think time for doing some work for half a second or so
waitfor delay '000:00:00.5'
end try
begin catch
select error_message() as err_msg, error_number() as err_number,
error_line() as err_line, error_procedure() as err_proc;

print 'Caught error ' + quotename(error_message());

rollback transaction;
break;
end catch

FETCH NEXT FROM crsMessages
INTO @conversation_handle, @msg_type_name, @queuing_order;
END

CLOSE crsMessages;

delete @messages

-- All done processing this conversation group, commit now...
commit transaction
end

DEALLOCATE crsMessages
go

HTH,
~ Remus

|||Yep, that makes sense...thanks guys!

Paging, Sorting, Filtering Already-Written SPs' Results

Hi guys,
we developed a large-scale web application which uses SQL Server 2005,
and we created all of the SPs for getting a list of particular records or
getting a record's info,
i.e. Products_GetList, Products_GetInfo, etc.
Now we need to implement paging, sorting, or filtering on these SPs'
results...
and we couldn't do this at the web-application level because of the huge number of
records... Also, it's not possible to change all of these SPs (we have
more than 300 SPs already).
In other words, I'm searching for a way to create a generic method (SP or
UDF) which does the paging and other operations on the results of all SPs if
required, i.e.:
MyApp --> DAL --> Call Specific SP with Specific Sorting --> DB -->
[Generic SP (do sorting)] --> Specific SP
Thanks

Hi
http://databases.aspfaq.com/database/how-do-i-page-through-a-recordset.html
"Khafancoder" <khafancoder@.gmail.com> wrote in message
news:1168506782.577051.145700@.k58g2000hse.googlegroups.com...
> Hi guys,
> we developed a large-scale web-application which uses sqlserver 2005,
> and we created all of SPs for geting list of particular records or
> geting a record info.
> i.e Products_GetList | Products_GetInfo & etc.
> now we need to implement paging,sorting or filtering on these SPs
> results...
> & we couldn't do this at webapplication level because of huge number of
> records... also it's not possible to change all of these SPs(we have
> more than 300 SPs already)
> in other words i'm searching for a way to create a generic method(SP or
> UDF) which do the paging & other operations on result of all SPs if
> required.
> i.e :
> MyApp --> DAL --> Call Specific SP with Specific Sorting --> DB -->
> [Generic SP (do sorting)] --> Specific SP
>
> Thanks
>|||IMO the solution is to redesign, rather than attempt a workaround. You might
think you could use wrapper procedure calls for each stored proc which
caches the results in a temp table and sorts them but the syntax for this is
"insert into #yourtable exec yourproc", so the temp table definition must
already exist and each wrapper proc will therefore be different. Whichever
way you look at it, this will require widespread changes. The sorted column
will need to be provided, and as you are using paging, the page size and
page number will be provided. So, new parameters will get added to each
stored proc call from the application and each stored proc needs editing.
Cheers,
Paul Ibison SQL Server MVP, www.replicationanswers.com .|||Thanks,
yes, as you said, this wrapper method (i.e. for sorting) should get the
SortColumnName, Asc|Desc, and SP name...
If we ignore the temp-table structure problem, the other problems can be
solved; for example, if all of these SPs return a result set in a
specific format, then the wrapper method could insert them into a generic
temp table, sort them, and finally return them.
In this approach I can call "insert into #generictemptable exec SPName"
dynamically by using exec, but I think it will affect
application performance...
How about CLR integration? Could we use it to write the wrapper?
Paul Ibison wrote:
> IMO the solution is to redesign, rather than attempt a workaround. You might
> think you could use wrapper procedure calls for each stored proc which
> caches the results in a temp table and sorts them but the syntax for this is
> "insert into #yourtable exec yourproc", so the temp table definition must
> already exist and each wrapper proc will therefore be different. Whichever
> way you look at it, this will require widespread changes. The sorted column
> will need to be provided, and as you are using paging, the page size and
> page number will be provided. So, new parameters will get added to each
> stored proc call from the application and each stored proc needs editing.
> Cheers,
> Paul Ibison SQL Server MVP, www.replicationanswers.com .|||My understanding of CLR integration is that it is great for iterative tasks
and pattern matching but this is pretty standard SQL so I don't see any
benefit there. In the code-behind you could use some paging code applied to
the recordset as mentioned in URI's link, but this performs badly compared
to a where clause run on the server. Essentially I'd still personally go
down the rewrite route unless your tables are incredibly generic.
Cheers,
Paul Ibison SQL Server MVP, www.replicationanswers.com .
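For readers weighing the wrapper approach discussed above, here is a rough sketch of what it looks like for a single proc. It assumes the SP's result-set shape is known in advance (the temp-table problem Paul describes); the proc name Products_GetList comes from the question, but its columns and the wrapper name are hypothetical stand-ins.

-- Hypothetical wrapper that pages/sorts the output of an existing proc.
-- The temp table must match the proc's result set exactly, which is why
-- a truly generic wrapper is hard to build this way.
create procedure dbo.Products_GetList_Paged
    @SortColumn sysname,
    @SortDesc   bit,
    @PageNum    int,
    @PageSize   int
as
begin
    create table #results (
        ProductId   int,
        ProductName nvarchar(100),
        Price       money
    );

    -- Capture the existing proc's output without changing the proc itself
    insert into #results
    exec dbo.Products_GetList;

    -- Number the rows in the requested order, then return one page.
    -- ROW_NUMBER() is available from SQL Server 2005 onward.
    with numbered as (
        select *,
               row_number() over (
                   order by
                       case when @SortColumn = 'ProductName' and @SortDesc = 0 then ProductName end asc,
                       case when @SortColumn = 'ProductName' and @SortDesc = 1 then ProductName end desc,
                       case when @SortColumn = 'Price'       and @SortDesc = 0 then Price       end asc,
                       case when @SortColumn = 'Price'       and @SortDesc = 1 then Price       end desc,
                       ProductId) as rn
        from #results
    )
    select ProductId, ProductName, Price
    from numbered
    where rn between (@PageNum - 1) * @PageSize + 1 and @PageNum * @PageSize
    order by rn;
end
go

The catch, as noted in the thread, is that the #results definition is specific to each proc, so every one of the 300+ procs would still need its own wrapper; that is why the rewrite route keeps coming up.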

Monday, March 12, 2012

Paging MSSQL database results with PHP... can anyone help me, please?

hi guys..

I have got stuck on a programming problem called paging. I am using MSSQL as my database and PHP for displaying data across several pages, but MSSQL does not support the LIMIT clause. Can anyone help me with how to create PHP + MSSQL paging with correct code? Please help me, it's urgent.

bye...|||One method is to write a stored procedure that takes a starting record ID and a numrecords value as input parameters.|||hello,

thanks for your reply. You mentioned stored procedures, but I am confused and don't know how to do it, so for your help I am sending you the code that I have written, with the help of tutorials, against an MSSQL database using PHP as the front end.

<?php
$dbusername="sa";
$dbpassword="automated";
$servername="203.82.207.175";
$link = mssql_connect("203.82.207.175","sa","automated");
mssql_select_db("B_Employer2000")or die("Unable to connect");
echo $BDYear;

$SQL="SELECT COUNT(*) AS trow FROM empBirthdayWish WHERE (BDYear = 2007)";
$result=mssql_query($SQL,$link) ;
$rs=mssql_fetch_array($result);
$totalrow=$rs["trow"];

$SQL="SELECT Sl_no, EmployeeCode, PCode, BDYear, WishLine, InsertDate FROM empBirthdayWish WHERE(BDYear = 2007)";
$result=mssql_query($SQL,$link) ;
$rs=mssql_fetch_array($result);
$sl_no=$rs["Sl_no"];
//$EmpCode=$rs["EmpCode"];
//$PCode=$rs['PCode'];
$BDYear=$rs['BDYear'];
$WishLine=$rs['WishLine'];
$InsertDate=$rs['InsertDate'];
$numrows = $row['numrows'];
$rowsPerPage = $row['10'];
//$TOP = ['10'];
$rowsPerPage = 10;
$pageNum=1;
for($j=10;$j<=$totalrow;$j=$j+10){
$pageNum = $pageNum+1;
$maxPage =$pageNum;
}
echo $pageNum."Total pages are available";
if(isset($_GET['page']))
{
$pageNum = $_GET['page'];
}
$offset = ($pageNum - 1) * $rowsPerPage;
echo "<table border='1' cell padding='1' cell spacing='1' bgcolor='#CBE643'>
<tr>
<th>Sl.No#</th>
<th>Birthday Year</th>
<th>Your Birthday Wishes</th>
<th>Posted Date</th>
</tr>";

$i=0;
//$SQL = " SELECT * from empBirthdayWish where BDYear='2007' "." LIMIT $offset, $rowsPerPage";
$query = " SELECT * from empBirthdayWish where BDYear='2007' "." TOP $offset, $rowsPerPage";
$result=mssql_query($SQL,$link) or die('Sorry Birthday Wish Search Query failed,Try after some time');
while($row = mssql_fetch_array($result))
{
$i++;
if ($i<=$rowsPerPage) {
echo "<tr>";
echo "<td>".$row['Sl_no'].'<br>';
//echo "<td>".$row['EmpCode'].'<br>';
//echo "<td>".$row['PCode'].'<br>';
echo "<td>".$row['BDYear'].'<br>';
echo "<td>".$row['WishLine'].'<br>';
echo "<td>".$row['InsertDate'].'<br>';
echo "</tr>";
}
else
if ($pageNum > 1)
{
$page = $pageNum - 1;
$prev = " <a href=\"$self?page=$page\">[Prev]</a> ";
$first = " <a href=\"$self?page=1\">[First Page]</a> ";
{
$next = " <a href=\"$self?page=$page\">[Next]</a> ";
$last = " <a href=\"$self?page=1\">[Last Page]</a> ";

}
}
else
{
{
$prev = ' [Prev] ';
$first = ' [First Page] ';
}
if ($pageNum < $maxPage)
{
$page = $pageNum - 1;
$next = " <a href=\"$self?page=$page\">[Next]</a> ";
$last = " <a href=\"$self?page=$maxPage\">[Last Page]</a> ";
}
else
{
$next = ' [Next] ';
$last = ' [Last Page] ';
}

}

//$result = mssql_query($query) or die('Sorry Birthday Wish Search Query failed,Try after some time');
//$maxPage = ceil($numrows/$rowsPerPage);
//$self = $_SERVER['PHP_SELF'];
}
echo "</table>";
echo $first . $prev . " Showing page <strong>1</strong> of <strong>$maxPage</strong> pages " . $next . $last;

?>

Please suggest changes, or rewrite your code into my code, to make it executable.

thanking you|||I'm a stored procedure person, not a PHP person. Sorry.|||well, thanks for your reply...

If you are not a PHP person then I am sorry that I asked you without knowing that. But since you are a stored procedure person, guide me or help me in your own way, because I am confused and not familiar with stored procedures... you can guide me in the coding at least.

thanking you|||http://www.google.com/search?q=paging+sql+server|||thanks for your assistance... hope you will assist more in the future.

thanking you|||One "little" detail that caught my attention: you are using PHP connecting to a SQL Server as ... sa? You are aware of the fact that you have a severe security issue?|||thanks for the reply...

How do the security issues arise from using sa? Please let me know.

bye|||You don't know that sa has all rights in all databases on the instance of SQL Server? That, in combination with the threat of SQL injection and the security issues that come with a scripting language such as PHP, makes it a killer combo for attackers. PHP scripts should really connect to SQL Server using a login that has only the strictly necessary access.|||You can simulate paging fairly easily using a single statement.

SELECT TOP $limit * FROM mytable WHERE table_id NOT IN (SELECT TOP $offset table_id FROM mytable ORDER BY test_name) ORDER BY test_name

This requires that table_id be a unique integer value. Usually this is a PK with an IDENTITY(1,1) column that will auto-generate an incrementing value on INSERT.|||Also, sa has rights to enable xp_cmdshell. If that's enabled, and an attacker has access to it, your entire machine (not just SQL Server) will be owned by the attacker.|||...and possibly your network as well, depending upon the admin account for your server.
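Coming back to the original paging question: since the database is SQL Server 2005, a ROW_NUMBER()-based query is an alternative to the TOP/NOT IN pattern above and avoids LIMIT entirely. A rough sketch against the empBirthdayWish table from the posted PHP script ($offset and $rowsPerPage would be computed in PHP as in that script; the exact column types are assumptions):

-- Page of the 2007 birthday wishes, $rowsPerPage rows per page.
-- Substitute the PHP variables into the string the same way the original
-- script builds its queries (ideally after validating they are integers).
SELECT Sl_no, BDYear, WishLine, InsertDate
FROM (
    SELECT Sl_no, BDYear, WishLine, InsertDate,
           ROW_NUMBER() OVER (ORDER BY Sl_no) AS rn
    FROM empBirthdayWish
    WHERE BDYear = 2007
) AS numbered
WHERE rn BETWEEN $offset + 1 AND $offset + $rowsPerPage
ORDER BY rn;

Whatever form the query takes, per the security discussion above it should run under a low-privilege login rather than sa.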