
Wednesday, March 28, 2012

Parameter Passing and Oracle Issues

I am relatively new to Reporting Services, so I've been figuring this out as I go. I am in the process of converting BI reports from another reporting product to MS Reporting Services. The reports themselves were recreated without a hitch, but the parameters gave me trouble: frankly, I needed more control over the parameters than RS allows in order to match the features of the previous BI solution.

To solve this I decided to build a VB.NET or ASP.NET front end to collect the parameters, do whatever needs to be done with them, and then pass them to a report viewer. Great, except a lot of my parameters are "from query," and the whole point of using these new reports is to avoid having Oracle installed on the client computers. The only way I currently know to get these "from query" parameter values is to run the query client side, meaning Oracle needs to be installed.

So I am looking either for a way to pull the "from query" parameter options out of a report for use in a VB.NET or ASP.NET application, or for another way around this problem. In short, I want to handle the parameters completely within the app and then just generate the report.

Hi Jeremicus,

A few thoughts:

Can you explain what you were trying to do with the parameters in the report that was causing you problems? We might be able to come up with a workaround without having to go to an external source.

If you are creating an ASP.NET page, you should be able to query Oracle and display the results without installing anything on the clients. Whatever ODBC/Oracle drivers you're using will be installed on the web server itself.

You don't want to run a report just to get data into an application. Run the query directly in your application; then, if you want to display a report, you can pass the data to it using parameters.

HTH,

Jessica

|||Well, my main problems with the parameters involve multi-valued parameters. In the previous reporting setup, users could easily leave input boxes blank, enter a single value, or enter multiple values (either typing them in as a comma-delimited string or selecting from a list).

Since multi-value parameters in RS won't let you select nothing, users lose the ability to leave them blank, and the workarounds for this aren't very user friendly in the setting where these reports will be used. So I wanted to handle all of that myself, give more flexibility to the users, and do the parameter formatting and other less pretty stuff behind the scenes.
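For the blank case, one idea I'm playing with is to make the query itself treat an empty parameter as "no filter." A rough, untested sketch in Oracle SQL; the orders table, customer_id column, and :id_list bind variable are all made up for illustration:

SELECT o.*
FROM orders o
-- NULL (blank) means no filter; otherwise wrap both sides in commas so we
-- only match whole values within the comma-delimited list
WHERE :id_list IS NULL
   OR INSTR(',' || :id_list || ',', ',' || TRIM(o.customer_id) || ',') > 0

The app would bind NULL when the box is left blank, and the raw comma-delimited string otherwise.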

I might just go with the ASP page instead of the application. I was hoping for the app over the ASP page, but it will have to do if I can't figure a way around this.

Friday, March 23, 2012

Parameter and Oracle DB

I have a dataset that queries an Oracle DB. I get an error when I try
to use a parameter in the query. I have the following:
WHERE x.system_num = s.system_num
and na.service_id = x.service_id
and na.location_num = sl.location_num
and sd.device_mux_id = dm2.device_mux_id
and (x.auth_date >= @param_start)
ORDER BY m.mso_name, sl.system_name, system_type, d.device_address
The error I get is the following:
ORA-00936: missing expression
(System.Data.OracleClient)
When I remove the "and (x.auth_date >= @param_start)" line, the error goes away.
"Missing expression" sounds like I have a stray comma at the end of the SELECT or
FROM clause, but since the error disappears when the param line is removed, I
don't think that is it.
Could it be in how the parameter is set up? Just throwing out ideas here.
Thank you for any help offered.
Rob Cuscaden
|||FYI
After several hours of silly errors, someone found the solution to this
problem. I wanted to post it here so that others can gain from my hours of
frustration. I won't claim that it will work in all cases, but it works
in mine, and it is something for others to try if they are stuck.
The problem: using Reporting Services 2005 and accessing an Oracle database
with parameters.
In Reporting Services queries against a SQL Server DB, I have seen @param_startdate
used in the query, with the parameter then defined via the "Report
Parameters" option on the "Report" top menu.
So, for example:
SELECT *
FROM sometable
WHERE sometable.datefield >= @param_startdate
Then you go modify the Report Parameters option for the project.
If you try this with an ORACLE DB you will get several errors (ORA-00936:
missing expression is one of them).
SOLUTION: To solve this for an ORACLE DB, use the following:
SELECT *
FROM sometable
WHERE sometable.datefield >= :param_startdate
-----
Now I am sure that this little colon ( : ) is documented in a document
out there somewhere. BUT for crying out loud... such a simple solution that
took 6 hours to find.
Please feel my frustration, and I hope I prevent this from happening to someone else.
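One more note from my flailing, with the caveat that I have only verified this on my own setup: if the start date reaches Oracle as a string rather than a DATE, you may also need an explicit conversion. A hypothetical variant of the query above, assuming the parameter is bound as a 'YYYY-MM-DD' string:

SELECT *
FROM sometable
-- TO_DATE turns the string bind into a DATE so the comparison works
WHERE sometable.datefield >= TO_DATE(:param_startdate, 'YYYY-MM-DD')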
Rob Cuscaden
"Rob" wrote:
> I have a dataset that queries an Oracle DB. I am getting an error when I try
> to set a parameter in the query. Basically I have the following:
> WHERE x.system_num = s.system_num
> and na.service_id = x.service_id
> and na.location_num = sl.location_num
> and sd.device_mux_id = dm2.device_mux_id
> and (x.auth_date >= @.param_start)
> ORDER BY m.mso_name, sl.system_name, system_type, d.device_address
> The error I get is the following:
> ORA-00936: missing expression
> (System.Data.OracleClient)
> When I remove the "and (x.auth_date >= @.param_start)" the error goes away.
> The missing expression sounds like I have a comma at the end of the SELECT or
> FROM line; but when the param line is removed the error goes away. So I
> don't think that is it.
> Could it be in how the parameter is set up? Just throwing out ideas here.
> Thank you for any help offered.
> Rob Cuscaden
|||Rob,
I have been having the same problem, and my queries go against an Oracle DB
as well. I got so frustrated that our Oracle DB
administrator told me to try a program called Toad for Oracle. The SQL
module in that program works very much like the Microsoft
Query tool (Data, Import External Data, New Database Query).
Anyway, I looked at the SQL statement after building the query (joining
tables, adding fields, adding criteria filters, etc.), and I saw the same thing:
Where table.field LIKE :whatever
Well, I tried using that in Microsoft Query (as a SQL statement) and ended up
getting some other error which I can't duplicate right now. I tried it again,
and now it completely bails me out of Excel with the ever-famous Debug,
Send Error Report, etc.
Anyway, I wasn't sure if you had run into anything more you could share that
might help in my situation.
I also posted this issue somewhere else on here, explaining in more detail
how I originally created the query with one line of criteria, where three of
my criteria have five "or" conditions. When I refresh or return data to Excel,
it works fine.
The problem starts occurring if I ever go back into the query to edit it
and then try to refresh or return the data again. That's where I get the
ORA-00936 missing expression error message.
I'm stuck on this and really need to resolve it somehow, because the data
returned from this query feeds charts, graphs, and pivot tables that are all
part of a giant VBA application I wrote to automate dashboards for a bunch
of engineers.
Anyway, any suggestions you or anyone might have would be greatly
appreciated.
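One thing I still plan to try, in case the named-bind syntax is the culprit: as far as I can tell, Microsoft Query talks to Oracle through ODBC, and ODBC expects positional "?" parameter markers rather than Oracle-style named binds. So the filter would look something like this (untested on my end; table and column names are placeholders):

SELECT *
FROM sometable
-- '?' is the ODBC positional parameter marker; Microsoft Query supplies a
-- value for each one at refresh time
WHERE sometable.somefield LIKE ?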
"Rob" wrote:
> FYI
> After several HOURS of silly errors someone found the solution to this
> problem. I wanted to post it here so that others can gain from my hours of
> frustration. I will not say that it will work in ALL cases; but that it
> works in mine and it is something for others to try if they are stuck.
> Problem, using Reporting Services 2005 and accessing an Oracle database with
> parameters.
> In Reporting Services queries of a SQL DB, I have seen the @.param_startdate
> used in the query and then the parameter is defined in the "Report
> Parameters" option on the "Reports" top menu option.
> So, for example
> Select *
> From sometable
> Where sometable.datefield >= @.param_startdate
> Then you go modify the Report Parameters option for the project.
> If you try this with an ORACLE DB you will get several errors (ORA-00936:
> missing expression is one of them).
> SOLUTION: To solve this for an ORACLE DB, use the following:
> SELECT *
> FROM sometable
> WHERE sometable.datefield = :param_startdate
> -----
> Now I am sure that this little ( : ) is documented somewhere in a document
> out there somewhere. BUT for crying out loud... such a simple solution that
> took 6 hours to find.
> Please feel my frustration and I hope I prevent this from happening elsewhere.
> Rob Cuscaden
> "Rob" wrote:
> > I have a dataset that queries an Oracle DB. I am getting an error when I try
> > to set a parameter in the query. Basically I have the following:
> >
> > WHERE x.system_num = s.system_num
> > and na.service_id = x.service_id
> > and na.location_num = sl.location_num
> > and sd.device_mux_id = dm2.device_mux_id
> > and (x.auth_date >= @.param_start)
> >
> > ORDER BY m.mso_name, sl.system_name, system_type, d.device_address
> >
> > The error I get is the following:
> >
> > ORA-00936: missing expression
> > (System.Data.OracleClient)
> >
> > When I remove the "and (x.auth_date >= @.param_start)" the error goes away.
> > The missing expression sounds like I have a comma at the end of the SELECT or
> > FROM line; but when the param line is removed the error goes away. So I
> > don't think that is it.
> >
> > Could it be in how the parameter is set up? Just throwing out ideas here.
> >
> > Thank you for any help offered.
> > Rob Cuscaden
> >sql

Saturday, February 25, 2012

Page splits/ Dirty pages/ Checkpoint

We have a few tables in our application that are used, in effect, as temporary
tables. Each day they start empty, fill with data as the day goes on, and are
truncated at the end of the day to get ready for the next one. We have a high
transaction rate, about 4000/sec. The page splits/sec counter shows
about 130-160 per second, and this is driving the checkpoint to take longer.
How can I reduce these page splits?
Another question: we have a char(15) column in those tables, and that
column is indexed. It is an Id column but is not unique. Each record has a
unique numeric Id (generated by our app), but we need to search on the CHAR Id
column, hence the index on it. This index is dirtying a lot of pages,
which also contributes to the checkpoint taking longer. How can
I change the index so that it does not dirty so many pages? I tried
changing the column to VARCHAR and there is not much difference.
The checkpoint takes about 10-15 seconds and repeats every 60 seconds.
Your suggestions are greatly appreciated.
Thanks.
|||Just give a try with the following info:
1. Check the recovery interval option on the system.
2. Make sure the temporary tables use fixed-size char columns rather than
varchar; that way you can reduce the page splits.
HTH
Regards
Rajesh Peddireddy.
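For point 1, you can check it with standard sp_configure (recovery interval is an advanced option, so it has to be made visible first):

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
-- shows the current value; 0 means SQL Server manages it automatically
EXEC sp_configure 'recovery interval (min)'
GO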
"Srini" wrote:

> We have few tables in our application. Basically they are used as temporar
y
> tables. Each day they start with no records in it(empty) and as the day go
es
> they are filled with the data and at the end of the day they will be
> truncated to get ready for the next day. We have a high transaction rate
> about 4000/sec. When I noticed the page splits /sec counter it is showing
> about 130-160 per second. This is driving the checkpoint to take longer ti
me.
> How Can I reduce this high page splits.
> Another question is, we have a char(15) column in those tables and that
> column is indexed. It is an Id column but is not unique. Each record has a
n
> unique number Id(generated by our app). But we need to seacrh on the CHAR
Id
> column so indexed on it. This index is creating/making lot of dirty pages.
> This also is a contributing reason for the checkpoint to take longer. How
can
> I make changes to the index so that it would not create/make many pages
> dirty? I tried to change it to VARCHAR and there is not much difference.
> The check point is taking about 10-15 seconds and it repeats every 60 seco
nds.
> Your suggestion is greatly appreciated.
> Thanks.
|||It would really help to show the entire DDL for the table, including the
indexes. It sounds like your disk subsystem isn't up to the task. If you
are going to have that many transactions, you need a fast disk I/O subsystem,
especially for the transaction logs. Is the log file on its own RAID 1 or
RAID 10, and is the data on a RAID 10?
Andrew J. Kelly SQL MVP
"Srini" <Srini@.discussions.microsoft.com> wrote in message
news:1F6FF490-C19A-4DFE-BDBC-3AD397CC9D5A@.microsoft.com...
> We have few tables in our application. Basically they are used as
> temporary
> tables. Each day they start with no records in it(empty) and as the day
> goes
> they are filled with the data and at the end of the day they will be
> truncated to get ready for the next day. We have a high transaction rate
> about 4000/sec. When I noticed the page splits /sec counter it is showing
> about 130-160 per second. This is driving the checkpoint to take longer
> time.
> How Can I reduce this high page splits.
> Another question is, we have a char(15) column in those tables and that
> column is indexed. It is an Id column but is not unique. Each record has
> an
> unique number Id(generated by our app). But we need to seacrh on the CHAR
> Id
> column so indexed on it. This index is creating/making lot of dirty pages.
> This also is a contributing reason for the checkpoint to take longer. How
> can
> I make changes to the index so that it would not create/make many pages
> dirty? I tried to change it to VARCHAR and there is not much difference.
> The check point is taking about 10-15 seconds and it repeats every 60
> seconds.
> Your suggestion is greatly appreciated.
> Thanks.
|||We have a SAN disk system. The HW is probably good enough; I'm just trying
to see if I can rearrange some things on the database front to make some
improvement.
Coming to the DDL: the tables are not temporary tables, the data is temporary
in the sense that it is kept only for the current day and truncated at the
end of the day. Each table has about 20 columns: some are decimal fields,
some are datetime columns, and the others are integer and char types,
including many char(1)'s and two or three char(15 to 20) columns. One of the
char(15) columns is indexed; it is a kind of Id but is not unique. There are
no relationships between these tables and other tables, no triggers, no
views, nothing like that. The data comes into the system and gets inserted
into these standalone tables using some stored procedures, and these tables
are queried using some other stored procedures. The major problem looks to
me like the page splits being generated and the dirty pages (about 10000
dirty pages). Is there anything that can be done on the table or anything
else to make things perform better?
Thanks in advance for your suggestions.
"Andrew J. Kelly" wrote:

> It would really help to show the entire DDL for the table including the
> indexes. It sounds like your disk subsystem isn't up to the task. If you
> are going to have that many transactions you need a fast disk I/O subsyste
m,
> especially for the transaction logs. Is the log file on it's own RAID 1 o
r
> RAID 10 and is the data on a RAID 10?
> --
> Andrew J. Kelly SQL MVP
>
> "Srini" <Srini@.discussions.microsoft.com> wrote in message
> news:1F6FF490-C19A-4DFE-BDBC-3AD397CC9D5A@.microsoft.com...
>
|||The reason it would help to see the DDL is that page splits are a
result of a clustered index and inserts that are not in clustered index
order. You can eliminate the page splitting by changing the clustered
index to non-clustered, or by inserting the data in clustered index key
order (if that's possible).
David Gugick
Quest Software
www.imceda.com
www.quest.com
|||Just because it is a SAN does not mean it is sufficient or configured
properly for your application. I run across many SAN-related issues simply
because people tend to ignore the configuration, thinking the SAN can handle
whatever they need. The DDL is to show us what we are dealing with and leave
nothing to the imagination. It only takes a second to script the table and
indexes, but it goes a long way toward letting us see what is actually
there, not just what you may think is relevant. This is especially true for
the indexes.
Andrew J. Kelly SQL MVP
"Srini" <Srini@.discussions.microsoft.com> wrote in message
news:623BAACD-7754-4BF4-B9ED-5AA2EDC16F8B@.microsoft.com...
> We have SAN disk system. Probably the HW is good enough, just trying to
> see
> if I can rearrange some things on the database front to make some
> improvement.
> Coming to the DDL, the tables are not temporary tables the data is
> temporary
> in the sense that the data is kept only for the current day and at the end
> of
> the day they are truncated. Each table has about 20 columns. some are
> decimal
> fields some are datatime columns and others are integer and char type
> which
> includes many char(1)'s and two/three columns char(15 to 20)). One of the
> char(15) is indexed which is a kind of Id but is not unique there are no
> relationships between these tables and other tables. No triggers no views
> and
> anything as such. The data comes into to the system gets inserted to these
> standalone tables using some stored procedures. And these tables are
> queried
> using some other stored procedures. The major problem to me looks like is
> because of the page splits that it is generating and the dirty pages that
> it
> is generating(about 10000 dirty pages). Is there any thing that can be
> done
> on the table or anything else to make things perform better?
> Thanks in advance for your suggestion.
> "Andrew J. Kelly" wrote:
|||David is correct, but I just want to caution that changing the clustered
index to a nonclustered one will not remove page splits. It may reduce them,
but a nonclustered index is implemented just like a clustered index and can
page split as well.
Andrew J. Kelly SQL MVP
"David Gugick" <david.gugick-nospam@.quest.com> wrote in message
news:eC3cUzslFHA.3316@.TK2MSFTNGP14.phx.gbl...
> Srini wrote:
> The reason it would help to see the DDL is because page splits are a
> result of a clustered index and inserts that are not in clustered index
> order. You can eliminate the page splitting by chaning the clustered index
> to non-clustered or inserting the data in clustered index key order (if
> that's possible).
> --
> David Gugick
> Quest Software
> www.imceda.com
|||If we try to insert the data in clustered index order, which sounds to me
like a monotonically increasing clustered index, wouldn't that create hot
spots on the disk, thereby reducing throughput? Currently we can't control
the data insert order, but I think I can change things so that the clustered
index is on the serial number, which is sequential and generated by me, so
the insertions would be in clustered index order.
What about the dirty pages created by the other nonclustered indexes? When I
run DBCC MEMUSAGE it shows a lot of dirty pages belonging to the
nonclustered indexes. How can I reduce the dirty pages on those? I understand
FILLFACTOR will not help here, as that option applies when there is already
data in the table and we are creating indexes on it.
Thanks.
"David Gugick" wrote:

> Srini wrote:
> The reason it would help to see the DDL is because page splits are a
> result of a clustered index and inserts that are not in clustered index
> order. You can eliminate the page splitting by chaning the clustered
> index to non-clustered or inserting the data in clustered index key
> order (if that's possible).
> --
> David Gugick
> Quest Software
> www.imceda.com
> www.quest.com
|||Is there any limit on the number of replies one can post in a time
frame? This thing is not letting me post mine. Trying to POST again...
I agree the SAN may have some issues; we are trying to check on that. But
just looking at the SQL Server side, with 10000 dirty pages per checkpoint
at the default recovery interval (60 seconds), it looks like something can
be done there to reduce that huge number of dirty pages.
The DDL looks like this:
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[DAILY_DATA1](
-- unique identifier for the data; the clustered index is currently on this field
[Event_id] [char] (18) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
[Category_cd] [char] (1) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
[Type_cd] [char] (1) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
[Session_id] [char] (1) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
-- app-generated serial number, which I could use for a clustered index instead
[Serial_nb] [int] NOT NULL ,
[Order_ts] [datetime] NOT NULL ,
-- this is the column that we have the nonclustered index on
[Data_Id_tx] [char] (14) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
.
.
.
[Description_tx] [char] (10) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
.
.
.
[Receipt_ts] [datetime] NOT NULL ,
CONSTRAINT [PK_DD_1_Evt_ID] PRIMARY KEY CLUSTERED
(
[Event_id]
) ON [PRIMARY]
) ON [PRIMARY]
GO
CREATE INDEX [IDX_DD_1_Data_Id_tx] ON [dbo].[Daily_Data1]([Data_Id_tx])
WITH FILLFACTOR = 90 ON [PRIMARY]
GO
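Based on David's suggestion, here is the kind of change I am considering (just a sketch, not tested; the table and index names are made up). The idea is to cluster on the sequential Serial_nb so inserts land in key order, and keep Event_id unique through a nonclustered primary key:

-- sketch: cluster on the ever-increasing serial number so inserts append
-- in key order, which should avoid page splits in the clustered index
CREATE TABLE [dbo].[DAILY_DATA2](
[Event_id] [char] (18) NOT NULL ,
[Serial_nb] [int] NOT NULL ,
-- ... remaining columns as in DAILY_DATA1 ...
[Data_Id_tx] [char] (14) NOT NULL ,
CONSTRAINT [PK_DD_2_Evt_ID] PRIMARY KEY NONCLUSTERED ([Event_id])
) ON [PRIMARY]
GO
CREATE UNIQUE CLUSTERED INDEX [IX_DD_2_Serial_nb]
ON [dbo].[DAILY_DATA2]([Serial_nb])
GO
CREATE INDEX [IDX_DD_2_Data_Id_tx] ON [dbo].[DAILY_DATA2]([Data_Id_tx])
WITH FILLFACTOR = 90 ON [PRIMARY]
GO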
Thanks.
"Andrew J. Kelly" wrote:

> Just because it is a SAN does not mean it is sufficient or configured
> properly for your application. I run across more issues SAN related simply
> because people tend to ignore the configuration in thinking it can handle
> what ever they need. The DDL was to see what we are dealing with and leav
e
> nothing to imagination. It only takes a second to script the table and
> indexes but it goes a long way towards letting us see what is actually
> there. Not just what you may think is relevant. This is especially true f
or
> the indexes.
> --
> Andrew J. Kelly SQL MVP
>
> "Srini" <Srini@.discussions.microsoft.com> wrote in message
> news:623BAACD-7754-4BF4-B9ED-5AA2EDC16F8B@.microsoft.com...
>
|||I replied to this in the morning, but it did not get posted...
The recovery interval option is set to the default, which is 60 seconds. If I
increase it, the checkpoint takes far too long; if I decrease it,
checkpoints occur too frequently. Both are problematic.
The tables are not temporary, but the data is. Data gets inserted as part of
the daily operations and is truncated in the evening. All the columns
are fixed-length CHAR fields (at their maximum possible lengths). There
are only two or three columns of CHAR(15), CHAR(18), and CHAR(10); all the
other columns are integer, decimal, datetime, or CHAR(1) types.
I need to find a way to reduce the number of dirty pages and the number of
page splits. How can I do that?
Thanks.
"Rajesh" wrote:
> Just give a try with the following info
> 1. Check the recovery interval option on the system.
> 2. make sure that the temporary tables have fixed size by using char
> rather than
> varchar therefore you can reduce the page spilts.
> HTH
> Regards
> Rajesh Peddireddy.
> "Srini" wrote:
>