Transparent Data Encryption
Enables encryption of an entire database, data files, and log files, without the need for application changes.
DATE/TIME
SQL Server 2008 introduces new date and time data types:
DATE—A date-only type
TIME—A time-only type
DATETIMEOFFSET—A time-zone-aware datetime type
DATETIME2—A datetime type with larger fractional seconds and
year range than the existing DATETIME type
The new data types enable applications to have separate date and time types while providing large date ranges or user-defined precision for time values.
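A quick sketch of declaring and assigning the new types (the values are illustrative only):
DECLARE @d DATE = '2008-08-15';                               -- date only
DECLARE @t TIME(7) = '13:45:30.1234567';                      -- time only, user-defined fractional precision
DECLARE @dt2 DATETIME2(7) = SYSDATETIME();                    -- wider year range and precision than DATETIME
DECLARE @dto DATETIMEOFFSET = '2008-08-15 13:45:30 +05:30';   -- time-zone aware
SELECT @d AS d, @t AS t, @dt2 AS dt2, @dto AS dto;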
HIERARCHYID
Enables database applications to model tree structures more efficiently than currently possible. The new system type HierarchyId can store values that represent nodes in a hierarchy tree. This new type is implemented as a CLR UDT and exposes several efficient and useful built-in methods for creating and operating on hierarchy nodes, with a flexible programming model.
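A minimal sketch of the hierarchyid type (the table and column names are illustrative):
CREATE TABLE Org
(
    Node hierarchyid PRIMARY KEY,
    EmpName VARCHAR(50)
);
INSERT INTO Org VALUES (hierarchyid::GetRoot(), 'CEO');
INSERT INTO Org VALUES (hierarchyid::GetRoot().GetDescendant(NULL, NULL), 'Manager');   -- first child of the root
SELECT Node.ToString() AS NodePath, Node.GetLevel() AS Lvl, EmpName FROM Org;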
FILESTREAM Data
Allows large binary data to be stored directly in the NTFS file system while remaining an integral part of the database and maintaining transactional consistency. Enables large binary data traditionally managed by the database to be scaled out onto more cost-effective storage outside the database without compromise.
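A sketch of a FILESTREAM column (this assumes FILESTREAM is enabled on the instance and the database has a FILESTREAM filegroup; the names are illustrative):
CREATE TABLE DocumentStore
(
    DocId UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL UNIQUE DEFAULT NEWID(),
    DocName VARCHAR(100),
    DocContent VARBINARY(MAX) FILESTREAM   -- stored in the NTFS file system, still transactional
);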
Integrated Full Text Search
Integrated Full Text Search makes the transition between
Text Search and relational data seamless, while enabling users to use the Text
Indexes to perform high-speed text searches on large text columns.
Sparse Columns
With sparse columns, NULL data consumes no physical space, providing a highly efficient way of managing empty data in a database. For example, Sparse Columns allow object models that typically have numerous null values to be stored in a SQL Server 2008 database without experiencing large space costs.
Large User-Defined Types
SQL Server 2008 eliminates the 8-KB limit for User-Defined
Types (UDTs), allowing users to dramatically expand the size of their UDTs.
Spatial Data Types
Build spatial capabilities into your applications by using
the support for spatial data.
Implement Round Earth solutions with the geography data
type. Use latitude and longitude coordinates to define areas on the Earth's
surface.
Implement Flat Earth solutions with the geometry data type.
Store polygons, points, and lines that are associated with projected planar
surfaces and naturally planar data, such as interior spaces.
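A small sketch of both spatial types (the coordinates are illustrative):
DECLARE @g geography = geography::Point(28.61, 77.21, 4326);   -- latitude, longitude, SRID
DECLARE @p geometry  = geometry::STGeomFromText('POLYGON((0 0, 10 0, 10 10, 0 10, 0 0))', 0);
SELECT @g.ToString() AS GeoPoint, @p.STArea() AS PlanarArea;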
Backup Compression
Keeping disk-based backups online is expensive and
time-consuming. With SQL Server 2008 backup compression, less storage is
required to keep backups online, and backups run significantly faster since
less disk I/O is required.
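For example (the path and database name are illustrative):
BACKUP DATABASE AdventureWorks
TO DISK = 'D:\Backups\AdventureWorks.bak'
WITH COMPRESSION;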
Partitioned Table Parallelism
Partitions enable organizations to manage large growing
tables more effectively by transparently breaking them into manageable blocks
of data. SQL Server 2008 builds on the advances of partitioning in SQL Server
2005 by improving the performance on large partitioned tables.
Star Join Query Optimizations
SQL Server 2008 provides improved query performance for
common data warehouse scenarios. Star Join Query optimizations reduce query
response time by recognizing data warehouse join patterns.
Grouping Sets
Grouping Sets is an extension to the GROUP BY clause that
lets users define multiple groupings in the same query. Grouping Sets produces
a single result set that is equivalent to a UNION ALL of differently grouped
rows, making aggregation querying and reporting easier and faster.
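A minimal sketch, assuming a Sales table with Region, Product, and Amount columns:
SELECT Region, Product, SUM(Amount) AS TotalAmount
FROM Sales
GROUP BY GROUPING SETS ((Region, Product), (Region), ());   -- detail, per-region, and grand total in one pass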
Change Data Capture
With Change Data Capture, changes are captured and placed in
change tables. It captures complete content of changes, maintains cross-table
consistency, and even works across schema changes. This enables organizations
to integrate the latest information into the data warehouse.
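Enabling it is a two-step sketch (the schema and table names are illustrative):
EXEC sys.sp_cdc_enable_db;   -- enable CDC for the current database
EXEC sys.sp_cdc_enable_table
     @source_schema = N'dbo',
     @source_name   = N'Employee',
     @role_name     = NULL;   -- changes are then exposed through cdc.fn_cdc_get_all_changes_dbo_Employee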
MERGE SQL Statement
With the introduction of the MERGE SQL Statement, developers
can more effectively handle common data warehousing scenarios, like checking
whether a row exists, and then executing an insert or update.
SQL Server Integration Services (SSIS) Pipeline Improvements
Data Integration packages can now scale more effectively,
making use of available resources and managing the largest enterprise-scale
workloads. The new design improves the scalability of the runtime onto multiple processors.
SQL Server Integration Services (SSIS) Persistent Lookups
The need to perform lookups is one of the most common ETL
operations. This is especially prevalent in data warehousing, where fact
records need to use lookups to transform business keys to their corresponding
surrogates. SSIS increases the performance of lookups to support the largest
tables.
Analysis Scale and Performance
SQL Server 2008 drives broader analysis with enhanced
analytical capabilities and with more complex computations and aggregations.
New cube design tools help users streamline the development of the analysis
infrastructure enabling them to build solutions for optimized performance.
Block Computations
Block Computations provides a significant improvement in
processing performance enabling users to increase the depth of their
hierarchies and complexity of the computations.
Writeback
New MOLAP enabled writeback capabilities in SQL Server 2008
Analysis Services removes the need to query ROLAP partitions. This provides
users with enhanced writeback scenarios from within analytical applications
without sacrificing the traditional OLAP performance.
Enterprise Reporting Engine
Reports can easily be delivered throughout the organization,
both internally and externally, with simplified deployment and configuration.
This enables users to easily create and share reports of any size and
complexity.
Internet Report Deployment
Customers and suppliers can effortlessly be reached by
deploying reports over the Internet.
Manage Reporting Infrastructure
Increase supportability and the ability to control server
behaviour with memory management, infrastructure consolidation, and easier
configuration through a centralized store and API for all configuration
settings.
Report Builder Enhancements
Easily build ad-hoc and author reports with any structure
through Report Designer.
Forms Authentication Support
Support for Forms authentication enables users to choose
between Windows and Forms authentication.
Report Server Application Embedding
Report Server application embedding enables the URLs in
reports and subscriptions to point back to front-end applications.
Microsoft Office Integration
SQL Server 2008 provides new Word rendering that enables
users to consume reports directly from within Microsoft Office Word. In
addition, the existing Excel renderer has been greatly enhanced to accommodate
the support of features, like nested data regions, sub-reports, as well as
merged cell improvements. This lets users maintain layout fidelity and improves
the overall consumption of reports from Microsoft Office applications.
Predictive Analysis
SQL Server Analysis Services continues to deliver advanced
data mining technologies. Better Time Series support extends forecasting
capabilities. Enhanced Mining Structures deliver more flexibility to perform
focused analysis through filtering as well as to deliver complete information
in reports beyond the scope of the mining model. New cross-validation enables
confirmation of both accuracy and stability for results that you can trust.
Furthermore, the new features delivered with SQL Server 2008 Data Mining
Add-ins for Office 2007 empower every user in the organization with even more
actionable insight at the desktop.
Calling SSRS in ASP.NET
1. Register the Microsoft.Reporting.WebForms assembly on your ASPX page.
<%@ Register Assembly="Microsoft.ReportViewer.WebForms, Version=8.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" Namespace="Microsoft.Reporting.WebForms" TagPrefix="rsweb" %>
2. After registering the assembly you will be able to use the ReportViewer control to fetch reports on your ASPX page. Below is the markup for the ReportViewer control on the ASPX page.
<rsweb:ReportViewer runat="server" ID="MyReportViewer" Width="100%" Height="700px">
</rsweb:ReportViewer>
3. Now, find the code below, which you will need to write in your ASPX.cs (code-behind) file:
protected void Page_Load(object sender, EventArgs e)
{
MyReportViewer.ProcessingMode = Microsoft.Reporting.WebForms.ProcessingMode.Remote;
MyReportViewer.ServerReport.ReportServerUrl = new Uri("http://reportserver/reportserver"); // Report Server web service URL
MyReportViewer.ServerReport.ReportPath = "/EmployeeReport"; // Report path (must begin with "/")
MyReportViewer.ShowParameterPrompts = false;
MyReportViewer.ShowPrintButton = true;
// The code below demonstrates how to pass parameters. Use it only if the report has parameters.
Microsoft.Reporting.WebForms.ReportParameter[] reportParameterCollection = new Microsoft.Reporting.WebForms.ReportParameter[1];
reportParameterCollection[0] = new Microsoft.Reporting.WebForms.ReportParameter();
reportParameterCollection[0].Name = Perks.Constants.Pages.ClientAdmin.ShowInvoice.QueryStrings.INVOICEID;
reportParameterCollection[0].Values.Add(Request.QueryString[Perks.Constants.Pages.ClientAdmin.ShowInvoice.QueryStrings.INVOICEID].ToString());
MyReportViewer.ServerReport.SetParameters(reportParameterCollection);
MyReportViewer.ServerReport.Refresh();
}
Index in SQL Server
Clustered Index
Only 1 allowed per table. Physically rearranges the data in the table to conform to the index constraints. For use on columns that are frequently searched for ranges of data, or on columns with low selectivity.
Non-Clustered Index
Up to 249 allowed per table. Creates a separate list of key values with pointers to the location of the data in the data pages. For use on columns that are searched for single values.
A clustered index is a special type
of index that reorders the way records in the table are physically stored.
Therefore table can have only one clustered index. The leaf nodes of a
clustered index contain the data pages. A non-clustered index is a special type
of index in which the logical order of the index does not match the physical
stored order of the rows on disk. The leaf node of a non-clustered index does
not consist of the data pages. Instead, the leaf nodes contain index rows.
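A short sketch of creating both kinds of index (the table and column names are illustrative):
CREATE CLUSTERED INDEX IX_Emp_EmpId ON dbo.Employee (EmpId);          -- only one per table; orders the data pages
CREATE NONCLUSTERED INDEX IX_Emp_LastName ON dbo.Employee (LastName); -- separate structure pointing back to the rows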
Included Column Index
(New in SQL Server 2005)
In SQL Server 2005, the functionality of non-clustered indexes is extended by adding non-key columns to the leaf level of the non-clustered index. Non-key columns can help to create covering indexes. By including non-key columns, you can create non-clustered indexes that cover more queries. The Database Engine does not consider non-key columns when calculating the number of index key columns or the index key size. Non-key columns can be included in a non-clustered index to avoid exceeding the current index key limitations of a maximum of 16 key columns and a maximum index key size of 900 bytes. Another advantage is that included non-key columns can use data types that are generally not allowed as index key columns.
In the following example, the column FileName is varchar(400), which would make the index key larger than is allowed. If we still want to include it in our covering index to gain performance, we can do so by using the INCLUDE keyword.
USE AdventureWorks
GO
CREATE INDEX IX_Document_Title
ON Production.Document (Title, Revision)
INCLUDE (FileName)
Non-key columns can be included only in non-clustered indexes.
Columns can't be defined in both the key column list and the INCLUDE list.
Column names can't be repeated in the INCLUDE list.
Non-key columns can be dropped from a table only after the index that includes them is dropped first.
For an included column index to exist, there must be at least one key column defined, with a maximum of 16 key columns and 1023 included columns.
Avoid adding unnecessary columns. Adding too many index columns, key or non-key, will negatively affect performance:
Fewer index rows will fit on a page. This can increase I/O and reduce cache efficiency.
More disk space will be required to store the index.
Index maintenance may increase the time that it takes to perform modifications (inserts, updates, or deletes) to the underlying table or indexed view.
Primary Key vs. Unique Key
Both primary key and unique constraints enforce uniqueness of the column on which they are defined. But by default a primary key creates a clustered index on the column, whereas unique creates a nonclustered index by default. Another major difference is that a primary key doesn't allow NULLs, but a unique key allows one NULL only.
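A small sketch showing the default behaviour (the table name is illustrative):
CREATE TABLE Dept
(
    DeptId   INT PRIMARY KEY,        -- clustered index by default, NULL not allowed
    DeptCode VARCHAR(10) UNIQUE      -- nonclustered index by default, a single NULL allowed
);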
Row Count Methods
1.
SELECT row_count
FROM sys.dm_db_partition_stats
WHERE object_id=object_id('emp_master') and index_id<2
2.
SELECT rows
FROM sysindexes
WHERE id=object_id('emp_master') and indid<2
3.
sp_spaceused 'table_name'
4.
select count(*) from emp_master
Highest Salary
select max(Emp_Sal) from Employee_Test
3rd Highest Salary
select min(Emp_Sal) from Employee_Test where Emp_Sal in
(select distinct top 3 Emp_Sal from Employee_Test order by Emp_Sal desc)
Deleting duplicate rows from a table
with T as
(
select * , row_number() over (partition by Emp_ID order by Emp_ID) as rank
from employee_test1
)
delete from T
where rank > 1
Query to select a random record from a table
Select Top 1 *
from
(
Select EmpCode,EmpName
from Employee_Master
--Where EmpName like 'A%'
) abc
ORDER BY NEWID()
Shrink Database
DBCC SHRINKDATABASE(0)
Insert Values into an Identity Column manually
SET IDENTITY_INSERT EmployeeTable ON
INSERT INTO EmployeeTable(EmpCode)
VALUES(1234)
SET IDENTITY_INSERT EmployeeTable OFF
Query to find employees who are living in more than one city
SELECT empname, fname, lname
FROM employee
WHERE empid IN
(SELECT empid FROM city GROUP BY empid
HAVING COUNT(empid) > 1)
Update Employee Master for a salary hike under the following conditions
UPDATE emp_master
SET emp_sal =
CASE
WHEN emp_sal > 0 AND emp_sal <= 20000 THEN (emp_sal * 1.01)
WHEN emp_sal > 20000 THEN (emp_sal * 1.02)
ELSE emp_sal
END
Find top salary from two tables
SELECT TOP 1 sal FROM (
SELECT MAX(sal) AS sal FROM sal1
UNION
SELECT MAX(sal) AS sal FROM sal2
) a
ORDER BY sal DESC
Find 6th highest salary
SELECT TOP 1 salary FROM (
SELECT DISTINCT TOP 6 salary FROM employee
ORDER BY salary DESC) a
ORDER BY salary
Find duplicate rows in a table?
SELECT sid, mark, COUNT(*) AS Counter
FROM marks
GROUP BY sid, mark
HAVING (COUNT(*) > 1)
How many tables contain empno as a column in a database?
SELECT COUNT(*) AS Counter
FROM syscolumns
WHERE (name = 'empno')
Query for grand total salary, and total salaries of individual employees in one query.
SELECT empid, SUM(salary) AS salary FROM employee
GROUP BY empid WITH ROLLUP
ORDER BY empid
Three tables – student, course, marks – how do you go about finding the names of the students who got the max marks in the different courses?
SELECT s.name, c.name AS coursename, m.sid, m.mark
FROM Marks m
INNER JOIN Student s ON m.sid = s.sid
INNER JOIN Course c ON m.cid = c.cid
WHERE m.mark =(SELECT MAX(Mark)
FROM Marks mm
WHERE mm.cID = m.cID)
Table Variables: limitations and similarities with temporary tables
You cannot create constraints in table variables.
You cannot create default values on table variable columns.
Statistics cannot be created against table variables.
Similarities with temporary tables include:
Instantiated in tempdb.
Clustered indexes can be created on table variables and temporary tables.
Both are logged in the transaction log.
How to know how much space is consumed by each table in your database
EXEC sp_MSforeachtable @command1="EXEC sp_spaceused '?'"
INSERTED Keyword (OUTPUT Clause) in SQL Server 2008
If you want to try the examples below, first run the following script to create the person table (and a personbackup table with the same structure) in a SQL Server database:
create table person
(
id int primary key,
fname varchar(10) not null,
lname varchar(10) not null,
city varchar(10) not null
)
create table personbackup
(
id int primary key,
fname varchar(10) not null,
lname varchar(10) not null,
city varchar(10) not null
)
-- OUTPUT INSERTED returns the inserted column values as part of the INSERT
INSERT INTO person OUTPUT
INSERTED.City
VALUES(3,'Ratan','Gupta','Jaipur')
OUTPUT Keyword in SQL Server 2008 (UPDATE)
update person
set
fname='atul',
lname='Kumar'
output
inserted.id,
deleted.fname,
deleted.lname,
inserted.city
where
id=3
OUTPUT ... INTO Keyword in SQL Server 2008 (DELETE)
delete from person
output deleted.id,deleted.fname,deleted.lname,deleted.city
into
personbackup
where id=3
SELECT * FROM person
select * from personbackup
THROW Keyword (SQL Server 2012)
Select * from Branch_master
SELECT * FROM Branch_master
WHERE brhcode = 22222; -- the statement before THROW must end with a semicolon
IF @@ROWCOUNT = 0
THROW 50001, 'No results found.', 1
Aggregate function with DISTINCT keyword in SQL Server 2008
Select Max(distinct brhCode) from Branch_master
What are triggers? How to invoke a trigger on demand?
Triggers are special kind of stored procedures that get
executed automatically when an INSERT, UPDATE or DELETE operation takes
place on a table.
Triggers can't be invoked on demand. They get triggered
only when an associated action (INSERT, UPDATE, DELETE) happens on the
table on which they are defined.
Triggers are generally used to implement business rules and auditing. Triggers can also be used to extend referential integrity checks, but wherever possible, use constraints for this purpose instead of triggers, as constraints are much faster.
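A minimal sketch of an audit trigger (the table names are illustrative; an Employee_Audit table is assumed):
CREATE TRIGGER trg_Employee_Audit
ON dbo.Employee
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.Employee_Audit (EmpId, AuditDate)
    SELECT EmpId, GETDATE() FROM inserted;   -- "inserted" holds the newly added rows
END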
MERGE Statement
One of the common tasks in database management is the maintenance of lookup tables. Maintenance of these lookup tables is usually done using stored procedures, where the normal task is to add the new record if it does not exist or update the existing record if it already exists, based on the identified primary key of the table. SQL Server 2008 introduces a new statement called the MERGE statement, which combines these functionalities into just a single statement. The MERGE statement performs an INSERT, UPDATE or DELETE operation on a target table based on the results of a join with a source table.
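A minimal sketch, assuming a target lookup table and a source staging table (both names are illustrative):
MERGE dbo.Currency AS target
USING dbo.Currency_Staging AS source
      ON target.CurrencyCode = source.CurrencyCode
WHEN MATCHED THEN
    UPDATE SET target.CurrencyName = source.CurrencyName
WHEN NOT MATCHED BY TARGET THEN
    INSERT (CurrencyCode, CurrencyName)
    VALUES (source.CurrencyCode, source.CurrencyName);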
Sparse Column
SPARSE columns provide optimized storage for NULL values in SQL Server: a NULL in a sparse column takes no space in the database at all. If a column is created with the SPARSE clause and it contains NULL, it will take less space than a regular column (without the SPARSE clause).
In SQL Server 2008 the maximum number of columns allowed per table is 1024. SPARSE columns do not count toward this limit of 1024; a wide table that uses sparse columns (with a column set) can have up to 30,000 columns in total, of which at most 1024 can be regular columns.
CREATE TABLE UnSparsed(ID INT IDENTITY(1,1),
FirstCol INT,SecondCol VARCHAR(100),
ThirdCol SmallDateTime)
GO
CREATE TABLE Sparsed(ID INT IDENTITY(1,1),
FirstCol INT SPARSE,
SecondCol VARCHAR(100) SPARSE,
ThirdCol SmallDateTime SPARSE)
Self Join
A self join is just like any other join, except that two instances of the same table will be joined in the query.
1.
SELECT t1.empname [Employee],
t2.empname [Manager] FROM emp t1, emp t2 WHERE t1.mgrid = t2.empid
2.
SELECT t1.empname [Employee],
COALESCE (t2.empname, 'No manager') [Manager] FROM emp t1 LEFT OUTER JOIN emp
t2 ON t1.mgrid = t2.empid
Differences between SQL Server Temporary Tables and Table Variables
The declaration syntax for a table variable bridges local variable declaration (DECLARE @variable_name variable_data_type) and table creation (column_name, data_type, nullability). As with any other local variable in T-SQL, the table variable must be prefixed with an "@" sign. Unlike temporary or regular table objects, table variables have certain clear limitations.
Table variables cannot have non-clustered indexes.
Just as with temp and regular tables, users can perform all Data Modification Language (DML) queries against a table variable: SELECT, INSERT, UPDATE, and DELETE.
1. Table variables (DECLARE @t TABLE) are visible only to the connection that creates them, and are deleted when the batch or stored procedure ends.
2. Local temporary tables (CREATE TABLE #t) are visible only to the connection that creates them, and are deleted when the connection is closed.
3. Global temporary tables (CREATE TABLE ##t) are visible to everyone, and are deleted when all connections that have referenced them have closed.
4. Tempdb permanent tables (USE tempdb CREATE TABLE t) are visible to everyone, and are deleted when the server is restarted.
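A short sketch of these declarations (the table and column names are illustrative):
DECLARE @EmpList TABLE (EmpId INT, EmpName VARCHAR(50));   -- table variable
INSERT INTO @EmpList VALUES (1, 'Amit');

CREATE TABLE #EmpList (EmpId INT, EmpName VARCHAR(50));    -- local temporary table
CREATE TABLE ##EmpList (EmpId INT, EmpName VARCHAR(50));   -- global temporary table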
Local:
1) Denoted by the # symbol.
2) Valid for the current connection only. They are cleared as soon as the current connection closes.
3) Cannot be shared between multiple users.
Global:
1) Denoted by the ## symbol.
2) Available to all connections once created. They are cleared when the last connection referencing them is closed.
3) Can be shared between multiple users.
Both of them are stored in the tempdb database.
SQL OPTIMIZATION
• Use views and stored procedures instead of heavy-duty queries. This can reduce network traffic, because your client will send to the server only the stored procedure or view name (perhaps with some parameters) instead of large heavy-duty query text. This can also be used to facilitate permission management, because you can restrict user access to table columns they should not see.
• Try to use constraints instead of triggers, whenever possible. Constraints are much more efficient than triggers and can boost performance.
• Use table variables instead of temporary tables. Table variables require less locking and logging resources than temporary tables, so table variables should be used whenever possible. Table variables are available in SQL Server 2000 and later.
• Try to use the UNION ALL statement instead of UNION, whenever possible. UNION ALL is much faster than UNION, because UNION ALL does not look for duplicate rows, while UNION does, whether or not they exist.
• Try to avoid using the DISTINCT clause, whenever possible. Because using DISTINCT will result in some performance degradation, you should use this clause only when it is necessary.
• Try to avoid using SQL Server cursors, whenever possible. SQL Server cursors can result in some performance degradation in comparison with select statements. Try to use a correlated sub-query or derived tables if you need to perform row-by-row operations.
• Try to avoid the HAVING clause, whenever possible. The HAVING clause is used to restrict the result set returned by the GROUP BY clause. When you use GROUP BY with the HAVING clause, the GROUP BY clause divides the rows into sets of grouped rows and aggregates their values, and then the HAVING clause eliminates undesired aggregated groups. In many cases, you can write your select statement so that it contains only WHERE and GROUP BY clauses without a HAVING clause. This can improve the performance of your query.
• If you need to return the total table row count, you can use an alternative to the SELECT COUNT(*) statement. Because SELECT COUNT(*) makes a full table scan to return the row count, it can take a very long time for a large table. There is another way to determine the total row count in a table: the sysindexes system table. The ROWS column in the sysindexes table contains the total row count for each table in your database. So, you can use the following select statement instead of SELECT COUNT(*): SELECT rows FROM sysindexes WHERE id = OBJECT_ID('table_name') AND indid < 2. This can improve the speed of such queries several times over.
• Include the SET NOCOUNT ON statement in your stored procedures to stop the message indicating the number of rows affected by a T-SQL statement. This can reduce network traffic, because your client will not receive that message.
• Try to restrict the query's result set by using the WHERE clause. This can result in good performance benefits, because SQL Server will return to the client only particular rows, not all rows from the table(s). This can reduce network traffic and boost the overall performance of the query.
• Use select statements with the TOP keyword or the SET ROWCOUNT statement if you need to return only the first n rows. This can improve performance of your queries, because the smaller result set will be returned. This can also reduce the traffic between the server and the clients.
• Try to restrict the query's result set by returning only the particular columns from the table, not all of the table's columns. This can result in good performance benefits, because SQL Server will return to the client only particular columns, not all of the table's columns. This can reduce network traffic and boost the overall performance of the query.
In short: 1. Indexes; 2. Avoid too many triggers on the table; 3. Avoid unnecessarily complicated joins; 4. Correct use of the GROUP BY clause with the select list; 5. In worst cases, denormalization.
Index Optimization
• Every index increases the time it takes to perform INSERTs, UPDATEs and DELETEs, so the number of indexes should not be too large. Try to use a maximum of 4-5 indexes on one table, not more. If you have a read-only table, then the number of indexes may be increased.
• Keep your indexes as narrow as possible. This reduces the size of the index and reduces the number of reads required to read the index.
• Try to create indexes on columns that have integer values rather than character values.
• If you create a composite (multi-column) index, the order of the columns in the key is very important. Try to order the columns in the key so as to enhance selectivity, with the most selective columns to the leftmost of the key.
• If you want to join several tables, try to create surrogate integer keys for this purpose and create indexes on their columns.
• Create a surrogate integer primary key (identity, for example) if your table will not have many insert operations.
• Clustered indexes are more preferable than nonclustered if you need to select by a range of values or you need to sort the result set with GROUP BY or ORDER BY.
• If your application will be performing the same query over and over on the same table, consider creating a covering index on the table.
• You can use the SQL Server Profiler Create Trace Wizard with the "Identify Scans of Large Tables" trace to determine which tables in your database may need indexes. This trace will show which tables are being scanned by queries instead of using an index.
• You can use the undocumented sp_MSforeachtable stored procedure to rebuild all indexes in your database. Try to schedule it to execute during CPU idle time and slow production periods. sp_MSforeachtable @command1="print '?' DBCC DBREINDEX ('?')"
Delete Log File in SQL Server
DBCC SHRINKFILE("databasename_Log",1)
BACKUP LOG databasename WITH TRUNCATE_ONLY
DBCC SHRINKFILE("databasename_Log",1)
Schedule a daily backup for a SQL Server 2005 database using SQL Agent
Scheduling a daily backup of a SQL Server 2005 database using SQL Agent is very easy. By following the steps below anyone can do that.
1. Open SQL Server 2005 Management Studio and connect to a server.
2. Expand the tree (left navigation) "Management -> Maintenance Plans".
3. Right click the Maintenance Plans node and click "Maintenance Plan Wizard", which opens a wizard.
4. Follow the wizard.
5. In the 2nd step of the wizard, you can schedule the task by clicking the "Change" button.
6. Once you click the "Change" button you will get a new popup where you enter the schedule details.
7. Click OK. It will save the schedule info and close the popup. Then click the Next button to continue.
8. In this step you will be asked to check the checklist of all the tasks that can be added to the maintenance plan. Select "Back Up Database (Full)".
9. Follow the steps until you reach the database-selection window. Once there, select one or more databases from the list by clicking the Databases dropdown.
10. In the same window, browse to the folder location where the backup files should be stored.
11. Continue through the steps until you get to the Finish step.
12. Click Finish. Once you click Finish, the wizard will execute the steps to create a job that schedules the database backup. Immediately you will notice a job created under SQL Agent -> Jobs.
13. As per the schedule, the job runs (repeatedly or one time) and creates a backup file named DB_NameMMddyyyyHHmm.bak (Ex: NorthWind060420081912.bak) in the specified folder (browsed in the step above).
SQL SERVER – PIVOT and UNPIVOT Table Examples
“If we PIVOT any table and UNPIVOT that table do we get our original table?”
I really think this is a good question. The answer is yes, you can, but not always. When we pivot a table we use aggregate functions. If, due to the use of these functions, the data is aggregated, it will not be possible to get the original data back.
Let me explain this issue by demonstrating a simple example.
USE AdventureWorks
GO
-- Creating Test Table
CREATE TABLE Product(Cust VARCHAR(25), Product VARCHAR(20), QTY INT)
GO
-- Inserting Data into Table
INSERT INTO Product(Cust, Product, QTY) VALUES('KATE','VEG',2)
INSERT INTO Product(Cust, Product, QTY) VALUES('KATE','SODA',6)
INSERT INTO Product(Cust, Product, QTY) VALUES('KATE','MILK',1)
INSERT INTO Product(Cust, Product, QTY) VALUES('KATE','BEER',12)
INSERT INTO Product(Cust, Product, QTY) VALUES('FRED','MILK',3)
INSERT INTO Product(Cust, Product, QTY) VALUES('FRED','BEER',24)
INSERT INTO Product(Cust, Product, QTY) VALUES('KATE','VEG',3)
GO
-- Selecting and checking entries in table
SELECT *
FROM Product
GO
-- Pivot Table ordered by PRODUCT
SELECT PRODUCT, FRED, KATE
FROM (
SELECT CUST, PRODUCT, QTY
FROM Product) up
PIVOT (SUM(QTY) FOR CUST IN (FRED, KATE)) AS pvt
ORDER BY PRODUCT
GO
-- Pivot Table ordered by CUST
SELECT CUST, VEG, SODA, MILK, BEER, CHIPS
FROM (
SELECT CUST, PRODUCT, QTY
FROM Product) up
PIVOT (SUM(QTY) FOR PRODUCT IN (VEG, SODA, MILK, BEER, CHIPS)) AS pvt
ORDER BY CUST
GO
-- Unpivot Table ordered by CUST
SELECT CUST, PRODUCT, QTY
FROM (
SELECT CUST, VEG, SODA, MILK, BEER, CHIPS
FROM (
SELECT CUST, PRODUCT, QTY
FROM Product) up
PIVOT (SUM(QTY) FOR PRODUCT IN (VEG, SODA, MILK, BEER, CHIPS)) AS pvt
) p
UNPIVOT (QTY FOR PRODUCT IN (VEG, SODA, MILK, BEER, CHIPS)) AS Unpvt
GO
-- Clean up database
DROP TABLE Product
GO
ResultSet:
-- Selecting and checking entries in table
Cust     Product   QTY
KATE     VEG       2
KATE     SODA      6
KATE     MILK      1
KATE     BEER      12
FRED     MILK      3
FRED     BEER      24
KATE     VEG       3
-- Pivot Table ordered by PRODUCT
PRODUCT  FRED   KATE
BEER     24     12
MILK     3      1
SODA     NULL   6
VEG      NULL   5
-- Pivot Table ordered by CUST
CUST     VEG    SODA   MILK   BEER   CHIPS
FRED     NULL   NULL   3      24     NULL
KATE     5      6      1      12     NULL
-- Unpivot Table ordered by CUST
CUST     PRODUCT   QTY
FRED     MILK      3
FRED     BEER      24
KATE     VEG       5
KATE     SODA      6
KATE     MILK      1
Building a dynamic PIVOT query string from VB (application example):
mStr = mStr & " with Abc as (select Hub_Code,dbo.getHubName(Hub_Code)CSC_Name,[0305000001],[0305000004],[0305000005],"
mStr = mStr & " [0305000006],[0312000014],[0312000015],[0312000016],[0312000017],[0313000009],"
mStr = mStr & " [0313000010],[0316000001],[0316000002],[0317000001],[0317000002],[0317000003] "
mStr = mStr & " from(select d.Hub_Code,d.CSC_Name,b.ItemCode,sum(a.quantity)Qty"
mStr = mStr & " from WSFMCG.dbo.InvFoot a with(nolock)"
mStr = mStr & " inner join WSFMCG.dbo.mItem b with(nolock) on a.ItemCode=b.ItemCode and Isnull(b.CardItem_YN,'N')='Y'"
mStr = mStr & " inner join WSFMCG.dbo.InvHead c with(nolock) on a.invno=c.invno "
mStr = mStr & " inner join WSFMCG.dbo.csc_db_details d with(nolock) on c.csc_hub_code=d.csc_code "
mStr = mStr & " left join Advance_Issued_Items_history e with(nolock) on a.invno=e.invno and a.itemcode=e.itemcode "
'mStr = mStr & " inner join dbo.Label_print t on t.custid=c.custid"
mStr = mStr & " where Isnull(a.ItemPending,'N')='Y' "
mStr = mStr & " and Isnull(e.invno,'')='' "
''mStr = mStr & " and a.InvNo Not in(Select InvNo From Advance_Issued_Items_history) "
If HubStr <> "" Then
    mStr = mStr & " and d.hub_code in(" & HubStr & ") "
End If
mStr = mStr & " and Convert(varchar,a.Created_Date,111)>='2012/11/01' "
'mStr = mStr & " and Convert(varchar,t.ShoppingDate,111) between '" & Format(DtpFromDate.Value, "yyyy/MM/dd") & "' And '" & Format(DtpToDate.Value, "yyyy/MM/dd") & "'"
mStr = mStr & " and Convert(varchar,a.Created_Date,111) between '" & Format(DtpFromDate.Value, "yyyy/MM/dd") & "' And '" & Format(DtpToDate.Value, "yyyy/MM/dd") & "'"
mStr = mStr & " and Isnull(c.Active_YN,'N')='Y' "
mStr = mStr & " group by d.hub_Code,d.CSC_Name,b.ItemCode)tt "
mStr = mStr & " PIVOT (Sum(tt.Qty) For ItemCode in ([0305000001],[0305000004],[0305000005],[0305000006],[0312000014],"
mStr = mStr & " [0312000015],[0312000016],[0312000017],[0313000009],[0313000010],[0316000001],[0316000002],"
mStr = mStr & " [0317000001],[0317000002],[0317000003]))as tmp"
mStr = mStr & " )"
mStr = mStr & " select convert(bit,0)chk, Hub_Code,max(CSC_Name)HUB_Name,"
mStr = mStr & " Isnull(Sum([0305000001]),0)[0305000001],"
mStr = mStr & " Isnull(Sum([0305000004]),0)[0305000004],Isnull(Sum([0305000005]),0)[0305000005],"
mStr = mStr & " Isnull(Sum([0305000006]),0)[0305000006],Isnull(Sum([0312000014]),0)[0312000014],"
mStr = mStr & " Isnull(Sum([0312000015]),0)[0312000015],Isnull(Sum([0312000016]),0)[0312000016],"
mStr = mStr & " Isnull(Sum([0312000017]),0)[0312000017],Isnull(Sum([0313000009]),0)[0313000009],"
mStr = mStr & " Isnull(Sum([0313000010]),0)[0313000010],Isnull(Sum([0316000001]),0)[0316000001],"
mStr = mStr & " Isnull(Sum([0316000002]),0)[0316000002],Isnull(Sum([0317000001]),0)[0317000001],"
mStr = mStr & " Isnull(Sum([0317000002]),0)[0317000002],Isnull(Sum([0317000003]),0)[0317000003]"
mStr = mStr & " from ABC a"
mStr = mStr & " group by a.Hub_Code with ROLLUP "
'mStr = mStr & " order by a.Hub_Code"
mStr = mStr & " union all"
mStr = mStr & " select convert(bit,0)chk, Hub_Code,CSC_Name,[0305000001],[0305000004],[0305000005],"
mStr = mStr & " [0305000006],[0312000014],[0312000015],[0312000016],[0312000017],[0313000009],"
mStr = mStr & " [0313000010],[0316000001],[0316000002],[0317000001],[0317000002],[0317000003] "
mStr = mStr & " from("
mStr = mStr & " select 0 Hub_Code,'STOCK' CSC_Name,a.ItemCode,sum((a.totalQTy-a.issueqty))Qty"
mStr = mStr & " from dbo.Stock_Main_Item a with(nolock)"
mStr = mStr & " group by a.ItemCode)tt"
mStr = mStr & " PIVOT (Sum(tt.Qty) For ItemCode in ([0305000001],[0305000004],[0305000005],[0305000006],"
mStr = mStr & " [0312000014],[0312000015],[0312000016],[0312000017],[0313000009],[0313000010],"
mStr = mStr & " [0316000001],[0316000002],[0317000001],[0317000002],[0317000003]))as tmp1"
CREATE TABLE Product(Cust VARCHAR(25), Product VARCHAR(20), QTY INT)
GO
-- Inserting Data into Table
INSERT INTO Product(Cust, Product, QTY)
VALUES('MAHESH','VEG',2)
INSERT INTO Product(Cust, Product, QTY)
VALUES('MAHESH','SODA',6)
INSERT INTO Product(Cust, Product, QTY)
VALUES('MAHESH','MILK',1)
INSERT INTO Product(Cust, Product, QTY)
VALUES('MAHESH','BEER',12)
INSERT INTO Product(Cust, Product, QTY)
VALUES('KALU','MILK',3)
INSERT INTO Product(Cust, Product, QTY)
VALUES('KALU','BEER',24)
INSERT INTO Product(Cust, Product, QTY)
VALUES('MAHESH','VEG',3)
GO
-- Selecting and checking entries in table
SELECT *
FROM Product
GO
-- Pivot Table ordered by PRODUCT
SELECT PRODUCT, KALU, MAHESH
FROM (
SELECT CUST, PRODUCT, QTY
FROM Product)t
PIVOT (SUM(QTY) FOR CUST IN (KALU, MAHESH)) AS tt
ORDER BY PRODUCT
GO
-- Pivot Table ordered by CUST
SELECT CUST, VEG, SODA, MILK, BEER, CHIPS
FROM (
SELECT CUST, PRODUCT, QTY
FROM Product) t
PIVOT (SUM(QTY) FOR PRODUCT IN (VEG, SODA, MILK, BEER, CHIPS)) AS tt
ORDER BY CUST
SSRS REPORT EXAMPLE
Imports System.Data
Imports Microsoft.Reporting.WinForms
Public Class frmReportMaster
Dim mReportName As String
Dim mDs As DataSet
Dim _IsParameter As Boolean
Dim _ParameterStr As String
Private Sub frmReportMaster_Load(sender As Object, e As EventArgs) Handles MyBase.Load
Call ShowReports()
End Sub
Public Property IsParameter()
Get
IsParameter = _IsParameter
End Get
Set(value)
_IsParameter = value
End Set
End Property
Public Property ParameterString()
Get
ParameterString = _ParameterStr
End Get
Set(value)
_ParameterStr = value
End Set
End Property
Public Sub New(vReportName As String, vDs As DataSet)
InitializeComponent()
mReportName = vReportName
mDs = vDs
End Sub
Private Sub ShowReports()
Try
ReportPreview.LocalReport.DataSources.Clear()
ReportPreview.LocalReport.ReportPath = Application.StartupPath & "\RDLReports\" & mReportName
Call SetParameter(ReportPreview)
ReportPreview.LocalReport.DataSources.Add(New ReportDataSource("BMLDataSet", mDs.Tables(0)))
ReportPreview.RefreshReport()
Catch ex As Exception
MsgBox(ex.Message)
End Try
End Sub
Private Sub SetParameter(ByRef vReportViewer As ReportViewer)
Try
Dim i As Integer
Dim ParmObj As Object
Dim mStrValue As String
Dim mParamName As String
Dim mSingleParam As String
If IsParameter = True Then
If ParameterString.ToString.Length > 0 Then
ParmObj = ParameterString.Split(";")
If UBound(ParmObj) >= 0 Then ' >= 0 so a single parameter is also processed
For i = 0 To UBound(ParmObj)
mSingleParam = ParmObj(i)
If Trim(mSingleParam) <> "" Then
mParamName = mSingleParam.Split("=")(0)
mStrValue = mSingleParam.Split("=")(1)
vReportViewer.LocalReport.SetParameters(New ReportParameter(mParamName, mStrValue))
End If
Next
End If
End If
End If
IsParameter = False
Catch ex As Exception
MsgBox(ex.Message)
End Try
End Sub
--------------------------------------------------------------------------------
DEPLOYED REPORT
Private Sub ShowReports()
Try
Me.ReportViewer1.LocalReport.DataSources.Clear()
Me.ReportViewer1.LocalReport.ReportPath = Application.StartupPath & "\Reports\" & mReportName
Me.ReportViewer1.LocalReport.DataSources.Add(New ReportDataSource("DataSet1", mDs.Tables(0)))
Me.ReportViewer1.RefreshReport()
Catch ex As Exception
MsgBox(ex.Message)
End Try
End Sub
Private Sub ShowReports_Deployed()
Try
ReportViewer1.ProcessingMode = ProcessingMode.Remote
Dim myCred As Net.NetworkCredential = New Net.NetworkCredential("rsks", "ramjit23879")
ReportViewer1.ServerReport.ReportServerCredentials.NetworkCredentials = myCred
ReportViewer1.ServerReport.ReportServerUrl = New Uri("http://192.168.1.193/reportserver")
ReportViewer1.ServerReport.ReportPath = "/ProjReport/Report Project1/SReport"
ReportViewer1.RefreshReport() ' render the server report
Catch ex As Exception
MsgBox(ex.Message)
End Try
End Sub