
Some useful Sql Azure Database Dynamic Management Views (DMVs)

I’ll share some useful DMVs for monitoring a Sql Azure database.

Real Time Information

(it’s actually near-real time)

Run these queries against your database.

Real Time Session and Connection Information

SELECT
  s.session_id
  ,s.login_time
  ,s.host_name
  ,s.program_name
  ,s.login_name
  ,s.status
  ,s.cpu_time
  ,s.memory_usage
  ,s.total_scheduled_time
  ,s.total_elapsed_time
  ,s.last_request_start_time
  ,s.last_request_end_time
  ,s.reads
  ,s.writes
  ,s.logical_reads
  ,st1.text
  , c.*
FROM sys.dm_exec_connections AS c
JOIN sys.dm_exec_sessions AS s
ON c.session_id = s.session_id
OUTER APPLY sys.dm_exec_sql_text(c.most_recent_sql_handle) as st1
order by c.num_reads desc

Using sys.dm_exec_connections and sys.dm_exec_sessions together you can gather some nice information about what’s happening right now in your database.

Real Time Resource Usage Stats

select * from sys.dm_db_resource_stats

With this DMV we get average resource usage (CPU %, data IO %, log IO %, memory %), one row per 15-second interval, covering roughly the last hour.
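For example, a quick sketch to pull the peaks out of that window (column names as documented for sys.dm_db_resource_stats):

```sql
-- Peak and average utilization over the period covered by the DMV
SELECT
    MAX(avg_cpu_percent)       AS peak_cpu_percent,
    AVG(avg_cpu_percent)       AS mean_cpu_percent,
    MAX(avg_data_io_percent)   AS peak_data_io_percent,
    MAX(avg_log_write_percent) AS peak_log_write_percent
FROM sys.dm_db_resource_stats
```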

Real Time CPU And Query Plan Data

SELECT TOP 50 
qs.creation_time
, qs.last_execution_time
, qs.execution_count
, qs.total_worker_time as total_cpu_time
, qs.max_worker_time as max_cpu_time
, qs.total_elapsed_time
, qs.max_elapsed_time
, qs.total_logical_reads
, qs.max_logical_reads
, qs.total_physical_reads
, qs.max_physical_reads
,t.[text], qp.query_plan
, t.dbid
, t.objectid
, t.encrypted
, qs.plan_handle
, qs.plan_generation_num 
FROM sys.dm_exec_query_stats qs 
CROSS APPLY sys.dm_exec_sql_text(plan_handle) AS t 
CROSS APPLY sys.dm_exec_query_plan(plan_handle) AS qp 
ORDER BY qs.total_worker_time DESC

Executed at the right time, this DMV can give you a lot of high-value information for troubleshooting performance issues.

Historical telemetry data

Run these queries against the master database. They don’t actually read from your own master; they pull data from Microsoft’s metadata databases, so some of them may take quite a long time.

Historical connection data

select * from sys.database_connection_stats
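As a quick sketch, here is one way to aggregate it per day ('MyDatabase' is a placeholder; column names as documented for sys.database_connection_stats):

```sql
-- Daily connection successes and failures for one database
SELECT
    CONVERT(date, start_time) AS [day],
    SUM(success_count)        AS successful_connections,
    SUM(total_failure_count)  AS failed_connections
FROM sys.database_connection_stats
WHERE database_name = 'MyDatabase'
GROUP BY CONVERT(date, start_time)
ORDER BY [day]
```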


Historical resource usage

select * from sys.resource_stats

Very similar to sys.dm_db_resource_stats, but with historical data (5-minute intervals, retained for roughly the last 14 days). Very helpful to identify recurring periods of high DTU usage.
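For instance, a sketch that surfaces the recurring hot hours of the day ('MyDatabase' is a placeholder; sys.resource_stats lives in master):

```sql
-- Average and peak CPU percent by hour of day over the retained history
SELECT
    DATEPART(hour, start_time) AS hour_of_day,
    AVG(avg_cpu_percent)       AS mean_cpu_percent,
    MAX(avg_cpu_percent)       AS peak_cpu_percent
FROM sys.resource_stats
WHERE database_name = 'MyDatabase'
GROUP BY DATEPART(hour, start_time)
ORDER BY mean_cpu_percent DESC
```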


Remove Execution Plans from the Procedure Cache in Sql Azure

Currently Sql Azure does not support

DBCC FREEPROCCACHE

as a normal SqlServer instance would. So how can we clear the execution plan cache if we suspect we have a bad plan cached, or for any other reason?

The solution

Use this:


SET NOCOUNT ON

DECLARE @lcl_name nvarchar(300)
DECLARE @addcolumnSql nvarchar(MAX)
DECLARE @dropcolumnSql nvarchar(MAX)

-- Loop over every user table, schema-qualified so non-dbo tables work too
DECLARE cur_name CURSOR FOR
SELECT QUOTENAME(SCHEMA_NAME(schema_id)) + '.' + QUOTENAME(name)
FROM sys.tables
OPEN cur_name
FETCH NEXT FROM cur_name INTO @lcl_name
WHILE @@FETCH_STATUS = 0
BEGIN
	-- ALTER TABLE invalidates every cached plan that references the table
	SET @addcolumnSql = 'alter table ' + @lcl_name + ' add temp_col_to_clear_exec_plan bit'
	EXECUTE sp_executesql @addcolumnSql
	PRINT @addcolumnSql
	-- Drop the column again so we leave no trace behind
	SET @dropcolumnSql = 'alter table ' + @lcl_name + ' drop column temp_col_to_clear_exec_plan'
	EXECUTE sp_executesql @dropcolumnSql
	PRINT @dropcolumnSql
	FETCH NEXT FROM cur_name INTO @lcl_name
END
CLOSE cur_name
DEALLOCATE cur_name
SET NOCOUNT OFF

The explanation

What this basically does is add a temporary bit column to each table in the database and then remove it (so we don’t leave any trash behind). Why do this? Because a post on the official Sql Azure Team Blog states that:

“if you make changes to the to a table or view referenced by the query (ALTER TABLE and ALTER VIEW) the plan will be removed from the cache”

We must use a cursor because Sql Azure also does not support sp_MSforeachtable.

I got the cursor code to loop over all the tables from the link below (though I had to modify it, because as published it didn’t do anything in Sql Azure):

http://blog.sqlauthority.com/2006/11/30/sql-server-cursor-to-process-tables-in-database-with-static-prefix-and-date-created/

Reduce nvarchar size on an indexed column in SqlServer

You might encounter a situation where someone created a column that’s way bigger than needed.
In my case it was an nvarchar(255) where an nvarchar(50) would suffice.

The column also had an index on it, and you can’t “reduce” the size with a simple ALTER while the column is indexed.

You have to:
– create a new column with the desired size
– “copy” the original values into the new column
– set the new column NOT NULL (depending on the case)
– drop the original index
– create the new index on the new column
– drop the “old” column
– rename the new column so it matches the expected name

Here’s the code

alter table SampleTable add Id1 nvarchar(50)

go

update SampleTable set Id1 = Id

alter table SampleTable alter column Id1 nvarchar(50) not null

go

ALTER TABLE [dbo].[SampleTable] DROP CONSTRAINT [PK_SampleTable]

go

ALTER TABLE [dbo].[SampleTable] ADD  CONSTRAINT [PK_SampleTable] PRIMARY KEY CLUSTERED
(
	Id1 ASC
)
go

alter table [SampleTable] drop column Id

go
exec sp_rename 'SampleTable.Id1', 'Id', 'COLUMN'