Remove Execution Plans from the Procedure Cache in SQL Azure

Currently SQL Azure does not support DBCC FREEPROCCACHE the way a normal SQL Server instance does. So how can we clear the execution plan cache if we suspect we have a bad plan cached, or for whatever other reason?

The solution

Use this:


DECLARE @lcl_name VARCHAR(100)
DECLARE @addcolumnSql NVARCHAR(MAX)
DECLARE @dropcolumnSql NVARCHAR(MAX)

DECLARE cur_name CURSOR FOR
SELECT name
FROM sysobjects
WHERE type = 'U'

OPEN cur_name
FETCH NEXT FROM cur_name INTO @lcl_name
WHILE @@FETCH_STATUS = 0
BEGIN
	SET @addcolumnSql = 'alter table [' + @lcl_name + '] add temp_col_to_clear_exec_plan bit'
	EXECUTE sp_executesql @addcolumnSql
	PRINT @addcolumnSql

	SET @dropcolumnSql = 'alter table [' + @lcl_name + '] drop column temp_col_to_clear_exec_plan'
	EXECUTE sp_executesql @dropcolumnSql
	PRINT @dropcolumnSql

	FETCH NEXT FROM cur_name INTO @lcl_name
END
CLOSE cur_name
DEALLOCATE cur_name

The explanation

What this basically does is add a temporary bit column to each table in the database and then remove it (so we don't leave any trash behind). Why do this? Because there is a post on the official SQL Azure Team Blog that states:

“if you make changes to a table or view referenced by the query (ALTER TABLE and ALTER VIEW) the plan will be removed from the cache”

We must use a cursor because SQL Azure also does not support sp_MSforeachtable.

I got the cursor code to loop over all tables from this link (but I had to modify it because it didn't do anything in SQL Azure).
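If you only suspect one bad plan, the same trick works against a single table. A minimal sketch (dbo.Orders is a placeholder table name, not from the script above):

```sql
-- Touch the schema of one table to evict its cached plans,
-- then remove the temporary column so no trash is left behind
ALTER TABLE dbo.Orders ADD temp_col_to_clear_exec_plan bit;
ALTER TABLE dbo.Orders DROP COLUMN temp_col_to_clear_exec_plan;
```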


Reduce nvarchar size on an indexed column in SqlServer

You might encounter a situation where someone created a column that's way bigger than needed.
In my case it was an nvarchar(255) where a nvarchar(50) would suffice.

The column also had an index. You can’t “reduce” the size with a simple alter.

You have to:
– create a new column with the desired size
– “copy” the original values in the new column
– (set the new column not nullable) depends on the case
– drop the original index
– create the new index on the new column
– drop the “old” column
– rename the new column so it matches the expected name

Here’s the code

alter table SampleTable add Id1 nvarchar(50)

update SampleTable set Id1 = Id

alter table SampleTable alter column Id1 nvarchar(50) not null

ALTER TABLE [dbo].[SampleTable] DROP CONSTRAINT [PK_SampleTable]

ALTER TABLE [dbo].[SampleTable] ADD CONSTRAINT [PK_SampleTable] PRIMARY KEY CLUSTERED
(
	Id1 ASC
)

alter table [SampleTable] drop column Id

exec sp_RENAME 'SampleTable.Id1', 'Id', 'COLUMN'

EntityFramework RefreshAll loaded entities from Database

This sample talks about Entity Framework 4 (ObjectContext). I'll show in a future post how to get this done with the EF5 DbContext; it should be much easier.

Update: here it is, finally:

Update 2: based on feedback on Stack Overflow I found a bug in the code. I was actually trying to refresh a just-added entity, which results in an exception. So I removed EntityState.Added from the GetObjectStateEntries parameters.

With the default behaviour, once we get an entity from the database, if it is changed in the background you will see the old values, even if you query the entity again.

The easy way to solve this is to use a very short-lived context and “reload” the entity in another context. If that is not possible (e.g. you are using the context on a per-request basis, or it is controlled by an IoC container) you have some options:
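As a sketch of the short-lived-context option (BlogContext, BlogPosts and the method name are illustrative, not from the original code):

```csharp
// Hedged sketch: query the entity again in a fresh, short-lived context.
// Because this context starts with an empty cache, the values come
// straight from the database instead of the first context's identity map.
public BlogPost ReloadPost(int postId)
{
    using (var freshContext = new BlogContext())
    {
        return freshContext.BlogPosts.Single(b => b.Id == postId);
    }
}
```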

Refresh all loaded entities from Database

This code is simple and can be very helpful, but you must be aware that it will reload ABSOLUTELY ALL THE OBJECTS YOU HAVE ALREADY QUERIED. If you have queried many entities, it will have a negative performance impact.

public void RefreshAll()
{
     // Get all objects in the state manager that have an EntityKey
     // (context.Refresh will throw an exception otherwise)
     var refreshableObjects = (from entry in context.ObjectStateManager.GetObjectStateEntries(
                                               EntityState.Modified
                                               | EntityState.Unchanged)
                               where entry.EntityKey != null
                               select entry.Entity);

     context.Refresh(RefreshMode.StoreWins, refreshableObjects);
}


Pros

  • Very easy to use, once you have the code above :)


Cons

  • Potentially, it could execute A LOT of queries in order to refresh a context that “used” many queries

Refresh specific entities

Let’s assume we have a Blog application for the example.

// First we need to add the objects to our refresh list
var objectsToRefresh = new List<System.Object>();

foreach (var comment in blogPost.Comments)
    objectsToRefresh.Add(comment);
// etc.
// Here your application-specific code ends. Now you have to:

// Clean nulls and repeated entries (context.Refresh will throw an exception otherwise)
var noNullsAndRepeateds = objectsToRefresh.Where(o => o != null).Distinct().ToList();

// Get only the entities that are being tracked by the ObjectStateManager and have an EntityKey
// (context.Refresh will throw an exception otherwise)
var finalList = (from e in entityContext.ObjectStateManager.GetObjectStateEntries(
                                               EntityState.Modified
                                               | EntityState.Unchanged)
                 where e.EntityKey != null &&
                       noNullsAndRepeateds.Contains(e.Entity)
                 select e.Entity).ToList();

entityContext.Refresh(RefreshMode.StoreWins, finalList);


Pros

  • Granular queries
  • Easy to use, once you have the code above :)


Cons

  • The problem is if you have an aggregate or a complex object graph: you need to “craft” code to refresh each aggregate.
  • You have to manually “inform” which objects to refresh

Use MergeOptions
There are many resources on the web on how to do this:

var query = context.BlogPosts;
query.MergeOption = MergeOption.OverwriteChanges;
var blogPost = query.SingleOrDefault(b => b.Id == 1);


Pros

  • A good and recommended approach
  • Granular queries


Cons

  • Again, if you have an aggregate or a complex object graph, you have probably queried it using multiple queries (e.g. because you got better results when tuning and profiling). In that case you would need to specify the MergeOption for each query, which could be hard work.
  • You manually refresh only the affected objects
  • The way of specifying it in EF4 is a bit counter-intuitive

Entity Framework. View failed Sql sentence with actual parameters

The Problem

So, you are executing some Entity Framework code that seems ok, and you are getting a constraint exception when calling Save().

Then you think: “I'm going to watch what's going on. I'll check EXACTLY the SQL sentence that's being executed and find the bug”. And you are going to get desperate.

IntelliTrace will show you the SQL sentence, but not the parameters.

And there is IQueryable.ToTraceString(), but you have to modify your code AND it's intended for use with queries, while you want to see a modifying sentence.

The solution: SQL Profiler

  1. Open SQL Profiler
  2. New Trace
  3. Choose the Standard template
  4. Go to Events Selection
  5. Uncheck Audit and Existing Connection (and Stored Procedures if you want)
  6. Check Show all Filters
  7. In TSQL, check ONLY:
     – SQL:BatchStarting
     – SQL:StmtStarting


Why Starting and not any other event? Because if the sentence fails (for example, on a unique constraint violation), the “starting” event is the only one that records a trace; the statement will never complete.

Extra: when you have a query

In case you are executing a query and you don't have SQL Profiler, you can check this solution posted on Stack Overflow. I haven't tried it myself, but it looks fine. You have to modify your code, though.

Serializing and deserializing inherited types with Json, anything you want

In my previous post I created a very simple, home-made class to serialize / deserialize an object without needing to know its real type, so you can really take advantage of polymorphism.

But if you want a much more powerful solution that also lets you deserialize lists and complex object graphs, I strongly recommend the excellent Newtonsoft Json.NET.

Get it with NuGet: Install-Package Newtonsoft.Json

If you really think about it, why not serialize to JSON? It's a really simple, powerful, light format, and it has the advantage that you can also expose it and read it from JavaScript without any conversion. Here are a few good reasons to use JSON over XML.

So here’s the code, using the library.

Test Model

Here's the test model. It's pretty lame, I know…

public class Resource
{
    public string ResourceProperty { get; set; }
}

public class Video : Resource
{
    public string VideoProperty { get; set; }
}

First sample. Base class

Just serialize and deserialize an object, no big deal

// Serialize and deserialize, no big deal
public void BaseClassTest()
{
   var resource = new Resource() { ResourceProperty = "Hola" };

   var resourceJSon = JsonConvert.SerializeObject(resource);
   var deserializedJson = JsonConvert.DeserializeObject<Resource>(resourceJSon);

   Assert.AreEqual(deserializedJson.ResourceProperty, resource.ResourceProperty);
}

Second sample. Inherited class

Here’s the cool stuff:

// Here is the cool stuff. Serialize a derived class, and deserialize it as the base class
// without losing information
public void InheritedClassTest()
{
    var video = new Video() { ResourceProperty = "Hola", VideoProperty = "Video" };

    // Here is the trick!!
    // We tell the serializer to save the real type for each class
    var settings = new JsonSerializerSettings()
    {
       TypeNameHandling = TypeNameHandling.Objects
    };
    var resourceJSon = JsonConvert.SerializeObject(video, settings);

    // We must deserialize with the same settings
    var deserializedJson = JsonConvert.DeserializeObject<Resource>(resourceJSon, settings);

    Assert.AreEqual(deserializedJson.ResourceProperty, video.ResourceProperty);
    // We can cast to Video with no problem
    var castedVideo = deserializedJson as Video;
    Assert.AreEqual(castedVideo.VideoProperty, video.VideoProperty);
    // sorry, no polymorphism in this sample :P
}

Internally, what the serializer does when using TypeNameHandling.Objects is save the type of the object being serialized, so it can “infer” the type when deserializing. Just as I did in my previous article. (I swear that I didn't copy this!!) :P
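For the Video instance above, the payload produced with TypeNameHandling.Objects looks roughly like this (namespace and assembly names here are made up, and the property order may differ):

```json
{
  "$type": "MyApp.Video, MyApp",
  "ResourceProperty": "Hola",
  "VideoProperty": "Video"
}
```

The "$type" property is what lets the deserializer pick the concrete class later.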

Third sample. List with base and derived class

And here’s the really cool stuff. You can also serialize a list and deserialize it without needing to know the real type of each element.

// And this is really cool stuff
// You can serialize, for example, a list
public void List_Test()
{
   var resource = new Resource() { ResourceProperty = "Resource" };
   var video = new Video() { ResourceProperty = "Video", VideoProperty = "VideoMP4" };

   var list = new List<Resource>() { resource, video };

   // Again, the important part is the settings
   var settings = new JsonSerializerSettings()
   {
      TypeNameHandling = TypeNameHandling.Objects
   };

   var serializedList = JsonConvert.SerializeObject(list, settings);

   var deserializedList = JsonConvert.DeserializeObject<List<Resource>>(serializedList, settings);

   // And we recover the information with NO data loss
   Assert.AreEqual("Resource", deserializedList[0].ResourceProperty);
   Assert.AreEqual("VideoMP4", ((Video)deserializedList[1]).VideoProperty);
}

Serializing and Deserializing inherited types in xml, simple objects

Hi there, I'm back. I'll try to post more often now :)

The problem

Deserializing a derived class from XML forces you to know the real type. This can be a big problem if you are trying to use polymorphism to solve a problem: if you need to know the concrete type, the value of polymorphism is completely lost.

Consider this example: you have a Resource class with an Update method. In the case of a video, it will create transcodings for multiple browsers. In other cases, for example an image, it will optimize it for the web and perhaps create a thumbnail. A simple class diagram might look like this:

The solution

Theory (really short)

The deserializer needs to know the real type in order to deserialize. I propose saving this information, a “discriminator”, during serialization so it can be used later.

Client code

Let me show you the client code first; it's super easy. The class name is InheritanceSerializer.

var originalVideo = new Video() { Name = "Sample", PendingTranscoding = true };

// First we serialize the video
var xml = InheritanceSerializer.Serialize(originalVideo);

// Then we deserialize it, no problem. It WILL have the video information
var deserializedVideo = InheritanceSerializer.Deserialize(xml) as Resource;

// Then we can perform any operation

// We COULD cast it if we want!!!
var deserializedVideo2 = deserializedVideo as Video;

Solution code (finally!)

So here it is. It's actually really simple:

    /// Allows serializing / deserializing objects and inheritance hierarchies
    /// WITHOUT knowing the type to deserialize
    /// When serializing, adds a "discriminator" based on the real type
    /// When deserializing, "infers" the real type based on the discriminator saved during serialization
    public class InheritanceSerializer
    {
        private const string DISCRIMINATOR = "discriminator";

        public static string Serialize(object info)
        {
            var serializer = new XmlSerializer(info.GetType());
            using (var stream = new MemoryStream())
            {
                // Serialize
                serializer.Serialize(stream, info);
                stream.Position = 0;

                // Open the serialization output
                var xmlDocument = new XmlDocument();
                xmlDocument.Load(stream);

                // Add a "discriminator" based on the real type, to use during deserialization
                var node = xmlDocument.CreateAttribute("", DISCRIMINATOR, "");
                node.InnerText = GetTypeFullName_With_AssemblyName_WihoutVersion(info.GetType());
                xmlDocument.DocumentElement.Attributes.Append(node);

                // Return the xml with the discriminator
                return xmlDocument.OuterXml;
            }
        }

        public static object Deserialize(string xml)
        {
            var xmlDocument = new XmlDocument();
            xmlDocument.LoadXml(xml);

            // Read the "discriminator"
            var discriminator = xmlDocument.DocumentElement.Attributes[DISCRIMINATOR];
            var typeName = discriminator.InnerText;

            // Now we know the real type to deserialize, based on the discriminator
            var serializer = new XmlSerializer(Type.GetType(typeName));

            using (var stream = new MemoryStream(Encoding.ASCII.GetBytes(xml)))
            {
                return serializer.Deserialize(stream);
            }
        }

        public static string GetTypeFullName_With_AssemblyName_WihoutVersion(Type type)
        {
            return string.Format("{0},{1}", type.FullName, type.Assembly.GetName().Name);
        }
    }

When to use

This solution is quite simple and extremely easy to use; for simple cases, just copy & paste the code above and that's it. If you want the discriminator to be serialized as a node, you can edit the code or check out my GitHub repository. In the repository I applied TDD and some patterns like Strategy and Factory to avoid duplication and keep the code simple and elegant, but it's just for practicing a bit.

But again, I warn you: use this solution for simple cases only. It won't work if you have nested objects with inheritance, nor if you have a collection.

Entity Framework 4, execute a writing stored procedure inside a TransactionScope

Suppose you have a stored procedure that performs multiple inserts, updates or deletes. You may do this for many reasons, probably for performance, to avoid reading a large object graph with no need. In my case I needed to delete a large object graph. So I wrote my powerful stored procedure and created a function import. Here's a really simplified example:

using (var tx = new TransactionScope())
using (var ctx = new EntityContext())
{
    // Call the imported stored procedure, ignoring its result
    // (DeleteLargeGraph is a hypothetical function import name)
    ctx.DeleteLargeGraph(entityId);
    tx.Complete();
}
Seems ok, doesn't it? Well, you will get a TransactionAbortedException: “The transaction operation cannot be performed because there are pending requests working on this transaction.”

Why? Function imports in EF are meant to perform operations and retrieve values. EF currently lacks support for function imports with no return type (yes, there is a “None” return type option in the function import form; I encourage you to try it). My stored procedure was prepared for this: it returns the affected rows.

Anyway, what can we do? We must evaluate the result, even if we don’t need it:

using (var tx = new TransactionScope())
using (var ctx = new EntityContext())
{
    // Evaluating the result does the magic!!
    // (DeleteLargeGraph is a hypothetical function import name
    //  returning the affected rows)
    var affectedRows = ctx.DeleteLargeGraph(entityId).FirstOrDefault();
    tx.Complete();
}
That's it. A bit tricky, yes, but it works. I realized afterwards that in my case the right solution would probably be to map the stored procedure to my entity's delete operation, so that removing the entity removes the entire related object graph. See Map Modification Functions to Stored Procedures.
Many thanks to Danny Simmons for his answers.