Not all ViewData are created equal

Disclaimer: I believe that you should use strongly typed views over ViewData / ViewBag. But I don’t think it should be a dogma. ViewData is a tool, and you can use it if you think it’s the best tool for the job.

The other day I was doing some experiments with ViewData / ViewBag (ViewBag is just a dynamic wrapper over the same ViewData dictionary) and I was getting some strange results: I was putting some data in the ViewData, and then I couldn’t find it.

I found out (at least I think so) what was happening.

The problem was that not all ViewData are created equal.

If you do this in a Razor view (.cshtml):

@{
    ViewBag.SomeValue = "SomeValue";

    var someValueFromViewBag = ViewBag.SomeValue;
    var someValueFromHtmlViewBag = Html.ViewBag.SomeValue;
    var someValueFromHtmlViewContextViewBag = Html.ViewContext.ViewBag.SomeValue;
}

ONLY someValueFromViewBag WILL CONTAIN "SomeValue". The other two will be null.

I had to use ILSpy (a free Reflector alternative) to understand what was happening.

In the snippet above we are using three classes:

  • WebViewPage
  • HtmlHelper
  • ViewContext

Each of these has a ViewData property of type ViewDataDictionary. They are all different, independent dictionaries: if you add something to one of them, the others won’t notice the change.

There is one more class with a ViewData property:

  • ControllerBase

It seems that ControllerBase.ViewData is used to fill the WebViewPage, HtmlHelper and ViewContext dictionaries.

So in this case:

// Controller
public ActionResult Index()
{
  ViewBag.SomeValue = "FromController";
  return View();
}

// View (.cshtml)
@{
    var someValueFromViewBag = ViewBag.SomeValue;
    var someValueFromHtmlViewBag = Html.ViewBag.SomeValue;
    var someValueFromHtmlViewContextViewBag = Html.ViewContext.ViewBag.SomeValue;
}

ALL THREE WILL HAVE THE SAME VALUE: "FromController".

This behaviour may seem a bit strange. After reasoning about it, I think the MVC designers wanted you to isolate “inner” and “outer” data to avoid side effects.
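
You can check this yourself from a view. Here is a minimal sketch (assuming the behaviour described above holds, both comparisons come out false):

@{
    // Each class exposes its own, independent ViewDataDictionary instance
    var pageVsHelper = object.ReferenceEquals(ViewData, Html.ViewData);
    var helperVsContext = object.ReferenceEquals(Html.ViewData, Html.ViewContext.ViewData);
}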

Remove Execution Plans from the Procedure Cache in SQL Azure

Currently SQL Azure does not support

DBCC FREEPROCCACHE

as a regular SQL Server instance would. So how can we clear the execution plan cache if we suspect that a bad plan is cached, or for any other reason?

The solution

Use this:


SET NOCOUNT ON

DECLARE @lcl_name VARCHAR(100)
DECLARE @addcolumnSql NVARCHAR(MAX)
DECLARE @dropcolumnSql NVARCHAR(MAX)

-- Cursor over all user tables (SQL Azure does not support sp_MSforeachtable)
DECLARE cur_name CURSOR FOR
SELECT name
FROM sysobjects
WHERE type = 'U'

OPEN cur_name
FETCH NEXT FROM cur_name INTO @lcl_name
WHILE @@FETCH_STATUS = 0
BEGIN
    -- Adding and then dropping a column is an ALTER TABLE,
    -- which evicts the table's plans from the cache
    SET @addcolumnSql = 'alter table [' + @lcl_name + '] add temp_col_to_clear_exec_plan bit'
    EXECUTE sp_executesql @addcolumnSql
    PRINT @addcolumnSql

    SET @dropcolumnSql = 'alter table [' + @lcl_name + '] drop column temp_col_to_clear_exec_plan'
    EXECUTE sp_executesql @dropcolumnSql
    PRINT @dropcolumnSql

    FETCH NEXT FROM cur_name INTO @lcl_name
END
CLOSE cur_name
DEALLOCATE cur_name
SET NOCOUNT OFF

The explanation

What this basically does is add a temporary bit column to each table in the database and then remove it (so we don’t leave any trash behind). Why does this work? Because a post on the official SQL Azure Team Blog states that:

“if you make changes to a table or view referenced by the query (ALTER TABLE and ALTER VIEW) the plan will be removed from the cache”

We must use a cursor because SQL Azure also does not support sp_MSforeachtable.

I got the cursor code to loop over all the tables from this link (but I had to modify it because the original didn’t do anything in SQL Azure):

http://blog.sqlauthority.com/2006/11/30/sql-server-cursor-to-process-tables-in-database-with-static-prefix-and-date-created/

Reduce nvarchar size on an indexed column in SQL Server

You might encounter the situation where someone created an indexed column that’s way bigger than needed.
In my case it was an nvarchar(255) where an nvarchar(50) would suffice.

The column also had an index on it, and you can’t “reduce” the size with a simple ALTER while the index exists.

You have to:
- create a new column with the desired size
- “copy” the original values into the new column
- set the new column NOT NULL (depending on the case)
- drop the original index
- create the new index on the new column
- drop the “old” column
- rename the new column so it matches the expected name

Here’s the code:

-- 1. Create the new, smaller column
alter table SampleTable add Id1 nvarchar(50)

go

-- 2. Copy the original values
update SampleTable set Id1 = Id

-- 3. Make the new column NOT NULL (if needed)
alter table SampleTable alter column Id1 nvarchar(50) not null

go

-- 4. Drop the original index (here, the primary key)
ALTER TABLE [dbo].[SampleTable] DROP CONSTRAINT [PK_SampleTable]

go

-- 5. Recreate the index on the new column
ALTER TABLE [dbo].[SampleTable] ADD CONSTRAINT [PK_SampleTable] PRIMARY KEY CLUSTERED
(
	Id1 ASC
)
go

-- 6. Drop the old column
alter table [SampleTable] drop column Id

go

-- 7. Rename the new column to the original name
exec sp_RENAME 'SampleTable.Id1', 'Id', 'COLUMN'

Entity Framework: RefreshAll loaded entities from the database

This sample is about Entity Framework 4 (ObjectContext). I’ll show in a future post how to do this with the EF5 DbContext; it should be much easier there.

With the default behaviour, once we get an entity from the database, we will keep seeing the old values even if the row is changed in the background and we query the entity again.
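
To make the problem concrete, here is a minimal sketch (BlogContext and BlogPost are hypothetical names; in EF4 the default MergeOption is AppendOnly, which keeps the already-loaded values):

using (var context = new BlogContext())
{
    var post = context.BlogPosts.First(p => p.Id == 1);

    // ... meanwhile, another process updates this row in the database ...

    // This returns the SAME tracked instance with the OLD values:
    // the default MergeOption (AppendOnly) does not overwrite loaded entities
    var postAgain = context.BlogPosts.First(p => p.Id == 1);
}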

The easy way to solve this is to use very short-lived contexts and “reload” the entity in a new context. If that is not possible (e.g. you are using one context per request, or the context’s lifetime is controlled by an IoC container), you have some options:

Refresh all loaded entities from Database

This code is simple and can be very helpful, but be aware that it will reload ABSOLUTELY ALL THE OBJECTS YOU HAVE ALREADY QUERIED. If you have queried many entities, it will have a negative performance impact.

public void RefreshAll()
{
    // Get all objects in the state manager that have an EntityKey
    // (context.Refresh will throw an exception otherwise)
    var refreshableObjects = from entry in context.ObjectStateManager.GetObjectStateEntries(
                                  EntityState.Added
                                  | EntityState.Deleted
                                  | EntityState.Modified
                                  | EntityState.Unchanged)
                             where entry.EntityKey != null
                             select entry.Entity;

    context.Refresh(RefreshMode.StoreWins, refreshableObjects);
}
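
For context, a minimal sketch of where this method could live (the class name is hypothetical; context is assumed to be the ObjectContext the entities were loaded with):

public class BlogRepository
{
    private readonly ObjectContext context;

    public BlogRepository(ObjectContext context)
    {
        this.context = context;
    }

    // RefreshAll() from above goes here, using the context field
}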

Pros:

  • Very easy to use, once you have the code above :)

Cons:

  • Potentially, it could execute A LOT of queries in order to refresh a context that “used” many queries

Refresh specific entities

Let’s assume we have a Blog application for the example.


// First we need to add the objects to our refresh list
var objectsToRefresh = new List<object>();
objectsToRefresh.Add(blogPost);
objectsToRefresh.Add(blogPost.User);

foreach (var comment in blogPost.Comments)
{
    objectsToRefresh.Add(comment);
    objectsToRefresh.Add(comment.User);
    // etc.
}
// The application-specific code ends here. Now you have to:

// Clean out nulls and duplicates (context.Refresh will throw an exception otherwise)
var cleanedList = objectsToRefresh.Where(o => o != null).Distinct().ToList();

// Keep only the entities that are being tracked by the ObjectStateManager and have an EntityKey
// (context.Refresh will throw an exception otherwise)
var finalList = (from e in context.ObjectStateManager.GetObjectStateEntries(
                      EntityState.Added
                      | EntityState.Deleted
                      | EntityState.Modified
                      | EntityState.Unchanged)
                 where e.EntityKey != null &&
                       cleanedList.Contains(e.Entity)
                 select e.Entity).ToList();

context.Refresh(RefreshMode.StoreWins, finalList);

Pros

  • Granular queries
  • Easy to use, once you have the code above :)

Cons

  • If you have an aggregate or a complex object graph, you need to “craft” code to refresh each aggregate.
  • You have to manually “inform” which objects to refresh

Use MergeOptions

There are many resources on the web on how to do this:


var query = context.BlogPosts;
query.MergeOption = MergeOption.OverwriteChanges;
var blogPost = query.SingleOrDefault(b => b.Id == 1);

Pros

  • A good and recommended approach.
  • Granular queries

Cons

  • Again, if you have an aggregate or a complex object graph, you’ve probably had to query it with multiple queries (e.g. because you got better results when tuning and profiling). In that case you would need to specify the MergeOption for each query. That could be hard work.
  • Manually refresh only affected objects
  • The way of specifying it in EF4 is a bit counter-intuitive

Entity Framework: view the failing SQL statement with actual parameters

The Problem

So, you are executing some Entity Framework code that seems OK, and you are getting a constraint exception when calling SaveChanges().

Then you think: “I’m going to watch what’s going on. I’ll check EXACTLY the SQL statement that’s being executed and find the bug.” And then you start to despair.

IntelliTrace will show you the SQL statement, but not the parameters.

And there is ObjectQuery.ToTraceString(), but you have to modify your code AND it’s intended for queries, while here you want to see a modifying statement.

The solution: SQL Profiler

  1. Open SQL Profiler
  2. Create a New Trace
  3. Choose the Standard template
  4. Go to Event Selection
  5. Uncheck Audit and Existing Connection (and Stored Procedures if you want)
  6. Check Show all Filters
  7. Under TSQL, check ONLY:
     • SQL:BatchStarting
     • SQL:StmtStarting

Why Starting and not any other event? Because if the statement is failing (for example, on a unique constraint), the “starting” event is the only one that records a trace: the statement will never complete.

Extra: when you have a query

In case you are executing a query and you don’t have SQL Profiler, you can check this solution posted on Stack Overflow. I haven’t tried it myself but it looks fine. You have to modify your code, though.
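
For queries, something like this should also work (ToTraceString is defined on ObjectQuery, so LINQ queries need a cast; I’m assuming the Parameters collection gets populated once the query has been translated):

var objectQuery = (ObjectQuery)context.BlogPosts.Where(b => b.Id == 1);

// The generated SQL, with parameter placeholders
var sql = objectQuery.ToTraceString();

// The parameter values
foreach (ObjectParameter parameter in objectQuery.Parameters)
{
    Console.WriteLine("{0} = {1}", parameter.Name, parameter.Value);
}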

Serializing and deserializing inherited types with Json, anything you want

In my previous post I created a very simple, home-made class to serialize / deserialize an object without needing to know its real type, so you can really take advantage of polymorphism.

But if you want a much more powerful solution, one that also lets you deserialize lists and complex object graphs, I strongly recommend the excellent Newtonsoft Json.NET.

Get it with NuGet: Install-Package Newtonsoft.Json

If you really think about it, why not serialize to JSON? It’s a really simple, powerful, lightweight format, and it has the advantage that you can also expose it and read it from JavaScript without any conversion. Here are a few good reasons to use JSON over XML.

So here’s the code, using the library.

Test Model

Here’s the test model. It’s pretty lame, I know…


public class Resource
{
    public string ResourceProperty { get; set; }
}

public class Video : Resource
{
   public string VideoProperty { get; set; }
}

First sample. Base class

Just serialize and deserialize an object, no big deal

// Serialize and deserialize, no big deal
[TestMethod]
public void BaseClassTest()
{
   var resource = new Resource() { ResourceProperty = "Hola" };

   var resourceJSon = JsonConvert.SerializeObject(resource);
   var deserializedJson = JsonConvert.DeserializeObject<Resource>(resourceJSon);

   Assert.AreEqual(deserializedJson.ResourceProperty, resource.ResourceProperty);
}

Second sample. Inherited class

Here’s the cool stuff:

// Here is the cool stuff. Serialize a derived class, and deserialize it as the base class
// without losing information
 [TestMethod]
 public void InheritedClassTest()
 {
    var video = new Video() { ResourceProperty = "Hola", VideoProperty="Video" };

    // Here is the trick!!
    // We tell the serializer to save the real type for each class
    var settings = new JsonSerializerSettings()
    {
       TypeNameHandling = TypeNameHandling.Objects
    };
    var resourceJSon = JsonConvert.SerializeObject(video, settings);

    // We must deserialize with the same settings
    var deserializedJson = JsonConvert.DeserializeObject<Resource>(resourceJSon, settings);

    Assert.AreEqual(deserializedJson.ResourceProperty, video.ResourceProperty);
    // We can cast to video with no problem
    var castedVideo = deserializedJson as Video;
    Assert.AreEqual(castedVideo.VideoProperty, video.VideoProperty);
    // sorry no polymorphism in this sample :P
 }

Internally, what the serializer does when using TypeNameHandling.Objects is save the type of the object you are serializing, so it can “infer” the type when deserializing. Just as I did in my previous article. (I swear I didn’t copy this!!) :P
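
For illustration, the JSON for the video above looks roughly like this (the exact namespace and assembly names depend on your project):

{
  "$type": "MyTests.Video, MyTests",
  "VideoProperty": "Video",
  "ResourceProperty": "Hola"
}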

Third sample. List with base and derived class

And here’s the really cool stuff. You can also serialize a list and deserialize it without needing to know the real type of each element.

// And this is the really cool stuff
// You can serialize, for example, a list
[TestMethod]
public void List_Test()
{
    var resource = new Resource() { ResourceProperty = "Resource" };
    var video = new Video() { ResourceProperty = "Video", VideoProperty = "VideoMP4" };

    var list = new List<Resource>() { resource, video };

    // Again, the important part is the settings
    var settings = new JsonSerializerSettings()
    {
        TypeNameHandling = TypeNameHandling.Objects
    };

    var serializedList = JsonConvert.SerializeObject(list, settings);

    var deserializedList = JsonConvert.DeserializeObject<List<Resource>>(serializedList, settings);

    // And we recover the information with NO data loss
    Assert.AreEqual("Resource", deserializedList[0].ResourceProperty);
    Assert.AreEqual("VideoMP4", ((Video)deserializedList[1]).VideoProperty);
}

Serializing and Deserializing inherited types in xml, simple objects

Hi there, I’m back. I’ll try to post more often now :)

The problem

Deserializing a derived class from XML forces you to know the real type. This can be a big problem if you are trying to use polymorphism: if you need to know the concrete type anyway, the value of polymorphism is completely lost.

Consider this example: you have a Resource class with an Update method. In the case of a video, Update will create transcodings for multiple browsers. In other cases, for example an image, it will optimize it for the web and perhaps create a thumbnail. Picture a simple class diagram with Resource at the top and Video and Image as subclasses, each overriding Update.

The solution

Theory (really short)

The deserializer needs to know the real type in order to deserialize. I propose saving this information, a “discriminator”, during serialization so we can use it later.

Client code

Let me show you the client code first; it’s super easy. The class name is InheritanceSerializer.


var originalVideo = new Video() { Name = "Sample", PendingTranscoding = true };

// First we serialize the video
var xml = InheritanceSerializer.Serialize(originalVideo);

// Then we deserialize it, no problem. It WILL keep the video information
var deserializedVideo = InheritanceSerializer.Deserialize(xml) as Resource;

// Then we can perform any operation polymorphically
deserializedVideo.Update();

// We COULD cast it if we want!!!
var deserializedVideo2 = deserializedVideo as Video;

Solution code (finally!)

So here it is, actually really simple


    /// <summary>
    /// Allows serializing / deserializing objects and inheritance hierarchies
    /// WITHOUT knowing the type to serialize.
    ///
    /// When serializing, it adds a "discriminator" attribute based on the real type.
    ///
    /// When deserializing, it "infers" the real type from the discriminator saved during serialization.
    /// </summary>
    public class InheritanceSerializer
    {
        private const string DISCRIMINATOR = "discriminator";

        public static string Serialize(object info)
        {
            var serializer = new XmlSerializer(info.GetType());
            using (var stream = new MemoryStream())
            {
                // Serialize
                serializer.Serialize(stream, info);
                stream.Position = 0;

                // Open serialization output
                var xmlDocument = new XmlDocument();
                xmlDocument.Load(stream);
                // Add a "discriminador" based on the real type, to use it during deserialization
                var node = xmlDocument.CreateAttribute("", DISCRIMINATOR, "");
                node.InnerText = GetTypeFullName_With_AssemblyName_WihoutVersion(info.GetType());
                xmlDocument.DocumentElement.Attributes.Append(node);

                // return the xml with the discriminator
                return xmlDocument.OuterXml;
            }
        }

        public static object Deserialize(string xml)
        {
            var xmlDocument = new XmlDocument();
            xmlDocument.LoadXml(xml);

            // read "discriminator"
            var discriminator = xmlDocument.DocumentElement.Attributes[DISCRIMINATOR];
            var typeName = discriminator.InnerText;

            // now we know the real type based on the discriminator to deserialize
            var serializer = new XmlSerializer(Type.GetType(typeName));

            // UTF8 (instead of ASCII) so non-ASCII characters survive the round trip
            using (var stream = new MemoryStream(Encoding.UTF8.GetBytes(xml)))
            {
                return serializer.Deserialize(stream);
            }
        }

        public static string GetTypeFullName_With_AssemblyName_WihoutVersion(Type type)
        {
            return string.Format("{0},{1}", type.FullName, type.Assembly.GetName().Name);
        }
    }
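
For illustration, the XML produced for a video would look roughly like this (element and assembly names depend on your project; note the discriminator attribute appended to the root element):

<?xml version="1.0"?>
<Video xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" discriminator="MyApp.Video,MyApp">
  <Name>Sample</Name>
  <PendingTranscoding>true</PendingTranscoding>
</Video>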

When to use

This solution is quite simple and extremely easy to use. You can use it for simple cases: just copy & paste the code above and that’s it. If you want the discriminator to be serialized as a node, you can edit the code or check out my GitHub repository. In the repository I applied TDD and some patterns like Strategy and Factory to avoid duplication and keep the code simple and elegant, but it’s just for practicing a bit.

But again I warn you: use this solution for simple cases only. It won’t work if you have nested objects with inheritance, nor if you have a collection.