Change Size and Tier of Azure Virtual Machine (VM) using PowerShell Set-AzureVMSize InstanceSize Valid Strings

I was creating a PowerShell script to scale Virtual Machines, but I couldn’t find a single place with the valid InstanceSize strings. I was especially interested in being able to change the VM tier (Basic / Standard). So I came up with this list, after executing Set-AzureVMSize with wrong parameters :P

Here’s the list of valid inputs (as of 12 November 2014):

  • ExtraSmall
  • Small
  • Medium
  • Large
  • ExtraLarge
  • A5
  • A6
  • A7
  • A8
  • A9
  • Basic_A0
  • Basic_A1
  • Basic_A2
  • Basic_A3
  • Basic_A4
  • Standard_D1
  • Standard_D2
  • Standard_D3
  • Standard_D4
  • Standard_D11
  • Standard_D12
  • Standard_D13
  • Standard_D14

Equivalence between the new and old naming:

  • ExtraSmall => Standard A0
  • Small => Standard A1
  • Medium => Standard A2
  • Large => Standard A3
  • ExtraLarge => Standard A4

If you want to scale to a Standard A3, you can use:

Get-AzureVM -ServiceName $vmName -Name $vmName | Set-AzureVMSize -InstanceSize "Large" | Update-AzureVM

As you can see, the InstanceSize string is also used to change the VM tier (Basic or Standard). If we want to switch our Standard A3 to a Basic A0, we can use:

Get-AzureVM -ServiceName $vmName -Name $vmName | Set-AzureVMSize -InstanceSize "Basic_A0" | Update-AzureVM

The InstanceSize string is a bit messy, especially for the most used instances (Extra Small, Small, etc.). I assume that this “messiness” is to provide backward compatibility.
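The whole operation can be wrapped in a small reusable function that validates the size string before calling Azure. This is just a sketch built on the cmdlets shown above; the function name and parameter layout are my own invention, not part of any module:

```powershell
# Sketch: resize (and re-tier) a classic Azure VM.
# Function and parameter names are illustrative, not an official API.
function Set-MyAzureVMInstanceSize {
    param(
        [Parameter(Mandatory = $true)] [string] $ServiceName,
        [Parameter(Mandatory = $true)] [string] $Name,
        # Restrict input to the valid strings listed above (as of November 2014)
        [Parameter(Mandatory = $true)]
        [ValidateSet("ExtraSmall", "Small", "Medium", "Large", "ExtraLarge",
                     "A5", "A6", "A7", "A8", "A9",
                     "Basic_A0", "Basic_A1", "Basic_A2", "Basic_A3", "Basic_A4",
                     "Standard_D1", "Standard_D2", "Standard_D3", "Standard_D4",
                     "Standard_D11", "Standard_D12", "Standard_D13", "Standard_D14")]
        [string] $InstanceSize
    )

    Get-AzureVM -ServiceName $ServiceName -Name $Name |
        Set-AzureVMSize -InstanceSize $InstanceSize |
        Update-AzureVM
}
```

With ValidateSet, a typo in the size string fails fast in PowerShell instead of reaching the Azure API.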

Summary and opinion about #IsTDDDead

(I normally post in only one language depending on the circumstances, but this IsTDDDead thing is big enough to write about in both Spanish and English)

There has been much buzz lately around the statements and blog posts of @dhh (creator of Ruby on Rails) asking: is TDD dead?

This has led to some interesting conversations about it between @dhh, Kent Beck (creator of XP and TDD and an eminence in the industry) and Martin Fowler (an eminence in the industry).

Martin Fowler created a page dedicated to these hangouts:

While the series is not finished, these talks have been incredibly productive for me.

@dhh basically states that TDD applied in the strictest manner (not even a single line of production code should be written without the test first) leads to various problems:

• Test Induced Damage: design decisions that lead to decoupling components for the purpose of mocking. @dhh thinks that this “over” decoupling for mocking causes “worse” and less understandable code

• Developer overconfidence: applying TDD, developers feel they no longer have to test their code in an exploratory manner

Kent Beck explains that TDD should be applied whenever possible, especially when we have clear inputs / outputs, but that there are parts of systems where it feels unnatural and it is best to proceed by creating tests afterwards. Fowler agrees with this argument and says that is the approach he uses.

He also explains that most Test Induced Damage problems are not caused by TDD itself, but by the approach Fowler calls “mockist”, which tends to decouple (sometimes too much) in order to test in isolation. In his excellent article Fowler explains the differences between the “classicist” and “mockist” approaches. The “classicist” tries to mock less and performs tests that might be called “integration” tests (I will clarify this later). The “mockist” tries to test everything in isolation. The article explains the advantages of both, the major disadvantage of the “mockist” method being that the test code is more coupled to the production code, so refactoring it (without changing the functionality, obviously) can break tests that should not break. In any case, he explains that he knows excellent developers who use the “mockist” method effectively.
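To make the distinction concrete, here is a minimal sketch; the `PriceCalculator` / `ITaxProvider` names are invented for illustration. The classicist test exercises the real collaborator, while the mockist test replaces it with a hand-rolled test double (a stub here; a full mockist test would also verify interactions):

```csharp
using System.Diagnostics;

public interface ITaxProvider { decimal TaxRateFor(string country); }

// Real collaborator
public class SpanishTaxProvider : ITaxProvider
{
    public decimal TaxRateFor(string country) => country == "ES" ? 0.21m : 0m;
}

// System under test
public class PriceCalculator
{
    private readonly ITaxProvider taxes;
    public PriceCalculator(ITaxProvider taxes) { this.taxes = taxes; }
    public decimal FinalPrice(decimal net, string country) =>
        net * (1 + taxes.TaxRateFor(country));
}

// Hand-rolled double used by the mockist-style test
public class FakeTaxProvider : ITaxProvider
{
    public decimal TaxRateFor(string country) => 0.10m; // canned answer
}

public static class Tests
{
    public static void Main()
    {
        // Classicist: use the real collaborator, assert on the observable result
        var classicist = new PriceCalculator(new SpanishTaxProvider());
        Debug.Assert(classicist.FinalPrice(100m, "ES") == 121m);

        // Mockist: isolate PriceCalculator from its collaborator
        var mockist = new PriceCalculator(new FakeTaxProvider());
        Debug.Assert(mockist.FinalPrice(100m, "ES") == 110m);
    }
}
```

Note how the mockist test knows about `ITaxProvider` and its canned rate: that is exactly the extra coupling between test and production code that Fowler describes.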

Martin Fowler recently published an article in which he explains the definition of “Unit”. He states that he (and others) were criticized at the time for considering that a unit is not necessarily a method or a class. The unit may be a set of classes, a group of methods, or whatever the team that knows the problem considers unitary. This fits the “classicist” approach I mentioned before.

We can draw some conclusions from the hangouts:

• ALL three agree that the most important thing is to have SelfTestingCode, and that TDD is a way to get there.

• They also agree that creating tests in which mocks return mocks is probably wrong and will lead to problems.

• We must be aware of the advantages and disadvantages of the “classicist” and “mockist” approaches

• Both Kent and Fowler prefer the “classicist” approach

• TDD is not dead. It’s very, very effective and positive in certain situations. Depending on the problem and the people, it can (should?) be used most of the time

• Fundamentalist TDD is what can cause problems. Sometimes, depending on the development flow, TDD is not suitable. But like everything, it depends on many factors, and there are teams that can do it without problems.

• The definition of “unit” does not have to be a class or a method. It is a team decision what the unit is.

In my personal opinion, I think @dhh is perhaps somewhat extreme, but the dialogue itself is very, very positive. I think he is right in the sense that “fundamentalist” TDD can lead to the problems described. We must NOT apply “fundamentalist TDD”. In general I do not do strict TDD, and whenever I don’t, I “feel bad”, thinking I’m doing something wrong. However, the words of @dhh, Beck and Fowler lead me to see that it is completely natural. Yes, you should first try a TDD approach, framing the problem you are solving as a series of inputs and expected (and unexpected) outputs. But when the “flow” of the solution does not permit it, we can proceed without TDD. The important thing is to have SelfTestingCode.

In any case, I think the most positive thing is watching 3 industry giants have their disagreements while exposing their views, reminding us once again that there are many ways to make good software. The important thing is knowing all the opinions you can in order to make good decisions. And once a certain path is taken, be consistent with it. Mixing 2 good solutions does not always lead (I repeat, NOT ALWAYS) to a better solution. Hence the importance of standards and coordination within a development team, so that it follows a unified vision on the general aspects of software architecture.

Here are some other interesting links about all these topics:

I’m looking forward to the next hangout (the fourth).



Not all ViewData are created equal

Disclaimer: I believe you should use strongly typed views over ViewData / ViewBag. But I don’t think it should be a dogma. ViewData is a tool, and you can use it if you think it’s the best tool for the job.

The other day I was doing some experiments with ViewData / ViewBag (they are the same thing) and I was getting some strange results.

I found out (at least I think so) what was happening.

I was putting some data in the ViewData, and then I couldn’t find it.

The problem was that not all ViewData are created equal.

If you do this in a Razor view (.cshtml):

ViewBag.SomeValue = "SomeValue";

var someValueFromViewBag = ViewBag.SomeValue;
var someValueFromHtmlViewBag = Html.ViewBag.SomeValue;
var someValueFromHtmlViewContextViewBag = Html.ViewContext.ViewBag.SomeValue;

ONLY someValueFromViewBag WILL CONTAIN “SomeValue”. The other 2 will be null.

I had to use ILSpy (a free alternative to Reflector) to understand what was happening.

In the last example we were using 3 classes:

  • WebViewPage
  • HtmlHelper
  • ViewContext

Each of these has a ViewData property of type ViewDataDictionary. They are all different, independent dictionaries. If you add something to one of them, the others won’t notice the change.

There is also another class that has a ViewData property:

  • ControllerBase

It seems that ControllerBase.ViewData is used to fill the WebViewPage, HtmlHelper and ViewContext dictionaries.
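A minimal sketch of why this behaves the way it does, assuming the dictionaries are filled via ViewDataDictionary’s copy constructor (this is my reading of the decompiled code, not an official statement):

```csharp
using System.Web.Mvc;

public static class ViewDataCopySketch
{
    public static void Demo()
    {
        // The controller's dictionary
        var controllerViewData = new ViewDataDictionary();
        controllerViewData["SomeValue"] = "FromController";

        // Roughly what happens when the view is rendered: the entries are
        // copied at construction time into a NEW, independent dictionary
        var viewViewData = new ViewDataDictionary(controllerViewData);
        // viewViewData["SomeValue"] now holds "FromController"

        // ...but later additions do not propagate between the two
        viewViewData["AddedInView"] = "x";
        // controllerViewData.ContainsKey("AddedInView") is false
    }
}
```

This would explain both results: values set in the controller are copied everywhere, while values set afterwards in one dictionary stay local to it.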

So in this case:

public ActionResult Index()
  ViewBag.SomeValue = "FromController";
  return View();

var someValueFromViewBag = ViewBag.SomeValue;
var someValueFromHtmlViewBag = Html.ViewBag.SomeValue;
var someValueFromHtmlViewContextViewBag = Html.ViewContext.ViewBag.SomeValue

This behaviour may seem a bit strange. After reasoning about it, I think the MVC designers wanted to isolate “inner” and “outer” data to avoid side effects.

Remove Execution Plans from the Procedure Cache in Sql Azure

Currently, Sql Azure does not support:

DBCC FREEPROCCACHE

as a normal SqlServer instance would. So how can we clear the execution plan cache if we suspect that we have a bad one cached, or for whatever other reason?

The solution

Use this:


DECLARE @lcl_name VARCHAR(100)
DECLARE @addcolumnSql NVARCHAR(MAX)
DECLARE @dropcolumnSql NVARCHAR(MAX)

DECLARE cur_name CURSOR FOR
SELECT name
FROM sysobjects
WHERE type = 'U'

OPEN cur_name
FETCH NEXT FROM cur_name INTO @lcl_name
WHILE @@Fetch_status = 0
BEGIN
	SET @addcolumnSql = 'alter table [' + @lcl_name + '] add temp_col_to_clear_exec_plan bit'
	EXECUTE sp_executesql @addcolumnSql
	PRINT @addcolumnSql
	SET @dropcolumnSql = 'alter table [' + @lcl_name + '] drop column temp_col_to_clear_exec_plan'
	EXECUTE sp_executesql @dropcolumnSql
	PRINT @dropcolumnSql
	FETCH NEXT FROM cur_name INTO @lcl_name
END
CLOSE cur_name
DEALLOCATE cur_name

The explanation

What this basically does is add a temporary bit column to each table in the database and then remove it (so we don’t leave any garbage behind). Why do this? Because there is a post in the official Sql Azure Team Blog that states:

“if you make changes to a table or view referenced by the query (ALTER TABLE and ALTER VIEW) the plan will be removed from the cache”

We must use a cursor because Sql Azure also does not support sp_MSforeachtable.
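For comparison, on a regular SqlServer instance the same trick can be done without a cursor, using the undocumented sp_MSforeachtable procedure (the ? placeholder is replaced by each table name):

```sql
-- Regular SqlServer only; NOT supported in Sql Azure.
-- Two passes: first add the temporary column to every table, then drop it.
EXEC sp_MSforeachtable 'ALTER TABLE ? ADD temp_col_to_clear_exec_plan bit'
EXEC sp_MSforeachtable 'ALTER TABLE ? DROP COLUMN temp_col_to_clear_exec_plan'
```

Keep in mind sp_MSforeachtable is undocumented, so it may change between SqlServer versions.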

I got the cursor code that loops over all the tables from this link (but I had to modify it because it didn’t do anything in Sql Azure).

Reduce nvarchar size on an indexed column in SqlServer

You might encounter a situation where someone created an indexed column that’s way bigger than needed.
In my case it was an nvarchar(255) where an nvarchar(50) would suffice.

The column also had an index, so you can’t “reduce” the size with a simple ALTER.

You have to:
– create a new column with the desired size
– “copy” the original values in the new column
– set the new column not nullable (depends on the case)
– drop the original index
– create the new index on the new column
– drop the “old” column
– rename the new column so it matches the expected name

Here’s the code:

-- create a new column with the desired size
alter table SampleTable add Id1 nvarchar(50)

-- "copy" the original values into the new column
update SampleTable set Id1 = Id

-- set the new column not nullable (depends on the case)
alter table SampleTable alter column Id1 nvarchar(50) not null

-- drop the original index (a primary key constraint in this example)
ALTER TABLE [dbo].[SampleTable] DROP CONSTRAINT [PK_SampleTable]

-- create the new index on the new column
ALTER TABLE [dbo].[SampleTable] ADD CONSTRAINT [PK_SampleTable] PRIMARY KEY CLUSTERED
(
	Id1 ASC
)

-- drop the "old" column
alter table [SampleTable] drop column Id

-- rename the new column so it matches the expected name
exec sp_RENAME 'SampleTable.Id1', 'Id', 'COLUMN'
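To double-check the result, you can inspect the column metadata afterwards. Note that sys.columns reports max_length in bytes, so an nvarchar(50) shows up as 100:

```sql
-- Verify the new size and nullability of the rebuilt column
SELECT c.name, t.name AS type_name, c.max_length, c.is_nullable
FROM sys.columns c
JOIN sys.types t ON t.user_type_id = c.user_type_id
WHERE c.object_id = OBJECT_ID('SampleTable')
```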

EntityFramework RefreshAll loaded entities from Database

This sample is about Entity Framework 4 (ObjectContext). I’ll show in a future post how to do this with the EF5 DbContext; it should be much easier.

With the default behaviour, once we get an entity from the database, we will keep seeing the old values if it is changed in the background, even if we query the entity again.

The easy way to solve this is to use very short-lived contexts and “reload” the entity in another context. If that is not possible (e.g. you are using the context on a per-request basis, or it is controlled by an IoC container), you have some options:
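For reference, the short-lived context approach looks roughly like this (`BlogModelContainer` is a hypothetical EF4 ObjectContext name used for illustration):

```csharp
using System.Linq;

// Sketch: each unit of work gets its own short-lived context, so entities
// are always materialized with the current database values.
using (var context = new BlogModelContainer())
{
    var blogPost = context.BlogPosts.SingleOrDefault(b => b.Id == 1);
    // work with blogPost here; it reflects the database state at query time
}
```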

Refresh all loaded entities from Database

This code is simple and can be very helpful, but you must be aware that it will reload ABSOLUTELY ALL THE OBJECTS YOU HAVE ALREADY QUERIED. If you have queried many entities, it will have a negative performance impact.

public void RefreshAll()
{
     // Get all objects in the state manager that have an EntityKey
     // (context.Refresh will throw an exception otherwise)
     var refreshableObjects = (from entry in context.ObjectStateManager.GetObjectStateEntries(
                                                EntityState.Deleted
                                                | EntityState.Modified
                                                | EntityState.Unchanged)
                                       where entry.EntityKey != null
                                       select entry.Entity);

     context.Refresh(RefreshMode.StoreWins, refreshableObjects);
}


Pros:

  • Very easy to use, once you have the code above :)


Cons:

  • Potentially, it could execute A LOT of queries in order to refresh a context that “used” many queries

Refresh specific entities

Let’s assume we have a Blog application for the example.

// First we need to add the objects to our refresh list
var objectsToRefresh = new List<System.Object>();

objectsToRefresh.Add(blogPost);
foreach (var comment in blogPost.Comments)
	objectsToRefresh.Add(comment);
// etc.
// Here your application-specific code ends. Now you have to:

// Clean nulls and repeated objects (context.Refresh will throw an exception otherwise)
var noNullsAndRepeateds = objectsToRefresh.Where(o => o != null).Distinct().ToList();

// Get only the entities that are being tracked by the ObjectStateManager and have an EntityKey
// (context.Refresh will throw an exception otherwise)
var finalList = (from e in entityContext.ObjectStateManager.GetObjectStateEntries(
                                               EntityState.Deleted
                                               | EntityState.Modified
                                               | EntityState.Unchanged)
		where e.EntityKey != null &&
		      noNullsAndRepeateds.Contains(e.Entity)
		select e.Entity).ToList();

entityContext.Refresh(RefreshMode.StoreWins, finalList);


Pros:

  • Granular queries
  • Easy to use, once you have the code above :)


Cons:

  • The problem arises if you have an aggregate or a complex object graph: you need to “craft” code to refresh each aggregate.
  • You have to manually “inform” which objects to refresh

Use MergeOptions
There are many resources on the web on how to do this:

var query = context.BlogPosts;
query.MergeOption = MergeOption.OverwriteChanges;
var blogPost = query.SingleOrDefault(b => b.Id == 1);


Pros:

  • A good and recommended approach.
  • Granular queries


Cons:

  • Again, if you have an aggregate or a complex object graph, you’ve probably had to query it using multiple queries (e.g. because you got better results when tuning and profiling). If this is the case, you would need to specify the MergeOption for each query, and that could be hard work
  • You have to manually refresh only the affected objects
  • The way of specifying it in EF4 is a bit counter-intuitive