Copy all NuGet packages into another project easily

Not the most elegant way, but it’s quick and gets the job done. This assumes both projects are in the same solution.

First, get all the packages from the source project and save them to a variable:

$packages = Get-Package -ProjectName YourSourceProject

And then install each package in the target project:

foreach ($package in $packages) { Install-Package $package.Id -Version $package.Version -Project YourTargetProject }

IMPORTANT!!: You might need to execute the foreach loop 2 or 3 times, as some packages might fail on the first runs (hence this is probably not the most elegant way, but as I said, it’s quick and gets the job done).
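If you’d rather generate the install commands outside the Package Manager Console, you can also pull the package list straight from the source project’s packages.config (this sketch assumes a packages.config-based project; the file path and package entries below are just for illustration):

```shell
#!/bin/sh
# Sketch: turn a packages.config into Install-Package commands.
# The sample file below stands in for YourSourceProject/packages.config.
cat > /tmp/packages.config <<'EOF'
<?xml version="1.0" encoding="utf-8"?>
<packages>
  <package id="Newtonsoft.Json" version="6.0.4" />
  <package id="EntityFramework" version="6.1.1" />
</packages>
EOF

# Extract each id/version pair and print one Install-Package line per package.
sed -n 's/.*<package id="\([^"]*\)" version="\([^"]*\)".*/Install-Package \1 -Version \2 -Project YourTargetProject/p' \
  /tmp/packages.config
```

You can then paste the printed commands into the Package Manager Console.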

Automating IIS Always Running to auto-start your application

Rick Strahl has an excellent article on how to configure IIS Always Running. He explains all the manual steps you have to perform in order to do that. I’ll show you how to automate those steps. Useful, for example, for Azure Cloud Services.

Install Application Initialization Windows Feature

With this PowerShell command, you install the “Application Initialization” feature:

Install-WindowsFeature -Name Web-AppInit

Set StartMode AlwaysRunning and PreloadEnabled

With this C# code you can create a small .exe to configure application initialization.
You need to reference Microsoft.Web.Administration.dll, located in %windir%\System32\inetsrv.
There’s a NuGet package with the version from IIS 7 that Microsoft never updated, so I recommend using the version you have installed.

 using (var serverManager = new ServerManager())
 {
     var appPool = serverManager.ApplicationPools["YourAppPoolNameHere"];
     appPool.SetAttributeValue("startMode", "AlwaysRunning");
     appPool.SetAttributeValue("autoStart", true);

     var site = serverManager.Sites["YourSiteNameHere"];
     // This code assumes that the site has only one application
     site.Applications.First().SetAttributeValue("preloadEnabled", true);

     // Persist the changes to applicationHost.config
     serverManager.CommitChanges();
 }


And that’s it.
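For reference, after running the tool, the relevant fragments of %windir%\System32\inetsrv\config\applicationHost.config should look roughly like this (a sketch; the pool and site names are placeholders):

```xml
<!-- Application pool: start automatically and keep running -->
<applicationPools>
    <add name="YourAppPoolNameHere" startMode="AlwaysRunning" autoStart="true" />
</applicationPools>

<!-- Application: preload (warm up) when the pool starts -->
<site name="YourSiteNameHere" id="1">
    <application path="/" applicationPool="YourAppPoolNameHere" preloadEnabled="true" />
</site>
```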

EntityFramework DbContext RefreshAll loaded entities from Database

In my previous post I explained how to refresh all entities using the ObjectContext API.

Using the newer, nicer, and recommended DbContext API (the ObjectContext API will actually be removed in EF7), the code becomes much simpler:

public void RefreshAll()
{
     foreach (var entity in ctx.ChangeTracker.Entries())
     {
         entity.Reload();
     }
}

I must admit I haven’t tested this as much as the ObjectContext version.

Some useful Sql Azure Database Dynamic Management Views (DMVs)

I’ll just share some useful DMVs for monitoring Sql Azure Database.

Real Time Information

(it’s actually near-real time)

Run these queries against your database.

Real Time Session and Connection Information

SELECT s.reads
  , s.writes
  , st1.text
  , c.*
FROM sys.dm_exec_connections AS c
JOIN sys.dm_exec_sessions AS s
  ON c.session_id = s.session_id
OUTER APPLY sys.dm_exec_sql_text(c.most_recent_sql_handle) AS st1
ORDER BY c.num_reads DESC

Using sys.dm_exec_connections and sys.dm_exec_sessions together, you can gather some nice information about what’s happening right now in your DB.

Real Time Resource Usage Stats

select * from sys.dm_db_resource_stats

With this DMV we get information about average resource usage (% CPU, % data IO, etc.).

Real Time CPU And Query Plan Data

SELECT qs.last_execution_time
, qs.execution_count
, qs.total_worker_time as total_cpu_time
, qs.max_worker_time as max_cpu_time
, qs.total_elapsed_time
, qs.max_elapsed_time
, qs.total_logical_reads
, qs.max_logical_reads
, qs.total_physical_reads
, qs.max_physical_reads
, t.[text]
, qp.query_plan
, t.dbid
, t.objectid
, t.encrypted
, qs.plan_handle
, qs.plan_generation_num 
FROM sys.dm_exec_query_stats qs 
CROSS APPLY sys.dm_exec_sql_text(plan_handle) AS t 
CROSS APPLY sys.dm_exec_query_plan(plan_handle) AS qp 
ORDER BY qs.total_worker_time DESC

This DMV, executed at the right time, can give you a lot of high-value information to troubleshoot performance issues.

Historical telemetry data

Run these queries against the master database. They’re actually not run against your own master: they access Microsoft meta-databases. Some of these queries may take quite a long time.

Historical connection data

select * from sys.database_connection_stats


Historical resource usage

select * from sys.resource_stats

Very similar to sys.dm_db_resource_stats, but with data from your whole DB lifetime. Very helpful to identify recurrent periods of high DTU usage.
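For example, to spot the periods where the database ran hottest, you could aggregate sys.resource_stats yourself (a sketch; substitute your own database name, and the avg_* columns are the documented ones for this view):

```sql
-- Top 20 five-minute windows by average CPU over the DB lifetime.
-- Run against the master database of your server.
SELECT TOP 20
      start_time
    , avg_cpu_percent
    , avg_data_io_percent
    , avg_log_write_percent
FROM sys.resource_stats
WHERE database_name = 'YourDatabaseName'
ORDER BY avg_cpu_percent DESC
```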

Change Size and Tier of Azure Virtual Machine (VM) using PowerShell Set-AzureVMSize InstanceSize Valid Strings

I was creating a PowerShell script to scale Virtual Machines, but I couldn’t find a single place with the valid InstanceSize strings. I was especially interested in being able to change the VM tier (Basic / Standard). So I came up with this list after executing Set-AzureVMSize with wrong parameters :P

Here’s the list with the valid inputs (as of 12 November 2014):

  • ExtraSmall
  • Small
  • Medium
  • Large
  • ExtraLarge
  • A5
  • A6
  • A7
  • A8
  • A9
  • Basic_A0
  • Basic_A1
  • Basic_A2
  • Basic_A3
  • Basic_A4
  • Standard_D1
  • Standard_D2
  • Standard_D3
  • Standard_D4
  • Standard_D11
  • Standard_D12
  • Standard_D13
  • Standard_D14

Equivalence between the new and old naming:

  • ExtraSmall => Standard A0
  • Small => Standard A1
  • Medium => Standard A2
  • Large => Standard A3
  • ExtraLarge => Standard A4

If you want to scale to a Standard A3, you can use:

Get-AzureVM -ServiceName $vmName -Name $vmName | Set-AzureVMSize -InstanceSize "Large" | Update-AzureVM

As you can see, the InstanceSize string is also used to change the VM tier (Basic or Standard). If we want to switch our Standard A3 to a Basic A0, we can use:

Get-AzureVM -ServiceName $vmName -Name $vmName | Set-AzureVMSize -InstanceSize "Basic_A0" | Update-AzureVM

The InstanceSize string is a bit messy, especially for the most used instances (ExtraSmall, Small, etc.). I assume this “messiness” is to provide backward compatibility.
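Since a typo in the InstanceSize string only surfaces when Set-AzureVMSize fails, you might validate the name against the list above before building the pipeline. A small sketch (shell, just to illustrate the check; the generated pipeline is the same one shown above):

```shell
#!/bin/sh
# Valid InstanceSize strings as of 12 November 2014 (from the list above).
VALID_SIZES="ExtraSmall Small Medium Large ExtraLarge A5 A6 A7 A8 A9 \
Basic_A0 Basic_A1 Basic_A2 Basic_A3 Basic_A4 \
Standard_D1 Standard_D2 Standard_D3 Standard_D4 \
Standard_D11 Standard_D12 Standard_D13 Standard_D14"

# Print the PowerShell pipeline for a size, or fail if the size is unknown.
emit_resize_command() {
    size="$1"
    for s in $VALID_SIZES; do
        if [ "$s" = "$size" ]; then
            echo "Get-AzureVM -ServiceName \$vmName -Name \$vmName | Set-AzureVMSize -InstanceSize \"$size\" | Update-AzureVM"
            return 0
        fi
    done
    echo "Unknown InstanceSize: $size" >&2
    return 1
}

emit_resize_command "Basic_A2"
```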

Summary and opinion about #IsTDDDead

(I normally post in only one language depending on the circumstances, but this IsTDDDead thing is big enough to write about in both Spanish and English.)

There is much buzz lately about the statements and blog posts of @dhh (creator of Ruby on Rails) asking: is TDD dead?

This has led to some interesting conversations between @dhh, Kent Beck (creator of XP and TDD and an eminence in the industry) and Martin Fowler (an eminence in the industry) about it.

Martin Fowler created a page dedicated to these hangouts:

While not finished, this series of talks has been incredibly productive for me.

@dhh basically states that TDD applied in the strictest manner (not even a single line of production code should be written without the test first) leads to various problems:

• Test Induced Damage: design decisions that lead to decoupling components for the purpose of mocking. @dhh thinks that this “over” decoupling for mocking causes “worse” and less understandable code

• Developer overconfidence: the fact that, applying TDD, developers feel they no longer have to test their code in an exploratory manner

Kent Beck explains that TDD should be applied whenever possible, especially when we have clear inputs / outputs, but that there are parts of systems where it feels unnatural and it is best to proceed by creating tests afterwards. Fowler agrees with this argument and says that is what he uses.

He also explains that most Test Induced Damage problems are not caused by TDD itself, but by the method Fowler calls “mockist”, which tends to decouple (sometimes too much) in order to test in isolation. In his excellent article, Fowler explains the differences between the “classical” approach and the “mockist” one. The “classicist” tries to mock less and performs more tests that might be called “integration” tests (I will clarify this later). The “mockist” tries to test everything in isolation. The article explains the advantages of both, the major disadvantage of the “mockist” method being that the test code is more coupled to the production code, so refactoring it (without changing the functionality, obviously) can break tests that shouldn’t break. In any case, he explains that he knows excellent developers using the “mockist” method effectively.

Martin Fowler recently published an article in which he explains the definition of “Unit”. He states that he (and others) were criticized at the time for considering that a unit is not necessarily a method or class. The unit may be a set of classes, a group of methods, or whatever the team that knows the problem considers unitary. This fits the “classical” approach I mentioned before.

We can draw some conclusions from the hangouts:

• All three agree that the most important thing is to have SelfTestingCode, and TDD is one way to get there.

• They also agree that creating tests in which mocks return mocks is probably wrong and will lead to problems.

• We must be aware of the “classical” and “mockist” approaches, and of their advantages and disadvantages

• Both Kent and Fowler prefer the “classical” approach

• TDD is not dead. It’s very, very effective and positive in certain situations. Depending on the problem and the people, it can (should?) be used most of the time

• Fundamentalist TDD is what can cause problems. Sometimes, depending on the development flow, TDD is not suitable. But like everything, it depends on many factors, and there are teams that can do it without problems.

• The definition of “Unit” does not have to be a class or method. It is a team decision what the unit is.

In my personal opinion, I think @dhh is perhaps somewhat extreme, but the whole dialogue itself is very, very positive. I think he is right in the sense that “fundamentalist” TDD can lead to the problems described. We must NOT apply “fundamentalist TDD”. In general I do not do strict TDD, and whenever I don’t, I “feel bad”, thinking I’m doing something wrong. However, the words of @dhh, Beck and Fowler led me to see that it is completely natural. Yes, you should first try a TDD approach, try to state the problem you are solving as a series of inputs and expected (and unexpected) outputs. But when the “flow” of the solution does not permit it, you can proceed without TDD. The important thing is to have SelfTestingCode.

In any case, I think the most positive thing is watching 3 industry giants have their disagreements while exposing their views, reminding us once again that there are many ways to make good software. The important thing is knowing all the opinions you can in order to make good decisions. And once a certain way is taken, be consistent with it. Mixing 2 good solutions does not always lead (I repeat, NOT ALWAYS) to a better solution. Hence the importance of standards and coordination within a development team, so that it follows a unified view on the general aspects of software architecture.

These are other interesting links about all these topics:

I’m looking forward to the next hangout (the fourth).

Summary and opinions of the first 3 “Hangouts” on #IsTDDDead

There is much buzz lately because of the statements and posts of @dhh (creator of Ruby on Rails), who says that TDD has died and explains why.

This has led to a series of interesting conversations between @dhh, Kent Beck (creator of XP and TDD and an eminence in the industry) and Martin Fowler (an eminence in the industry) about it.

Martin Fowler has created a page dedicated to these talks:

Although they have not finished, this series of talks has been incredibly productive for me.

Basically, @dhh argues that TDD applied strictly (not a single line of production code without the test first) leads to various problems:

  • Test Induced Damage: design decisions aimed at decoupling everything in order to be able to mock and test everything in isolation. For @dhh, over-applying decoupling ends up producing “worse”, less understandable code
  • Developer overconfidence: the fact that, applying TDD, programmers feel they no longer have to test their code in an exploratory manner

Kent Beck explains that in a certain sense TDD was misinterpreted: in general it should be applied whenever possible, especially when we have clear inputs / outputs, but there are parts of systems where it feels unnatural and it is better to proceed by creating tests afterwards. Fowler agrees with this reasoning and says it is the one he uses.

He also explains that most Test Induced Damage problems are not caused by TDD itself, but by the method Fowler calls “mockist”, which tends to decouple (in some cases too much). In his excellent article, Fowler explains the differences between the “classical” and “mockist” approaches. The “classicist” tries to mock as little as possible and performs tests we could call closer to “integration” tests (I clarify this later). The “mockist” tries to test everything in isolation. The article explains the advantages of each, the biggest disadvantage of the “mockist” method being that test code is more coupled to production code, so when refactoring (without changing the functionality) we can break tests that really shouldn’t break. In any case, he explains that he knows excellent developers who use the “mockist” method effectively.

Martin Fowler recently published an article in which he explains the definition of “Unit”. He says that he (and others) were criticized at the time for considering that a unit is not necessarily a method or a class: it can be a set of classes, a set of methods, or whatever the team that knows the problem considers unitary. This fits the “classical” approach I mentioned before.

We can reach some conclusions:

  • ALL of them agree that the most important thing is to have SelfTestingCode, and that TDD is one way to get there.
  • They also agree that creating tests in which mocks return mocks is probably wrong and will lead to problems.
  • We must be aware of the “classical” and “mockist” approaches, their advantages and disadvantages
  • Both Kent and Fowler favor the “classical” approach
  • TDD is not dead. It is very, very effective and positive in certain situations. Depending on the problem and the people, it can (should?) be used most of the time
  • Fundamentalist TDD is what can cause problems. Sometimes, depending on the development flow, it is not convenient to use it. But like everything, it depends on many factors, and there are teams that can do it, and prefer to, without problems.
  • The definition of “Unit” does not have to be a class or a method. At each moment it is the team’s decision what the unit is.

In my personal opinion, I think @dhh is perhaps somewhat extreme, but the whole dialogue itself is very, very positive. I believe he is right that “fundamentalist” TDD can lead to the problems he describes. We must not be TDD fundamentalists. In general I do not do strict TDD, and whenever I don’t, I “feel bad”, thinking I am doing something wrong. However, the words of @dhh, Beck and Fowler lead me to see that it is completely natural. Yes, we should first try a TDD approach, try to split the problem we are solving into a series of expected (and unexpected) inputs and outputs. But when the “flow” of the solution does not allow it, we can proceed without TDD. The important thing is to have SelfTestingCode.

In any case, I think the most positive thing is seeing how 3 industry giants can have their disagreements while exposing their points of view, reminding us once again that there are many ways to make good software. The important thing is knowing all the points of view in order to make good decisions. And once a certain path is taken, I think we must be consistent, because mixing 2 good solutions does not always (I repeat, NOT ALWAYS) lead to a better solution. Hence the importance of standards and coordination within a development team, so that a unified vision is followed, especially in general aspects of the architecture.

I leave some other very interesting links about all these topics:

I eagerly await the next talk (the fourth).


Not all ViewData are created equal

Disclaimer: I believe that you should use strongly typed views over ViewData / ViewBag. But I don’t think it should be a dogma. ViewData is a tool, and you can use it if you think it’s the best tool for the job.

The other day I was doing some experiments with ViewData / ViewBag (they are the same) and I was getting some strange results.

I found out (at least I think so) what was happening.

I was putting some data in the ViewData, and then I couldn’t find it.

The problem was that not all ViewData are created equal.

If you do this in a Razor view (.cshtml)

@{
    ViewBag.SomeValue = "SomeValue";

    var someValueFromViewBag = ViewBag.SomeValue;
    var someValueFromHtmlViewBag = Html.ViewBag.SomeValue;
    var someValueFromHtmlViewContextViewBag = Html.ViewContext.ViewBag.SomeValue;
}

ONLY someValueFromViewBag WILL CONTAIN “SomeValue”. The other 2 will be null.

I had to use ILSpy (a free Reflector alternative) to understand what happened.

In the last example we were using 3 classes:

  • WebViewPage
  • HtmlHelper
  • ViewContext

Each of these has a ViewData property of type ViewDataDictionary. They are all different, independent dictionaries. If you add something to one of them, the others won’t notice the change.

There is also another class that has the ViewData property:

  • ControllerBase

It seems that ControllerBase.ViewData is used to fill the WebViewPage, HtmlHelper and ViewContext dictionaries.

So in this case:

public ActionResult Index()
{
    ViewBag.SomeValue = "FromController";
    return View();
}

And in the view:

var someValueFromViewBag = ViewBag.SomeValue;
var someValueFromHtmlViewBag = Html.ViewBag.SomeValue;
var someValueFromHtmlViewContextViewBag = Html.ViewContext.ViewBag.SomeValue;

This time, all 3 variables will contain “FromController”.

This behaviour may seem a bit strange. After reasoning about it, I think the MVC designers wanted you to isolate “inner” and “outer” data to avoid side effects.

Remove Execution Plans from the Procedure Cache in Sql Azure

Currently Sql Azure does not support

DBCC FREEPROCCACHE

as a normal SqlServer instance would. So how can we clear the execution plan cache if we suspect we have a bad plan cached, or for whatever other reason?

The solution

Use this:


DECLARE @lcl_name VARCHAR(100)
DECLARE @addcolumnSql NVARCHAR(MAX)
DECLARE @dropcolumnSql NVARCHAR(MAX)

DECLARE cur_name CURSOR FOR
SELECT name
FROM sysobjects
WHERE type = 'U'

OPEN cur_name
FETCH NEXT FROM cur_name INTO @lcl_name
WHILE @@Fetch_status = 0
BEGIN
	set @addcolumnSql = 'alter table [' + @lcl_name + '] add temp_col_to_clear_exec_plan bit'
	EXECUTE sp_executesql @addcolumnSql
	print @addcolumnSql
	set @dropcolumnSql = 'alter table [' + @lcl_name + '] drop column temp_col_to_clear_exec_plan'
	EXECUTE sp_executesql @dropcolumnSql
	print @dropcolumnSql
	FETCH NEXT FROM cur_name INTO @lcl_name
END
CLOSE cur_name
DEALLOCATE cur_name

The explanation

What this basically does is add a temporary bit column to each table in the database and then remove it (so we don’t leave trash behind). Why do this? Because a post in the official Sql Azure Team Blog states that:

“if you make changes to the to a table or view referenced by the query (ALTER TABLE and ALTER VIEW) the plan will be removed from the cache”

We must use a cursor because Sql Azure also does not support sp_MSforeachtable.

I got the cursor code to loop over all tables from this link (but I had to modify it because it didn’t do anything in Sql Azure).
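If you want to check the effect, you can look at how many plans are cached before and after running the script (a sketch; sys.dm_exec_query_stats, used earlier in this post, is available in Sql Azure):

```sql
-- Rough count of cached query plans; run before and after
-- the ALTER TABLE trick and compare.
SELECT COUNT(*) AS cached_plans
FROM sys.dm_exec_query_stats
```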

Reduce nvarchar size on an indexed column in SqlServer

You might encounter the situation where someone created a column that’s way bigger than needed.
In my case it was an nvarchar(255) where an nvarchar(50) would suffice.

The column also had an index, so you can’t “reduce” the size with a simple ALTER.

You have to:
– create a new column with the desired size
– “copy” the original values into the new column
– (set the new column to not nullable), depending on the case
– drop the original index
– create the new index on the new column
– drop the “old” column
– rename the new column so it matches the expected name

Here’s the code

alter table SampleTable add Id1 nvarchar(50)

update SampleTable set Id1 = Id

alter table SampleTable alter column Id1 nvarchar(50) not null

ALTER TABLE [dbo].[SampleTable] DROP CONSTRAINT [PK_SampleTable]

ALTER TABLE [dbo].[SampleTable] ADD CONSTRAINT [PK_SampleTable] PRIMARY KEY CLUSTERED
(
	Id1 ASC
)

alter table [SampleTable] drop column Id

exec sp_RENAME 'SampleTable.Id1', 'Id' , 'COLUMN'