Category Archives: Software

Summary and opinion about #IsTDDDead


There has been a lot of buzz lately around the statements and blog posts of @dhh (creator of Ruby on Rails) asking: is TDD dead?

This has led to some interesting conversations between @dhh, Kent Beck (creator of XP and TDD, and an eminence in the industry) and Martin Fowler (an eminence in the industry) about it.

Martin Fowler created a page dedicated to these hangouts:

While the series is not finished, these talks have been incredibly productive for me.

@dhh basically states that TDD applied in the strictest manner (not even a single line of production code should be written without a test first) leads to various problems:

• Test Induced Damage: design decisions that decouple components just so they can be mocked. @dhh thinks that this "over"-decoupling for mocking causes worse, less understandable code

• Developer overconfidence: when applying TDD, developers feel they no longer have to test their code in an exploratory manner

Kent Beck explains that TDD has to some extent been misinterpreted: it should be applied whenever possible, especially when we have clear inputs and outputs, but there are parts of systems where it feels unnatural, and there it is best to write the tests afterwards. Fowler agrees with this argument and says it is the approach he uses.

He also explains that most Test Induced Damage problems are not caused by TDD itself, but by the approach Fowler calls "mockist", which tends to decouple (sometimes too much) in order to test everything in isolation. In his excellent article, Fowler explains the differences between the "classicist" and "mockist" approaches. The classicist tries to mock less and performs tests that might be called "integration" tests (I will clarify this later). The mockist tries to test everything in isolation. The article explains the advantages of both; the major disadvantage of the mockist method is that the test code is more coupled to the production code, so refactoring it (without changing functionality, obviously) can break tests that should not break. In any case, he explains that he knows excellent developers who use the mockist method effectively.
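As a rough illustration of the difference (the classes below are hypothetical, not from the hangouts), the same behavior can be tested in a classicist style, asserting on state through a real in-memory collaborator, or in a mockist style, asserting on interactions against a hand-rolled test double:

```csharp
using System;
using System.Collections.Generic;

public interface IPriceCatalog { decimal PriceOf(string sku); }

// Real collaborator, used by the classicist test
public class InMemoryCatalog : IPriceCatalog
{
    private readonly Dictionary<string, decimal> prices = new Dictionary<string, decimal>();
    public void Add(string sku, decimal price) { prices[sku] = price; }
    public decimal PriceOf(string sku) { return prices[sku]; }
}

// Hand-rolled mock, used by the mockist test: it records interactions
public class MockCatalog : IPriceCatalog
{
    public List<string> Queried = new List<string>();
    public decimal PriceOf(string sku) { Queried.Add(sku); return 1m; }
}

public class Cart
{
    private readonly IPriceCatalog catalog;
    private readonly List<string> skus = new List<string>();
    public Cart(IPriceCatalog catalog) { this.catalog = catalog; }
    public void Add(string sku) { skus.Add(sku); }
    public decimal Total()
    {
        decimal total = 0;
        foreach (var sku in skus) total += catalog.PriceOf(sku);
        return total;
    }
}

public static class Tests
{
    public static void Main()
    {
        // Classicist: state-based assertion against a real (in-memory) collaborator
        var catalog = new InMemoryCatalog();
        catalog.Add("book", 10m);
        var cart = new Cart(catalog);
        cart.Add("book");
        if (cart.Total() != 10m) throw new Exception("classicist test failed");

        // Mockist: interaction-based assertion against the test double
        var mock = new MockCatalog();
        var cart2 = new Cart(mock);
        cart2.Add("book");
        cart2.Total();
        if (mock.Queried.Count != 1 || mock.Queried[0] != "book")
            throw new Exception("mockist test failed");
    }
}
```

Note how the mockist test knows that `Total()` calls `PriceOf` once per item; refactoring that internal detail would break it, which is exactly the coupling Fowler describes.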

Martin Fowler recently published an article in which he explains the definition of "unit". He states that he (and others) were criticized at the time for considering that a unit is not necessarily a method or a class. The unit may be a set of classes, a group of methods, or whatever the team that knows the problem considers unitary. This fits the "classicist" approach I mentioned before.

We can draw some conclusions about the hangouts:

• ALL three agree that the most important thing is to have SelfTestingCode, and that TDD is one way to get there.

• They also agree that writing tests in which mocks return mocks is probably wrong and will lead to problems.

• We must be aware of the advantages and disadvantages of the "classicist" and "mockist" approaches

• Both Kent and Fowler prefer the "classicist" approach

• TDD is not dead. It is very, very effective and positive in certain situations. Depending on the problem and the people, it can (should?) be used most of the time

• Fundamentalist TDD is what can cause problems. Sometimes, depending on the development flow, TDD is not suitable. But, as with everything, it depends on many factors, and there are teams that can do it without problems.

• The definition of "unit" does not have to be a class or a method. What the unit is, is a team decision.

In my personal opinion, @dhh is perhaps somewhat extreme, but the dialogue itself is very, very positive. I think he is right in the sense that "fundamentalist" TDD can lead to the problems described. We must NOT apply fundamentalist TDD. In general I do not practice strict TDD, and whenever I didn't, I used to feel bad, thinking I was doing something wrong. However, the words of @dhh, Beck and Fowler lead me to see that it is completely natural. Yes, you should first try a TDD approach, trying to break the problem you are solving into a series of inputs and expected (and unexpected) outputs. But when the "flow" of the solution does not permit it, we can proceed without TDD. The important thing is to have SelfTestingCode.

In any case, I think the most positive thing is watching three industry giants air their disagreements while exposing their views, reminding us once again that there are many ways to make good software. The important thing is knowing all the opinions you can in order to make good decisions. And once a certain path is taken, be consistent with it. Mixing two good solutions does not always (I repeat, NOT ALWAYS) lead to a better solution. Hence the importance of standards and coordination within a development team, so that it follows a unified view on the general aspects of software architecture.

These are other interesting links about all these topics:

I'm looking forward to the next hangout (the fourth).




Serializing and deserializing inherited types with Json, anything you want

In my previous post I created a very simple, home-made class to serialize / deserialize an object without needing to know its real type, so you can really take advantage of polymorphism.

But if you want a much more powerful solution that also lets you deserialize lists and complex object graphs, I strongly recommend the excellent Newtonsoft Json.NET.

Get it with NuGet: Install-Package Newtonsoft.Json

If you really think about it, why not serialize to JSON? It's a really simple, powerful, lightweight format, and it has the advantage that you can also expose it and read it from JavaScript without any conversion. Here are a few good reasons to use Json over Xml

So here’s the code, using the library.

Test Model

Here's the test model. It's pretty lame, I know…

public class Resource
{
    public string ResourceProperty { get; set; }
}

public class Video : Resource
{
    public string VideoProperty { get; set; }
}

First sample. Base class

Just serialize and deserialize an object, no big deal

// Serialize and deserialize, no big deal
public void BaseClassTest()
{
    var resource = new Resource() { ResourceProperty = "Hola" };

    var resourceJSon = JsonConvert.SerializeObject(resource);
    var deserializedJson = JsonConvert.DeserializeObject<Resource>(resourceJSon);

    Assert.AreEqual(deserializedJson.ResourceProperty, resource.ResourceProperty);
}

Second sample. Inherited class

Here’s the cool stuff:

// Here is the cool stuff. Serialize a derived class, and deserialize it as the base class
// without losing information
public void InheritedClassTest()
{
    var video = new Video() { ResourceProperty = "Hola", VideoProperty = "Video" };

    // Here is the trick!!
    // We tell the serializer to save the real type of each object
    var settings = new JsonSerializerSettings()
    {
        TypeNameHandling = TypeNameHandling.Objects
    };
    var resourceJSon = JsonConvert.SerializeObject(video, settings);

    // We must deserialize with the same settings
    var deserializedJson = JsonConvert.DeserializeObject<Resource>(resourceJSon, settings);

    Assert.AreEqual(deserializedJson.ResourceProperty, video.ResourceProperty);
    // We can cast to Video with no problem
    var castedVideo = deserializedJson as Video;
    Assert.AreEqual(castedVideo.VideoProperty, video.VideoProperty);
    // sorry, no polymorphism in this sample :P
}

Internally, what the serializer actually does when using TypeNameHandling.Objects is to save the type of the object you are serializing, so it can "infer" the type when deserializing. Just as I did in my previous article. (I swear I didn't copy this!!) :P
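For reference, with TypeNameHandling.Objects the serialized Video from the sample above looks roughly like this (the namespace and assembly name depend on your project, so treat "MyProject" as a placeholder, and the property order may vary):

```json
{
  "$type": "MyProject.Video, MyProject",
  "VideoProperty": "Video",
  "ResourceProperty": "Hola"
}
```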

Third sample. List with base and derived class

And here’s the really cool stuff. You can also serialize a list and deserialize it without needing to know the real type of each element.

// And this is really cool stuff
// You can serialize, for example, a list
public void List_Test()
{
    var resource = new Resource() { ResourceProperty = "Resource" };
    var video = new Video() { ResourceProperty = "Video", VideoProperty = "VideoMP4" };

    var list = new List<Resource>() { resource, video };

    // Again, the important part is the settings
    var settings = new JsonSerializerSettings()
    {
        TypeNameHandling = TypeNameHandling.Objects
    };

    var serializedList = JsonConvert.SerializeObject(list, settings);

    var deserializedList = JsonConvert.DeserializeObject<List<Resource>>(serializedList, settings);

    // And we recover the information with NO data loss
    Assert.AreEqual("Resource", deserializedList[0].ResourceProperty);
    Assert.AreEqual("VideoMP4", ((Video)deserializedList[1]).VideoProperty);
}

Serializing and Deserializing inherited types in xml, simple objects

Hi there, I'm back. I'll try to post much more often now :)

The problem

Deserializing a derived class from Xml forces you to know the real type. This can be a big problem if you are trying to use polymorphism to solve a problem. If you need to know the concrete type, the value of polymorphism is completely lost.

Consider this example: you have a Resource class with an Update method. In the case of a video, it will create transcodings for multiple browsers. In other cases, for example an image, it will optimize it for the web and perhaps create a thumbnail. A simple class diagram might look like this:
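The hierarchy the diagram describes can be sketched in C# like this (the member names are illustrative, based on the description above):

```csharp
public abstract class Resource
{
    public string Name { get; set; }

    // Each resource knows how to update itself: this is where polymorphism pays off
    public abstract void Update();
}

public class Video : Resource
{
    public bool PendingTranscoding { get; set; }

    public override void Update()
    {
        // Create transcodings for multiple browsers
        PendingTranscoding = false;
    }
}

public class Image : Resource
{
    public override void Update()
    {
        // Optimize for the web, perhaps create a thumbnail
    }
}
```

Calling `resource.Update()` on a deserialized object only works as intended if deserialization gives us back the real derived type, which is exactly the problem below.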

The solution

Theory (really short)

The deserializer needs to know the real type in order to deserialize. I propose saving this information, a "discriminator", during serialization, so it can be used later.

Client code

Let me show you the client code first; it's super easy. The class is called InheritanceSerializer.

var originalVideo = new Video() { Name = "Sample", PendingTranscoding = true };
// First we serialize the video
var xml = InheritanceSerializer.Serialize(originalVideo);

// Then we deserialize it, no problem. It WILL keep the video information
var deserializedVideo = InheritanceSerializer.Deserialize(xml) as Resource;

// Then we can perform any operation

// We COULD cast it if we want!!!
var deserializedVideo2 = deserializedVideo as Video;

Solution code (finally!)

So here it is. It's actually really simple:

    /// <summary>
    /// Allows serializing / deserializing objects and inheritance hierarchies
    /// WITHOUT knowing the type to serialize.
    /// When serializing, it adds a "discriminator" based on the real type.
    /// When deserializing, it "infers" the real type from the discriminator saved during serialization.
    /// </summary>
    public class InheritanceSerializer
    {
        private const string DISCRIMINATOR = "discriminator";

        public static string Serialize(object info)
        {
            var serializer = new XmlSerializer(info.GetType());
            using (var stream = new MemoryStream())
            {
                // Serialize
                serializer.Serialize(stream, info);
                stream.Position = 0;

                // Open the serialization output
                var xmlDocument = new XmlDocument();
                xmlDocument.Load(stream);

                // Add a "discriminator" attribute based on the real type, to use during deserialization
                var node = xmlDocument.CreateAttribute("", DISCRIMINATOR, "");
                node.InnerText = GetTypeFullName_With_AssemblyName_WithoutVersion(info.GetType());
                xmlDocument.DocumentElement.Attributes.Append(node);

                // Return the xml with the discriminator
                return xmlDocument.OuterXml;
            }
        }

        public static object Deserialize(string xml)
        {
            var xmlDocument = new XmlDocument();
            xmlDocument.LoadXml(xml);

            // Read the "discriminator"
            var discriminator = xmlDocument.DocumentElement.Attributes[DISCRIMINATOR];
            var typeName = discriminator.InnerText;

            // Now we know the real type to deserialize, based on the discriminator
            var serializer = new XmlSerializer(Type.GetType(typeName));

            using (var stream = new MemoryStream(Encoding.ASCII.GetBytes(xml)))
            {
                return serializer.Deserialize(stream);
            }
        }

        public static string GetTypeFullName_With_AssemblyName_WithoutVersion(Type type)
        {
            return string.Format("{0},{1}", type.FullName, type.Assembly.GetName().Name);
        }
    }

When to use

This solution is quite simple and extremely easy to use. You can use it for simple cases: just copy & paste the code above and that's it. If you want the discriminator to be serialized as a node, you can edit the code or check out my Github repository. In the repository I applied TDD and some patterns like Strategy and Factory to avoid duplication and keep the code simple and elegant, but it's just for practicing a bit.

But again, I warn you: use this solution for simple cases. It won't work if you have nested objects with inheritance, nor if you have a collection.

Entity Framework 4, execute a writing stored procedure inside a TransactionScope

Suppose you have a stored procedure that performs multiple inserts, updates or deletes. You may do this for many reasons, probably performance, to avoid reading a large object graph needlessly. In my case I needed to delete a large object graph. So I wrote my powerful stored procedure and created a function import. Here's a really simplified example:

using (var tx = new TransactionScope())
{
    using (var ctx = new EntityContext())
    {
        // Call the function import for the stored procedure
        // (DeleteLargeGraph is an illustrative name)
        ctx.DeleteLargeGraph(entityId);
        tx.Complete();
    }
}
Seems OK, doesn't it? Well, you will get a TransactionAbortedException: "The transaction operation cannot be performed because there are pending requests working on this transaction."

Why? Function imports in EF are designed to perform operations and retrieve values. EF currently lacks proper support for function imports with no return type (yes, there is a "None" return type option in the function import form; I encourage you to try it). My stored procedure was prepared for this: I return the rows affected.

Anyway, what can we do? We must evaluate the result, even if we don’t need it:

using (var tx = new TransactionScope())
{
    using (var ctx = new EntityContext())
    {
        // Evaluating the result does the magic!!
        // (DeleteLargeGraph is an illustrative name; the stored procedure returns the rows affected)
        var rowsAffected = ctx.DeleteLargeGraph(entityId).First();
        tx.Complete();
    }
}

That's it. A bit tricky, yes, but it works. I realised afterwards that in my case the right solution would probably have been to map the stored procedure to my entity's delete operation, to remove the entity and the entire related object graph. See Map Modification Functions to Stored Procedures
Many thanks to Danny Simmons for his answers.

C# .NET Immutable properties for Entity Framework, serializable classes, etc

An easy way to make properties immutable is to have private setters and pass the data as constructor parameters. Because Entity Framework needs a parameterless constructor, you need to provide one, but you can make it private:

public class Immutable
{
    public string ImmutableProperty { get; private set; }

    // Parameterless constructor required by the Entity Framework serializer (and many other serializers too)
    private Immutable() { }

    public Immutable(string aData)
    {
        ImmutableProperty = aData;
    }
}

This is the easiest way to achieve immutability using Entity Framework, and it makes it possible to create Value Objects (see Domain-Driven Design by Eric Evans).
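As a sketch of why this matters: a Value Object built with this pattern can only be replaced, never modified, and can define equality by value. The Money class below is a hypothetical example of mine, not something from Entity Framework or the book:

```csharp
public class Money
{
    public decimal Amount { get; private set; }
    public string Currency { get; private set; }

    // Parameterless constructor for serializers, hidden from client code
    private Money() { }

    public Money(decimal amount, string currency)
    {
        Amount = amount;
        Currency = currency;
    }

    // Value Objects compare by value, not by identity
    public override bool Equals(object obj)
    {
        var other = obj as Money;
        return other != null && Amount == other.Amount && Currency == other.Currency;
    }

    public override int GetHashCode()
    {
        return Amount.GetHashCode() ^ (Currency ?? "").GetHashCode();
    }

    // "Modification" returns a new instance instead of mutating this one
    public Money Add(Money other)
    {
        return new Money(Amount + other.Amount, Currency);
    }
}
```

Two Money instances with the same amount and currency are equal, and nothing outside the class can change an existing instance; the only way to "change" a value is to create a new one.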

Ingredients of effective modeling

  1. Bind the model and the implementation. The initial prototype must establish this link, and it must be maintained through all subsequent iterations
  2. Cultivate a language based on the model. Both the domain expert and the developer must be able to describe the model without ambiguity, understood without any kind of translation
  3. Develop a knowledge-rich model. Objects have behavior and enforce rules. The model is not just a data schema; it must solve complex problems and capture knowledge of various kinds
  4. Distill the model. Important concepts are introduced and the model becomes more complex. Just as important as adding important concepts is removing concepts that turn out not to be useful or central. When an unneeded concept is found tied to a needed one, a new model emerges that distinguishes the essential concepts, and the previous model can be discarded
  5. Brainstorming and experimentation. The language, combined with diagrams and a brainstorming attitude, turns discussions into laboratories for the model, where dozens of experiments and variations can be exercised, tried and judged. As the team walks through scenarios, the spoken expressions quickly test the viability of the model, since the ear can detect either the clarity and ease, or the awkwardness, of an expression

Extracted from Domain-Driven Design by Eric Evans
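Point 3, a model that enforces rules rather than being a bare data schema, can be sketched like this (the Account class and its overdraft rule are a hypothetical example of mine, not taken from the book):

```csharp
using System;

// A knowledge-rich domain object: it has behavior and enforces its rules,
// instead of being a data bag that outside code mutates freely
public class Account
{
    public decimal Balance { get; private set; }

    public void Deposit(decimal amount)
    {
        if (amount <= 0) throw new ArgumentException("Deposit must be positive");
        Balance += amount;
    }

    public void Withdraw(decimal amount)
    {
        if (amount <= 0) throw new ArgumentException("Withdrawal must be positive");
        // The business rule lives in the model itself: no overdrafts
        if (amount > Balance) throw new InvalidOperationException("Insufficient funds");
        Balance -= amount;
    }
}
```

Because the setter is private and the rule lives inside Withdraw, no client code can put an Account into an invalid state; the knowledge about overdrafts is captured in the model, not scattered across callers.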