Sunday, June 21, 2015

Final Impressions of NDC

NDC was a really awesome conference, and I got to meet some genuinely interesting people there. I was satisfied with most of the talks I attended and picked up some new influences.

As usual, this conference struggles with the imbalance between the number of male and female speakers. Some will argue that this is just how things are, since the conference's call for papers is open to everybody. I don't think this is an excuse: we ourselves are responsible for building the society we want to live in, and that might mean initially putting some effort into creating equality. There was a code of conduct for the conference, which is a step in the right direction.

The venue at NDC was really nice, the food was great, and some really excellent coffee was available. All in all I’m really happy I got the chance to attend. All the talks will be available on Vimeo.

Friday, June 19, 2015

Mob Programming – A Whole Team Approach by Woody Zuill

Another concept that I’m completely new to; NDC is a fantastic source for getting interested in new things.

Mr. Zuill starts the talk by describing what mob programming is: all the brilliant minds working on the same thing, at the same time, in the same space, on the same computer.

At any given time there is one driver who sits at the computer, while the rest of the team acts as navigators. The driver types what the navigators explain. Anyone can take a break when they want to. If the driver wants to express an idea, they need to switch roles first. The driver is rotated on a fixed cycle.

The benefits are instant review of both code and design, good learning opportunities, and team resiliency. To work this way, focus should be on individuals and interactions. The team should exercise kindness, consideration, and respect, and make each other aware when they fail in these areas.

Productivity remains high due to low latency in communication and almost no technical debt.

Mob programming certainly seems like an interesting technique; excellent talk. There is a website for mob programming.



How Do You Scale a Logging Infrastructure to Accept a Billion Messages a Day by Paul Stack

This seminar describes how Paul Stack replaced a legacy solution that used SQL for storing logs with a more scalable one.

To replace the SQL solution they went with the ELK Stack, which is made up of Elasticsearch, Logstash, and Kibana. In the beginning a developer took a single JAR file and ran it; this worked well but had some stability issues.

In the next iteration Redis was used as a transport medium for the logs before they were pushed to Elasticsearch. Due to problems with Redis overflowing, they moved to Apache Kafka, a high-throughput distributed messaging system backed by ZooKeeper, which is basically a key-value store.

In the following iteration they tried to improve the server structure; however, this mostly increased the complexity of the system without improving it. The system could now handle the peak load of 12 billion messages per day.

Mr. Stack recommends reading the Jepsen database series when choosing technologies. A lesson learned from the project is to not reinvent the wheel; the simple solution would have been to buy Splunk. However, during the development they learned a lot about their system.

I’m not that deep into DevOps, but I found this talk very informative.

Thursday, June 18, 2015

Impressions day 2 of NDC

James Mickens closed out day two with his awesome standup routine Not Even Close: The State of Computer Security. The rest of the evening was a party with some live bands.

Today has been really good and even though not all the seminars I attended will be applicable I feel that I got a lot of inspiration.

The food is pretty good considering that it is catered and there are several selections to choose from. The only complaint I have is that there is no dedicated slot for dinner so you either choke down your food in less than 20 minutes or miss a seminar slot.

All in all I think I can say that NDC is definitely worth a visit.

Ten Simple Rules for Creating Your Own Compiler on the CLR by Philip Laureano

Don’t we all secretly dream of constructing our own language?

The good news is that not everything needs to be written from scratch. The extremely quick run-through of a compiler: the code is parsed into a parse tree, which is converted into an abstract syntax tree, which is compiled into an assembly.

Don’t write your own parser; use a prewritten one, for instance ANTLR. When constructing the grammar rules it is important to avoid circular loops, since these can cause infinite parsing recursion. The parser that is generated is quite unreadable.

Going from the parse tree to the abstract syntax tree makes heavy use of the visitor pattern. The abstract syntax tree then forms the heart of the compiler. Mr. Laureano used LINQ expression trees to handle the abstract syntax tree.

Mr Laureano’s toy language, Not Quite Lisp™, is available on GitHub.

I wish I could say that this talk taught me all I needed to construct my own language, but alas, it turns out I’m either too dumb or the problem is more complex than an hour’s seminar can explain. It was fun to listen to, though.

Making .NET Applications Faster by Sasha Goldshtein

Any person who does not care about performance of their components is a horrible person. Shun that person like the plague. The seminar focused on three areas for improving performance:
  • Working with collections
  • Improving startup times
  • Reducing GC pressure
For collections, running time and space complexity need to be considered; however, internal structure as well as specialized functions must also be taken into account. The basic .NET collections are Array, List, and LinkedList. Arrays are static in size but fast; Lists are dynamic in size, but adding elements anywhere other than at the end is costly; a LinkedList is cheap for insertions but requires a lot more memory, which affects performance when iterating over all elements.

Another collection group is trees; examples are SortedDictionary and SortedSet, which have efficient lookup but an even bigger space requirement per item than linked lists. There are also associative collections such as Dictionary and HashSet, which are somewhere in between LinkedList and Array in space requirements.

There are use cases where the existing collections do not fit well, for instance finding words that begin with a specific prefix. Structures that handle this kind of scenario are called tries.
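As an illustration of my own (not from the talk), a minimal trie that supports the prefix lookups the built-in collections cannot do efficiently might look like this:

```csharp
using System.Collections.Generic;

// A node holds one child per character and a flag marking complete words.
public class TrieNode
{
    public Dictionary<char, TrieNode> Children = new Dictionary<char, TrieNode>();
    public bool IsWord;
}

public class Trie
{
    private readonly TrieNode _root = new TrieNode();

    public void Add(string word)
    {
        var node = _root;
        foreach (var c in word)
        {
            TrieNode next;
            if (!node.Children.TryGetValue(c, out next))
                node.Children[c] = next = new TrieNode();
            node = next;
        }
        node.IsWord = true;
    }

    // Walks the prefix character by character; cost is proportional to the
    // prefix length, independent of how many words are stored.
    public bool ContainsPrefix(string prefix)
    {
        var node = _root;
        foreach (var c in prefix)
            if (!node.Children.TryGetValue(c, out node))
                return false;
        return true;
    }
}
```

After `Add("window")` and `Add("winter")`, `ContainsPrefix("win")` returns true while a Dictionary or HashSet would need a full scan to answer the same question.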

The key point is that for certain scenarios using a custom collection can boost performance.

Startup can be separated into cold startup and warm startup. A cold startup is when the application has not executed since the last reboot; this startup time is dominated by disk I/O, e.g. loading assemblies and similar. For warm startups, JIT compilation and similar is the biggest problem.

To improve startup time, NGen can be used to compile the .NET code to native code, which speeds up load times. Another trick is to enable multi-core background JIT by using ProfileOptimization.SetProfileRoot and ProfileOptimization.StartProfile. Yet another option is the faster RyuJIT compiler; however, it is not released yet, so it might not be 100% stable. It will be part of the Visual Studio 2015 and .NET Framework 4.6 release.
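As a sketch of how that is wired up (the profile folder and file name here are made up for illustration), the two ProfileOptimization calls go at the very start of Main:

```csharp
using System.Runtime;

static class Program
{
    static void Main()
    {
        // Folder where the runtime stores JIT profile data (illustrative path).
        ProfileOptimization.SetProfileRoot(@"C:\MyApp\ProfileCache");

        // On later runs the recorded profile lets hot methods be JIT-compiled
        // in the background on spare cores while startup proceeds.
        ProfileOptimization.StartProfile("Startup.profile");

        // ... normal application startup continues here ...
    }
}
```

The first run only records the profile; the speedup shows up on subsequent warm starts.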

To improve cold startup, all the assemblies can be merged together, which can be done with ILMerge. Another approach is to use .NET Native, which compiles everything down to a single native executable. This removes the need to have the .NET Framework installed on the target machine. However, .NET Native is only available for Windows Store apps.

For optimizing GC behavior there are several sources of performance metrics:
  • Performance Counters
  • Event Tracing for Windows
  • Memory Dumps
A simple technique to get better performance is to use Value Types where applicable. Value Types are smaller than reference types and are embedded in their containers.
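A small example of my own to illustrate the point: an array of structs is one contiguous allocation, while an array of class instances is an array of references to separately allocated heap objects.

```csharp
// Value type: a million Points live inline in a single array allocation.
struct PointStruct { public int X; public int Y; }

// Reference type: the array holds a million references, and each element
// is (once created) a separate heap object the GC must track.
class PointClass { public int X; public int Y; }

class Demo
{
    static void Main()
    {
        var structs = new PointStruct[1000000]; // one allocation, zero-initialized
        var classes = new PointClass[1000000];  // one allocation of references...
        for (int i = 0; i < classes.Length; i++)
            classes[i] = new PointClass();      // ...plus a million small objects
    }
}
```

The struct version also avoids the per-object header overhead and keeps the data cache-friendly when iterating.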

Finally: don’t use finalization.

This has been the absolute best talk for me so far at the conference.

Short Between Seminars Post

Something that keeps being mentioned at the conference, but which I have not brought up in my other posts, is Conway's law. There are also several references to the book The Inmates Are Running the Asylum.

Loosely Coupled Applications with MassTransit and RabbitMq by Roland Guijt

One thing that is becoming abundantly clear during this conference is that SOA, CQRS, and microservices are the new black. At the center of this you always find the service bus; MassTransit presents itself as a lightweight component to fill this role. It is an open source project available on GitHub.

This was a more practical session, with examples that do not translate well into the blog posts I’m writing for this conference, so I will try to express the key points of the presentation. The code that was demoed is available on GitHub.

As stated in some of the previous posts, there is a threshold of application complexity before the benefits of microservices become useful. The usual drawbacks of monoliths are presented, such as that it is hard to allow teams to work in parallel.

The big difference between NServiceBus and MassTransit is that NServiceBus has more features and comes with support, but costs money. MassTransit costs nothing but has no commercial support and lacks some features. MassTransit supports several transports, such as RabbitMQ and MSMQ.
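To give a feel for the style (this is my own sketch, the message and consumer names are made up, and the exact MassTransit API has varied between versions), a published event and its consumer look roughly like this:

```csharp
using System;
using System.Threading.Tasks;
using MassTransit;

// Message contract shared between publisher and consumer.
public class OrderSubmitted
{
    public Guid OrderId { get; set; }
}

// Consumer, hosted by MassTransit and invoked once per incoming message;
// the bus handles serialization and the RabbitMQ plumbing.
public class OrderSubmittedConsumer : IConsumer<OrderSubmitted>
{
    public Task Consume(ConsumeContext<OrderSubmitted> context)
    {
        Console.WriteLine("Handling order " + context.Message.OrderId);
        return Task.FromResult(0);
    }
}
```

The publisher side is just a `Publish(new OrderSubmitted { ... })` call on the bus, which is what gives the loose coupling the talk is about.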

Sagas in MassTransit are used for correlating messages so that services can work concurrently on incoming messages; they are useful for long-running processes.

This seminar was somewhat like having a blog post read out loud to you. I certainly learned some new things however I was yawning a lot while doing so.

SOLID Architecture in Slices not Layers by Jimmy Bogard

I have been a long-time reader of Jimmy Bogard’s blog posts at Los Techies, so it feels kind of awesome to see him live.

The seminar begins with a case study of a real world scenario that had become horribly complex. This leads into a discussion about the recommended n-tier structure of a system where everything is organized in layers.

By developing independent layers you run the risk of over-engineering each layer's interface to ensure that the next layer has all the access points it needs. By slicing vertically through the layers and finishing each feature through all layers, the interface area of each layer becomes exactly as big as it needs to be.

To create good separation Mr. Bogard suggests using the CQRS pattern and notes how this is similar to POST and GET on the web. To demonstrate this he digs into Microsoft's ContosoUniversity example, which he has redone and made available on GitHub.

For modeling requests he suggests a structure where input goes into a request handler, which in turn produces output. To model these requests Mr. Bogard has written a tool called MediatR, which is also available on GitHub. This tool is also useful for constructing commands.
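As a sketch of the pattern (my own example; StudentViewModel and the handler body are made up, and MediatR's interface names have varied between versions), a query and its handler look roughly like this:

```csharp
using MediatR;

// Input: the request/query object.
public class GetStudentQuery : IRequest<StudentViewModel>
{
    public int Id { get; set; }
}

// Hypothetical output model.
public class StudentViewModel
{
    public string Name { get; set; }
}

// The handler turns the input into the output; controllers just send
// the request through the mediator and stay thin.
public class GetStudentHandler : IRequestHandler<GetStudentQuery, StudentViewModel>
{
    public StudentViewModel Handle(GetStudentQuery request)
    {
        // In a real handler: load the student with request.Id from the
        // database and map it to the view model.
        return new StudentViewModel { Name = "..." };
    }
}
```

Commands follow the same shape, just with a void-ish response type, which is why one tool covers both sides of CQRS.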

A new way of organizing the project files is suggested, where features are grouped together. Here it is suggested to create a class for each specific feature, with internal classes for all the components that feature needs, like commands, validators, and similar. I’m a strong believer in putting classes in their own files, so I’m not so keen on this advice.

Jimmy Bogard does not take the normal approach of explaining the SOLID principles, where each principle is discussed separately with a small example. Instead he dives into architectural design decisions with little to no reference to the SOLID principles. The actual references come at the end, where all the architectural decisions are tied back to the principles. This was a very nice way of handling SOLID.

What’s new in Visual Studio 2015, ALM + ASP.NET 5: Next Level Development by Adam Cogan

This is probably the longest title of any of the seminars I’m attending. I only tried an early beta of Visual Studio 2015 a while back, so I have been looking forward to getting more information on the “almost” stable release.

The session starts with some history of the tools that have been introduced with previous Visual Studio versions. This leads into five things that Mr. Cogan recommends in Visual Studio Online:
  1. Monaco Editor
  2. New Build System
  3. Scrum and Kanban
  4. Git Pull Requests
  5. Application Insights

The seminar continues by going through some updates in Visual Studio 2015.

Roslyn is the new compiler, written entirely in C#. The Roslyn compiler enables some interesting new tools like CodeAuditor. Roslyn also comes with some new language features (C# 6.0), such as the null propagation operator ?., which checks for null before accessing a member on the object.
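A quick example of the operator (mine, not from the talk):

```csharp
class Customer
{
    public string Name { get; set; }
}

class Demo
{
    static void Main()
    {
        Customer customer = null;   // pretend a lookup found nothing

        // Pre-C# 6: explicit null check.
        string name1 = customer != null ? customer.Name : null;

        // C# 6 null propagation: the whole expression is null if customer is null.
        string name2 = customer?.Name;

        // Combines nicely with null coalescing to supply a default.
        int nameLength = customer?.Name?.Length ?? 0;   // 0 here

        System.Console.WriteLine(nameLength);
    }
}
```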

Evaluation of LINQ expressions at runtime, so that the result of a LINQ query can be inspected while debugging.

Some additional tools that can be used to assist the development process are NR6Pack and CodeCracker.

Code Lens will be available in the Professional version.

Smart Unit Tests, which is the new name for Pex.

IntelliTrace will be faster and give more information.

In the end the presentation became quite rushed, and Windows 10 was brought up as a good update for Visual Studio 2015, which was a bit weird. However, Adam Cogan is a good speaker and kept the talk interesting and interactive.

Continuous Delivery for Architects by Neal Ford

I was late to this seminar due to talking a little too long about NServiceBus with Sean Farmar at the Particular Software booth. When I came in, a Continuous Delivery book was on the screen and the speaker was well past the introduction. Two other books were shown during the talk: the classic Domain-Driven Design by Eric Evans and Implementing Domain-Driven Design by Vaughn Vernon.

Mr. Ford talks about how a continuous delivery pipeline must be constructed with steps such as commit, functional tests, user acceptance tests, staging, and so on. This increases confidence and gives feedback at the different stages. There are no fixed rules for constructing the pipeline; it is up to the architect.

Mr. Ford states that today’s best practices will be tomorrow’s anti-patterns and goes on to show that the increasing complexity of problems drives architectural change. The most certain thing is that the code will change, and the design should make that simple. The talk continues by tackling some of the problems of constructing the component pipeline.

During the lecture only two tools are mentioned, Structure101 and NDepend, both used to analyze the dependencies in a code base. The reason they are brought up is to avoid creating dependency cycles. Structure101 is for Java, and NDepend does a somewhat similar job for .NET. Diamond dependencies are also an anti-pattern that complicates constructing the pipeline.

Mr. Ford continues the seminar by talking about domain-driven design and some of the problems of building a microservice design. He brings up the fallacies of distributed computing as an example of the problems that must be considered when moving from a monolith to a microservice design.

A pretty intense talk considering it was the first one of the day. Mr. Ford speaks well and moves quickly between the topics, definitely worth the visit.

Wednesday, June 17, 2015

First day of NDC Oslo impressions

The first day has been quite intense; lugging around an extra hoodie (conference present) and a thick brick of a book has not made things easier. The decision to blog instead of taking notes has also proven to be a bit of an extra burden. My laptop battery gives me two hours of work at best, so I’m always hunting for a power outlet; some conference rooms have them, some do not. Let the games begin.

Most of the seminars have been pretty good, however, since they are only an hour long it does not leave any time for examples or more in-depth discussions. Even though the presentations are on the short side I think it has been really interesting so far and I’m looking forward to tomorrow.

Applying S.O.L.I.D Principles in .NET/C# by Chris Klug


The presentation was changed slightly from the announced topic and dealt with when to use, and when not to use, the SOLID principles. It went through the SOLID principles, which have already been covered in a million blogs, so I will not go into them here. Each principle came with an explanation of where Mr. Klug would use it.

The seminar became an in-depth description of the speaker’s use of the different SOLID principles and his motivations. The motivations were not well founded, and the talk did not feel like it added anything to the already existing body of work on SOLID.

To NoSQL or Not to NoSQL, That is the Question by David Ostrovsky

We are moving toward a polyglot persistence world; however, relational databases, in my opinion, should not be overlooked as a solution. David Ostrovsky’s talk aims to orient you in the world of NoSQL databases and help you make a good choice for your needs.

The seminar begins with a run-through of a user story about a company that wanted to move to a NoSQL database, where the chosen solution ended up not being optimal for the company’s needs. Mr. Ostrovsky rightly criticizes the term NoSQL, since it is a very rough categorization. Below are the groupings that were presented in the seminar, with short descriptions.
  • Key-Value
    • Simple collection of keys and values, very fast and scalable. 
  • Document 
    • Similar to key-value, but the structure of the stored values is known to the database, so they can be searched and operated on.
  • Row
    • Similar to an RDBMS, but with row partitioning and an id for each row.
  • Column
    • Turns the storage around so that column information is stored sequentially on the physical disk.
  • Graph
    • Uses a graph structure to store data, which allows graph theory to be used to search the data.

Using SQL has many advantages, and many SQL flavors today support sharding and clustering. It has also been around for 40 years, and it is easy to find developers for SQL applications. However, if the data structure is very complex, or the dataset becomes so large that you have to give up functionality such as indexes and joins, Mr. Ostrovsky suggests it is time to look at a NoSQL solution.

A use case of storing tweets in a row database is discussed; the row database is a good choice since it has very low latency for writes. This leads into a run-through of the CAP theorem. Since network partitioning cannot be controlled, it basically comes down to selecting whether the database should favor consistency or availability.

The futility of benchmarks is pointed out. The best comparison is to model the actual use case, populate the different databases according to that model, and measure that.

The seminar ends with the Cassandra tweet-storage example mentioned earlier. For those not versed in this world, I would think this seminar is an excellent primer; however, an even better option is to read NoSQL Distilled by Pramod Sadalage and Martin Fowler.

.NET Core Blimey! by Matt Ellis


Matt Ellis steals the lead in the unofficial best-t-shirt competition, currently running in my head, with his ‘I see dead code’ t-shirt. Mr. Ellis begins by presenting what .NET Core is:
  • Open Source
  • Cross platform
  • Standalone
  • Factored for modularity
  • Everything ships as NuGet packages
  • Not finished…

It allows you to install only the parts your application needs instead of the entire framework. .NET Core is a fork of the .NET Framework, and there are missing pieces like AppDomain and Remoting, as well as all the UI parts.

Mr. Ellis demos using DNX to handle multiple installs of .NET Core, followed by getting .NET Core from GitHub, compiling it locally, and running a simple hello world program. This demo leads into CoreFX, the foundational libraries of .NET. Getting the hello world example running required an AppModel, which bootstraps the CLR, loads the .NET executable, and runs it. There are several .NET Core AppModels: consolerun or corerun (which you get when compiling .NET Core yourself), DNX, and also one for Windows 10.

By using NuGet you can get away from a static framework version: you specify the packages you want and their versions. There are some new monikers for specifying dependencies, the most useful being dotnet. However, this is not finished yet and may be subject to change.
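For illustration, a dependency specification in the project.json style looked roughly like this at the time (the format and the package version shown are illustrative and changed repeatedly before release, so treat this as a sketch):

```json
{
  "dependencies": {
    "System.Console": "4.0.0-beta-*"
  },
  "frameworks": {
    "dotnet": { }
  }
}
```

The point is that the framework itself is just another set of versioned packages you opt into, rather than a monolithic install.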

What will the implications for Mono be? Mr. Ellis suggests that there is probably room for both, since Mono is already cross-platform and focuses on non-Windows mobile platforms.

In comparison with the Don’t Make Me Feel Stupid talk, this seminar is highly information-dense, and I would probably need to hear it twice to get all the details. It was very interesting, but since this is very early, many things may change.

‘Don’t Make Me Feel Stupid’ – A User by Liam Westley

The first keynote where the speaker blasts some music (New Model Army – Stupid Questions) while waiting for the seminar to begin. I have designed some UI in my life; however, I have not given much thought to UX, which is why I chose to attend this seminar.

The presentation is somewhat abstract, with real-world examples describing the points that Mr. Westley wants to make. This is really nice but quite hard to relay properly in a hurried blog post. I will try to capture the main points in the bullet list below:
  • Allow users to use multiple paths
    • If there is no way for the user to achieve a goal they want, the user will find a way.
  • Don’t ask users unnecessary questions
    • If you cannot act on a user’s answer, don’t give the user the possibility to answer the question in that way.
  • Don’t supply too much information
    • Too many confirmation dialogs trigger auto-acceptance.
    • Be careful with ‘geeky’ terminology in the information presented to the user.
  • Too little information
    • Too little information causes people to navigate further through an application, or forces people to think.
  • Consistency
    • Don’t make UX changes on each release.
    • Ctrl+F in Outlook forwards the e-mail instead of search (lol).
    • Whether to be consistent with the application or with the platform is not set in stone; this must be judged carefully.
  • Surprise your user
    • Help the user to make the right decision or correct the input for the user.
  • Get friendly with your designer
    • Simplifies communication.
    • Developers should question design decisions.

This was a very nice seminar, but not particularly information dense.




Microservices, Cutting Through the Gordian Knot by Ian Cooper

First off, if you have a beard like Ian Cooper’s, you have my respect. Secondly, as stated in a previous post, I am not an expert on microservices, so I might not do these presentations the justice they deserve. The points discussed in the talk were:
  • Managing Complexity
  • Implementing Microservices
  • Breaking up a Monolith
  • Operating Microservices

The presentation started with a historical run-through of the problems of scaling team size as the complexity of the software increases. To handle the problem of complexity, Mr. Cooper went through decomposing the code and creating loose coupling. The decomposition was first described through sub-routines and then through a layered architecture. The benefits of a layered architecture are of course decomposition and loose coupling, as well as high cohesion.

At low to medium complexity, a monolithic component design is the suggestion. However, when the complexity of the software becomes so great that it is desirable to use multiple teams that can work in isolation, Mr. Cooper suggests it is time to break the project up into smaller components. Historically this has been done through RPC calls, but Mr. Cooper goes through all the problems with using RPC-style communication between components.

The initial discussion of design evolving over the decades culminates in the introduction of service providers. In the seminar the service is defined by:
  • Autonomous
  • Explicit Boundary
  • Is a Bounded Context
  • Is a Business Capability
  • Eventually Consistent
  • Decentralized Governance

Following the definition of the service, its size is discussed. To highlight this, the drawbacks of a monolithic component are compared with the problems of a swarm of “nano” services.


Due to the time constraints of the talk, the presentation of breaking up a monolith and operating microservices became quite rushed. Even though the end of the talk was hard to take in, since it was a lot of information in rapid succession, I found the seminar to be quite good.

Duplicating Data or Replicating Data in Microservices by Dennis van der Stelt

I’m a bit late to the microservice party and am not read up on many of the concepts, so some of the points of the seminar might well have gone over my head. With that said, I don’t feel that the title of the presentation was thoroughly reflected in the talk. The point seemed to be that data should not be duplicated over service boundaries, but replicating data within service boundaries is fine for solving problems such as performance optimization, query optimization, latency issues, and network issues. The data that should be duplicated over service boundaries are state changes, identifiers for what the request relates to, and date/time info.

The rest of the talk was a mash-up of what monoliths and microservices are, publish-subscribe messaging, and a brief showcase of how NServiceBus can be used to resolve monitoring issues. Both the positive and negative aspects of monolithic applications and microservices were discussed, which I think is great. There was a short example of microservices that used Uber as an example. During the presentation the SOA Patterns book was recommended as a good read.

During the presentation there was a small problem with the audio cutting out; however, this was quickly and professionally resolved. Overall I think it was a decent seminar, though it could have been more focused.

Keynote: Data and Goliath - The Hidden Battles to Collect Your Data and Control Your World

Bruce Schneier is an excellent speaker, and the topic of how we produce data as exhaust in today’s society is extremely relevant. He talked in detail about how governments and companies today analyze our every move, and how we happily give up this information by carrying around personal surveillance devices (cellphones). He also pointed out that we share one world and one net, and there is only one solution: either everybody gets privacy and security built into the backbone, or no one gets it.

In the end Mr. Schneier states the dualism of sharing data. We want information about traffic jams, which we get by sharing location data while driving. It would be incredibly useful if we pooled all medical records and let researchers loose on them. However, this is incredibly personal data, and what we decide should be shared, and how we manage that data, is something coming generations will judge us on.

Initial Impressions:

I’m standing in front of the main stage waiting for the keynote by Bruce Schneier. I’m already regretting being a huge nerd and bringing my copy of Applied Cryptography, since it feels like a lead brick. However, it was completely worth it since I actually got it signed.



The conference floor has some software firms present, but not so many that it feels crowded. There are approximately five hundred places to get a coffee. The venue feels roomy; however, it is still early, so I might need to revise that statement by Friday.

Tuesday, June 16, 2015

Attending NDC Oslo

So nothing has happened on this blog for a long while, and perhaps it is time to do something about that. I’m attending NDC in Oslo for the first time this year, and I thought this might be the right time to start the blog up again.

I will try to give my impressions of NDC over the next couple of days and write some comments on the seminars I attend. My focus will mostly be on .NET; however, I will attend several seminars that focus on design, UI, and similar more general topics.

Of course this blog will reflect my opinions of the conference so your mileage may vary.