03 December 2016

DotNet Core : The good, the ugly and the just started

I just started playing with .NET Core in my own time: getting a feel for the technology, learning the basic concepts and getting some basic programming tasks done.


Having worked in .NET for the last eight or so years, I find this one quite different from the usual flavor provided by Microsoft.

Let's be honest: on most .NET projects I worked on, the McDonald's style of programming applied, where 80% of the problems are solved easily by spending 20% of your project time, while you spend the remaining 80% dealing with the remaining 20% of the problems, which incidentally are what the project is about. Go figure.

With this first .NET Core project I spent my time dealing with something completely different. It actually reminds me of the time I was learning to code in PHP and Python (years back, during my Linux days).

The first thing that struck me was how complicated it is to set up a project (the project.json configuration), link all the projects in a solution together, and then manage all the different execution and deployment environments. Whaat?

But hell, it's fun to deal with these types of issues.

Most of the help I got didn't come from the actual documentation, which was mostly useless. Instead I relied on googling for solutions or going directly to the dotnet core repository and looking through the source code.

The second biggest problem was that there was no solution-wide way to build all projects and run all tests. Well, that was easily solved by using the custom Task functionality of VSCode.

Something like this:
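A sketch of a tasks.json in the 0.1.0 format VSCode used at the time (the build.cmd file name and the exact options are assumptions, not the original configuration):

```json
{
    "version": "0.1.0",
    "command": "build.cmd",
    "isShellCommand": true,
    "isBuildCommand": true,
    "showOutput": "always"
}
```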


And then in the root of the solution writing this small batch script:
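A sketch of such a batch script, assuming the conventional src and test folder layout (the folder names and script name are assumptions):

```bat
REM build.cmd: restore packages, then build and test every project
dotnet restore

FOR /D %%p IN (src\*) DO dotnet build %%p
FOR /D %%t IN (test\*) DO dotnet test %%t
```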

And then, with a simple and familiar CTRL+SHIFT+B, voila:

It builds everything and even runs all the tests.

The command line interface is simple and easy to understand, and once you get past the project.json madness you will have a genuinely powerful platform to work on.

The coolest thing is that the project is completely open source, so if you find a problem or have a need, the solution is only a few hours and a pull request away.

Of course, I don't see .NET teams jumping all over it right now. I haven't yet tried developing in .NET Core with Visual Studio, but I guess, as with every version 1.0 of a Microsoft technology, the tooling will lag for a long time (usually we have to wait until version 3.x before the maturity level is high enough to develop normally). But hell, I could be very wrong.

Still, it's a cool thing to play with. If you are curious about what I've done so far, please look at my GitHub profile:

29 November 2016

Being a software developer without a University degree

I'm a software developer without a university degree. My family couldn't afford to keep me at university past the second year, and to my misfortune I was not ready to strike out on my own.

I returned to my home town and started working in my father's small company, a three-person team. I had been working there since I was thirteen, on and off during every school break. Now it was permanent. Or so I thought.

I had always wanted to become a software engineer, ever since I was a little kid and my father smuggled a Commodore 64 from the Netherlands into Yugoslavia. My mother, who spoke a little German, helped me learn BASIC programming at the age of nine from a Dutch manual. By the age of eleven I was writing my own, albeit simple, arcade games in a combination of assembly and BASIC.

If I could not get a formal education, I decided, I would get one by myself. Luckily for me, I was working in an industry that is fully internet driven, and all the resources I needed to educate myself were available online.

I bought a book and taught myself PHP. I wrote a couple of in-house projects for my family, and after a year I got my first programming job. Well, the rest is history. I've had a good career so far. It had its ups and downs, but I've managed to keep myself employed and my family provided for.

Not having a university degree in an IT related field is one of my biggest regrets in life.

When I was searching for new work, some of the positions I applied to would not include me in the phone-screen part of the interview process; some never responded to my application.

Regardless of not having a university degree I managed to build a career in software engineering.

I'm proud of my achievements. I educated myself. It was hard, and I still work at it.

How did I do it?

  • Find out what core skills a software engineer needs and train yourself in them
  • Spend time daily (ideally an hour or more) honing those core skills
  • Read and listen to IT-related media
  • Rinse and repeat
I think that for the core elements of software engineering there is no better alternative than buying and reading books on the topic. Something considered core doesn't change that often, and when it does, the skills you learned transfer easily to the new version. Alternatives include reading documentation (PHP, for example, has always had excellent online documentation) and watching presentations and tutorials (InfoQ is my favorite source for anything software engineering).

I think that if you want to be an excellent software engineer, you cannot get away from spending an hour or more a day improving your skills, be it by programming, reading or listening. You have to do it. Your job will provide some, but not all, of the skills you need to advance.

Yes, you will be tired. So am I, in the evening, but not in the morning. So I get up earlier and spend time learning while I eat my breakfast. I listen to podcasts on my way to work and read books in the tram or while I'm waiting for something.

Chances are you will always find gaps of time you can fill with a bit of learning.

Sitting in front of a computer and coding is critical. There is no alternative; the only advice I can give is to sit down and do it regularly. Plan for it and do it. Nothing worthwhile comes easy. Being a good software engineer is sometimes a hard life, even if you love doing it as I do.

I love listening to podcasts; .NET Rocks! and Software Engineering Radio are my favorites. Years ago, when I was working in Croatia, I burned CDs with downloaded podcasts and listened to them on my way to and from work. It was the best of times.

Nowadays I use my Safari Online subscription and read books while I trudge along in a tram.

The most important thing

Be positive and keep striving. Life and the world will keep building walls around you; there is no need to build them yourself. You will hit obstacles and dead ends. But I firmly believe that if you keep working and doing the hard things, you will achieve what you need to achieve.

20 March 2015

The only constant is change

Software development is not an exact science.

Let me repeat that.

Software development is not an exact science.

During my career I've never had the same team, the same project and the same business context twice. Although I have learned from all those experiences and extracted principles and rules of thumb, each new company, team and project is a completely new experience.

At the heart of all modern software development processes lies the feedback loop.

Try something. See how it goes. Change based on the feedback. The heart of software engineering lies in the continuous use of the feedback loop at all levels and dimensions of a software project: the individual coder, the dynamics in the team, the project management standpoint and the interaction with the client.

Once an engineer understands this simple fact, all the rules and all the processes pale: they are all just tools used in the service of maximizing the gains from the simple principle of the feedback loop.

Multiple forces govern a software project: the team that builds it, the business that sells it and the customers that buy and use it. The huge mental strain that goes into building a software product, and its plasticity, render it impossible to fathom completely at the beginning, to pin down and make as simple to build as a house that can be designed by a single person and then rebuilt across multiple locations by less skilled people. On the other hand, that same plasticity renders it hugely flexible and open to change and adaptation.

A software product is never fixed. The people who build it are never constant, the people who want it are never constant, and the context the product operates in is never constant. In accepting that the only state a software team exists in is a state of fluctuating chaos, a consolation of sorts can be found: there are never right answers, and the only truth is a process of continuous improvement and adaptability.

Change is the only constant in life.

Change is pain.

Change is growth.

To survive one must accept constant change.

22 February 2015

Are integrated collaboration solutions the future of the internet?

I hate signing up for services online. Just going through the process of typing my username, email address, password and personal data over and over for all the services I would like to use drives me crazy.
Luckily, with the advent of OAuth and other federation services, nowadays I can sign up for most services using my Google or Facebook account (the two I generally use most). One-click signup has made my life easier.
A nice feature of using web services is the ability to reuse data, content and features from one service (e.g. Google Drive) within another service provider (Draw.io) and have changes to my work published to a group of people using a collaboration tool (Slack).
Most online service providers expose REST endpoints for almost every feature they provide. Some are notification features (reading the statuses of artifacts) while others expose behaviors of the service (e.g. posting a Facebook status, saving to my Google Drive folder or publishing a note on Evernote).
Are we now living in the world of the programmable web, where REST APIs replace the old days of bash shell scripting on the Unix command line, and where a basic level of programming skill enables us to automate and integrate our online work life?
Tools like IFTTT or Zapier provide simple-to-use integration commandlets between various services, which is a nice start for that entire space. But their APIs are really simple. Compared to all the various REST APIs published by almost everyone, the integration and collaboration possibilities are enormous.
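The glue code such an integration typically boils down to is taking a payload from one service and shaping it into a message for another. A sketch, with invented payload shapes (real services each define their own):

```javascript
// Turn a (hypothetical) file-change notification into a chat message
// payload. This is the pure transformation step; the actual HTTP calls
// to each service's REST API would wrap around a function like this.
function fileChangeToChatMessage(change) {
  return {
    channel: '#documents',
    text: `${change.user} edited "${change.fileName}" at ${change.modifiedAt}`
  };
}

const message = fileChangeToChatMessage({
  user: 'alice',
  fileName: 'roadmap.md',
  modifiedAt: '2015-02-22T10:00:00Z'
});
// message.text: 'alice edited "roadmap.md" at 2015-02-22T10:00:00Z'
```

Keeping the transformation pure makes it trivial to test without touching either service.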
Leveraging various cloud-based solutions, it's trivially simple to set up a continuously running background service in the cloud which executes various types of checks and integration tasks on a regular basis. I would reckon that for lean startups and small organizations, one of the first automation priorities should be automating all their online services and collaboration tools.
New organizations are not bound to just one service provider but utilize a wide array of services offered on the internet, all of which can be used either for free or very cheaply. Adding users and connecting all those services in one go is where most organizations will realize the ROI of utilizing all those solutions.

24 January 2015

Applying the Bosnian Model to the European Union

Bosnia and Herzegovina is a broken country. No question about it. It was broken by the fall of Yugoslavia, by bearing the brunt of the Yugoslav wars, and now by the separation into national entities. It's a place where latent conflict still lurks and which hangs together by a thin thread of the heavy bureaucracy and legislature so prevalent in the lives of BiH citizens.

But there is a silver lining. Bosnia is a micro representation of the European Union as a whole, a real-life test bed for European ideas and models. The EU has a long and bloody history of conflicts, ethnic cleansing and old grudges popping up after five minutes of talking to a national of another EU country.

With so many languages and cultures living in the same place, going about and handling business across the EU is not that easy. Especially the language issue. Let's face it, English is the lingua franca of the world, and if you want to hop about the EU the only language you really need is English. It helps to know the local language, but how can you expect a single person to be fluent in all the twenty-something national languages of the union?

In Bosnia there are three main languages:

  • Croatian
  • Serbian
  • Bosnian
with the small caveat that those three languages are virtually identical. If you speak any one of them, you can communicate with speakers of the other two without much issue. The grammar is mostly the same and the vocabulary is virtually 80% identical. But if I were required to write and speak perfect Serbian or Bosnian, I wouldn't be able to.

The solution is multilingualism. All three languages are equally official. In practice you can get official papers and fill in official forms using any script (Cyrillic or Latin) and language of your choice. The only caveat is that you need to stick to your chosen form.

At the EU level it's a nightmare to work with the local government of a country whose language you don't know. In general you have to provide translations for everything. It costs money, and it's time-consuming.

The solution for the union would be to supplement the local languages with English as a co-official one. You could then seamlessly fill in forms, sign documents and communicate in either the official local languages or English.

As Europeans we are all still too nationally oriented; we keep our cultures close and take personally anything that could endanger our perceived cultural values. But in order for the union to work we need to transcend our cultural boundaries, egos and national values and be pragmatic about how the union works. Our system needs to enable the seamless flow of ideas, people and business without introducing too much administrative, legislative and procedural overhead. It's bloody difficult all by itself to work at such a scale. We really don't need to make it more difficult.

11 January 2015

The layered architecture is broken

In the beginning there was the command line, and before that the punch card, and before that we wired steam, brass, copper and electricity into simple calculating machines. Finally we wanted images to move and interact with us, so the world brought us the graphical user interface, and we rejoiced in the ability to click, touch, rotate and drag things on our screens.

The systems we brought into the world talked with the operating system using POSIX commands, and with each other by sending messages over pipes and TCP channels, and finally using HTTP as the universal communication layer.

The world of modern application development is a world shaped by two opposing communication forces:

  • our systems need to talk and have a meaningful relationship with humans
  • our systems need to talk and have a meaningful relationship with each other.
What lies in between is the void we call our software, our business layer, our domains. Our work.

Why do we spend so much time talking, writing, ranting, discussing and arguing about everything else except how best to build our main deliverable, the core of our application systems?

The reason is simple. Communication is hard. Communication is the biggest problem we humans have not yet successfully solved. 

As human beings we are taught the basics of communication, the rules governing our interactions, the principles of speaking and writing, and scant else. We are left to our own devices to learn the best way to communicate with each other in a variety of different contexts.

Software is built to simplify human systems. To facilitate interactions, processes and in the end to ease the way we interact with each other.

A modern application is no longer a calculating, mathematical system. It's a living, breathing communication entity which needs to interact properly with humans and other digital systems. It needs to be able to talk, to understand and to communicate.

How we build our systems centers on how those systems need to communicate with all interested parties. The main architectural and development styles all center on how to most effectively connect our main working deliverable, our logical part, with the outside world.

Nowadays it's easier to find content on how to build UI systems, application services or persistent storage communication than content on how to build the core of our system.

We all know how to develop and how to solve problems.

We do not know the perfect way to communicate.

As an industry we are always trying to change the state of the art of how we develop our software. In the mid-2000s we tried to escape spaghetti-code applications, where the application logic, the external communication points (e.g. what our applications expose to the outside: UI, web services, etc.) and the internal communication points (the database, the web services we consume, or other things our logical part needs in order to work) were all tangled together, and thus we focused on the layered application architecture and the MVC design pattern.

A typical application architecture from the beginning of the millennium looked like this:

I present to you the archetypal layered application architecture. Our presentation layer communicated with our logical/domain layer, which in turn used the data access layer. On the sides we had our framework and other cross-cutting aspects which pervasively transcended the layers.

It was a good model and a good starting point for our applications, then and now. The problem was that it didn't work for all the scenarios we attempted, and in the end it broke down because we couldn't use this architecture to solve our problems. The architecture of our solutions started to dilute, because it was really hard to stick to a process we knew didn't make sense for what we were trying to do.

Alistair Cockburn came up with the hexagonal architecture model (or "Ports and Adapters") to better describe the architecture best suited for modern application (systems) development. The general idea is more aligned with how our systems ended up once we had broken out of the layered application design.

In the hexagonal architectural model a single solution is composed of the following component types:
  • One pure logical model
  • One or more external communication points (ports)
  • Zero or more internal communication points (ports)
  • One or more integration environments providing adapters for one or more external communication points and all internal communication points
The pure logical model is the working deliverable our system produces. It has the minimal number (ideally zero) of direct outside system dependencies needed in order to function properly.

External communication points (ports) are the interfaces, the services, the information offering provided to the world at large. In order to interact with the pure logical model, one must go through them. A pure logical model needs at least one external communication point, but it should provide more points grouped into different contexts based on the type of interaction and the specific usage requirements of the outside components making use of the pure logical model.

Since external communication points are bundled with the logical model, they are allowed to call and make use of each other, or, better yet, make use of a shared communication point generalizing common outside communication patterns.

Internal communication points (ports) are what the pure logical model needs in order to successfully complete its work. They are optional: it is possible that the pure logical model does not require services provided by other logical models, third-party systems or persistent storage.

Internal communication points can be called only from the logical model or by each other.

The combination of a pure logical model and its external and internal communication points is the first unit of reusability: a self-contained unit.
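To make the vocabulary concrete, here is a tiny sketch of a self-contained unit in JavaScript. The domain (invoicing) and every name in it are invented for illustration:

```javascript
// Pure logical model: plain logic, no direct outside dependencies.
// It receives its internal port (storage) rather than creating it.
function createInvoicing(storage /* internal port */) {
  return {
    // External port: the operation offered to the outside world.
    issueInvoice(customer, amount) {
      if (amount <= 0) throw new Error('amount must be positive');
      const invoice = { customer, amount };
      storage.save(invoice); // the model talks to an interface, not a database
      return invoice;
    }
  };
}

// Adapter supplied by the integration environment: here an in-memory
// stand-in, but it could equally be a SQL or HTTP implementation.
const inMemoryStorage = {
  invoices: [],
  save(invoice) { this.invoices.push(invoice); }
};

const invoicing = createInvoicing(inMemoryStorage);
invoicing.issueInvoice('ACME', 100);
// inMemoryStorage.invoices now holds one invoice for ACME
```

Swapping the adapter changes where invoices go without touching the pure logical model.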

In order to complete our system we need an integration environment. The integration environment is responsible for taking one or more self-contained units and:
  • exposing one or more external points (ports) to different clients through its adapters
  • providing adapters for all internal communication points.
A single self-contained unit can be exposed through different integration environments; the specifics of each are no longer a core architectural choice but a plugin, something which can be changed and replaced. Thus the core deliverable of a hexagonal architecture is a specific integration environment serving a specific scenario or case.

The critical challenge in the architecture of hexagonal solutions is the choice of integration patterns. And that is something we have been doing with our applications for quite some time. By applying the same principles used to build layered applications, we can quite easily, and more naturally, build hexagonal solutions.

A specific technical example of a natural hexagonal architectural solution is the Node.js Express web framework. Express is a very simplistic solution. In essence it provides only two things:
  • routing HTTP requests
  • the ability to work with HTTP requests and responses at a high level
Everything else is handled by introducing components called middleware, which cover everything from error utilities, JSON serializers and deserializers, and logging to our own application code.
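The middleware idea itself is tiny. Here is a rough sketch of the chaining mechanism reduced to its bare essence; it illustrates the pattern, not Express's actual implementation:

```javascript
// A minimal middleware chain in the style of Express's app.use().
// Each middleware receives (req, res, next) and decides whether to
// pass control on by calling next().
function createApp() {
  const middleware = [];
  return {
    use(fn) { middleware.push(fn); },
    handle(req, res) {
      let i = 0;
      const next = () => {
        const fn = middleware[i++];
        if (fn) fn(req, res, next);
      };
      next();
    }
  };
}

// Usage: a logger, then a "router" that writes the response body.
const app = createApp();
app.use((req, res, next) => { res.log = `${req.method} ${req.url}`; next(); });
app.use((req, res, next) => { res.body = 'hello'; });

const res = {};
app.handle({ method: 'GET', url: '/' }, res);
// res.log is 'GET /' and res.body is 'hello'
```

Error handlers, parsers and application code all plug into the same chain, which is exactly the ports-and-adapters spirit.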

06 January 2015

Measuring estimates in uncertain environments - another view on agile methods

I've spent the majority of my career working in the outsourcing business. Outsourcing is an exciting domain if you love technology and learning, and are an adrenaline junkie. Luckily I'm all three, so I fit right in.
Outsourcing is all about making the biggest return on investment (oh well, no big surprise there) by reducing the development time as much as possible while satisfying all the key client requirements.
When approaching any outsourcing project you have the following guarantees:

  • You are never going to know everything you need to know to build the project
  • Some of the technology involved will be unfamiliar to you
  • You are not going to have enough time.
In order to succeed in outsourcing you need the following three characteristics:
  • A self measuring and self correcting software development process
  • Strong requirement gathering and managing skills
  • Will to learn outside of regular working hours
I like to call outsourcing projects "the big unknowns" or "chocolate boxes" (you never know what you are going to get). As you may fathom, knowing how long it is going to take to complete a project is critical information.

In the classical bad-example model, the evil overlord of the project pressures the team into giving the shortest estimate for the delivery, which is then turned into a strong commitment.

Notice that I'm talking about pressure here. Mark Horstman from Manager Tools talks about three different powers in an organization:
  • Role power
  • Technical power
  • Relationship power
The pressure in the sentence above relates to the abuse of role power, i.e. the power that lies with you by virtue of the job you are doing, bestowed and enforced by the organization. The principal issue with the use of role power in any engineering organization is that you are dealing with intelligent, educated people working on a project which requires constant, high-order mental exercise day in, day out.

You generally can't treat software engineers like factory workers or farm workers, whose outputs can be reasonably measured and compared, and where a standard benchmark of operations is derived from the lower-level ability to execute a set of manual operations repeatedly.

On the other hand, you really can't just let the cats loose and take whatever comes, which is the downside of working with smart, intelligent and educated people. They tend to stray, a lot, if left alone too much.

Here we are talking about a case where estimates are forced on a team that has a low level of certainty of actually delivering on them. In practice I've seen several different outcomes of this behavior:
  • Lots of overtime
  • Lots of technical debt
  • Lower quality of the delivered solution
  • Lower maintainability of the code
To give a summary: we have tears and pain to spread around. Yes, yes, we all know that, and we have all lived through those moments.

And here comes the agile model. The promise of freedom. Of actually delivering software that works, where estimates are treated as estimates and handled properly, where we are not committing to a whole bunch of work and doing a whole bunch of things at once without head or tail, and where we talk to our customers, who are actually part of our teams. Hurray.

By now we are all agilists and we all work in agile environments. We all know Scrum, extreme programming and the Kaizen of Kanban. A burndown chart is not a weird Excel graph type but a living, breathing artistic expression of our ability to draw on a whiteboard.

Of all the agile practices, I would presume to point out the one I find most valuable (yes, I know agile practices cannot be taken alone, they must be bunched together, but bear with me): the built-in control, measurement and correction mechanism provided by standups, sprint scrums, retrospectives and other "let's huddle around and see where we are at" moments.

Let's call all those practices by the same name, say, PING. A "ping" is a regular query to gather measurements about the status of the project. The only difference between each ping implementation is the time and work scope it covers. Daily standups cover one day of work, so their scope is smaller, while sprints and retrospectives each cover a bigger time period.

So why is pinging so great? The greatest benefit of pings is the ability to gather real, actionable data about the status of the project without introducing any pollution into the data set. We are measuring what actually is. And once we have done a couple of such measurements, we can derive the current speed of the project. And when we change something in our project or development process, we can measure how much the speed of delivery is affected by collecting data from our pings.
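The arithmetic behind this is simple. A sketch, with invented numbers: given the work completed per ping, a rolling average gives the current speed, and process changes show up as changes in that average.

```javascript
// Average velocity over the last `window` pings (work units per ping).
function velocity(pings, window) {
  const recent = pings.slice(-window);
  return recent.reduce((sum, p) => sum + p, 0) / recent.length;
}

const completedPerSprint = [8, 10, 9, 14, 15]; // invented measurements
velocity(completedPerSprint, 3); // (9 + 14 + 15) / 3: the speed after the change
```

Comparing the early average against the recent one tells you whether the change you made actually sped the machine up.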

I always picture a software project as an organic machine, with gooey parts sticking and intertwining. The machine moves, but with all the gooey parts you can't know at the beginning, before it starts moving, what its speed is going to be. So you start it and it goes slowly. You change something and then move it again, with periodic check-ups to see how it's going.

With this analogy in mind, I generally think the best approach is to pick a (short) time frame and a chunk of work and start the project. Until we have gone through a couple of weeks with regular measurements, we are not going to be able to effectively gauge the speed of the project and thus correctly estimate its length and our delivery capability.

Nothing new here; it's just the old agile promise everybody is talking about. For me, agile is not continuous integration or test-driven development, planning poker or user stories with neatly arranged burndown charts.

Agile, in its essence, is the acceptance of the intrinsic chaos and fallibility that is software development, where each development effort is generally compromised by wild guesses, blind luck and leaps of faith. Agile accepts the chaos that is our industry, and instead of trying to change it, to mold it into well-ordered bits, it rides the wave and leverages its strength to deliver value to our customers instead of mounds of stinking shame.

In my experience the best mechanism to gather and measure the speed of a team is to regularly monitor it by simply querying the status, without expressing judgement or implying pressure. The results are always going to be