I want to recommend a book I just finished reading — .

If you lead people in any field, are a manager, or want to become one, you must read it.

The ownership examples are taken from Navy SEAL training and can easily be applied to real-life challenges in any company, combining true battle stories with practical implementations.

Highly recommended!

There are thousands of posts talking about what Docker is and how to start working with it, but there are not many posts simply explaining the motivation behind it. So I decided to write one, as I see it.

I think it's much more important to understand why than the technical details of how. When you understand why, you have the motivation to learn how.

OK, let’s first remind ourselves of the old habits of how we used to work, to get a better picture of the problem.

We used to create dev/test/prod environments manually. That can work while your team and environment are small. When the team grows and you have to share those dev/test environments, things start to go out of control. Such environments aren’t isolated and become inconsistent over time.

In many cases, those environments are not identical to the production environment: they run on different machines or have different network/security conditions, all of which results in unexpected bugs in production.

When the team and environment grow, engineers can’t keep up with constant changes and additions of new components. It becomes impractical to manually install yet another dependency on your local machine. This leads to creative solutions like using some API/cache/db from another environment. 💩

This is where Docker shines, by turning your environment's services and infrastructure into code, which is predictable, repeatable, and consistent across your engineers' dev machines, your CI tests, and your production environments.


Let’s start with Docker, which is the basis for everything. It takes some time to grasp the concept and how to work with it, like with any abstraction, but believe me, it’s worth it.

Docker uses the Linux kernel directly, without the need to create a virtual machine for each container (service). As a result, the best option for a developer machine is Linux. Docker can run on a Mac as well, but it requires more resources to run the Docker virtual machine. Sorry, I can’t provide any feedback on working with Docker on Windows, as I never tried it. (If you did, please share your experience in the comments.)

What does a Docker container represent? A container is a single instance of one component in your environment. A component can be, for example, a web application, an API, a worker, a cache, or a database.

In a service's Dockerfile you define which OS image to use as your baseline, how to enrich it with your service's dependencies, where to copy your code from, and how to run the service.
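As an illustration, a Dockerfile for a hypothetical Node.js service could look like this (the image tag, paths, and file names are my assumptions, just a sketch):

```dockerfile
# Baseline OS/runtime image (illustrative tag)
FROM node:18-alpine

WORKDIR /app

# Enrich the baseline with the service's dependencies first,
# so this layer is cached between builds
COPY package.json ./
RUN npm install

# Copy the service code and define how to run it
COPY . .
CMD ["node", "server.js"]
```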

Docker Compose

Docker is nice when you want to spin up a single app/service. What makes it really powerful is Docker Compose, which creates a composition of Docker containers that together represent your real application dev/test/prod environment.

A Docker composition describes the relationships between your services: their volumes, networks, links, configuration, etc.

It allows you to spin up the whole sandbox environment of services on your machine. Fully isolated, your own environment. ☺
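For example, here is a sketch of a docker-compose.yml for a hypothetical app with a web service, a worker, and a database (the service names, images, and ports are my assumptions):

```yaml
version: "3"
services:
  web:
    build: ./web          # built from the web service's own Dockerfile
    ports:
      - "8080:8080"       # expose the app on your machine
    depends_on:
      - db
  worker:
    build: ./worker
    depends_on:
      - db
  db:
    image: postgres:15    # illustrative image tag
    volumes:
      - db-data:/var/lib/postgresql/data   # named volume for persistence
volumes:
  db-data:
```

A single docker-compose up then brings the whole sandbox up on your machine.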

I think that’s all you need to know to get the motivation to learn how to build Docker containers and work with Composition.

I hope I managed to explain it simply. As a next step, I suggest following the Docker documentation.

Good luck!

If, like me, you work in a team of web engineers on a project that uses external 3rd-party libraries, and you use npm and/or bower to manage those dependencies, you will probably find this useful.

The problem:

You are working in a team and you don't keep the 3rd-party dependencies as part of your repository. Someone decides to add/remove/upgrade a dependency, modifies the package.json or bower.json file to reflect the change, and submits the change to the repository.

Now you pull those changes into your local copy of the project, and unless you manually run npm install and bower install, those dependencies won't be updated for you locally, which in most cases results in build/run errors or inconsistencies.

The solution:

As we are working with grunt, I wrote a small grunt task which detects and updates dependencies whenever the version in the package.json or bower.json file changes.

So from now on, when someone changes a dependency and bumps the npm/bower version, the grunt task automatically updates every team member's local dependencies.

This eliminates notifications and questions, and increases productivity. :)

Feel free to use it: 

If you are a web developer using Grunt as your default task runner, you probably run it most of the time from some inner folder of the project, something like "client/app".

In this case, every time you want to work on your code using a command like "grunt serve", you have to change directory into client/app first.

Below, I show you a way to make grunt commands work from the root directory, which makes your life more comfortable by eliminating the need to switch to the client/app directory all the time.

1. Add the following code to your original client/app/Gruntfile.js file to let Grunt know the base directory location:

module.exports = function(grunt) {
  // Must have in order to work from root:
  // require() resolves the symlink, so __dirname points at client/app
  grunt.file.setBase(__dirname);

  // ... the rest of your existing Grunt configuration
};

2. Add a symbolic link in your root directory to the original client/app/Gruntfile.js file:

ln -s client/app/Gruntfile.js Gruntfile.js

3. Create a package.json in the root directory with grunt as a dependency and install Grunt:

npm init
npm install grunt --save-dev

4. Run grunt from the root to test it:

grunt serve

That’s it, pretty simple. Hope you like it and use it.

Before we begin, I just want to make sure that we are on the same page about what "Test Automation" is. For me, it is a way to automatically validate that a real user experiences the predicted application behaviour. For example: a user clicks a button which should open a new window. In this case, we need to automatically validate that the user is able to click the button and that a window shows up with the predefined text. Wikipedia: Test automation

Ideally, I wouldn’t recruit a test automation engineer at all. I believe that software engineers should be capable of testing their own code (as part of the "eat your own dog food" concept) by writing unit, integration, or end-to-end tests. It creates an even stronger bond with the code that you are developing and will need to maintain in the future.

But in reality, there are many reasons that might prevent you from living without a QA position at all: the culture of the organization you are working in, legacy code you need to deal with, the nature of the product you are developing, etc. Moreover, you might need someone to own quality control in your organization.

My team was missing a QA person to take control of the quality of the product we were developing and take ownership of quality overall. We wanted to do 80% of the testing automatically while covering the remaining 20% manually. I had a budget to recruit a junior-to-mid-level test automation engineer.

How do you recruit a good junior engineer? You are looking for potential, which you need to find and validate by testing it. So I composed a simple exercise for the test automation position interview. I gave it as homework after meeting each potential candidate who passed the initial screening.

I thought that it would be a nice exercise to use some SaaS tools to check our company website's contact form. I chose Saucelabs for test automation code execution in the cloud using Selenium WebDriver, and Loader to do the load test.

I tried to make the test work myself before giving the exercise to candidates, to make sure that it's possible and to learn the field myself so I'd feel more credible. (I had never done test automation before.)

The main purpose of the exercise was to see how a candidate learns and deals with new tools, uses them to write automation, and summarizes all the work in a document describing what was done and why.

I sent the exercise as homework to a few candidates, asking them to send me the results within 24 hours, including the test automation source code and a document explaining the services and the work that was done.

Eventually, I found this approach of testing candidates very effective. Weak candidates returned partial test results or no results at all. Strong candidates asked questions and tried to accomplish the mission in the best way. Some even found bugs in our contact form by running their own unique tests. :)

Hope you found it interesting. If you have any questions, let me know.

I wanted to share a few Linux bash (terminal) tips I have found useful over the last year.

1. Commands

cd - goes back to the previous directory you were in. Useful when you accidentally switch to the root dir, for example.
!! repeats your last command. Most useful when you forgot to add sudo to the last command: sudo !!
fuser -k 8080/tcp finds and kills the process which is using port 8080.
history shows you the history of commands you executed.
history | grep 'git' shows you all the history items with the word 'git' in them.

And now try Ctrl-R in terminal to open history search and type the desired search pattern.

I’m sure you know that Ctrl-C cancels a process execution. But did you also know that Ctrl-Z suspends a process and releases the prompt for you? To bring the suspended process back, type fg (foreground); to let it continue in the background, type bg.

2. Aliases

You can also set some aliases for your most used commands in ~/.bashrc file:

My aliases are:

alias ll='ls -alF'
alias la='ls -A'
alias l='ls -CF'
alias cd..='cd ..'
alias node='nodejs'
alias s='http-server -o --cors'

3. Prompt

You can also define custom prompt colors and show the git branch you are working on, like I do.

More details on how to do it here.
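As an illustration, here is a sketch of such a prompt for ~/.bashrc (the helper function name is mine; colors are just an example):

```shell
# Sketch for ~/.bashrc: colored prompt showing the current git branch.
# parse_git_branch is an illustrative helper name.
parse_git_branch() {
  # prints " (branch)" inside a git repo, nothing otherwise
  git branch 2>/dev/null | sed -n 's/^\* \(.*\)/ (\1)/p'
}

# green user@host, blue working dir, yellow git branch
PS1='\[\e[32m\]\u@\h\[\e[m\]:\[\e[34m\]\w\[\e[33m\]$(parse_git_branch)\[\e[m\]\$ '
```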

If you are developing a web application and working on the client side, I hope you write unit tests, test automation, or even both, like we do. The following tips might be helpful if you write your tests using Jasmine and execute them with Karma or Protractor.

I guess you already know that if you want to disable a specific test, you can add "x" before the describe or it statements:

xdescribe('my beautiful function', function() {
  xit('having fun', function() {
    // this suite and spec are skipped
  });
});


Did you also know that if you add "d" before describe or "i" before it, only that specific test will run and the others will be skipped?

ddescribe('my beautiful function', function() {
  iit('having fun', function() {
    // only this focused spec runs
  });
});


This is especially helpful when you need to focus on debugging a specific test.

Hope you liked it. If you have more tips to share, please do.

A story about a group of people who were passionate enough to try to make a change in the Israeli election process.

TL;DR: We successfully built a social voting application in a few days of work in our spare time. We released it two weeks before the recent elections in Israel, and 12,000 people registered and voted in that period, all through viral Facebook marketing.


So true, about code quality. The only valid measurement of code quality: WTFs/minute.

Spotify Engineering Culture

I’m just going to put it here, as one of the great engineering culture references I believe in: