LinkedIn views chart

I left eToro about two months ago and started looking for a new challenge. It was my first time actively looking for a job and trying to “sell” myself. :)

Below you will find a list of things I learned from the process of looking for a new job:

  1. If you can afford it, leave your current job before you find a new one. It gives you more time to look for a new job, and you can do it transparently, without hiding. That’s what I did.

  2. At the beginning, you have to understand what kind of job and which type of company you are looking for. This helps you train your pitch and stay focused. Once you know what you are looking for, you can decline job opportunities that look less appealing to you.

  3. Looking for a new job is fun. You have the opportunity to meet a lot of new interesting and sometimes inspiring people. Quit your job now! :)

  4. Don’t trust recruitment agencies. Their recruitment procedures are outdated and not aligned with real market needs. Use social networks and your connections instead.

  5. LinkedIn works. Add a new position called “Looking for new challenges/opportunities” to your LinkedIn profile. In my case, profile views increased by about 300%, and I received at least one new job opportunity a day in my private inbox.

  6. Look for people (the team) and culture, not for the product, company or technology stack. You have to work with people at the end of the day.

  7. It takes time to open your mind after a long period of employment in one place. Meet old colleagues you respect and see what they are doing. It will give you more ideas, or maybe you can join forces.

  8. Learn something new; you have plenty of free time now. Read the books you always wanted to read but couldn’t because you were busy at work. Learn that new technology and hack something. Spend more time with your family, your kids and yourself. You won’t have this time for long.

Let me know what you think and if you have some other findings.

I’m writing this post as a reference list of good learning resources for myself.

The purpose is to refresh my memory and skills and get up to date with the latest technologies like HTML5, CSS3 and AngularJS.

HTML, CSS and JavaScript

AngularJS

Feel free to suggest other resources.

If you are working with Sublime Text 2/3 and you want to see live changes in your HTML/CSS layout, follow the steps below:

  1. Find and install the Sublime Text package called “LiveReload”. Set "save_on_focus_lost": true in your user settings to enable auto-save functionality.

  2. Add the LiveReload Chrome extension, which you will have to enable in order to activate LiveReload on the page you are working on. Make sure to select “Allow access to file URLs” in the extensions management screen to enable this functionality on local files as well.

  3. Install and run the free livereload Node.js server by running the following terminal commands:

$ npm install -g livereload
$ livereload [path]
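The auto-save setting from step 1 goes into your Sublime Text user settings file (Preferences → Settings – User); a minimal sketch:

```json
{
    "save_on_focus_lost": true
}
```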

I just wanted to tell you about a very cool and groundbreaking test automation UI which was developed by Maxim Guienis, my friend from eToro. It was designed for specific eToro needs to provide an easy “wizard like” tool for QA to write automatic UI tests.

It is called APPLENIUM (APPium + seLENIUM) and it uses Appium and Selenium frameworks as helpers to make mobile and web testing easy.

Find the source code or get more information about it on his blog.

Here are some interesting numbers from LinkedIn research I did about choosing a startup technology stack, considering the current situation in the local market.

To get the data, I simply used LinkedIn search with a programming language name as the keyword and filtered the results by location.

I hope you find it interesting and useful.

Do you think LinkedIn reflects reality? Let me know what you think.

I spent a few hours recently learning about Vagrant and Ansible to better understand how these tools work together. I’m really excited about the potential hiding behind them. Unfortunately, not many developers or companies know these tools and use them. So I wanted to share what I learned, and I hope it will help you discover the concept called “Infrastructure as Code” if you didn’t know about it before.

Vagrant

Basically, Vagrant provides you with the functionality to create, provision and configure your own isolated development environment. It can be a single machine or a number of machines connected in a private network, depending on your application infrastructure. All it does is create a defined set of virtual machines on your laptop using (by default) VirtualBox. These machines are what define your development environment.

Vagrant saves its configuration in a Vagrantfile, which is stored in your project’s root. It should also be saved in source control as part of your project, as it defines what your environment looks like and how to create it from scratch if needed. Click here to see what a typical Vagrantfile looks like.

Vagrantfile includes:

  • Definition of your VMs (which boxes: Linux, Windows, etc…)
  • Network settings (port forwarding, private network, etc…)
  • Provisioning (how to setup a VM after creation from clean box)
  • and other, less useful stuff…
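As a rough sketch of those pieces, a minimal Vagrantfile could look like this (the box name, IP address and provisioning command are assumptions, not a specific recommendation):

```ruby
# Minimal Vagrantfile sketch: one VM, networking and provisioning.
# The box name, ports and IP below are hypothetical examples.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"                             # which base box to build the VM from
  config.vm.network "forwarded_port", guest: 80, host: 8080     # port forwarding
  config.vm.network "private_network", ip: "192.168.33.10"      # private network
  config.vm.provision "shell", inline: "apt-get update"         # how to set up the VM after creation
end
```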

I won’t dive into more details. I hope you got the point, and if you want to read more about why to use Vagrant and what it provides, you can go here.

Ansible

Ansible is a lightweight, simpler and easier-to-onboard alternative to the widely known Chef and Puppet configuration management tools. Like its big brothers, its mission is to allow smart provisioning, deployment and configuration management of servers.

Ansible uses a hosts file to define what your environment looks like. You can also create groups of identical servers. Groups are good for defining a set of similar machines for later mass provisioning. An example of such a file can be found here.
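A minimal sketch of such a hosts (inventory) file, with two groups (the hostnames are hypothetical):

```ini
# Hypothetical Ansible inventory: group similar machines for mass provisioning.
[webservers]
web1.example.com
web2.example.com

[dbservers]
db1.example.com
```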

Ansible saves its configuration in YAML files called “playbooks”, which define how to provision a server or a number of servers. An example of such a file can be found here.
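As a hedged sketch, a playbook that provisions every host in a hypothetical “webservers” group might look like this (the group name and tasks are illustrative assumptions):

```yaml
# Hypothetical playbook: provision all hosts in the "webservers" inventory group.
---
- hosts: webservers
  sudo: yes                 # run tasks with root privileges
  tasks:
    - name: Install nginx
      apt: name=nginx state=present
    - name: Ensure nginx is running
      service: name=nginx state=started
```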

As with Vagrant, you save these YAML files alongside your project, because they define how to build your environment.

You can read more about why to use Ansible and what it provides here.

Vagrant and Ansible integration

As part of its configuration, Vagrant knows how to call Ansible playbooks to provision the VMs that were created. It also automatically creates a hosts file to define the environment for Ansible and to make sure you will be able to provision the Vagrant machines from your laptop.
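This hookup lives inside the Vagrantfile as an Ansible provisioner block; a minimal sketch (the playbook path is an assumption):

```ruby
# Inside a Vagrantfile: tell Vagrant to provision the VM with an
# Ansible playbook (the path below is a hypothetical example).
config.vm.provision "ansible" do |ansible|
  ansible.playbook = "provisioning/playbook.yml"
end
```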

I don’t know if you have heard the term “Infrastructure as Code”, but these tools do exactly that. It means you write code to define your infrastructure, save it in source control, and can re-create it at any given time, whether it is a development, testing or production environment. That gives you great power to keep your environment infrastructure consistent, testable and repeatable!

Hope it was useful and made you think about how you should work. :)

I created the following example for you in GitHub if you want to play with it a little: https://github.com/virtser/vagrantansible

The following video is a little long, but it’s sooooo good! Please, please, please find time to watch it.

It talks about measuring business value through code and its importance.

I think it’s a must-see video for developers and product people; it can connect us better.

There are many posts on the Internet about what Graphite is and how to work with it. In this post I want to talk about why every professional engineer (developer) should work with Graphite.

During my Graphite implementation sessions with engineers at eToro, I found it difficult to explain to developers what Graphite is and why they would want to use it instead of (or in addition to) other monitoring tools.

We tried a variety of logging and monitoring tools at eToro over the last year. We use log4net to save our logs to files. We used to work with LogEntries to store our logs in the cloud, but we decided that it didn’t precisely fit our needs, and now we have almost finished the migration to Splunk. Splunk does log analysis better, in addition to aggregations, correlations and dashboards.

We also use New Relic, which is a great SaaS tool. You just need to install an agent on your web server, and you get great visibility into your application’s performance, including database and external dependencies. But we faced a few problems with it. Most of our applications are written in .NET, and some types of applications, like WCF, Web API or ServiceStack, were not well supported by New Relic (they may have added better support by now) and returned wrong numbers, if any. In addition, we found that in some unknown cases the New Relic agent installed on IIS caused random application pool restarts.

And now, before explaining why Graphite, I have to explain briefly what Graphite is. Very roughly, Graphite is a metrics collector. From within your application code you can report counts, timers and gauges. You can count how many times some method in your application was called, or measure the time it took to execute that method. You give your metric a name separated by dots, and Graphite knows how to break it up and show it as a tree in its UI (see the screenshot above).
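As a hedged sketch of what “reporting a metric” boils down to, here is Graphite’s plaintext protocol from a shell; the metric name and server hostname are hypothetical examples (2003 is Graphite’s default plaintext port):

```shell
# Graphite's plaintext protocol is one line per data point:
#   <metric.path> <value> <unix-timestamp>
METRIC="myapp.api.login.duration_ms"   # dot-separated path, rendered as a tree in the UI
VALUE=42                               # e.g. method execution time in milliseconds
TIMESTAMP=$(date +%s)                  # current Unix time
LINE="$METRIC $VALUE $TIMESTAMP"
echo "$LINE"

# To actually report it, pipe the line to the Graphite server
# (hostname is an assumption):
# echo "$LINE" | nc graphite.example.com 2003
```

In application code you would normally use a client library rather than raw sockets, but every reporter ultimately produces lines in this shape.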


We are in the process of building solid APIs for each of eToro’s main domains. It is not an easy task at all, as over the past 7 years of eToro’s existence we didn’t invest much in the architectural aspects of the system.

How do services interact with each other? Who is responsible for doing what? What are the standards we work with? Which tool or technology do we choose for a specific task? All these questions went unanswered until now. Today we are starting to talk and think about them while progressing with our first real API, the User API. It is the place where it all begins: a user is created for the first time, user details are updated, etc…

Today we still have a single Users table in our big database, which holds all our customer records along with their basic details and other data irrelevant to the user. Each application can update (or read) this table from code without going through a dedicated API. The problem with this approach is that there is no single source of truth: each and every service can apply its own business logic to change user-related data, which leads to data inconsistency.

I’m taking an active role as product/project manager for this API and leading the integration process with the teams. The main goals we want to achieve with the User API implementation are:

  1. Create the “Single Source of Truth”. All requests to register a user, update a user’s details or get a user’s data will pass through one API which holds the business logic of the user domain. We love ServiceStack, and we are using their framework for our REST APIs.

  2. Provide notifications of user changes to different services via Pub/Sub, instead of polling the database for changes like we do today. We are using RabbitMQ messaging here.

  3. Separate user-related data into its own dedicated database. This isolates and encapsulates the user domain better. It will help in many aspects: automatic deployment of database changes with a CD process, creation of environments with an empty user database, and other services won’t affect this database’s performance and vice versa. We are working with Microsoft SQL Server Data Tools to accomplish this.

Today the User API is almost ready, but it can’t be completed until we migrate all the clients to use it instead of going to the DB first. After all the clients which update customer records have migrated, we will be able to enable caching, provide Pub/Sub functionality which can be trusted, and eventually move the user data out of the main DB into a dedicated one.

Lessons learned so far during the API building and integration process:

  1. Don’t start when it’s too late: invest in your architecture from the beginning, or at least think about it. Now it takes so much time to do the integration and fix things that were done wrong in the past.

  2. Build a good plan: a development plan (phases: establishment, optimization) and an integration plan (insert, update, select).

  3. Don’t try to fix all evil at once; do it in small chunks and work iteratively. Otherwise it will hurt and bring more risk to the migration.

  4. Explain the importance of this move to the company, both management and developers. It is required to get people engaged and give the integration higher priority than other tasks.

  5. Communication and follow-up on progress are important to make the development and migration as fast as possible. You don’t want this project to take a lifetime.