I just wanted to tell you about a very cool and groundbreaking test automation UI developed by Maxim Guienis, my friend from eToro. It was designed for specific eToro needs: an easy, "wizard-like" tool that lets QA write automated UI tests.
Here are some interesting numbers from LinkedIn research I did about choosing a startup technology stack, considering the current situation in the local market.
To get the data, I simply used LinkedIn search, entered a programming language name as the keyword, and filtered the results by location.
Hope you find it interesting and useful.
Do you think LinkedIn reflects reality? Let me know what you think.
I spent a few hours recently learning about Vagrant and Ansible, to better understand how these tools work together. I'm really excited about the potential hiding behind them. Unfortunately, not many developers or companies know these tools and use them. So I wanted to share what I learned, hoping it will help you discover the concept called "Infrastructure as Code" if you didn't know about it before.
Basically, Vagrant provides you with functionality to create, provision, and configure your own isolated development environment. It can be a single machine or a number of machines connected in a private network, depending on your application infrastructure. All it does is create a defined set of virtual machines on your laptop, using (by default) VirtualBox. These machines actually define your development environment.
Vagrant saves its configuration in a Vagrantfile, which is stored in your project's root. It should also be committed to source control as part of your project, because it defines what your environment looks like and how to create it from scratch if needed. A typical Vagrantfile defines:
- Your VMs (which boxes: Linux, Windows, etc.)
- Network settings (port forwarding, private network, etc.)
- Provisioning (how to set up a VM after it is created from a clean box)
- and other, less essential settings…
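To make the list above concrete, here is a minimal Vagrantfile sketch; the box name, IP address, and bootstrap script path are illustrative assumptions:

```ruby
# Minimal Vagrantfile sketch. Box name, IP address, and the
# bootstrap script path are illustrative assumptions.
Vagrant.configure("2") do |config|
  # Which base box to build the VM from (a clean Ubuntu image here)
  config.vm.box = "ubuntu/trusty64"

  # Network settings: forward guest port 80 to host port 8080,
  # and put the machine on a private network
  config.vm.network "forwarded_port", guest: 80, host: 8080
  config.vm.network "private_network", ip: "192.168.33.10"

  # Provisioning: how to set up the VM after it is created
  config.vm.provision "shell", path: "bootstrap.sh"
end
```

Running `vagrant up` in the folder holding this file would create and provision the machine.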
I won't dive into more details. I hope you got the point, and if you want to read more about why to use Vagrant and what it provides, you can go here.
Ansible is a lightweight, simpler, and easier-to-onboard alternative to the widely known Chef and Puppet configuration management tools. Like its big brothers, its mission is to enable smart provisioning, deployment, and configuration management of servers.
Ansible uses a hosts file to define what your environment looks like. You can also create groups of identical servers. Groups are useful for defining a set of similar machines for later mass provisioning. An example of such a file can be found here.
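For illustration, a hosts (inventory) file with two groups could look like this; the group names and hostnames are made up:

```ini
# Ansible inventory sketch -- group names and hosts are assumptions
[webservers]
web1.example.com
web2.example.com

[dbservers]
db1.example.com
```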
Ansible saves its configuration in YAML files called "playbooks", which actually define how to provision a server or a number of servers. An example of such a file can be found here.
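As a rough sketch (the group, package, and task names below are assumptions), a playbook that installs and starts a web server might look like:

```yaml
# playbook.yml -- minimal sketch; group and package names are assumptions
---
- hosts: webservers
  become: yes
  tasks:
    - name: Install nginx
      apt: name=nginx state=present
    - name: Make sure nginx is running
      service: name=nginx state=started
```

You would run it against the inventory with `ansible-playbook -i hosts playbook.yml`.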
As with Vagrant, you save these YAML files alongside your project, because they define how to build your environment.
You can read more about why to use Ansible and what it provides here.
Vagrant and Ansible integration
As part of its configuration, Vagrant knows how to call Ansible playbooks to provision the VMs that were created. It also automatically generates a hosts file that defines the environment for Ansible, making sure you are able to provision the Vagrant machines from your laptop.
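In practice, this integration is just a provisioner block inside the Vagrantfile; the playbook path here is an assumption:

```ruby
# Fragment of a Vagrantfile: run an Ansible playbook after the VM boots.
# Vagrant generates the inventory for the created machines automatically,
# so no hand-written hosts file is needed for this step.
config.vm.provision "ansible" do |ansible|
  ansible.playbook = "provision/playbook.yml"
end
```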
I don't know if you have heard the term "Infrastructure as Code", but these tools do exactly that. It means you write code that defines your infrastructure, save it in source control, and can re-create the infrastructure at any given time, whether in development, testing, or production environments. That gives you great power to keep your environment infrastructure consistent, testable, and repeatable!
Hope it was useful and made you think about how you should work. :)
I created the following example for you in GitHub if you want to play with it a little: https://github.com/virtser/vagrantansible
The following video is a little long, but it's so good! Please, please, please find time to watch it.
It talks about measuring business value through code, and why that matters.
I think it's a must-see video for developers and product people; it can connect us better.
There are many posts on the Internet about what Graphite is and how to work with it. In this post I want to talk about why every professional engineer (developer) should work with Graphite.
During my Graphite implementation sessions with engineers at eToro, I found it difficult to explain to developers what Graphite is and why they would want to use it instead of (or in addition to) other monitoring tools.
We tried a variety of logging and monitoring tools at eToro in the last year. We are using log4net to save our logs to files. We used to work with Logentries to store our logs in the cloud, but we decided it didn't precisely fit our needs, and we have now almost finished the migration to Splunk. Splunk does log analysis better, in addition to aggregations, correlations, and dashboards.
We are also using New Relic, which is a great SaaS tool. You just need to install an agent on your web server, and you get great visibility into your application's performance, including database and external dependencies. But we faced a few problems with it. Most of our applications are written in .NET, and some types of applications, like WCF, Web API, or ServiceStack, were not well supported by New Relic (they may have added better support by now) and returned wrong numbers, if any at all. In addition, we found that in some unknown cases the New Relic agent installed on IIS caused random application pool restarts.
And now, before explaining why Graphite, I have to explain briefly what Graphite is. Very roughly, Graphite is a metrics collector. From within your application's code you can report counts, timers, and gauges. You can count how many times some method in your application was called, or measure the time it took to execute. You give your metric a name separated by dots, and Graphite knows how to break it down and show it as a tree in its UI (see the screenshot above).
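To sketch what reporting looks like at the lowest level: Graphite's carbon daemon accepts plaintext metrics over TCP (port 2003 by default) in the form `path value timestamp`. The metric names and host below are illustrative assumptions:

```python
import socket
import time

def format_metric(path, value, timestamp=None):
    """Render one metric line in Graphite's plaintext protocol."""
    if timestamp is None:
        timestamp = int(time.time())
    return "%s %s %d\n" % (path, value, timestamp)

def send_metric(path, value, host="graphite.example.com", port=2003):
    """Push a single metric to carbon over TCP (host is an assumption)."""
    with socket.create_connection((host, port), timeout=2) as sock:
        sock.sendall(format_metric(path, value).encode("ascii"))

# A dotted name becomes a tree in the Graphite UI:
# etoro -> web -> login -> duration_ms
# send_metric("etoro.web.login.duration_ms", 42)
```

In real code you would usually go through a client library (e.g. a StatsD client) rather than raw sockets, but the protocol underneath is this simple.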
We are in the process of building solid APIs for each of eToro's main domains. It is not an easy task at all, as in the past seven years of eToro's existence we didn't invest much in the architectural aspects of the system.
How do services interact with each other? Who is responsible for doing what? What are the standards we work with? Which tool or technology do we choose for a specific task? All these questions went unanswered until now. Today we are starting to talk and think about them while progressing with our first real API, the User API. It is the place where it all begins: a user is created for the first time, user details are updated, etc.
Today we still have one Users table in our big database, which holds all our customer records along with their basic details and other data not really related to the user. Each application can update (or read) this table from code without going through a dedicated API. The problem with this approach is that there is no single source of truth: each and every service can apply its own business logic to change user-related data, which leads to data inconsistency.
I'm taking an active role as product/project manager for this API and leading the integration process with the teams. The main goals we want to achieve with the User API implementation are:
Create the "Single Source of Truth". All requests to register a user, update a user's details, or get a user's data will pass through one API which holds the business logic of the user domain. We love ServiceStack and are using their framework for our REST APIs.
Provide notifications of user changes to different services via Pub/Sub, instead of polling the database for changes like we do today. We are using RabbitMQ messaging here.
Separate user-related data into its own dedicated database, to better isolate and encapsulate the user domain. It will help in many aspects: automatic deployment of database changes within the CD process, creation of environments with an empty user database, and other services won't affect this database's performance (and vice versa). We are working with Microsoft SQL Server Data Tools to accomplish this.
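To sketch the Pub/Sub goal, a user-change notification could be plain JSON published to a RabbitMQ exchange. The event schema, exchange name, and use of Python here (our stack is .NET) are all illustrative assumptions:

```python
import json
import time

def build_user_event(user_id, event_type, changed_fields):
    """Build a user-change notification payload (hypothetical schema)."""
    return json.dumps({
        "userId": user_id,
        "type": event_type,              # e.g. "user.updated"
        "changedFields": changed_fields,
        "timestamp": int(time.time()),
    })

# Publishing with a RabbitMQ client (e.g. pika) would then be roughly:
#   channel.basic_publish(exchange="user-events",
#                         routing_key="user.updated",
#                         body=build_user_event(42, "user.updated", ["email"]))
```

Subscribers bind their own queues to the exchange, so new consumers can be added without touching the User API.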
Today the User API is almost ready, but it can't be completed until we migrate all the clients to use it instead of going to the DB directly. After all the clients which update customer records have migrated, we will be able to enable caching, provide Pub/Sub functionality that can be trusted, and eventually move the user data out of the main DB into a dedicated one.
Lessons learned so far during the API building and integration process:
Don't start when it's too late: invest in your architecture from the beginning, or at least think about it. Now it takes so much time to do the integration and fix things which were done wrong in the past.
Build a good plan: a development plan (phases: establishment, optimization) and an integration plan (insert, update, select).
Don't try to fix all evil at once; do it in small chunks and work iteratively. Otherwise it will hurt and bring more risk to the migration.
Explain the importance of this move to the company, both management and developers. It is required to get people engaged and to give the integration higher priority than other tasks.
Communication and follow-up on progress are important to make the development and migration as fast as possible. You don't want this project to take a lifetime.
Let's talk about focus and its importance in our life. Bear with me.
I recently spent a few hours of my time explaining to my colleagues from eToro's broad management how important it is to focus. Many times, as a startup, we were tempted to join a new venture or not to lose another business development opportunity.
Today, when we are no longer a startup, these temptations have become solid products which are not the core of our business, and they distract us from the real product. We have less time to invest in making our main product better, because we have to support the other products, which are no longer important.
For example, historically we developed support for many types of payments, and we keep supporting all of them because some users want them. But if we instead chose to support only credit cards and PayPal, and did that well, even perfectly, we could generate the same volume, if not more!
If we build a good product, people will use it. Let's make a product so good that even customers who have no credit card or PayPal account will want to create one just to try it. Just like I did this week with Netflix, which is available to U.S. customers only: I opened a US PayPal account and bought a US VPN to get a US IP address, and now I watch Netflix on my TV with Chromecast. That's because they built an amazing product!
So I spoke with about 10 people from the company's broad management about focus and how we can say NO to things which are not important. I hope I managed to get the message across.
Meanwhile, I myself succeeded in pushing to shut down a few legacy products which had been running in the background for a long time, and nobody had thought to stop them. I was also involved in "not starting" new product initiatives which are not in our main business, and I'm proud of it. Unfortunately, some people take this personally, but those are not my intentions.
Say NO! It is more important to say what you are not doing than what you are doing; define non-goals. Focus on the stuff that really matters, the deal breakers, the things that are the core of your business. If you can use SaaS services, use them; don't build it yourself. And of course, you can apply this rule not just at work, but in your real life too.
“People think focus means saying yes to the thing you’ve got to focus on. But that’s not what it means at all. It means saying no to the hundred other good ideas that there are. You have to pick carefully. I’m actually as proud of the things we haven’t done as the things I have done. Innovation is saying no to 1,000 things.” – Steve Jobs
To change things up slightly, this one is not a technical R&D post, but one related to my photography hobby. Hope you don't mind? Will follow up with pictures. :)
My Fuji X100S is on its way to me. I’m so excited about it that I started to learn more about it before I actually got it.
It took me a while to read all the articles on the Internet about the camera before I decided to order one, and I took notes of all the useful pieces of information I found, with references for myself.
But I think it would be a good idea to share this list with you, in the random order in which I noted it (some notes may repeat). I hope you can find something new and useful there too.
Our talk with Dvir Greenberg at the DevOpsDays TLV 2013 conference:
The DevOpsDays TLV conference is over, and it was awesome! I learned a lot, listened to some very cool presentations, met new people, and even gave a talk!
I'm not the guy for great speeches, but it was still an amazing experience for me. People asked questions and were interested in the process we are running. We tried to be as open and transparent as possible, showing both the good and bad things in the process. I hope it worked and that we will be able to help others kick off the process in their organizations. I will be happy to get your feedback.
Find below the slides from our presentation about DevOps implementation in eToro:
Composed and presented by Dvir Greenberg and David Virtser.
If you have questions or want to talk about the DevOps implementation process in a medium-size organization, don't hesitate to contact me.
A recording of our talk will follow.