Windows package manager

Chocolatey is a decent package manager for Windows: it is usable and has an up-to-date list of packages.

A package manager provides a way to install and uninstall software from the CLI.

If you are not a CLI user, the Chocolatey package manager is a good way to start using the CLI.

What is the benefit of CLI package manager software?

For me personally, the benefit is spending less time on installing, updating, and uninstalling software.

The traditional workflow of installing software is usually the following:

  1. open a web browser
  2. search for the webpage where the software can be downloaded (this alone can take a long time)
  3. open the webpage and find the actual download link
  4. download the installer
  5. after the download completes, start the installation and do “Next-Next-Finish”
  6. delete the original installation file

As you can see, the workflow has at least six steps.

With a CLI package manager, all I need to do is type choco install SOFTWARE and everything else is done automatically.

This is more productive.

If you want a GUI, there is also a package for that.

The first thing to do before you start to write code

Published on: 01.05.2019

Let me start with a personal true story.

In 2014 I was working on a personal project: an iOS card game called Tablic. I spent more than 200 work hours on it, completed 80% of the original plan, and never finished it.

Do you know why?

Because of software scope creep.

Originally I had the idea to make a game with one player vs. computer.

As I was nearing that goal, I started adding features: support for 4 players, selectable AI difficulty.

Each additional feature meant more development time: new code and changes to existing code.

And after all this, I decided to add multiplayer support over local WiFi and over the internet.

This last decision truly killed the project.

In order to implement multiplayer, I had to write a lot of additional code and change the existing architecture; it was at least an additional 300 hours. After the more than 200 hours already spent, I decided to take a small break (about a week) but never continued.

The reason I did not want to release it without multiplayer was that I thought it was not good enough: other games had multiplayer, so how could I make one without it?

I always thought I would continue and finish it one day, but that day never came. Today I think it would probably be better to rewrite it in Unity than to continue in Objective-C (but that is a discussion for another time).

Years later, when I was analyzing why I never finished that iOS game, I came to the conclusion that the root problem was that I did not have a specification for the first version.

By specification, I just mean a list of features, the dependencies between them, a basic UI sketch, and time estimates.

As I completed features, I kept adding new ones indefinitely.

I am pretty sure that had I implemented multiplayer, I would have added yet more features.

Today I am wiser, or at least I think so.

Now I have a process for writing software.

Before I write any code, I decide what the MVP will be, without even thinking about additional features.

The reason I do not even want to write down additional features is that I have learned that even when I make software for myself, the software I make is often not the software I need.

Define the minimal features your software needs, map the dependencies between them, and estimate how long each will take.

I do time estimates in pomodoros (25-minute increments), but other time units can be used.
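To make this concrete, such a spec can be as small as a handful of lines. Here is a minimal sketch in Python; the feature names, estimates, and dependencies are made up for illustration:

```python
# Hypothetical MVP spec: each feature gets a pomodoro estimate
# (1 pomodoro = 25 minutes) and a list of features it depends on.
features = {
    "deal cards":   {"pomodoros": 6,  "depends_on": []},
    "basic AI":     {"pomodoros": 12, "depends_on": ["deal cards"]},
    "score screen": {"pomodoros": 4,  "depends_on": ["basic AI"]},
}

def total_hours(spec):
    """Total estimate in hours: pomodoros * 25 minutes / 60."""
    return sum(f["pomodoros"] for f in spec.values()) * 25 / 60

print(round(total_hours(features), 1))  # 22 pomodoros -> 9.2 hours
```

When the total stops fitting in the time you are willing to spend, that is scope creep showing up before a single line of code is written.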

You cannot understand the solution until you have had the problem

Published on: 01.04.2019

I understand RESTful web services, or at least I think I do.

I agree that when you have huge teams and code bases, it makes sense to cut them into small independent pieces and connect them via queues and HTTP.

Collaboration on large software projects is hard, and problems increase exponentially with the number of people on the project.

The tradeoff is that the overall speed of your software will decrease (because of HTTP network calls), but you get a software system that can be maintained, and new features added, without needing to understand/change/impact the whole system.

But I had never found a use case for myself, as a one-man team working on my own projects.

Until one morning.

Architecture

I have a lot of (around 10) independent software programs that run on a daily (some even hourly) interval.

Most of them do some variation of web scraping, storage, analysis, and reporting of results via email.

This was all fine until one morning I woke up and saw there were no emails from my software.

I knew that something was not right.

They all use yagmail for sending email, so I suspected a problem there, because it is a single point of failure.

After an investigation, I found out that the problem was with Gmail itself: it had just stopped working. The next day it was fine, so they apparently had some issue that took one day to resolve (I am not talking about the Gmail web page, but about SMTP username/password authentication).

Why Gmail

Why do I use free Gmail for sending email and not some more reliable service like SendGrid or Amazon SES?

That is a nice lesson in technical debt: in essence, what was a good idea for the initial requirement is no longer such a good idea as time progresses and requirements or circumstances change.

When I started my first project, as a proof of concept, Gmail was an excellent choice: easy to start with and working fine.

As the project moved to deployment and additional projects were made, it was easier to copy/paste the existing code than to refactor/redesign/rearchitect a working solution.

REST solution

Email did not work for one day, and after that day everything was back to normal.

I started to think about what I could do to avoid this problem in the future.

One solution would be to switch from Gmail to something else, but there are a few issues with that which I do not like.

First issue

What if the other email provider also stops working in the future? I would again need to write new code for a third solution.

To fix this problem, my idea is to use Gmail as the primary provider for email sending; if sending fails, I will just use a secondary email provider.

With this logic I can also add a third one, and so on, but I think two are enough for the first version.
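The failover logic itself is simple. Here is a sketch of the idea; the two provider functions are hypothetical stand-ins (in the real version the primary would wrap yagmail/Gmail SMTP and the secondary some other provider):

```python
# Sketch of primary/fallback email sending. The first provider simulates
# a Gmail outage, the second a working secondary provider.

def send_via_gmail(to, subject, body):
    raise ConnectionError("Gmail SMTP is down")  # simulated outage

def send_via_backup(to, subject, body):
    return "backup"  # pretend the email went out via the second provider

def send_email(to, subject, body, providers=(send_via_gmail, send_via_backup)):
    """Try each provider in order; return the result of the first that works."""
    last_error = None
    for provider in providers:
        try:
            return provider(to, subject, body)
        except Exception as exc:
            last_error = exc  # remember the failure, try the next provider
    raise RuntimeError("all email providers failed") from last_error

print(send_email("me@example.com", "report", "daily numbers"))  # backup
```

Adding a third provider is just one more entry in the tuple, which is exactly the point of structuring it this way.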

Second issue

Currently I have around 10 apps (and this number will increase with new apps that I plan to build in the future) that need email sending, and each has a separate code repository.

If I want to change something in the email logic, even something simple like the username/password, I need to make the same change in 10 different code bases.

One solution is to make one code base just for sending emails; this would solve the problem of making the same change in multiple apps.

But to make that work, I would need to change the folder structure of all my apps and update paths in the code bases, and it only works if all the apps are hosted on the same machine.

If they are on separate machines it will not work.

REST to the rescue

After understanding all these difficulties, making a RESTful web service just for email sending made total sense to me.

It made sense only because I finally had a use case where REST is useful and looks like the only solution.

The first version will just be an adapter/facade around yagmail with a REST API, but that is a story for another time.
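To give an idea of how small such a service can be, here is a sketch using only the Python standard library; the /send route, the JSON field names, and the stub send_email function are my own assumptions, not the actual implementation (which would call yagmail plus the fallback logic):

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def send_email(to, subject, body):
    # Stub standing in for the real yagmail-based sender with fallback logic.
    return True

class EmailHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/send":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        ok = send_email(payload["to"], payload["subject"], payload["body"])
        self.send_response(200 if ok else 502)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps({"sent": ok}).encode())

    def log_message(self, fmt, *args):
        pass  # silence per-request logging

def start_server():
    # Port 0 lets the OS pick a free port.
    server = HTTPServer(("127.0.0.1", 0), EmailHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

if __name__ == "__main__":
    server = start_server()
    port = server.server_address[1]
    request = urllib.request.Request(
        "http://127.0.0.1:%d/send" % port,
        data=json.dumps({"to": "me@example.com", "subject": "hi",
                         "body": "test"}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        print(json.loads(response.read()))
    server.shutdown()
```

Every app then needs only one HTTP POST instead of its own copy of the email code, and email-logic changes happen in exactly one place.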

Billable and non-billable hours

Published on: 15.03.2019

This is written from the consultant/freelancer point of view.

Amateurs are focused on billable hours.

What is your hourly rate?
50$ per hour.
That is not bad, 8h times 50$ is 400$ per day.
20 working days per month is 8,000$ every month.
96,000$ per year, not bad.

There are a lot of problems in real life with this logic.

First, no expenses (tax, equipment, rent, etc.) are mentioned, as if they do not exist.

But I do not want to focus on expenses in this analysis.

Also, can you bill for 8h every day for a year?

If on an average day you sell only 2h, then the calculation is much different (and not in a good way).

What about all the work that you need to do to sell 1h of your time?

If you need to spend an additional 1h in order to sell 1h of your time, then your hourly rate is cut in half.

If you work for somebody else and you are not a remote worker, then your commute time is an example of non-billable hours.

Different businesses have different proportions of billable and non-billable hours; just be sure to include that in your calculations.

In the coding business, and by coding business I mean if you primarily write code, a lot of non-billable hours are spent on keeping up with technology (even if you are highly specialized).

If you are self-employed, non-billable hours are also needed for finding clients/work, infrastructure maintenance, etc.

If you take all that into account, then 50$ per hour, minus expenses, does not look so good anymore.
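The whole argument boils down to one formula: spread your billable revenue over all the hours you actually work. A quick sketch:

```python
def effective_hourly_rate(rate, billable_hours, nonbillable_hours):
    """Revenue divided by the total hours actually worked."""
    return rate * billable_hours / (billable_hours + nonbillable_hours)

# 50$/h, but only 2 billable hours in an 8-hour day:
print(effective_hourly_rate(50, 2, 6))   # 12.5
# One extra non-billable hour per billable hour cuts the rate in half:
print(effective_hourly_rate(50, 1, 1))   # 25.0
```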

And you come to the understanding of why it should be more.

Verification vs Validation in practice

Published on: 01.03.2019

Verification is the process of checking that the software meets the specification.

It is doing what you wanted it to do.

An example: a function needs to add two numbers, and you verify (e.g., by writing a unit test) that it does that correctly.

Validation is the process of checking whether the specification captures the customer’s needs.

Using the example of the function that adds two numbers, validation needs to confirm that this function is really what the user needs; e.g., maybe you actually need to multiply two numbers.
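In code, the distinction looks like this; the add function and its tests are a toy example:

```python
def add(a, b):
    """The specified behavior: add two numbers."""
    return a + b

# Verification: the code meets the specification.
assert add(2, 3) == 5
assert add(-1, 1) == 0

# Validation is a question no unit test can answer:
# maybe the user actually needed a * b instead of a + b.
```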

Practice vs theory

When I first heard about validation, I understood it in theory, but in practice I thought: it is easy to know what you want, so why would you need to validate it?

Then I had a personal experience of why and how validation is hard.

I built the wrong software for myself, and I had no one else to blame.

How I built the wrong software for myself

My idea was to make software that would run at 1 AM every day, take all real-estate ads from https://www.njuskalo.hr/ for my town listed on the previous day, sort them by price per square meter, and send them to my email.

Basically, I wanted all new ads per day in my email, sorted by price per square meter (a one-day delay was fine for me).

Looks simple enough, what could go wrong?

After a few days I had it running in production and it was working; verification was successful, and every day I got all the ads from the previous day.

Why validation was wrong

After a week I found out that my software was useless.

What was the problem?

Remember that I said “I wanted all new ads per day to my email”: I wanted all “new ads per day”, but what I got was all updated and new ads per day.

Let me explain.

Every day I was getting around 200 ads, and I noticed that a lot of them were the same ads, day after day.

What was happening is that a lot of people were just updating the same ad every day.

And they do this so that their ad is always on the first page; sometimes they even do it a few times per day (later I found out that a friend of a friend was contracted by one local real-estate agency to make software that automatically updates ads for them).

Although my software was working correctly, only after I had made it did I find out it was useless, because of wrong assumptions.

My assumption was that every ad would be added only once, not that 60% of ads would be updated every week.

I solved this problem by making version two, which can tell whether an ad is new or updated, and if updated, what changed.
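The core of version two can be sketched as a comparison against the last seen snapshot of each ad; the field names here are hypothetical, and a real version would persist `seen` in a database (I used dataset) rather than in memory:

```python
seen = {}  # ad id -> last seen snapshot of the fields we care about

def classify(ad):
    """Return 'new', 'updated' or 'unchanged' for an incoming ad."""
    snapshot = {"price": ad["price"], "title": ad["title"]}
    previous = seen.get(ad["id"])
    seen[ad["id"]] = snapshot
    if previous is None:
        return "new"
    return "updated" if previous != snapshot else "unchanged"

print(classify({"id": 1, "price": 120000, "title": "Flat, 60 m2"}))  # new
print(classify({"id": 1, "price": 115000, "title": "Flat, 60 m2"}))  # updated
```

Comparing snapshots also tells you what changed (e.g., the price), which is exactly the information the daily email needed.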

Am I stupid

This experience was fascinating to me.

On this project I was everything: user, project manager, architect, coder, quality assurance, investor. Every hat was on my head, and I still managed to build the wrong thing.

It gave me a practical understanding of why it is so common that the end user is not happy with the final product.

Even if everything is done correctly, it is possible that the final product does not solve the user's original problem, due to wrong initial assumptions.

How to improve validation

One approach is to make an MVP; this way you will spend fewer resources on version one.

If validation of the MVP succeeds, then add additional features; if not, cancel it.

Another approach is to get some domain knowledge, either internal or external.

I have built a few web scrapers in the last few years and now know a few tricks about that domain, but I learned each one the hard way.

I also understood why some companies hire domain-expert consultants (just be sure to get a good one).

Technology

For those interested in what tools I used to build my software, here is the list: Scrapy, dataset, yagmail.

Raspberry Pi 2 B

Published on: 15.02.2019

After 3.5 years of owning and never using a Raspberry Pi 2 B (let us talk about not being a finisher later), I finally decided to see what it can be used for in practice.

Yes, I know the Raspberry Pi 3 B+ already exists and has better performance.

I did use the original Raspberry Pi in 2012 to make a simple version of Screenly, but that is a story for another time.

First, I went to the official webpage to see what is new and how to install operating systems (I will use the abbreviation OS from here on).

There I found out that the best way for beginners is to use NOOBS.

My experience with NOOBS is positive; there are a few OSes that can be installed with it.

Another piece of software with similar capabilities is BerryBoot. It has some additional OSes that NOOBS does not have, but NOOBS also has OSes that BerryBoot does not, so it is best to try both.

One tip: the label of the microSD card must be boot.

Just for information, all tests were done on a 22-inch Dell monitor.

Using Raspberry Pi 2 B as desktop PC

My test was basically how well YouTube works.

Surprisingly, better than I expected: a Raspberry Pi 2 B on Raspbian can play 360p videos with no problems. Even 480p worked; the video lagged a bit, but the audio was fine.

In my opinion, the biggest drawback of using any Raspberry Pi as a desktop computer is the lack of software (e.g., no Dropbox), because software needs to be compiled for the ARM processor.

Retropie

RetroPie is an OS for playing retro games.

NES and SNES games were fine, but N64 (the N64 was my first 3D console, so there is an emotional connection) was not so good (GoldenEye at 10 FPS is not playable).

According to Reddit, the Raspberry Pi 3 works better.

CCTV

Using motionEye as a CCTV system on the Raspberry Pi 2 B works better than expected.

You can add a local camera to motionEye, or even a remote IP camera.

Motion detection is supported, with upload to FTP, Dropbox, Google Drive, etc.

Just be aware that you cannot expect the same quality as a specialized, dedicated CCTV system, but it is perfect for a hobby project.

Public Facing Screens

You can use the Raspberry Pi 2 B as a public-facing screen; one useful OS for that purpose is Screenly.

It is offered as a monthly subscription, but if you use it only on a local network, it is free.

Tools and links

The tools:

balenaEtcher, a cross-platform tool to flash OS images to SD cards

SD Memory Card Formatter, which formats SD/SDHC/SDXC cards

Links:

https://www.trustedreviews.com/reviews/raspberry-pi-3-performance-and-verdict-page-3

https://www.jeffgeerling.com/blog/2018/raspberry-pi-3-b-review-and-performance-comparison

FOSCAM FI8908W network IP camera

Published on: 01.02.2019

I bought this camera from Deal Extreme almost a decade ago.

Anyway, if you plan to use this camera, the first step is to upgrade to the newest firmware. On the official website I found nothing about it, though they do have some software tools; luckily, there are instructions on the Deal Extreme forums.

I did my firmware upgrade step by step, from the version on my camera through each next firmware version.

Maybe upgrading straight to the last firmware would also work, but I did not want to take an unnecessary risk and brick my device.

There are upgrades for both the System firmware and the Embedded Web UI: first upgrade the System firmware, then the Embedded Web UI.

Also, there is a similar model, the FI8918W; AFAIK the last firmware version for it should not be flashed onto this model.

Be sure to understand the procedure from the Deal Extreme forums before starting the firmware upgrade process.

My experience with the latest firmware

The camera gives a 640×480 image. During daylight it is acceptable; at night you can see that somebody is there, but not who, so it is not useful in night mode.

Camera movement is possible and works fine, but the web UI for controlling it does not work on an iPad, only in regular PC browsers.

The device should offer an audio connection, but I have not used it.

The WiFi connection I have not used; I just trust the cable more :-).

The camera also has an IO port for motion detection and triggering, but I have not used it.

There is also the possibility of motion detection, recording video/pictures, and sending them to email or uploading via FTP.

I have not tested this feature because it only works in Microsoft Internet Explorer, so to use it you would have to keep one PC with IE always running.

Electricity consumption is about 3.5 to 4 watts at 230 volts.

I did notice that it needs to be restarted once per week; otherwise it just gets stuck.

Should you use this camera

It depends on the purpose you want to use it for.

If you want a daylight live stream without recording and alarms, and it is OK for you to restart it every week, then it is fine.

Also, one decade ago this was a great device for its price; today it is better to build a camera from a Raspberry Pi with motionEye.

If you plan to use this camera with motionEye, add it as a Network Camera.

The most important lesson for new programmers

Published on: 15.01.2019

On my “Sending email from Python” blog post, which was cross-published on Medium, I got a comment asking how to send an email via Outlook programmatically.

My first reaction was that this was some troll or bot.

So I gave a “Let me google that for you” answer and later got a “Thank you” response.

That got me thinking: maybe he was not an internet troll, maybe he just does not know how to google.

It never crossed his mind that he could ask Google for the answer.

Why I thought somebody was trolling me

I am an experienced (15+ years) software developer. I am experienced because I know that when I do not know something, first I google it, then I search on YouTube, and the last resort is to ask on Stack Overflow.

This is what professionals do; they do not ask questions on random blogs in the hope that somebody will respond.

Learn how to google

For beginners learning to code, the best thing you can do for yourself (and others) is to learn to google what you do not know.

Today it is easier to learn coding than 20 years ago when I was starting.

In my time, the only thing you had was a book (if you were lucky).

Today there are much more opportunities to learn:

  • YouTube, which today is the largest free video learning tool
  • Google for searching
  • communities like Stack Overflow where you can ask questions

Be aware that you should not ask a question specific to your particular coding problem; bring it to a more abstract level.

Tips on googling

From my experience, it is important to know which keywords to google.

But if you do not know the keywords, you can always start with “how to …”.

Any action is better than no action.

Most programmers are financial morons

Published on: 01.01.2019

Let me start with one true story from the year 2011.

At that time I was working as a software programmer (90% C++) in a team of 5 people.

One morning, a friend from the team started showing off a cool new source code editor called Sublime Text.

He was very happy with it; he had been using it on the job, for his own pet projects, and for his freelance side jobs for a few months.

But for him, Sublime Text had one drawback: he had to pay 100$ for it (at the time of this writing a Sublime Text license is 80$, but I think that at that time it was 100$; I could be wrong).

At that moment I knew that my friend was a financial moron.

I tried to explain it to him, using the same logic as in this blog post, but he just could not get it; he only understood that he had to spend money.

Why most programmers are financial morons

Let us say that he was only using Sublime Text every second day (although, knowing him, it was probably every day).

With the every-second-day assumption, that is 182 days per year.

He was happy with the new tool, it was better for him, so let us say that it gained him 10 extra minutes of work every day he used it.

10 minutes times 182 days is about 30 hours of work more per year.

To break even, he would need to make 3.33$ per hour of that work.

Even at that time, he was charging a freelance rate of 20$ per hour, and he had around 5 billable work hours per week.

He is a smart guy, but he thought the tool was expensive.

Economically speaking, he did not know how to do a cost-benefit analysis.

It is strange how logically intelligent programmers (believe me, you do have to be logically intelligent to write computer programs) never invest in tools that basically have an ROI measured in days.
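The arithmetic above, spelled out (using the numbers assumed in this post):

```python
license_cost = 100            # dollars, the price as I remember it
minutes_saved_per_day = 10
days_used_per_year = 182      # every second day

hours_saved = minutes_saved_per_day * days_used_per_year / 60
break_even_rate = license_cost / hours_saved

print(round(hours_saved, 1))      # 30.3 hours gained per year
print(round(break_even_rate, 2))  # 3.3 dollars per hour to break even

# At his actual 20$/h freelance rate, the license pays for itself
# after license_cost / 20 = 5 billable hours of saved time.
```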

Conclusion

Do a cost-benefit analysis before saying that something is expensive or cheap.

Disclaimer:
I have no stake in whether you use or buy Sublime Text; I am just using it as an example.

Do not use Selenium for web scraping

Published on: 15.12.2018

Disclaimer:
This is written primarily from the point of view of the Python ecosystem.

I have noticed that Selenium has become quite popular for scraping data from web pages.

Yes, you can use Selenium for web scraping, but it is not a good idea.

Also personally, I think that articles that teach how to use Selenium for web scraping are giving a bad example of what tool to use for web scraping.

Why you should not use Selenium for web scraping

First, Selenium is not a web scraping tool.

It is “for automating web applications for testing purposes”; that statement is from Selenium's own homepage.

Second, in Python there is a better tool: Scrapy, an open-source web-crawling framework.

The intelligent reader will ask: “What is the benefit of using Scrapy over Selenium?”

You get speed, and a lot of speed (not amphetamine :-)): speed in development and speed in web scraping time.

There are tips out there on how to make Selenium web scraping faster; if you use Scrapy, you do not have those kinds of problems and you are faster.

The very existence of those articles is proof (at least for me) that people are using the wrong tool for the job; an example of “when your only tool is a hammer, everything looks like a nail”.

For what should you use Selenium

I personally only use Selenium for web page testing.

I would try to use it for automating web applications (if there were no other options), but I have never had that use case so far.

Exception on when you can use Selenium

The only exception I can see for using Selenium as a web scraping tool is if the website you are scraping uses JavaScript to get/display the data you need to scrape.

Scrapy does have a solution for JavaScript in Splash, but I have never used it; so far I have always found some workaround.

What to use instead of Selenium for web scraping

As you can guess, my advice is to use Scrapy.

I chose Scrapy because I spend less time developing web scraping programs (web spiders) and execution time is fast.

I have found Scrapy to be faster in development time because of the Scrapy shell and cache.

In execution it is fast because multiple requests can be made simultaneously; this means data will not be delivered in the same order as requested, so do not be confused by that when debugging.

What about Beautiful Soup + Requests

I have used this combination in the past before I decided to invest time in learning Scrapy.

Do not make the same mistake I did: development time and execution time are much faster with Scrapy than with any other tool I have found so far.

Last words

This is not a rant against using Selenium for web scraping; for non-production systems and learning/hobby projects it is fine.

I get it: Selenium is easy to start with, and you can see what is happening in real time on your screen. That is a huge benefit for people starting to do/learn web scraping, and it is important to have that kind of early morale boost when you are learning something new.

But I do think that all these articles and tutorials using Selenium for web scraping should have a disclaimer not to use Selenium in real life (if you need to scrape 100K pages in a day, it is not possible with a single Selenium instance).

Starting with Scrapy is harder: you have to write XPath selectors, and looking at the HTML source of a page to debug is not fun, but if you want fast web scraping, that is the price.
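To give a feel for what XPath selectors look like, here is a toy example; the HTML snippet is made up, and I am using the standard library's xml.etree, which supports only a limited XPath subset (Scrapy's selectors, built on parsel/lxml, support full XPath):

```python
import xml.etree.ElementTree as ET

# A made-up, well-formed snippet standing in for a scraped page.
html = """
<html><body>
  <div class="ad"><h2>Flat, 60 m2</h2><span class="price">120000</span></div>
  <div class="ad"><h2>House, 140 m2</h2><span class="price">250000</span></div>
</body></html>
"""

root = ET.fromstring(html)
# Select the <h2> of every div with class "ad", and every price span.
titles = [h2.text for h2 in root.findall(".//div[@class='ad']/h2")]
prices = [int(s.text) for s in root.findall(".//span[@class='price']")]
print(titles)  # ['Flat, 60 m2', 'House, 140 m2']
print(prices)  # [120000, 250000]
```

In Scrapy you would write the same selectors as `response.xpath("//div[@class='ad']/h2/text()")` inside a spider, but the way of thinking is identical.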

Conclusion

After you learn Scrapy, you will be faster than with Selenium (Selenium just has a gentler learning curve); I personally needed a few days to get the basics.