Growing X20 without spending an extra penny on hosting

If your website is a social network, this post is probably not for you. If you run a blog, a news site or an e-commerce site, it might be.

This shot from New Relic compares server load (the cyan line) with pageviews (the yellow line) before and after a push notification is sent.

[image: New Relic - server load (cyan) vs. pageviews (yellow) around a push notification]

We achieved these results through fanatical use of a CDN, integrated really deeply into our servers. Our mantra was that disconnecting the correlation between pageviews and CPU would let us scale without really scaling hardware.

A year ago we started a big change in FTBpro’s website. We changed the design completely, moved to a single-page architecture and started exploring new ways to minimise the load on our servers. Later we applied the lessons we learned and the methodologies we developed to our mobile API. On the outside, the result is the FTBpro.com site and mobile app as you know them today. We had two goals in mind: make the user experience faster and lower the load on the servers. This post is about the latter.

Disconnect the correlation between Pageview and CPU

We need to make sure CPU power is not wasted building the same data twice, thrice or more. Why run the same SQL twice when you know nothing has changed?

If a page is requested twice within a reasonable timeframe, don’t rebuild it. This can be achieved via full page caching at one of three levels:

  1. Application Level (e.g. Rails, Java, PHP)
  2. Middleware (e.g. Varnish)
  3. CDN

The first and second approaches are slightly easier to manage, as they are contained within your own servers, but they don’t eliminate the correlation between pageviews and CPU - they only reduce it to a minimum. If the first render took 200ms, the cached version could be returned in 1ms or even less. With a CDN, requests for the cached version don’t even hit the main server, so the correlation really can be called “disconnected”. The downside of using a CDN is usually the hassle of choosing one, setting it up and getting a good contract - there are tons of CDNs out there and many small parameters that distinguish between them.

The naive approach to full page caching is setting an expiration time on a page (e.g. 15 minutes), so once every X minutes the page expires and the CDN fetches the freshest version from the server. That’s okay and very easy to manage, but it has two disadvantages:

  1. Data updates don’t reach the user straight away.
  2. There will still be a correlation between pageviews and CPU, even if a low one.
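Expressed in a Rails controller, this naive approach is literally one line - a minimal sketch, assuming your CDN honors standard Cache-Control headers (the controller and TTL here are just for illustration):

class PostsController < ApplicationController
  def show
    @post = Post.find(params[:id])
    # Ask the CDN (and browsers) to cache the full rendered page for 15 minutes.
    expires_in 15.minutes, public: true
  end
end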

In order to overcome the second disadvantage we can just set the expiration time to never ;-) but now our page will surely become stale at some point, as editorial updates won’t be reflected. This can be solved by using a CDN provider that has purge or preload APIs. In the server layer, attach an expiration event to the classes in charge of updating the data on these pages (e.g. Post#after_save in a Rails app). Most CDN providers have these APIs, but there are two important criteria that differ from one provider to another:

  1. Speed: some CDNs purge in 200ms, some in 1 minute, some in 45 minutes.
  2. Purge criteria: some CDNs allow purging by exact URL, some allow regexps, and some force you to “tag” each URL in the HTTP response headers and purge by those tags (much more work, but it can give the best results in a few cases).
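The purge call itself is usually just one authenticated HTTP request per URL. Here is a hypothetical sketch in Ruby - the endpoint, auth scheme and payload are made up for illustration, since every provider has its own flavour of this API:

require 'net/http'
require 'json'
require 'uri'

# Hypothetical CDN purge client (illustrative endpoint and credentials).
class CdnPurger
  ENDPOINT = URI('https://api.example-cdn.com/v2/purge')

  def self.purge(urls)
    request = Net::HTTP::Post.new(ENDPOINT.request_uri)
    request['Authorization'] = "Token #{ENV['CDN_API_TOKEN']}"
    request['Content-Type']  = 'application/json'
    request.body = { urls: Array(urls) }.to_json

    Net::HTTP.start(ENDPOINT.host, ENDPOINT.port, use_ssl: true) do |http|
      http.request(request)
    end
  end
end

CdnPurger.purge('http://www.ftbpro.com/feed/arsenal')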

So what did we do?

  1. Configured the CDN to keep all our pages forever - never expire.
  2. Modified the URL structure of our APIs (mobile & web) to a pattern that is purgeable. For example, our CDN couldn’t purge based on query string parameters, so we had to move to a RESTful structure: /feed?team=arsenal became /feed/arsenal.
  3. Added “expirators” to our different models (see the sketch after this list). Whenever a post is saved we expire its URL and the URLs of the feeds that should contain it. For example, updating a post about a game between Arsenal and Barca will expire the URL of the post itself, Arsenal’s feed, Barca’s feed, the Premier League feed and the La Liga feed (both on mobile and web).
  4. Before sending a push notification about a post, the post is automatically preloaded to the CDN. At those times we can get up to 100k requests a minute to the website, and none of them reaches our servers.
  5. Added an application-level full-page caching layer with Memcached, after realising that the CDN is made up of many independent servers which will all hit our application server if they don’t have the cached version, creating real load.
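A rough sketch of what such an expirator can look like in a Rails model - the URL helpers are illustrative, and CdnPurger is the hypothetical purge client from the earlier sketch, not our actual code:

class Post < ActiveRecord::Base
  belongs_to :team

  after_save :expire_cached_pages

  private

  # Purge the post's own URL plus every feed that may contain it,
  # both on the CDN and in the application-level Memcached layer.
  def expire_cached_pages
    urls = [post_url] + feed_urls
    CdnPurger.purge(urls)
    urls.each { |url| Rails.cache.delete(url) }
  end

  def post_url
    "http://www.ftbpro.com/posts/#{id}"
  end

  def feed_urls
    [team, team.league].map { |subject| "http://www.ftbpro.com/feed/#{subject.slug}" }
  end
end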

What happened?

  1. The user experience became much better, because all requests are served from light & fast CDN servers that are geographically close to the user.
  2. We use exactly the same server resources for the 100m pageviews we have today as we used for the 5m pageviews we had 7 months ago.
  3. We chose a CDN that fits our needs. We pay them a small fraction of what we paid our former CDN, with a x20 increase in load.

That’s a good opportunity to praise Edgecast, the CDN we use. They have exceeded our expectations on every parameter:

  1. Amazing quality of service. They respond to emails fast, they are available on the phone, and they stay with you and give service for as long as it takes.
  2. Technology. Their user interface is a bit sluggish, but it allows us to really go crazy and set different configuration rules based on our wild URL structure. And they purge fast - from a few seconds to 1-2 minutes per purge.
  3. Great price. That wasn’t the main criterion in choosing a CDN, but it happened to be very affordable nonetheless.

by Dor Kalev, CTO @ FTBpro

image

Ruby 2.1 - Our Experience

We’ve recently moved FTBpro’s Ruby on Rails servers to the newest Ruby version on earth - Ruby 2.1. It has been running on our production servers for the past two weeks. Our stack includes MySQL, MongoDB, Rails 3.2, ElasticSearch, Memcached and Redis. We wanted to share our experience of making this change.

Incompatibilities

1. The first thing you encounter when you move to Ruby 2.1 is a non-working net/http module. As explained here, the Ruby 2.x Net::HTTP library asks for gzipped content by default but does not decode it by default, which breaks JSON parsing of HTTP responses. This breaks the koala gem, the right_aws gem and many other gems that rely on JSON-over-HTTP communication to operate. The solution is a small patch to the net/http library. We put it in config/initializers/a_net_http.rb so that Rails loads it upon boot.
The patch:

require 'net/http'

module HTTPResponseDecodeContentOverride
  def initialize(h, c, m)
    super(h, c, m)
    # Force transparent decoding of gzipped response bodies.
    @decode_content = true
  end

  def body
    res = super
    # Keep Content-Length in sync with the decoded body.
    if self['content-length'] && res && res.respond_to?(:bytesize)
      self['content-length'] = res.bytesize
    end
    res
  end
end

module Net
  class HTTPResponse
    prepend HTTPResponseDecodeContentOverride
  end
end

UPDATE: Looks like this is a specific bug with right_http_connection which monkey-patches Ruby’s net/http and breaks it. You can read more about it in this thread.

2. All the MongoDB users in the room, pay attention: the current stable version (0.12.0) of the mongomapper gem does not support Ruby 2.x. We upgraded our gem version to 0.13.0beta2, and in combination with the net/http patch from bullet 1 it works like a charm.

3. If you are a fan of the debugger gem you’ll have to say farewell. It does not support Ruby 2.x in any manner and causes nasty segmentation faults with long outputs. The good news is that there is a very good replacement: the byebug gem. Its interface is almost identical to that of the debugger gem, so you’ll feel right at home, and it works well with Ruby 2.x.

4. If you’re using the imagesize gem to determine the height/width of images, you’ll have to find a replacement. We already had RMagick in our gemset, which can retrieve image dimensions, so we just used that.

5. We had a weird bug with the BigDecimal library in ruby 2.1. Here is the output of the exact same code under ruby1.9.3 and ruby 2.1:

#ruby 1.9.3
require 'bigdecimal' ; require 'bigdecimal/util'; (0.5.to_d / 0.99.to_d).to_f # => 0.505050505 
 
#ruby 2.1.0
require 'bigdecimal' ; require 'bigdecimal/util'; (0.5.to_d / 0.99.to_d).to_f # => 0.0

We don’t know how to explain this, but we’re lucky to have a test suite for this module - otherwise we’d never have discovered it until it reached production.

UPDATE: We weren’t aware of it, but apparently BigDecimal division is a known bug in Ruby 2.1. You should check out this list for more info.
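If you hit the same issue before a fixed Ruby ships, one workaround worth trying (a sketch - we are not claiming it covers every case) is passing an explicit precision to BigDecimal#div instead of relying on the default precision of the / operator:

#ruby 2.1.0
require 'bigdecimal' ; require 'bigdecimal/util'
# Explicit precision (10 significant digits) may sidestep the buggy default-precision path.
(0.5.to_d.div(0.99.to_d, 10)).to_f # => roughly 0.505050505, not 0.0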

This concludes the changes we had to make to our code so it runs well under Ruby 2.1. Not much, but is it worth the hassle?

The Effects

We observed three very prominent improvements in Ruby 2.1 over 1.9.3:

1. Load times are significantly lower. And by “significantly” I mean about a quarter of the time. The larger your environment, the larger the difference in load time. We were nothing less than amazed by this:

* Deployment time dropped from 14 minutes to approximately 5 minutes. This is due to the many rake tasks we run while deploying. We make about 15 deployments to our QA servers daily. That’s 135 minutes, a little more than two hours saved per day for developers waiting for their version to arrive on the QA server.

* Build time on our Jenkins CI server was reduced from 14 minutes to about 6 minutes. This has shortened the time from opening a pull request to a success/failure notice and made the feedback loop a little more bearable.

* Every run of a binary that requires the Rails environment to be loaded now takes a quarter of the time. These are the measurements I made on our environment:

Ruby 1.9.3: bundle exec rails runner 'puts "a"' 41.06s user 2.23s system 98% cpu 43.916 total

Ruby 2.1.0: bundle exec rails runner 'puts "a"' 11.07s user 2.04s system 94% cpu 13.823 total

It saves a lot of waiting time for our developers when running rails server / console and various rake tasks.

2. Garbage collection times dropped from 100ms to almost 0ms. This is our New Relic graph for garbage collection; the vertical line marks the deploy which moved us to Ruby 2.1:

[image: GC times - Ruby 1.9.3 vs. Ruby 2.1]

3. We had a severe problem with deployments during high-traffic hours - we would just go down from time to time while unicorn workers were restarting. Ruby 2.1 mitigates this problem amazingly well: environment load time is now a quarter of what it was, and Ruby 2.1’s GC is copy-on-write friendly, which lets unicorn handle forking much better. You should definitely read this article, which explains how Ruby 2.x affects Unicorn’s forking mechanics.
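For reference, this is the shape of a typical unicorn.rb that benefits from the copy-on-write friendly GC - a generic sketch, not our exact configuration:

# config/unicorn.rb (sketch)
worker_processes 8

# Load the Rails app once in the master process; with Ruby 2.x's
# copy-on-write friendly GC, forked workers share most of that memory.
preload_app true

before_fork do |server, worker|
  # Connections opened in the master must not be shared with workers.
  ActiveRecord::Base.connection.disconnect! if defined?(ActiveRecord::Base)
end

after_fork do |server, worker|
  ActiveRecord::Base.establish_connection if defined?(ActiveRecord::Base)
end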

Weird Stuff

The only thing we can’t explain yet since moving to Ruby 2.1 is some strange unicorn master process behaviour: when we restart it, it always starts off with a different amount of memory. As a result, every unicorn restart causes the mean response time to shift by about 100ms. Here are two graphs where the vertical line represents a unicorn restart:

[image: unicorn restart - response time bumps up]
[image: unicorn restart - response time bumps down]

This is the only thing we feel uncomfortable about since moving to Ruby 2.1.

You Should Also

Ruby 2.1 has some killer advantages over Ruby 1.9.3. It will make your daily operations a lot faster than they are today, and can even help you overcome or mitigate other infrastructure problems you’re having. The changes we had to make to move to Ruby 2.1 are really minor compared to the benefits we got from it. There is no reason to stay behind - take a step forward to Ruby 2.1.

by Erez Rabih, Head of infrastructure @ FTBpro

From Illustrator to a web font. Creating custom scalable web-icons.

State of affairs: we want to switch from cut-up .PNG sprites to an icon font. Trying to achieve that with minimum effort and, if possible, no additional software, we searched for a web app to help us.

 

Research:

There is a ton of software and free or demo apps that can help you add some nice pre-designed vector icons to your project (like fontawesome). However, being a perfectionist, using pre-designed icons did not satisfy me. What I wanted was a way to translate the exact vector icons I developed in Illustrator and Photoshop into a web font, and that seemed impossible without expensive font-editing software.

image

 

Solution:

Then we found the icomoon app. This baby can take custom .SVG files, combine them with pre-designed icons from several open-source libraries and export them as a custom web font. The session can then be stored and downloaded as a .JSON file, making it possible to edit in the future. All free, no strings attached. This is perfect!

(Note, the font-export button is at the bottom.)

image

 

This is how we do it now:

Now I open a sheet in Illustrator, export every icon to a separate .SVG file and import them into icomoon.

image

Front-end developers call every icon by its code like so:

<a class="prev ficon icon-arrow-left"></a>

And voilà, now we have scalable custom icons all over the website, fitting mobile, tablet and desktop. This saves us a lot of development work, and cutting PNGs is now history.

Check them all out here: www.ftbpro.com

— Mark Levinson

Designer at ftbpro.com

Push Notifications Explained

There are two types of users at FTBpro: writers & readers.

Our readers want quality content about their favorite team & league.

When we have content that might interest our readers we don’t want them to miss it.

The writers, on the other hand, would like to get their content read by as large an audience as possible.

Fulfilling both of these needs is the essence of FTBpro, and one effective way we found to accomplish that is through mobile push notifications (PN).

Recently we started a project to rebuild our infrastructure for sending PN; the rest of this post is about this new system.

image

First things first: what exactly do we need?

We have 21 apps on the App Store and 21 more on Google Play; whenever we send a PN it should reach all of them.

Every mobile user is a fan of one team from several leagues we support. 

In addition each user can choose the language in which to consume the content on the app.

Now, it’s not our intention to spam our mobile users with PN they may not like. To prevent that, users should only get PN with content about their favorite team, written in a language they can read.

We need the ability to send both immediate PN, mainly for breaking news, and scheduled PN.

Another key requirement is the ability to customize the message & scheduled time of a PN on a per-team & per-league basis, so if we had a post about a loss by Real Madrid to Barca, we would like to send a different message to Real fans than to Barca’s.

How do push notifications work anyway?

Generally speaking, each mobile platform has its own way of sending PN.

For iOS devices it’s the Apple Push Notification service (APNs); for Android devices it’s Google Cloud Messaging (GCM). When you want to send a PN you have to talk to these services, and they in turn deliver the PN to the mobile devices.

Both APNs and GCM have their own protocols for sending PN, and interfacing directly with these services can be quite tedious. We use Urban Airship.

Urban Airship (UA) is a service that provides us with a convenient way to manage PN for both iOS and Android.

Every app, on either platform, maps to one logical app on UA; we have 21 of those. Using UA, the task of sending a PN is reduced to making HTTP POST requests to their API.

One very useful feature of UA is tags. Tags are just labels that can be associated with any device, and the cool thing about them is that you can tell UA to send a PN to all devices associated with one or more tags.

How do we use it? When a user opens one of our apps, the device is registered with UA with two tags representing his favorite team & league, as well as his language. For example, a Chelsea fan reading in English will be registered with the ‘team_4_en’ and ‘league_1_en’ tags. Having the tags set up this way allows us to tell UA to send a PN only to fans of Barca in Spanish, for instance.

UA’s API provides us with two endpoints for sending PN: ‘/push’ & ‘/schedules’.

Making a POST request to ‘/push’ results in an immediately sent PN; we use this endpoint for PN that need to go out ‘now’.

POSTing to ‘/schedules’ schedules a PN to be sent at a later time; this feature saved us from implementing a scheduling solution of our own.
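For illustration, an immediate push to a single tag boils down to one request like this - a sketch assuming UA’s v3 REST API, with a made-up tag and message:

require 'net/http'
require 'json'
require 'uri'

uri = URI('https://go.urbanairship.com/api/push')

request = Net::HTTP::Post.new(uri.request_uri)
request.basic_auth(ENV['UA_APP_KEY'], ENV['UA_MASTER_SECRET'])
request['Content-Type'] = 'application/json'
request['Accept'] = 'application/vnd.urbanairship+json; version=3'
request.body = {
  audience:     { tag: 'team_4_en' },                 # Chelsea fans, English
  notification: { alert: 'Chelsea win the derby!' },  # the PN message
  device_types: 'all'
}.to_json

Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(request) }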

Ain’t nobody got time for that

Sending a PN to all our apps involves lots of HTTP requests. Making these requests takes time that our web servers don’t have, so instead we make them in the background using Sidekiq.

Sidekiq is a background job processing framework for Ruby. It uses threads for its workers, which gives it an advantage over other frameworks, such as Resque, that use one process per worker.

When sending a PN, the web process enqueues one Sidekiq job for each of the 21 apps; then, on a dedicated server, 21 Sidekiq worker threads process the jobs. Each job makes the appropriate requests to the UA API for one app and then updates the status for that app.
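A simplified sketch of such a worker - the class, queue and helper names are illustrative rather than our production code:

# Handles the push for a single app; the web process enqueues 21 of these.
class PushNotificationWorker
  include Sidekiq::Worker
  sidekiq_options queue: :push_notifications

  def perform(app_name, post_id)
    notification = PushNotification.where(app: app_name, post_id: post_id).first
    notification.targets.each do |target|
      UrbanAirshipClient.push(app_name, target)  # POST to '/push' or '/schedules'
      target.update_status('sent')
    end
  end
end

# Enqueueing from the web process:
APP_NAMES.each { |app_name| PushNotificationWorker.perform_async(app_name, post.id) }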

The effect of this setup is that when we send a PN, it arrives at all our apps at (almost) the same time.

image

Persistence

Shai Kerer had already stated that we strive to use the right tool for the job whenever possible.

We store all the PN-related data in one MongoDB collection. Each document contains the canonical data for a PN to one app, with embedded documents that include team- or league-specific data. An example of such a document:

{
  "app": "aston_villa",
  "post_id": 617803,
  "targets":[
    {
      "message": "Transfer Talk: Tottenham Set to Battle for FC Porto Midfielder",
      "locale": "en",
      "scheduled_time": null,
      "team_id": 17,
      "status": "sent"
    },
    {
      "message": "Transfer Talk: Aston Villa Set to Battle for FC Porto Midfielder",
      "locale": "en",
      "scheduled_time": null,
      "team_id": 2,
      "status": "sent"
    }
  ]
}

Using MongoDB allowed us to store the data as we perceive it and not be penalized by expensive joins.

Conclusion

In the few months the new system has been live, it has been working smoothly.

Using Urban Airship saved us both time and effort. It allowed us to focus on our specific needs instead of implementing GCM & APNs protocols for sending PN.

We have also introduced MongoDB to our ecosystem, and it will be utilized in future projects.

So overall, you could say it was a good project :)

Gashaw Mola, Web Developer @ FTBpro.com

Count von Count - A real-time counting database!

FTBpro is all about user generated content. Our articles are written by Football fans around the world. Their incentive for writing over and over again is the exposure they know they will receive. They are motivated by the number of reads, comments, likes, tweets or shares their articles will receive. For this reason, these, and many other counters, are very prominent across our site and mobile apps.

image

Along with that, we started working on a new gamification project. The requirement here is that for each action a user takes on our site or mobile app (e.g. reading an article, writing an article that gets featured, sharing on a social network) they get a score.

This compels us to count many different actions for each user on the site and calculate the score - live.

image

If you have read one of our previous posts, you probably know that we have been dealing with the counting issue for quite a long time.

In the early days of our startup, we used to store the numbers in a MySQL database. This meant the number of reads of an article was stored in the articles table. As our scale grew and the load on the database increased, we moved to another solution: extracting the counting to a dedicated Nginx server.

In this solution, every time an article was read we issued a request to our counting server with the relevant parameters. The Nginx server logged all the requests to its access.log file, and we had a script running every minute that aggregated the numbers from the recent requests. After the aggregation, the script updated our main app server with the numbers, and they were saved to the same MySQL database.

This is no longer good enough.

Let’s examine it from the end. This counting system stored the numbers in a MySQL database. A relational database may be good enough for the simple case of counting reads of an article, but how can we store a leaderboard? Or all the countries the readers of each article come from? Of course there are some tricks that can help you do it, but it’s much easier to store this kind of data in a NoSQL manner.

We would also like the information to be available live. We don’t want to count (boy, this word appears a lot in this article) on background processes to massage our data into the relevant format.

To sum up, we need a live counting system based on some kind of NoSQL database. It also has to be scalable (more than hundreds of requests per second) and reliable.

That’s why we developed Count von Count

It is based on OpenResty, an Nginx-based web platform bundled with some useful third-party modules. OpenResty turns an Nginx web server into a powerful web app server using scripts written in the Lua programming language. It keeps the advantage of non-blocking I/O, but it also has the ability to talk to remote backends such as MySQL, Memcached and Redis. We are using Redis as our database for this project, leveraging the following features (a short code sketch follows the list):

  • The EVAL command evaluates a Lua script in the context of the Redis server. Lua? Again with Lua? Yep, this magical language is supported both by Nginx and by Redis (it is also the language for writing addons for World of Warcraft). It allows us to write all the counter logic in a Lua script, which is preloaded into Redis and evaluated from OpenResty’s Redis module.
  • The Sorted Set datatype is great for leaderboard data modelling. We use it extensively for storing any kind of leaders data, such as top writers and most-read articles. We have different keys for daily, weekly and monthly leaderboards, and each read action updates all of them.
  • The Bitmap datatype helps us count real-time metrics in a space-efficient way. We use it to count the number of daily active users on our mobile applications. Here you can read more about using it.
  • TTLs help us clean the database of irrelevant objects.
  • Pipelined requests speed up the whole thing.
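To make this concrete, here is roughly what a single read event does to the Redis data model, written with the redis-rb client for readability - in production the same updates run inside the preloaded Lua script, and the key names here are illustrative:

require 'redis'

redis = Redis.new

# One read event against the Redis data model. In production all of these
# updates run inside a single preloaded Lua script (EVAL), costing one round trip.
def count_read(redis, post_id, writer_id, team_slug)
  today = Time.now.strftime('%Y%m%d')

  # Plain counters on the post / writer / team hashes.
  redis.hincrby("post_#{post_id}",   'reads', 1)
  redis.hincrby("user_#{writer_id}", 'reads', 1)
  redis.hincrby("team_#{team_slug}", 'reads', 1)

  # Sorted sets back the daily leaderboards (weekly and monthly keys work the same way).
  redis.zincrby("top_writers_#{today}",     1, writer_id)
  redis.zincrby("most_read_posts_#{today}", 1, post_id)

  # A bitmap tracks daily active users in a space-efficient way.
  redis.setbit("daily_active_#{today}", writer_id, 1)

  # Daily keys clean themselves up after a week.
  redis.expire("top_writers_#{today}", 7 * 24 * 60 * 60)
end

count_read(redis, 900, 700, 'arsenal')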

Putting it all together

image

  1. On a separate server from the app server we have an OpenResty service up and running, waiting for counting requests. We make requests to this server both from our client side and from the app server, each time we want to +1 or -1 a counter. Based on Nginx’s empty_gif module, we return an empty pixel for each request. Each request holds the action we want to count and extra relevant params. For example, when a user shares a post, the following request is made: http://<counting_server>/post_share?user=700&post=900&author=15&team=arsenal. Since the server returns a gif, the request can be invoked using an <img src=…> HTML element.
  2. When Nginx receives the request, it triggers a very minimal Lua script via the Lua module. The script just parses the request arguments and evaluates a Lua script that was preloaded into Redis. The request is also logged to Nginx’s access.log. All the Redis updates are made inside the Redis-side script, to save connection overhead.
  3. The Lua Redis script is a bit more complex and is responsible for updating all the relevant keys for the given action. For instance, if we take the previously mentioned post_share action, we need to update the number_of_shares field in the following hashes: user_700, post_900 and team_arsenal.
  4. For cases of unexpected failures or downtime, we developed a log player that “plays” the access.log files and updates the relevant Redis data models (sketched below).
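The log player itself can stay very small. A sketch, assuming the default combined log format and the request structure shown above (the host name and action list are illustrative):

require 'net/http'
require 'uri'

# Replay counting requests from an Nginx access.log against the counting
# server, which updates Redis exactly as it would for a live request.
COUNTING_SERVER = 'http://counting.example.com'

File.foreach('access.log') do |line|
  # In the combined log format the request appears as "GET /path HTTP/1.1".
  next unless line =~ %r{"GET (/\S+) HTTP}
  path = Regexp.last_match(1)
  next unless path.start_with?('/post_share', '/post_read')

  Net::HTTP.get(URI(COUNTING_SERVER + path))
end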

Using the data

Count von Count offers an API for retrieving the data. Since we need to show live counters across our site, we wrote a JavaScript module that collects all the counters on a page, queries our counting server’s API for the numbers and updates all the counters on the page. This way we always show live numbers, as you can see on our user pages and post pages.

Open Source Project

We’ve put a lot of effort into making this project open source. What is counted, and how, is configured in a JSON file, making it extremely easy to embed this project wherever you need it. Not a single line of code is needed! Check it out at https://github.com/FTBpro/count-von-count

We have been using count-von-count in production for several months and we are really satisfied with it. It receives millions of requests per day and thousands of requests per minute at peak times. We use it wherever counting is needed, e.g. the Player of the Month widget, the top writers leaderboard and writers’ profile pages.

I wish to give big kudos to the maintainer of the OpenResty project, Yichun Zhang. Yichun is also the administrator of the OpenResty Google Group, where you can get lots of information about this powerful project.

To learn more about this project, you can watch this short video from DevconTLV conference.

Happy counting,

Posted by

Ron Schwartz, Software Developer @ FTBpro.com


How to manage a multilingual webfont

State of affairs: we have a multilingual web project. We needed a freeware web font, free or cheap software for font editing, and a way to do it all from our office, without any third parties.

We started off FTBpro.com with a cool web font in mind - Days. The font had the most potential, and our website’s design relied on it heavily. The font was never tested for language compatibility until front-end development was ready, and this is what gave us the most trouble.

Here is the Days font:

image

 

Problem One: Days does not have European characters such as À ß Æ Ñ etc.

Solution: I found a freeware program called Type light that has basic editing capabilities and added the missing characters one by one. This is an ongoing process and I still add a letter here and there.

image

Type light has very limited editing capabilities (version 3.2), but I managed to sketch the needed serifs and combine different letters to make the missing ones. I made a Photoshop test sheet to check that the letters render:

image

Example: combining existing letters to get the needed Æ character. Far from perfect, but good enough for small titles:

image

Example: extending the character 3 into a German sharp s (ß):

image

 

Problem Two: missing characters change the font of a whole word.

When the Days font was rendered in a sentence with a missing character, the whole word would render in a default font (Times New Roman), which looked something like this:

image

Solution: define a (standard) fallback font similar to the title font. In our case Helvetica and Arial Bold did the job.

font-family: "OpenSansCondensedBold", "Helvetica-Bold", Arial, sans-serif;

 

Problem Three: the font has the characters in its OTF file, but they do not render well when put on the web.

To be exact, this is a problem with conversion. The Fontsquirrel web-font generator has many options for converting a font into a web-font kit, and some of them remove characters like the German sharp s (ß). Another problem we had was with kerning (the space between individual letters).

image

Solution: it took us quite a while to get the web font to render well. Our own Alon Idelson went out on a hunt and found this online web font converter: http://www.fontconverter.org/ which did the job well. The Fontsquirrel web-font generator did not do a good job, in case you wondered.

 

Conclusion:

Web fonts are definitely the future of web typography, and there is already a ton to pick from. While using a freeware font is seductive, the consequences can be difficult to handle. Make sure you know what languages you are going to deal with in your project in the future, inspect the glyphs (the character map), and make sure the font you are using has them.

Posted by:
Mark Levinson, Designer @ FTBpro.com

Be Proud Of Your Commits

At FTBpro.com, we have a nice procedure for merging new code into our master branch. We use GitHub’s neat Pull Request (PR) feature to gain two big advantages:

  1. It is easy to see the commits and the files changed in the branch compared to master. It is a very good tool for code review.
  2. Our Jenkins CI server automatically builds the merged branch to verify all the specs pass, and marks the PR with success/failure accordingly.

I want to expand on the first point a little bit.

Going through code review is not always an easy experience: it may have taken the programmer days or weeks to produce the pull request. A lot of effort was put into meeting a deadline, maybe even free hours spent finishing the job, and now his or her PR is not approved due to lacking code quality. So what should a developer do before submitting a PR to be reviewed by peers?

Look at the diff and ask yourself one very simple yet powerful question:

Am I proud of the code I delivered?

If I were looking at this diff, would I think it is excellent code? Once this question is answered positively, you know you did your best to submit high-quality code. It may still get some rejections or corrections from peers, but that’s only natural - the more eyes, the better your code gets.

Being proud of your code is very subjective, of course, and changes from person to person. How can you really know that the code you’re proud of will be appreciated the same way from someone else’s point of view? You can’t, but you should adopt a set of rules which makes you feel good about the code you write, and try to apply them every time you submit new code.

I’m going to present my set of rules. Some of them may fit you, others may not; with some you may agree and with some you may disagree. This is what makes me feel good about the code I write.

Working Code Is Nothing To Be Proud Of

I think the strongest thought that sits in the back of my mind while writing code is this one. Once you change your state of mind from writing code that works to writing excellent code that works, you can never go back. You always look for where you can improve it so that it stands out as an excellent piece of code.

Think about it this way: every 15-year-old boy can probably submit code that does exactly what you are trying to do. So what makes you, a mature and experienced developer, better? What knowledge do you have that this boy does not, and how is it applied in your code? What makes your code above average?

Be honest with yourself when answering these hard questions, and make sure you have an answer to at least one of them when submitting the code.

Become A Writer

Grady Booch, author of Object-Oriented Analysis and Design with Applications, once said something very powerful: “Clean code reads like well-written prose”.

All modern, high-level programming languages strive to let the developer write code that reads like well-written prose. Of all the languages I’m familiar with, Ruby wins the prize in this category. You should always strive to write code that reads like plain English. Hide the implementation details as much as you can. Use meaningful yet simple names for your classes, functions and variables. Before submitting a PR, go over the code you’ve written and make sure it reads like well-written prose.

Believe me, when you go over your own code and it reads like plain English, the satisfaction will keep you going this way forever.

Test Your Code

Tested code has obvious advantages. Once the test suite is in place, refactoring is a piece of cake. Extending the module or class is easier, since the developer can be sure nothing old is broken. It is even a good documentation tool, since reading the test suite should convey the purpose of each public/interface method you present.

Apart from these obvious advantages there’s another one that lies beneath the surface: writing tests is a process in which you, the one who creates the system, take the role of the client who uses it. Seeing your code from the client’s perspective is a whole different matter. You get insights that you couldn’t get any other way. Suddenly, instead of thinking “what would be the easiest way to implement this feature”, you start thinking “what would be the easiest way for the client to use this system”, and that makes you improve your interfaces, method naming and code modularity.

Be sure to test your code before submitting it. It will definitely make it better.

Refactor When Possible

Often we alter or extend a module that was not written by us. Most of the time we think to ourselves: “If only I were the one writing this code, it would be 10 times better”, but timetables and deadlines make us give up on refactoring it. I know that an overall refactoring process can take days or even weeks for large modules, and there isn’t always time for it. But even in large modules like these, small, precise refactorings can make a difference: you can change a method or variable name to a more meaningful one, eliminate small code duplications, extract a few lines of code into a well-named method, or even add one break statement that saves iterating over a whole collection when the object we wanted has already been found. None of these requires much time, and they affect only small portions of the code.

Always leave cleaner code behind you.

These are the things that make me proud of my commits. When I apply them, I feel like I have contributed to a better, more maintainable code base.
The feeling of being proud of your commits is an addictive one, and that’s a great thing: once you’re there, you can never go back.

Now go, make yourself proud :)

Like this post? You should definitely check out this one.

Posted by Erez Rabih, Backend Developer @ FTBpro.com

Painting by Nila Ward

Don’t do_something and return!

I never understood why I dislike the Rails approach of do_something and return. I have always written return do_something - that’s how I’ve been developing Rails since 2005, and I only started seeing the do_something and return approach in the last year.

"That’s the Rails way!" they said, "eveybody’s doing it!" and actually it does look nice and more or less like proper english.

Lately we’ve seen a bug here at FTBpro.com that helps me justify my dislike.

We had this code in place:

def show
  set_obj_by_id
  show_rss and return if rss?
  render_via_phantom(cache_key: MemcachedKey.for(obj, locale)) and return
end

and for some RSS feeds we saw render_via_phantom errors. Refactoring to this style helped:

def show
  set_obj_by_id
  if rss?
    show_rss and return
  else
    render_via_phantom(cache_key: MemcachedKey.for(obj, locale)) and return
  end
end

How could that be? Isn’t it just the same logic, styled differently? Well, of course not. How does “and” work?

Let’s look at the method koko:

def koko
  nil and return
  return true
end

It will always return true, because “and” never evaluates its second operand if the first one is nil/false.

We went to look at what happens in our show_rss method and, voilà, we have the same pattern there:

def show_rss
  @posts = PostsRssFeed.for(obj, locale, params)
  @show_full_text = params[:text] == "full"
  @rss_title = obj.name
  render 'singlepage/shared/league_team' and return
end

Either Rails’ render method returns a falsy value and the return after it isn’t evaluated, or that return simply returns nil - either way show_rss returns nil, so the show method evaluates “nil and return”, which never triggers the return, and execution moves on to the next line.

#FAIL

So one could suggest using the do_something and return pattern only when you’re perfectly sure do_something returns a value that doesn’t evaluate to false - but that’s an assumption that can’t be verified by reading the code, and thus it should not be relied upon.

If you have to use return, use “return do_something” - but it’s usually (not always) better to use a simple if condition, as we demonstrated above.

Posted by:

Dor Kalev, CTO @ FTBpro