Burrst.com: A glimpse under the bonnet

Earlier this month, a web developer friend I greatly respect, Anthony Blackshaw (Getme Ltd), released a brand new way to publish and share short pieces of fiction, written by anyone who wants to try their hand at writing (or already is an author!) and is seeking constructive feedback.

Content quality is ensured by an approval-based registration process: each registrant's first piece of fiction is reviewed by a moderator before the 'publisher' account type is granted.

Along the same lines as Dribbble.com, Burrst will maintain high-quality content by selecting only the worthwhile. Looking at the success of Dribbble, I'm sure this approach will have positive effects on the website.

Being a web developer, and therefore very curious about what powers the site, I have collected a few details from Anthony that I will share with you now.

So, what is Burrst.com built on?

Burrst is built on Tornado, a lightweight Python framework with the unusual advantage of also being an asynchronous web server (great for live notifications, which we'll discuss further down).
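To give a feel for how little ceremony Tornado needs, here is a minimal "hello world" application in the spirit of (but not taken from) the Burrst codebase: one handler class, one route, and the non-blocking IOLoop serving requests.

```python
import tornado.ioloop
import tornado.web


class HomeHandler(tornado.web.RequestHandler):
    # Each route maps to a RequestHandler subclass; get() answers GET requests.
    def get(self):
        self.write("Hello from a Tornado handler")


# The routing table: a list of (URL pattern, handler) pairs.
application = tornado.web.Application([
    (r"/", HomeHandler),
])

if __name__ == "__main__":
    application.listen(8888)                    # non-blocking bind
    tornado.ioloop.IOLoop.current().start()     # single-threaded event loop
```

Because the server is an event loop rather than a thread-per-request model, a handler can hold a connection open cheaply, which is what makes features like live notifications practical.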

But why Tornado and not Django for example?

Well, Tornado is lightning fast (hence the name) and Ant likes his applications to be fast (we all do, really). Being lightweight and well written, the results are staggering: under 300ms per page load and under 500ms for a fully loaded home page; that's barely enough time for the loading spinner to appear in your browser window.

Speed is such an important factor in an application's success: users tend to get frustrated after 1s and will probably leave at 4s. The whole user experience starts with speed and, more generally, responsiveness.

Template engine

Tornado is equipped with a lightweight template processor which works by inheritance (in the same way as Django's does) but doesn't allow includes. You can run raw Python in a template, but that may not be best practice: keeping logic away from the view leads to leaner code.
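The inheritance model can be sketched with Tornado's own `tornado.template` module. The two templates below are hypothetical examples of mine, not Burrst's: a base layout defines a named block, and a child template extends it and overrides that block.

```python
from tornado.template import DictLoader

# DictLoader keeps templates in a dict, handy for demos and tests;
# a real app would use Loader pointed at a template directory.
loader = DictLoader({
    "base.html": "<title>{% block title %}Untitled{% end %}</title>",
    "story.html": '{% extends "base.html" %}{% block title %}My Burrst{% end %}',
})

# The child inherits base.html's structure and replaces the title block.
rendered = loader.load("story.html").generate()
```

`generate()` returns the rendered page as bytes, ready to be written to the response.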

Form management

This is where things start to get tight. The built-in form system is limited and is likely to be insufficient for building public apps, where a high level of validation, security and a variety of fields are required. So Burrst has turned its eyes towards WTForms, a much more complete package that plugs easily into Tornado, providing it with the necessary form tools and considerably reducing the amount of development work.

Database handling

Tornado comes with a MySQL database handler, but speed is not its forte. So it has been a wise decision to turn to Storm, a very fast database handler. Storm has the advantage of bundling up a series of queries before applying them, increasing efficiency, and it also provides query-result caching to accelerate recurring queries.
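Storm's API is its own, but the batching idea — queue up writes and flush them to the database in one go rather than one round trip at a time — can be sketched with nothing more than the standard library's sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE burrsts (title TEXT, words INTEGER)")

# Collect pending rows instead of issuing one INSERT per call...
pending = [
    ("First light", 820),
    ("Night train", 1432),
    ("The harbour", 655),
]

# ...then apply them in a single batch inside one transaction,
# far cheaper than three separate statements and commits.
with conn:
    conn.executemany("INSERT INTO burrsts VALUES (?, ?)", pending)

total = conn.execute("SELECT COUNT(*) FROM burrsts").fetchone()[0]
```

The same principle is why an ORM that defers and batches its work can beat a naive handler that hits the database on every call.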

One downside of Storm is that its developers haven't completed the documentation, which means you will have to dig into the source to really get the best out of it.


Search engine

One thing that Tornado does not cover is a search engine. So Whoosh has been brought in, for its ease of use and fast searching.

Just like creating a model in Django, Whoosh allows the creation of search-index schemas where one can specify which attributes to feed into the index, so they can later be queried against.

Index generation is not automatic, so a recurring job must be in place to regularly update the index with new content. Alternatively, it could be updated when a user saves new content, but that may add an extra delay to page processing, unless you use a Python queuing process so it won't affect the immediate page load.

Overall it may also cost extra compute cycles, which can be optimised by running a single global update during quiet times. I recently integrated Mixpanel for application event tracking, and they strongly advise using a Python queuing process to send tracking calls without delaying the page load.
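The queuing idea is simple enough to sketch with the standard library: hand slow jobs (index updates, tracking calls) to a background thread via a queue, so the request handler only pays the cost of an enqueue. This is a generic sketch of the pattern, not Burrst's actual worker.

```python
import queue
import threading

jobs = queue.Queue()
results = []


def worker():
    # Drain background jobs so the request/response cycle never waits on them.
    while True:
        job = jobs.get()
        if job is None:          # sentinel: stop the worker
            break
        job()


t = threading.Thread(target=worker)
t.start()

# Inside a request handler, enqueueing is near-instant:
jobs.put(lambda: results.append("search index updated"))
jobs.put(lambda: results.append("tracking call sent"))

jobs.put(None)                   # in a real app the worker runs forever
t.join()
```

The page renders as soon as the jobs are queued; the actual work happens whenever the worker gets to it.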

Cloud server

Burrst is running on the very famous Amazon EC2 platform, using a micro instance (yes, micro!) to serve an average of 35,000 pages a month without a problem, thanks to the efficiency of the application. The instance offers 613MB of RAM and runs on a 64-bit core. All media files are hosted on Amazon S3, once again lightening the load on the micro instance.

From idea to product

The process of delivering Burrst has been surprisingly fast, fitting today's laws of velocity, where delivering small but fast, and adapting to user needs along the way, is key to success.

The project started on January 1st, 2012, born from Anthony's passion for writing fiction and the lack of an exclusive, quality-checked website on which to do so. It was time to offer a service that brings good prose into the spotlight rather than leaving it buried in a pile of mediocre work.

Only six months down the line, after regular, dedicated hard work, Burrst was ready to hit the wild at the end of June 2012.

And it's been an impressive start, with 131,283 words already burrsted and nearly 200 members within the first three months.

The project has been a success so far: first, I believe, for its concept, but undeniably propelled by the quality of the service delivered, in terms of content, usability and speed.

< / >