Friday, May 16, 2014

Default serializer in play framework for Json responses

In Play Framework (Scala), when you want to render JSON responses from a controller, you typically do something like:

def index = Action {
  Ok(Json.toJson(users))   // "users" is a placeholder for whatever data the action returns
}

Looks simple, but the problem is that the Json.toJson call repeated across all the controllers cannot be pulled into a common method, because of the different response status codes like Ok or Created.

A better and less verbose way would be to specify the concern of converting to JSON in one common place (like the application controller).

One way of achieving this is with Scala implicit methods, just by defining this in the application controller:

implicit def writeableOfAny(implicit codec: Codec): Writeable[Any] = Writeable(data =>
  Json.toJson(data).toString.getBytes, Some(ContentTypes.JSON))

By this, we are instructing Play to use this method as the implicit way to convert any object into a writeable body, which is sent back as the HTTP response.

With this, the action would simply be:

def index = Action {
  Ok(users)   // no explicit Json.toJson; the implicit Writeable handles the conversion
}

and the actions don't carry the logic of converting to JSON anymore!
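The mechanism at work here is ordinary Scala implicit resolution: the status helper (Ok, Created, etc.) asks for an implicit Writeable for the body type, and the catch-all one defined above satisfies it for anything. Here is a minimal, self-contained toy sketch of that resolution; the names Writeable, Result and Ok below are simplified stand-ins, not Play's real API:

```scala
// Toy model of Play's Writeable resolution; not actual Play code.
object WriteableDemo {
  // A Writeable knows how to turn a value of type A into a response body.
  case class Writeable[-A](transform: A => String)

  case class Result(body: String)

  // The "status helper": it requires an implicit Writeable for its argument type.
  def Ok[A](value: A)(implicit w: Writeable[A]): Result =
    Result(w.transform(value))

  // A catch-all Writeable, analogous to the Writeable[Any] in the post;
  // contravariance (-A) makes a single Writeable[Any] usable for every body type.
  implicit val anyWriteable: Writeable[Any] =
    Writeable(value => s"""{"value":"$value"}""")

  def main(args: Array[String]): Unit =
    println(Ok(42).body)   // prints {"value":"42"}
}
```

The key point is that controller actions stay oblivious to serialization; the compiler wires in the conversion wherever a Writeable is demanded.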

Tuesday, May 8, 2012

Precompiling handlebars templates with Rhino

Handlebars is an awesome templating engine; its templates are compiled and rendered to HTML by a JavaScript engine.

With handlebars, it is also possible to precompile your templates, which results in a smaller required runtime library and significant savings from not having to compile the templates in the browser. This can be especially important when working with mobile devices.

Handlebars provides a precompiler script that runs on node.js. But if for some reason you cannot have node.js on your development box, or if you already have Rhino as part of your stack, you don't have to introduce node.js just for this.

I came up with this port of the handlebars precompiler for Rhino. You can get it at

And its usage is: java -jar rhino-js.jar rhino-handlebars-compiler.js --handlebars &lt;path-to-handlebars.js&gt; --templates &lt;templates-dir&gt; --output &lt;output-file&gt;

And here is a sample ant task to trigger it as part of your build process:

<target name="precompile-templates">
  <java dir="${basedir}" jar="lib/rhino-js.jar" fork="true" failonerror="true">
    <arg value="web/js/lib/rhino-handlebars-compiler.js"/>
    <arg value="--handlebars"/>
    <arg value="web/js/third-party/handlebars.min.js"/>
    <arg value="--templates"/>
    <arg value="web/templates/"/>
    <arg value="--output"/>
    <arg value="web/js/compiled-templates.js"/>
  </java>
  <echo>Templates precompiled to web/js/compiled-templates.js</echo>
</target>

Friday, January 21, 2011

Testing your chef cookbook during development

While developing chef cookbooks, it is really useful to run them, test their actual behavior, and rectify mistakes until they match the expected behavior. The tests also need to be performed across the various platforms that your cookbook supports.

This can be done easily by running the cookbook with chef-solo against any environment which can be easily constructed and torn down. It is really important to bring the test environment back to its initial state before running the test again, to avoid unexpected behavior.

Amazon ec2 instances can be used for this purpose. You can launch an instance, run the cookbook, verify, and then terminate the instance, repeating until development is complete.

What I use instead is a set of local virtualbox images for the various platforms against which my recipe has to be tested. I maintain these images with proper snapshots: a snapshot is taken before running the script, in a clean state. After running the test, I just revert to the initial clean state, so that my next run is not affected by artifacts of previous test runs.

In addition to the cost savings, it is much faster to revert to a snapshot in vmware/virtualbox than to terminate and re-instantiate an ec2 instance. Snapshots can be taken in a live (running) state, so you can entirely cut down the boot time on every test run.
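The cycle described above can be sketched as a small script. The VM name, snapshot name, host alias, and cookbook paths below are assumptions for illustration, not a prescribed layout:

```shell
#!/bin/sh
# One test cycle against a local VirtualBox image (sketch).
VM="ubuntu-test"            # hypothetical VM name
SNAPSHOT="clean-state"      # snapshot taken before any test run

# Revert to the clean snapshot; if it was taken live, the VM resumes running
VBoxManage snapshot "$VM" restore "$SNAPSHOT"
VBoxManage startvm "$VM" --type headless

# Copy the cookbook over and converge with chef-solo
scp -r cookbooks solo.rb node.json user@ubuntu-test:/tmp/chef/
ssh user@ubuntu-test "sudo chef-solo -c /tmp/chef/solo.rb -j /tmp/chef/node.json"

# Verify the converged state, then power off;
# the next cycle starts again from the same clean snapshot
VBoxManage controlvm "$VM" poweroff
```

Since the snapshot restore wipes everything chef-solo did, each run starts from an identical baseline.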

Sunday, November 29, 2009

In-memory mysql database with the InnoDB engine

In a typical MVC-based application, should unit tests of models be allowed to hit the database? That's a different discussion, and I am not going to talk about it here. But if you allow tests of that kind to access the database (call them unit tests or whatever you are comfortable with), your test suite is probably going to take a lot of time, since it is now limited by the disk I/O speed of the database.

If you are using mysql as your database server, you can refer to this comparison chart to choose the engine which best fits your needs. mysql does provide in-memory engines, but their downside is that they do not support transactions and foreign keys. InnoDB supports both, but it operates on data only on disk, and its relative disk usage is higher.

So if you want a mysql engine which provides the features of InnoDB and also operates from RAM to make your tests faster, there is a trick: a part of RAM can be mounted as a disk and used as the data location of the mysql server. I got this idea from one of my co-workers, Gavri Fernandez, long back. This way, we simply zero down the disk I/O time, and all read/write operations are going to be super fast.

Obviously this can be done only on *nix-based operating systems.

These are the steps to use a ram disk to speed up your InnoDB mysql database.

1) Create a ram disk. This differs for each OS; these links provide information about creating a ram disk for Mac OS X and Ubuntu/Redhat.

2) Mount the ram disk at your preferred location. The data location of mysql on ubuntu is /var/lib/mysql; this is configured in the mysql server config file. Again, the location of the config file and the data dir can differ from OS to OS. Change the data dir location to the mount point you just created, and copy the contents of the original data dir to the mount point.

3) Usually the mysql server runs as its own user called mysql, and the file permissions of the data directory reflect that. So change the owner and group of the mount point to mysql:mysql.

4) Now start/restart the mysql server to make the new configuration effective.
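On Ubuntu, the four steps above can be sketched roughly as follows; the tmpfs size, mount point, and config file path are assumptions and should be adapted to your box:

```shell
#!/bin/sh
# Sketch of steps 1-4 on Ubuntu; run as root. Paths and sizes are assumptions.
/etc/init.d/mysql stop                              # don't move data under a running server

mkdir -p /mnt/mysql-ram
mount -t tmpfs -o size=512m tmpfs /mnt/mysql-ram    # step 1: RAM-backed mount

cp -a /var/lib/mysql/. /mnt/mysql-ram/              # step 2: copy the existing data dir
chown -R mysql:mysql /mnt/mysql-ram                 # step 3: fix ownership

# step 2 (config): point datadir at the mount in the mysql config file
sed -i 's|^datadir.*|datadir = /mnt/mysql-ram|' /etc/mysql/my.cnf

/etc/init.d/mysql start                             # step 4
```

Keep in mind tmpfs contents vanish on reboot, which is exactly why this setup is only suitable for throwaway test data.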

Note for ubuntu users: I had a tough time on ubuntu making mysql use the new location because of apparmor, so finally I had to disable apparmor for mysql to make it run. As the machine was my local dev box, that's not an issue. If anyone finds success with apparmor, I request them to ping me about it.

If you are not able to start the server, syslog should give an insight into what is happening behind the scenes.

That's it. Now all database operations are going to be super fast. This is suitable for running tests which hit the database, since we don't care about the data even if we lose it.

Monday, June 1, 2009

VMWare ESXi - Moving vmware images between esxi servers

VMWare ESXi is the free hypervisor from vmware which is installed on bare hardware, unlike vmware server which runs as an application on top of another server-grade operating system.

Even so, ESXi installs a very thin unix-based OS behind the scenes. But it is part of the esxi installation itself, so it is not visible to the end user/administrator during the installation process.

This minimal OS has a shell, the vi editor, even ssh and ftp servers, etc. But they are not directly accessible to administrators. Even for adding a new VM, the officially supported way is to use a combination of vmware infrastructure client, vmware converter, etc.

But the very important thing is: *SSH can be enabled, and vmware images can be scp'ed from one esxi server to another, or from any other machine to an esxi server*.

To enable ssh or any other services,
* From the server console, press Alt + F1 and type "unsupported" (without quotes). Note: when you type, it won't be echoed to the screen!!
* Enter your esxi root password.
* Now you are in the shell!!
* Edit /etc/inetd.conf with vi editor. (Sorry nano fans, it is not available here.)
* Uncomment any of the services you want to enable: ssh/ftp.
* After editing, you need to send a HUP signal to the inetd process; the instruction for this is present at the head of the same file, /etc/inetd.conf.
* Now press Alt + F2 to come back to the server console.

Now you can ssh into the server or scp VMs to it. After scp'ing, they can be added to the inventory from the infrastructure standalone client or the vmware cli (though I haven't tried the cli).
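For instance, once ssh is enabled on both sides, a whole VM directory can be copied host-to-host in one command. The host names, datastore name, and VM name here are assumptions for illustration:

```shell
# Copy a VM's directory (.vmx, .vmdk, etc.) between two ESXi datastores over scp.
# Datastores live under /vmfs/volumes/ on the ESXi filesystem.
scp -r root@esxi-old:/vmfs/volumes/datastore1/myvm \
    root@esxi-new:/vmfs/volumes/datastore1/
```

Make sure the VM is powered off before copying, so the disk files are in a consistent state.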

Another way to copy/move VMs between vmware esxi servers is to use vmware converter and choose the corresponding esxi servers as source and destination. With this option, you also get additional functionality like changing the VM properties. But that can be done at any time, even after moving, with the help of the standalone client.

Happy virtualization experiences..

Saturday, October 25, 2008

Rails 2.0 - named_scope rocks..

In Rails 2.0, ActiveRecord has a feature called named scope, which makes things easier.

With named scope, you can give a name to a scope or condition. For instance, if you have a model called "Comment" with database columns published and published_at, then the model with named_scope definitions would be:

class Comment < ActiveRecord::Base
  named_scope :published, :conditions => {:published => true}
  named_scope :limit, lambda { |count| {:limit => count} }
  named_scope :in_last_ten_days, lambda { {:conditions => ["published_at > ?", 10.days.ago]} }
  named_scope :order_by_published_time, :order => "published_at"
end
Comment.published - returns all the comments which are published.
Comment.in_last_ten_days - returns all the comments whose published_at time is within the last 10 days.
Comment.limit(10) - returns 10 comment objects.
Comment.order_by_published_time - returns all the comments in chronological order.

They behave just like find methods on the receiver, taking the same options we would pass to find.

But the speciality of named_scope is that "named scopes can be COMBINED, and they will FIRE ONLY ONE SQL QUERY with all the options merged".

Comment.order_by_published_time.limit(5) - returns top 5 comments in chronological order

For this operation one query is fired with ORDER BY and LIMIT clauses!!
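The lazy, merge-then-fire behaviour can be illustrated with a toy proxy object. This is not ActiveRecord's implementation, just a sketch of the idea: each scope call merges options, and the SQL is built only at the end:

```ruby
# Toy sketch of named_scope chaining: options accumulate, one query at the end.
class ScopeProxy
  def initialize(options = {})
    @options = options
  end

  # Chaining merges find options instead of querying immediately
  def order_by_published_time
    merge(:order => "published_at")
  end

  def limit(count)
    merge(:limit => count)
  end

  # Only here is the single SQL statement actually assembled (and fired)
  def to_sql
    sql = "SELECT * FROM comments"
    sql << " ORDER BY #{@options[:order]}" if @options[:order]
    sql << " LIMIT #{@options[:limit]}" if @options[:limit]
    sql
  end

  private

  def merge(more)
    ScopeProxy.new(@options.merge(more))
  end
end

puts ScopeProxy.new.order_by_published_time.limit(5).to_sql
# SELECT * FROM comments ORDER BY published_at LIMIT 5
```

Because every scope returns a new proxy, chains of any length still collapse into a single statement.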

With named_scope, we can also chain in our own find method, like
Comment.order_by_published_time.find(:all, :conditions => {:published_at => Time.parse("14 Jun 2008 15:38:06")})

Again, say we have another model called News which has a "has many" relationship with Comment:

class News < ActiveRecord::Base
  has_many :comments
end
Then we can call the named_scope of Comment on the collection of comments that belong to a news object, like

news = News.first
news.comments.published

For this operation too, only one query is fired, finding all the comments of that news item which are published.

So named_scope can also be called on the associated child objects of a parent model!!

Friday, August 15, 2008

Windows Vista cannot obtain an IP address from certain routers/DHCP servers

A month ago I bought a Dell Inspiron 1525 which came with Windows Vista Home. I was surfing the internet over ethernet, and everything was fine. Today I tried to connect to a wlan network. I was able to see the wireless network, and I also connected to it.

Then came the surprise!! I was not able to ping my peer machines on the network. So I started troubleshooting. The IP address assigned to my machine was unusual, and it was not one assigned by my DHCP server!! Then I started googling and found out that by default in Vista, the BROADCAST FLAG IN DHCP DISCOVERY PACKETS IS ENABLED. Because of this, Vista cannot obtain an IP from certain types of routers or dhcp servers.

To disable it (as usual, there is no checkbox or anything in the network options), you have to add a registry entry:
  • Open regedit (run > regedit)
  • Navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\
  • Select the GUID that corresponds to the WLAN device (I found the GUID on my machine by the weird IP that Vista had assigned itself for the wlan adapter)
  • Add a REG_DWORD entry named DhcpConnEnableBcastFlagToggle with value 1,
  • or add a REG_DWORD entry named DhcpConnForceBroadcastFlag with value 0.
  • That's it: restart your machine and you are connected to your network!
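The same registry entry can equivalently be added from an elevated command prompt with reg add; the interface GUID below is a placeholder you must replace with the one for your wlan adapter:

```bat
:: Disable the DHCP broadcast flag for one interface (run in an elevated cmd).
:: Replace {YOUR-ADAPTER-GUID} with the GUID of your wlan adapter.
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{YOUR-ADAPTER-GUID}" ^
    /v DhcpConnForceBroadcastFlag /t REG_DWORD /d 0 /f
```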