Monitoring Using Mobile Devices – Rants

The goal is just enough monitoring to ensure the health of the product; it shouldn't be a replacement for SNMP or WMI. The requirements of such a system are:

  • Real-time monitoring of critical subsystems.
  • A small, simple set of interfaces for effective monitoring (not SNMP or a native monitoring and management agent).
  • Notifications to the user via device notification, SMS or email.
  • An extensible API for plugging in future products.
  • Interfaces for registering other products.

Architecture considerations

One option is embeddable monitoring agents, where each product ships with its own agent. One advantage of this approach is not having to maintain any dependencies between the products and a separate monitoring agent. Users can register their devices with the monitoring agents. There are a few disadvantages with this model:

  • Not all organisations allow personal devices to be connected to the office network.
  • Monitoring is limited to the time the device is connected to the network, which makes it much less useful.

Ideally we should be able to deliver notifications to the device directly over the cell network. Since Apple allows messages like this to be delivered only through its Apple Push Notification service (APNs), and expects each application to register with APNs, we may need a central notification server that aggregates messages from our customer premises and forwards them to APNs. This aggregation server will have to perform the necessary authentication and authorisation to ensure that the users requesting notifications are indeed allowed to receive them. The monitoring app on the device can perform these operations before registering with APNs.
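The aggregation flow above can be sketched in a few lines. This is a minimal, hypothetical model — all names are mine, and the stand-in forwarder takes the place of a real APNs connection and of whatever directory the organization uses for authentication:

```python
# A minimal sketch of the aggregation server described above. All names
# here are hypothetical; real code would authenticate against the
# organization's directory and forward to APNs over its HTTP API.

class AggregationServer:
    def __init__(self, forward):
        self.tokens = {}        # user -> registered device token
        self.forward = forward  # callable that delivers to the push service

    def register(self, user, token, credentials, authenticate):
        # Authenticate and authorize before accepting a registration,
        # so only permitted users receive messages.
        if not authenticate(user, credentials):
            raise PermissionError(f"{user} is not allowed to register")
        self.tokens[user] = token

    def notify(self, user, message):
        # Aggregate messages from customer premises and forward them
        # to the push service for the user's registered device.
        token = self.tokens.get(user)
        if token is None:
            return False
        self.forward(token, message)
        return True

# Demo with a stand-in forwarder instead of a real APNs connection
sent = []
server = AggregationServer(forward=lambda token, msg: sent.append((token, msg)))
server.register("alice", "device-token-1", "secret",
                authenticate=lambda user, cred: cred == "secret")
server.notify("alice", "subsystem X is down")
print(sent)  # [('device-token-1', 'subsystem X is down')]
```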

Custom User Table with Laravel 5.2

I got introduced to Laravel a few months back while searching for an MVC framework in PHP to build web applications. After playing around a bit with CodeIgniter and Laravel, I decided to proceed with Laravel.

I was blown away by the simplicity with which I could create my data model and seed the tables for initial development. It was quick and easy to pick up and get going. Things started getting a bit complex when I started using my own user class and tried to combine it with the authentication model that comes with Laravel.

Given below is a quick summary of changes required if you need to use your own user table with the Authentication framework.

    • Follow the Laravel instructions to scaffold all routes and views needed for authentication.
    • Update your model (default is User.php) to indicate your primary key.  Since I don’t use the default id, it looks something like this in my case
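In my case the model change looked roughly like this (my_user_table and user_id are the names from my setup; substitute your own):

```php
<?php

namespace App;

use Illuminate\Foundation\Auth\User as Authenticatable;

class User extends Authenticatable
{
    // My table and primary key instead of the Laravel defaults
    protected $table = 'my_user_table';
    protected $primaryKey = 'user_id';
}
```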

    • You may also want to add the following if the password column is also renamed.
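Overriding getAuthPassword() in the model points the framework at the renamed column (my_password is a placeholder for whatever your column is called):

```php
// In app/User.php
public function getAuthPassword()
{
    return $this->my_password;
}
```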
    • Update config/auth.php to tell Laravel how users are retrieved. In my case, I use my_user_table instead of the default users table.
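For Laravel 5.2 the relevant piece is the users provider in config/auth.php, which points the eloquent driver at your model (the model itself declares the table); mine ended up looking roughly like this:

```php
// config/auth.php
'providers' => [
    'users' => [
        'driver' => 'eloquent',
        'model'  => App\User::class,
    ],
],
```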

    • It's better to leave app/Http/routes.php as it is until you build the rest of your business logic.
    • Add the following in app/Http/Controllers/Auth/AuthController.php. This tells the authentication controller to use username instead of email as the unique identifier for login.
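In 5.2 the AuthenticatesUsers trait checks for a $username property on the controller, so one line is enough:

```php
// app/Http/Controllers/Auth/AuthController.php
protected $username = 'username';
```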

    • Update resources/views/auth/login.blade.php so that the form submits a field called username instead of email.

One caveat is error logging. Laravel does not emit enough log messages to track down problems. Most exceptions thrown at the lower levels are caught and handled internally, which defeats the purpose of exceptions completely. I spent almost two days figuring out the variables needed, as the login page kept coming back without any errors. It would have been much easier if an error message had been logged where the exceptions are caught.

Prepare Yourself for the Big Change

This is part 2 of my 3 part series on taking the big decision to leave a high-flying corporate life and join a startup. I mentioned various fear factors in my last article.

Having decided to quit, the next task is to prepare for the change. Good management principles teach how one should plan for his or her exit or growth. Ideally, people in leadership roles should be developing an exit strategy from the time they take a position; it's an integral part of career planning.

Plan Your Successor

Nobody is indispensable in a work environment.

The first thing to do is delegate responsibilities and become invisible over a period of time. I planned for this and introduced my team to the customer engagements I had been handling, the inter-team interactions where I had to take decisions, and other organization-level activities. Before I communicated my decision to leave, I mentored my team and ensured that they could independently handle all the activities I was responsible for. This is another management principle: one becomes invisible or irrelevant so that he or she can move to the next level.

It's always a challenge when you have to take up new responsibilities during your notice period. Most of the time it will be simple enough, as you can draw on the knowledge and experience you have amassed over the years.

At the end of the day, you need to leave a lasting impression. Make sure that your work and relationships in the organization are not impacted; if anything, the respect for you should go up.

Plan for Your New Life

This was easier than I thought. Once your financial situation is clear, the next set of tasks is:

  • Build your network and keep the contacts live.
  • Update your LinkedIn profile and let your connections know your plans. This will come in handy in future.
  • Acquire the machines and software you need if you are heading to a startup.
  • Get cell connections, health insurance and other such essentials. It will be difficult to manage these once you become a nobody and don't have a salary statement to produce.

Overall, my exit and transition to the new phase were uneventful. I just didn't have breathing time, as the work started with even more aggression the very next day. More about that later.

Follow Your Passion

This is part 1 of a 3 part series where I discuss my decision to leave Novell, how I prepared myself to take the big change and how I coped with the new environment.

13th February 2015 was my last working day at Novell. It was a hard decision, considering that I had been associated with Novell for 18 years. Friends and colleagues were even more puzzled by my decision to join a startup, since I had spent a good part of my career in the corporate world. One question I was asked often in the last few weeks was about my feelings.

Are you worried about leaving?  Aren’t you scared that you won’t have the paycheck going to your account at the end of the month? etc. etc.

Many of these fears were very eloquently discussed in a 4 part series by Ravi Venkatesan, ex-Microsoft MD. One will notice a lot of similarities while dissecting each of the points he called out in his articles. It reaffirmed my belief that I could survive. Many people have done this. If not now, then when?

I borrow a few statements from his articles, as they were no different from what I went through and what my friends asked me.

Fear of being nobody

Probably this is the foremost issue one has to deal with if you are in a leadership role. You suddenly have to deal with an emotional imbalance when no one is waiting for your approval. You feel dejected when you don't have a fire to douse. You worry whether your TRP will come down over a period of time, as you are not quoted in the media that often.

Dealing with loss of high flying lifestyle

As a corporate executive, one enjoys a number of benefits. High-flying habits, such as flashing your loyalty cards wherever you go, the grin on others' faces when you are allowed to board the aircraft ahead of the other unlucky souls because of your medallion status, or the smile on your face when you see your name flashing on the TV screen as you enter the full-service Marriott room, are a few of the unwanted habits you acquire during corporate life.

Money Matters

You will get the picture if you have been a salaried employee for a couple of decades. Though it makes a lot of difference in the initial days of a career, as time progresses one takes it for granted and settles into the comfort of a handsome amount getting deposited into the bank account at the end of every month. A lot of planning is required when you decide to let that go. Common sense says one needs to plan for at least 18 months of survival before taking the plunge.

Social status

The definition of social status has changed from being known in the community where you live to being known as a thought leader in the industry. It has become your LinkedIn rating, your number of Twitter followers or your number of Google search results. I'm not sure we are really bothered about our immediate community anymore, though it could be an important factor for a few.

Having decided to quit and to let go of all the comfort and financial security, the next step is to prepare yourself for the big change. I'll talk more about that in the next post.

5 Musts for a Cloud Computing Job

I was recently asked to list five musts for a cloud computing job. These can differ greatly by role and by the kind of industry you work in. Innovation and rapid delivery are a must for a software developer or architect, while managing virtual environments and data management skills are necessary for a data center manager. However, I think there are a few common factors that apply to all.

Understanding of IT as a Service

One of the advantages of the cloud offering is the service model, a shopping-cart kind of experience where IT managers shop for a service using their credit card instead of building it themselves. IT managers have to transition from owners of the service to providers of the service. They have to give up some control and focus more on negotiating service quality with the appropriate vendors. Prospective candidates transitioning to the cloud domain must understand cloud categories and deployment models. Understanding the business beyond the technologies, and identifying ways and means to fulfill business requirements that maximize the productivity of their end users, is a common trait they should demonstrate. Analyst Peter Christy at the Internet Research Group refers to this as an inversion of enterprise IT from an application-centric to a people-centric structure.

Managing Virtual Environments

Be it a developer or an IT manager, at some point in your transition time, you will have to deal with virtualized environments.  You should be proficient in designing and managing IT infrastructure using hosted services, managing policies and configurations within private or public cloud as needed.  These skills will help an IT manager to negotiate their SLAs correctly and realistically.

Innovation and Rapid Delivery

Probably one of the most appealing aspects of cloud computing is the agility with which one can realize solutions. This is true for developers as well as IT managers: they can fulfil business requirements through innovation and integration of cloud services with agility. Gone are the days when one raised a purchase request and waited for the approvals and the subsequent delivery of hardware and software.


Managing Security and Compliance

One of the drawbacks of not having complete control over the services you offer is security. This can be a daunting task if the services are offered from a public cloud. One should know how to manage security and compliance and should be familiar with the various compliance requirements across verticals and locations. A good understanding of applications delivered from the cloud, the way users access them and the security implications of the whole model will be a must-have trait for a cloud engineer. Analyst Mark Diodati at Gartner says that the shift to the cloud and the consumerization of IT have complicated the task of identity and access management in the enterprise security environment. Federation protocols, OAuth, REST, SCIM, BYO{D,I,A} etc. are a few keywords to research.

Data Management Skills

Most organizations will have a hybrid cloud approach where they will have data managed from on-premise and off-premise. Most medium and large enterprises prefer to keep their critical data such as identity and IP related information on-premise. Candidates opting for a cloud computing job must possess necessary skills to design systems to manage and integrate data from off-premise to on-premise. This would also include ability to get deeper understanding of data through analytics and management through big data technologies.

This is in no way a comprehensive list of skills. I believe the above items are the intersection of skills and technologies that every individual must possess before adopting cloud technologies or services.

Elastic Map Reduce – A Practical Approach

Amazon just reminded me that my AWS free tier expires tomorrow. I've been wanting to write about my EMR experiments for some time. I worked on this a couple of months back when I got a chance to experiment with Hadoop; we used Twitter feeds at that time. My objective was to run the same with a large log file from one of our products. I'm going to explain a very basic way of using EMR: storing the data in S3 and scheduling an EMR job with a bunch of scripts.

As usual, I'm going to build the whole experiment over a number of steps. As with programming, I believe it is easier to validate your approach in smaller steps. It is always easier to test your program as you build it instead of trying to see how it works after a few hundred thousand lines are written.

Step 1: Upload your data and scripts

I'm going to use Amazon S3 as the storage for this example. There could be other methods, but I think S3 is a good option for up to a few gigabytes of data. The input bucket pigdatbucket has all the input and output data folders for this example.

The objective of this exercise is to do a sentiment analysis on a number of tweets from various states in USA.  The result will be placed in the folder output once the EMR job is completed.

Step 2: Create an EMR Cluster

In this step, we create an EMR cluster. To start with, I leave logging on and use the S3 folder Logs as the placeholder for log files; I always find logging helpful for troubleshooting teething problems. I disabled Termination Protection, as I couldn't sufficiently debug script issues when it was enabled and the cluster terminated automatically.

Amazon provides Hadoop 1.0.3 or 2.2.0 and Pig (as of this writing). The EMR cluster will be launched on your EC2 instances or in a VPC. Select the appropriate instance type based on your subscription level.

As this example needs only a basic Hadoop configuration, that was selected for the Bootstrap Actions. The core of the setup is in the next step, where you select the Pig script that you uploaded as the starting Steps.



Notice the S3 locations in the cluster configuration: select the files from the appropriate S3 folders.

You will be able to monitor the running cluster from your Cluster List once the cluster is created. Select one of the clusters to view its status and other configuration details.

This example just uses a basic Pig script, which I modified from the one Amazon provides for Pig. You may need to call an external program from your Pig script to work on the data; Amazon provides a way to upload additional jars for this purpose.
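For reference, the shape of the script is roughly the following. This is a sketch, not the actual script: the bucket name matches this example, the field names are placeholders, and the sentiment scoring is reduced to a trivial keyword match:

```pig
-- Load tweets from S3; field names here are illustrative
tweets   = LOAD 's3://pigdatbucket/input/' USING PigStorage(',')
               AS (id:chararray, state:chararray, text:chararray);

-- Naive stand-in for sentiment scoring: 1 if the tweet matches a keyword
scored   = FOREACH tweets GENERATE state,
               (text MATCHES '.*good.*' ? 1 : 0) AS score;

-- Average the score per state and write the result back to S3
by_state = GROUP scored BY state;
result   = FOREACH by_state GENERATE group, AVG(scored.score);
STORE result INTO 's3://pigdatbucket/output/';
```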


I would recommend testing your Pig script locally on test data before uploading to EMR. EMR takes a while to get started and produce the output, and the cycle repeats if there are any errors. I used the Hortonworks Hadoop VM for testing my data and scripts. Hortonworks provides the entire Hadoop stack as a preconfigured sandbox which is very easy to use; the sandbox also includes Apache Ambari for complete system monitoring. They have a number of easy-to-follow tutorials for anyone to get started quickly on Hadoop, Pig and Hive.

The initial data and scripts for this example came from Manaranjan.


Working with Big Data

I must say that the title is a little deceiving if you are looking for a technical post on Hadoop and Pig. In the Making Sense of Data course, Google says that the data process consists of three steps:

Prepare >> Analyze >> Apply

I'm going to talk about the first step, i.e. Prepare.

A couple of weeks back, on a Friday afternoon, I got a call from one of my friends at IIMB. She sounded a little worried. The problem posed to me was to extract meaningful information from 100GB of data they had just received from one of the retailers in Bangalore, for building a CLV or survival model. She had no clue what was in the zip file, as even opening or copying a file of that size is a time-consuming operation.

Step 1: Identify the data

The immediate task was to identify what the 100GB file contains. I like some of the handy Unix utilities in such situations, for example head and tail for a quick peek at the data. Windows PowerShell has a handy equivalent, Get-Content -TotalCount n, where n is the number of lines you would like to see from the file. I figured that the data was nothing but an SQL dump from an Oracle database, which kind of explains the size of the file.

The next task was to look at the records. Since it's an SQL dump, each record carries the column names, and it was easy to identify that each record has 70 columns with a mix of data types. Using wc, I also figured that the file has 67 million records. The objective was to extract data from these 67 million records.
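The peeking-and-counting steps above can be sketched like this; dump.sql stands in for the real 100GB file, and a tiny sample is created here so the commands have something to chew on:

```shell
# Create a tiny stand-in for the 100GB SQL dump
printf 'INSERT INTO txn VALUES (1);\nINSERT INTO txn VALUES (2);\nINSERT INTO txn VALUES (3);\n' > dump.sql

# Quick peek at the start and end of the file without opening all of it
head -n 2 dump.sql
tail -n 1 dump.sql

# Count the records: in this dump, one INSERT statement per line
wc -l < dump.sql
```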

Step 2: Preparation for Data Extraction

I explored the following options to extract the data:

  • Import the data into a DB so that I can run simple SQL queries and retrieve data as and when required.
  • Parse the data and convert it to CSV format.

I chose the SQL option, as it gives more flexibility and manageability than the CSV format. The challenge was to recreate the database schema from the records, as the customer didn't give us the schema. We could identify the data types from the columns, but it was difficult to judge the constraints, given the number of records, and there was no way I could validate the schema against the whole data file. Anyway, I configured a MySQL database on my late 2011 MBP and created a schema. MySQL Workbench is a handy tool to explore the database and make minor corrections.

I extracted a test set of 1,500 records (thanks to head) for validating the schema, and quickly realized that I would have to clean the data, as there were extra characters in some of the records. So using the command-line mysql tool was ruled out, since I had to do the data cleansing in line with the import.

The easiest option was a Python script. That threw up another challenge, as there is no default MySQL connector for Python. It turned out that installing the MySQL connector for Python on Windows is easier than on OS X, though I finally got it running on OS X. Writing a 40-line Python script to read the data, validate the columns and write to the DB was easier than I thought. It took a few iterations to identify schema constraint violations and data issues before I could completely upload the data. It took around five hours to upload the 67 million records on my MBP, producing a 30GB database file.
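The script itself was along these lines. This is a simplified sketch, not the original: sqlite3 stands in for the MySQL connector so the example is self-contained, the table, delimiter and column count are made up, and the cleaning rule is reduced to stripping non-printable characters:

```python
import sqlite3

# Sketch of the loader: read the dump line by line, strip stray
# characters, validate the column count, and batch-insert into the DB.

def clean(field):
    # Drop the non-printable characters that broke the plain mysql import
    return ''.join(ch for ch in field if ch.isprintable()).strip()

def load(lines, conn, expected_cols=3):
    conn.execute('CREATE TABLE IF NOT EXISTS txn (c1, c2, c3)')
    batch, skipped = [], 0
    for line in lines:
        fields = [clean(f) for f in line.split('|')]
        if len(fields) != expected_cols:
            skipped += 1          # record schema violations for review
            continue
        batch.append(fields)
    conn.executemany('INSERT INTO txn VALUES (?, ?, ?)', batch)
    conn.commit()
    return len(batch), skipped

conn = sqlite3.connect(':memory:')
loaded, skipped = load(['1|2014-01-03|coffee', 'bad|row'], conn)
print(loaded, skipped)  # 1 good row, 1 skipped
```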

Step 3: Data Extraction

Once the data is in the DB, it is easy to extract it through simple SQL queries. We wanted to find buying patterns for three different items, so I created indexes on certain fields to make the searches much faster. We were able to run the queries from MySQL Workbench and store the results as CSVs for further analysis in Excel or SAS.

It was a good exercise to play around with this real-life data and figure out how to handle such large data in a reusable way. It was also a lesson that a good data scientist should know the end-to-end methodologies and technologies, as one might spend a good amount of time just preparing the data.

Experimenting with Oracle Virtualbox

I have been using VMware Fusion on my MBP for a while, but I noticed significant performance issues after upgrading to Mavericks. That is when I decided to try out Oracle VirtualBox. More importantly, some of the DevOps tools I was trying, such as Vagrant and Docker, had readily available VMs for VirtualBox. I never bothered to check out VirtualBox in the past, as I owned licenses for VMware Fusion and VMware Workstation, and staying with VMware was more productive since I could move VMs between my development environment and office work environment.


The first step was to get all my existing VMs running on VirtualBox. I must say that moving my SLES and Ubuntu VMs was easier than I thought: all I needed to do was create a new instance and use the same vmdk image from VMware. By default, VirtualBox uses a SATA/SCSI interface for the disk image. That worked well for Unix/Linux virtual machines, but for Windows I had to force the IDE interface. Do the following for Windows images (I tested with 7.x and 8.1):

  • Once the VM is created, go to Settings and then Storage.
  • Delete the SCSI instance associated with your vmdk file.
  • Add an IDE interface and choose the same vmdk file.


The next configuration required concerns networking. I normally use a NAT'd environment with a specific CIDR for all my development VMs, and I can access this private network from my host on VMware Workstation or Fusion. It appears that the only way to access services running on a VirtualBox image on a private interface is through port forwarding. Even to SSH to the guest OS, you need to forward a host port to port 22 on the guest. Thankfully, the network configuration dialog in the VM settings provides an option to do that. There is also an experimental Network Address Translation service in VirtualBox, but I haven't been able to get that working on my OS X yet.
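The same forwarding rule can also be created from the command line; VM_NAME is a placeholder for your VM, and modifyvm requires the VM to be powered off:

```shell
# Forward host port 2222 to guest port 22 on the VM's first NAT adapter
VBoxManage modifyvm "VM_NAME" --natpf1 "guestssh,tcp,,2222,,22"

# Then SSH into the guest through the forwarded port
ssh -p 2222 user@127.0.0.1
```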

Shared Folder

Shared folder concepts are a little convoluted on VirtualBox. Apparently the ability to create symbolic links in a shared folder is disabled for some bizarre security reasons. You need to enable it manually for each shared folder in each VM and, more importantly, restart the VirtualBox application after enabling it. Given below is the syntax for enabling the creation of symbolic links on a given volume.
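The command itself, with VM_NAME and SHARE_NAME as placeholders for your VM and share:

```shell
VBoxManage setextradata "VM_NAME" \
    VBoxInternal2/SharedFoldersEnableSymlinksCreate/SHARE_NAME 1
```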

The SHARE_NAME at the end of the parameter should be a full path to the shared folder on your host.

Headless Mode

One of the features I liked in VirtualBox is the headless mode: you can run a VM in the background without any UI elements. This saves some memory on your host, and typically you can run any Linux instance at runlevel 3. Hold the Shift key while clicking the Start button, or use the VBoxManage command line tool, to start a VM in headless mode.
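From the command line, starting and stopping a headless VM looks like this (VM_NAME is a placeholder):

```shell
# Start the VM with no UI attached
VBoxManage startvm "VM_NAME" --type headless

# Later, send an ACPI power-button event to shut it down cleanly
VBoxManage controlvm "VM_NAME" acpipowerbutton
```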

Overall, I find the performance of VirtualBox better than Fusion for my workload. I'm also liking the command line tools and the programmability via its rich set of APIs. Tune in for more of my VirtualBox experiments.

Blogo – Updated Blog Editor

I mentioned in my previous post that I was waiting for a private beta invitation from Blogo, one of the blog editors on OS X. As everyone complains, OS X is missing Windows Live Writer, one of my favourite editors.

I received the beta invitation today and I am posting this via Blogo. These are my initial impressions. Let me state the good parts first:

  • The editor is very polished and minimalistic. I liked the overall layout.
  • Adding my WordPress based blog was a breeze.
  • Preview is decent enough

Now come my irritations. Blogo is still in early beta, so most of these pitfalls should be temporary. Also, remember that I like Windows Live Writer and Microsoft OneNote; Microsoft has some of the best editors possible in its applications.

  • Even basic formatting is very erratic. Many times I literally had to reformat certain sections to maintain the indentation properly.
  • The editor is very minimal. All you can do is basic formatting and lists; there is no support for indentation or regular blog features like predefined header styles, horizontal rules, tables etc.
  • No image support. I just couldn't find a way to insert an image, though they claim full image editing support.
  • Preview requires you to be online. There is no way to download your stylesheet and render the page locally, which makes it difficult to author posts offline.
  • The editor couldn't prefetch my tags and categories at times; I literally had to type them in most times.

What would I like to see? Oh! It's very simple: give me Windows Live Writer on OS X 🙂


Organizing a Hackathon

We are trying to organize a hackathon at the office this month. Thinking around this had been going on for a while; we finally decided to pick a month when people will be available and release pressure will be moderate, and then work backwards to arrive at a schedule.
As usual, we need to be clear about the following while organizing a hackathon.

Agree upon the goals
This is the first and most important step. You are asking employees to spend one or more days working on something they are passionate about, setting aside their regular work. You will also have to articulate the goals very well to get support from management. Employees planning to participate in the hackathon should know what they should achieve in these days and what they can demonstrate; I call this their acceptance criteria.

Pick a Date
Many teams will have releases scheduled continuously, so it's important to choose a convenient date to ensure maximum participation. As always, the more the merrier. We looked at various release schedules, local and remote holiday schedules, and other local events such as school holidays and community events before choosing a date.
Once the date is chosen, work backwards to organize logistics such as:

  • The organizing team needs time to review and finalize proposals and to organize T-shirts and other goodies.
  • Pick a demo day. It could be the last day of the hackathon or the next day, depending on convenience and the number of participants.
  • Provide enough time for your admin and IT teams to organize logistics such as hackers' dens, networking and power infrastructure, and lots of food and caffeine.

Market the Event
Though this is an office event, it is important to market it sufficiently to build momentum. We have been sending frequent flyers, pasting posters and encouraging managers and other senior members of the teams to help their team members come up with good ideas. We are also planning short videos that can be streamed to the various monitors in the common area, in which authors talk about their ideas and how they add value to our customers.

Event Rules
Participants should be well aware of the selection criteria. Though the general practice is to encourage everyone to participate (it's a community event), there have to be some rules laid down for the judging to be effective. There will be people participating for the spirit of the event and others who are serious about their contributions. Most probably the judging criteria will include:

  • Impact to our customers
  • Innovation
  • Achievements


The next important step is the schedule. This includes a detailed schedule for the hacking days and the other milestones that help you start hacking. It has to be clear to everyone, both to manage the logistics and to maintain the sanctity of the event.

Our hackathon is planned for 20th and 21st of this month. Watch this space for more details of the event.