Wednesday, February 27, 2013

PR2


After a long wait, I was finally introduced to a PR2 today. The day ended on a high, but my conversation with the PR2 left me wanting more. The impressive hardware along with the modular software makes it one of the easiest beasts to tame. Perhaps as the week progresses, there will be a chance to work end-to-end on a simple solution with the PR2. What makes the PR2 so powerful is the array of sensors it boasts: grippers with tactile sensors that can ensure a soft touch, a tilting laser scanner for 3D scans, a base laser scanner, and of course the multiple stereo eyes. Together they make it an amazing piece of technology with immense scope for applications in all realms. But the question that arises from this first interaction: why does the PR2 collide with cups along the way when it can clearly see them? Hopefully, the end of this problem will be visible before the week ends.

Tuesday, February 26, 2013

The Google Glass Project


One of the latest innovations being tested by Google, the Google Glass is like any other pair of spectacles except for a display over the right eye and a voice-based input that takes commands. It would be very interesting to see how reliably the Glass takes instructions given different accents and, of course, background noise. A potential application that I see, especially for people who keep forgetting the names of acquaintances, would be face recognition, place recognition and the like. I have already seen videos showing GPS routes, nearby restaurants, and shooting photos and videos on command. I also wonder about the possibility of gesture recognition in the Glass. All in all, I see it as another sci-fi dream bent towards reality, though the scope of the product is still not completely clear. I understand the Google Glass to be a mobile phone accessory with a GPS and a touch pad on the side, driven primarily by voice commands. The Verge guy had an outing with the Glass at Google HQ and has a post on his experience, which is the closest I could get to knowing more. Here is the link. He claims it is a question of when rather than if, and that is perhaps the most powerful way to close the debate.

Sunday, February 24, 2013

Motion capture with the Xtion Pro





Oooh!! Plug the Xtion into the USB port and cool effects seem automatic. The first thing I did after figuring out that people have already developed so many cool things was to run them and see the results. So, it is Blender, PCL, OSCeleton and the kinectblendertemplate for this show.


Xtion Pro hand tracking

Saturday, February 23, 2013

Balancing a binary search tree

This is a problem and a solution that I thought of while reading about binary search trees. A binary search tree has the property that each node has at most two children, all the nodes in the left subtree of a node are less than or equal to that node, and the ones in the right subtree are greater than it.

The problem arises when the tree is unbalanced. Worst-case search time grows with the height of the tree, so a badly balanced tree can cause bad search times. The question asks: given a BST, how can we produce a balanced version of that BST?


How did I approach the question?

I first felt that we needed an in-order traversal of the tree. An in-order traversal of a BST yields the nodes in increasing order. Obviously, it would be great if the median could become the new root. For each subtree, using the in-order traversal, we can compute a balanced version recursively.

The primary recurrence takes as input the nodes of the tree placed into an array in in-order. For a subtree built from index i to j, the root is the node at the mid of i and j; its left subtree comes from the same recurrence applied from i to mid - 1, and the right subtree from mid + 1 to j.

Here is the method:

node balance(int min, int max, node[] arr) {
    if (min > max)                  // empty range: no subtree here
        return null;                // (a single element then gets two null children)
    int mid = (min + max) / 2;      // the median becomes the root
    node curr = arr[mid];
    curr.left = balance(min, mid - 1, arr);   // rebuild the left half
    curr.right = balance(mid + 1, max, arr);  // rebuild the right half
    return curr;
}
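
To round this off, here is a sketch (not from the original post; the node fields and helper names are my assumptions) of how the in-order array might be gathered before calling balance:

import java.util.ArrayList;
import java.util.List;

class node {
    int key;
    node left, right;
    node(int key) { this.key = key; }
}

class TreeBalancer {
    // an in-order traversal of a BST visits the nodes in sorted key order
    static void inorder(node root, List<node> out) {
        if (root == null) return;
        inorder(root.left, out);
        out.add(root);
        inorder(root.right, out);
    }

    // collect the nodes, then rebuild with the median-as-root recurrence above
    static node rebalance(node root) {
        List<node> nodes = new ArrayList<node>();
        inorder(root, nodes);
        return balance(0, nodes.size() - 1, nodes.toArray(new node[0]));
    }

    static node balance(int min, int max, node[] arr) {
        if (min > max) return null;
        int mid = (min + max) / 2;
        node curr = arr[mid];
        curr.left = balance(min, mid - 1, arr);
        curr.right = balance(mid + 1, max, arr);
        return curr;
    }
}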

Here is the git link.

My opinion on Coursera Offerings I have taken

My interest was held most intensely by the courses on algorithms. Both courses are being (or have been) conducted in two parts. One version (Stanford), by Tim Roughgarden, in my opinion primarily showcases the theoretical background required for dealing with new problems, filling our quiver with a set of tools. The other version (Princeton), by Robert Sedgewick and Kevin Wayne, deals with the more practical side: understanding real applications, using pre-defined APIs, and estimating the complexity of written code in more realistic terms than our big-O notation. All in all, I have had a very good experience with Tim's courses, and now I am on to Sedgewick whenever time permits.

There are also a guitar and a songwriting course starting up, and now that I think of it, the bathroom singer needs to improve!!

Not many updates on my iRobot Create and Xtion Pro, but the next acquisition (an adapter) will bring about some interesting enhancements.

Saturday, February 16, 2013

Another DP problem

AvoidRoads

Another fun, albeit more straightforward, problem. It can be solved by observing that a path can reach a given vertex only from two vertices: the one to its left and the one below it. The sum of the path counts to those two vertices is the total number of paths to this vertex.
A pictorial representation of my breakdown:
In the above diagram, the red point is the final destination.
The pink points are the subproblems for the red one: the points from which we can reach it. For the pink vertex to the left (x-1, y), I have shown two green points as its subproblems, and for the other pink one (x, y-1), I have shown its subproblems in a brownish colour. Because of an overlap, one vertex that is both green and brown has been drawn smaller than the others.

Here is my method as code:


// fields assumed by the methods below
private int maxX, maxY;
private long[][] pathArr;   // memo table: paths to (x, y); -1 means not computed yet
private int[][] blocked;    // each row is one blocked segment: x1 y1 x2 y2

public long numWays(int maxX, int maxY, String[] bad) {
    this.maxX = maxX;
    this.maxY = maxY;
    processBads(bad);
    pathArr = new long[maxX + 1][maxY + 1];
    for (int i = 0; i <= maxX; i++)
        for (int j = 0; j <= maxY; j++)
            pathArr[i][j] = -1;
    return pathsRec(maxX, maxY);
}

public long pathsRec(int x, int y) {
    if (x < 0 || y < 0)
        return 0;
    if (pathArr[x][y] != -1)
        return pathArr[x][y];
    if (x == 0 && y == 0)
        return 0;
    // the two cells adjacent to the origin seed the recursion
    if (x == 0 && y == 1 && valid(0, 0, 0, 1))
        return 1;
    if (x == 1 && y == 0 && valid(0, 0, 1, 0))
        return 1;
    long path1 = 0, path2 = 0;
    if (valid(x, y, x - 1, y))     // can we arrive from the left?
        path1 = pathsRec(x - 1, y);
    if (valid(x, y, x, y - 1))     // can we arrive from below?
        path2 = pathsRec(x, y - 1);
    pathArr[x][y] = path1 + path2;
    return (path1 + path2);
}

public boolean valid(int x, int y, int x1, int y1) {
    if (x > maxX || x < 0 || y > maxY || y < 0)
        return false;
    // a segment is blocked regardless of the direction it is traversed in
    for (int i = 0; i < blocked.length; i++) {
        boolean cond1 = x == blocked[i][0] && y == blocked[i][1] && x1 == blocked[i][2] && y1 == blocked[i][3];
        boolean cond2 = x1 == blocked[i][0] && y1 == blocked[i][1] && x == blocked[i][2] && y == blocked[i][3];
        if (cond1 || cond2) {
            return false;
        }
    }
    return true;
}

public void processBads(String[] bad) {
    blocked = new int[bad.length][4];
    for (int i = 0; i < bad.length; i++) {
        String[] temp = bad[i].split(" ");
        for (int j = 0; j < 4; j++)
            blocked[i][j] = Integer.parseInt(temp[j]);
    }
}
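
A quick usage sketch (the enclosing class name AvoidRoads is my assumption): on a 6x6 grid with nothing blocked, a path is 12 moves of which 6 are horizontal, so the call below should print C(12, 6) = 924.

AvoidRoads a = new AvoidRoads();
// no blocked segments: plain lattice-path counting
System.out.println(a.numWays(6, 6, new String[0]));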

Another Dynamic Programming Problem

ZigZag Problem

This problem gives an input array of ints and asks for the length of a longest subsequence in which the differences between successive numbers strictly alternate in sign.

The problem took an extraordinary effort to register, and although I must have seen something like this before, breaking it into a recurrence took too long. The recurrence I finally chose was:

For an element at index i in the array, there can be two kinds of zigzag subsequences ending at element[i]: kind 1, in which the element just before element[i] is smaller (the last step goes up), and kind 2, in which it is larger (the last step goes down).

The recurrence then turns out to be: a longest subsequence of kind 1 ending at index i extends a longest subsequence of kind 2 ending at some earlier index whose element is smaller, and symmetrically, a longest of kind 2 at i extends a longest of kind 1 at an earlier index whose element is larger. The length of a longest zigzag subsequence is then the maximum over both kinds.

Here is the method I wrote recursively, without bothering about speed.

 
public int longestZigZagRec(int[] n, int index, boolean pos) {
    if (index == 0)
        return 1;
    int posMax = 1, negMax = 1;
    if (pos) {
        // last step goes up: extend a "down" subsequence ending at a smaller element
        for (int i = 0; i < index; i++)
            if (n[i] < n[index])
                posMax = Math.max(longestZigZagRec(n, i, !pos) + 1, posMax);
    } else {
        // last step goes down: extend an "up" subsequence ending at a larger element
        for (int i = 0; i < index; i++)
            if (n[i] > n[index])
                negMax = Math.max(longestZigZagRec(n, i, !pos) + 1, negMax);
    }
    return pos ? posMax : negMax;
}

I call the code like so (a longest zigzag can always be taken to end at the last element, so checking only the last index with both signs is enough):

ZigZag z = new ZigZag();
int[] in = {20, 10, 1, 7, 4, 9, 2, 5};
System.out.println(Math.max(z.longestZigZagRec(in, in.length - 1, true),
        z.longestZigZagRec(in, in.length - 1, false)));
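
For reference, here is a hedged sketch of the same recurrence done bottom-up in O(n^2), tracking for each index the longest zigzag ending there with an up-step and with a down-step; the names are mine, not from the original post:

import java.util.Arrays;

class ZigZagDP {
    static int longestZigZag(int[] n) {
        if (n.length == 0) return 0;
        int[] up = new int[n.length];   // longest zigzag ending at i whose last step goes up
        int[] down = new int[n.length]; // longest zigzag ending at i whose last step goes down
        Arrays.fill(up, 1);
        Arrays.fill(down, 1);
        int best = 1;
        for (int i = 1; i < n.length; i++) {
            for (int j = 0; j < i; j++) {
                if (n[j] < n[i]) up[i] = Math.max(up[i], down[j] + 1);   // kind 1 extends kind 2
                if (n[j] > n[i]) down[i] = Math.max(down[i], up[j] + 1); // kind 2 extends kind 1
            }
            best = Math.max(best, Math.max(up[i], down[i]));
        }
        return best;
    }
}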

Wednesday, February 13, 2013

Hello Depth

That one is from OpenCV; I got depth maps from PCL and NiTE too. Just ran the sample openni_capture.

PCL, the Point Cloud Library

A clean install using apt-get did not satisfy my needs, as I needed the experimental version. So I did an install from source, and a later apt-get run suggested that some of my libs were redundant and offered to remove them.
Did an autoremove with apt-get, and now my PCL dependencies are gone. Here is what sufficed:

sudo apt-get install libboost-all-dev libvtk5-dev

and then the same exercise of sudo ./install.sh in each of the OpenNI, Sensor and NiTE folders, and we are back on good terms with the Xtion Pro.

Building experimental PCL from source asked me to do:

sudo apt-get install libeigen3-dev libflann-dev

and VTK was already installed above. So, waiting for some kungfu with KinFu.

UPDATE:
Sigh!! KinFu needs CUDA support!! Can't run it on my laptop. Need to set it up on the desktop and run remotely from the laptop.

Help on OpenCV from source: 
http://renzhi.ca/2010/06/09/how-to-install-opencv-on-ubuntu-from-source/
http://varuagdiary.blogspot.in/2011/09/installing-opencv-23-on-ubuntu-1104.html

Monday, February 11, 2013

IRobot Create

Truckloads of limited edition gratitude to Satish and Madhan for delivering what should be the beginning of a great experience. All it lacks for a little unboxing + first-run show is a 200-110V converter. Here is a sneak peek of the goods:



Also, this marks the 200th post on this blog. A fitting one indeed!!

Time to go Xtion Pro

Satish and Asus have finally brought an Asus Xtion Pro Live to me. The first hours of experiments included hooking it up with the OpenNI gods to get depth maps. PCL comes next, and it has quite a depth of applications. I will have to go through some tutorials to get some hands-on experience with 3D at home. Yay!

A smiley ball for a size estimate and my 'current' reads behind the sensor.

Thursday, February 07, 2013

Demolished one more in the bucket list


Plumbing
Waited a week for a plumber to turn up and fix a leaking tap. He replaced what he called a 'gadda', and one more week later, we were back to leak stage one. A few more days and the advanced leak stage had set in. So, Plumberaj jumped in with a pipe wrench: unscrewed the old tap, screwed in the new one, and then did
skills++

Tuesday, February 05, 2013

C Programming

Declaration and Definition: subtle differences.

I was reading up on a subtle difference between two words that appear to be the same at first glance; the point would perhaps be of interest only to developers. A declaration tells the compiler that a variable has a certain data type. A definition is when the variable is actually given memory to put a value in.

These words came up in my reading of geeksforgeeks.org, which had a nice article about the extern keyword. Little surprise that extern is the default for functions. Also, little surprise that extern is not the default for variables: extern primarily means a declaration with no allocation of memory, so if extern were the default, variables would never be given space.

However, in C, an extern declaration of a variable with an initializer is not treated as an error; the initializer turns the declaration into a definition.
e.g. extern int i = 0;

The single source shortest path problem and some background

The Single Source Shortest Path (SSSP) problem asks about finding the shortest path to all vertices from a given source vertex in a graph. So, how is it different from our Dijkstra's algorithm? Well, this graph is allowed to have negative edges. Why can Dijkstra's fail in the presence of negative edges? I guess we first need to go over Dijkstra's to understand its shortcomings. So, a little about what Dijkstra's does first.

Hey! No need, it does exactly the same thing as SSSP. So, why do we need a new algorithm? I guess we will need to code this down to talk more about it.

Here goes Dijkstra's. Oh! Wait, we will need a priority queue for Dijkstra's. So, let's start with a priority queue: a min-heap.

A min-heap is a binary tree with the property that a parent is always less than or equal to its children.
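
Since the heapifying stuff is deferred for now, here is only a minimal array-backed sketch of such a min-heap, just to make the property above concrete (names and structure are mine):

import java.util.ArrayList;

class MinHeap {
    private final ArrayList<Integer> a = new ArrayList<Integer>();

    void insert(int x) {
        a.add(x);
        int i = a.size() - 1;
        while (i > 0 && a.get((i - 1) / 2) > a.get(i)) { // bubble up while the parent is larger
            swap(i, (i - 1) / 2);
            i = (i - 1) / 2;
        }
    }

    int extractMin() {              // assumes the heap is non-empty
        int min = a.get(0);
        a.set(0, a.get(a.size() - 1));
        a.remove(a.size() - 1);
        int i = 0;
        while (true) {              // sift down: swap with the smaller child while out of order
            int l = 2 * i + 1, r = 2 * i + 2, smallest = i;
            if (l < a.size() && a.get(l) < a.get(smallest)) smallest = l;
            if (r < a.size() && a.get(r) < a.get(smallest)) smallest = r;
            if (smallest == i) break;
            swap(i, smallest);
            i = smallest;
        }
        return min;
    }

    private void swap(int i, int j) {
        int t = a.get(i); a.set(i, a.get(j)); a.set(j, t);
    }
}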

Before we actually use a heap, though, let us try to do this with our understanding of primitive data structures only: a Dijkstra with no heap.

A little touch-up on how we make a data structure for a graph first. There are two popular mechanisms: the adjacency matrix and the adjacency list. As the name suggests, an adjacency matrix is a matrix: two-dimensional, with the number of vertices of the graph as each dimension. Each entry A[i][j] denotes the weight of the edge between vertices i and j, if such an edge exists.

The other representation, the adjacency list, stores the edges as a list, and each vertex holds the indices of the edges incident on it.

Which of the two do we choose? As usual, the answer is "choose wisely". A matrix representation takes n x n memory, where n is the number of vertices, and takes O(n) time to find all edges incident on a vertex, though looking up the cost of an edge between two vertices is O(1). An adjacency list, on the other hand, takes space proportional to the number of edges, and can fetch all edges incident on a vertex in time proportional to its degree rather than O(n).

I will use the list representation; it has edges and vertices. Here is a sample implementation of an edge:
 
    class edge {
        public edge(int vertex1, int vertex2, int weight) {
            this.vertices[0] = vertex1;
            this.vertices[1] = vertex2;
            this.weight = weight;
        }
        int weight;
        int[] vertices = new int[2];
    }


Here is my vertex class, which I have called node (in hindsight, a mistake: vertex should probably be a subclass of a more general node type):
import java.util.ArrayList;

class node {
    int number;
    ArrayList<Integer> edges = new ArrayList<Integer>();  // indices into the edge list

    node(int num) {
        this.number = num;
    }
}
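
To make the representation concrete, here is a hedged sketch of how the two classes might be wired together; the nodeList and edgeList names match the fields the later methods assume, and the sample graph is mine:

// build a small sample graph: 3 vertices, 3 weighted undirected edges
edge[] edgeList = {
    new edge(0, 1, 4),
    new edge(1, 2, 2),
    new edge(0, 2, 7)
};
node[] nodeList = { new node(0), new node(1), new node(2) };

// attach to each endpoint the index of every edge incident on it
for (int e = 0; e < edgeList.length; e++) {
    nodeList[edgeList[e].vertices[0]].edges.add(e);
    nodeList[edgeList[e].vertices[1]].edges.add(e);
}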


Back to a little about Dijkstra's. I assume, without loss of generality, that we need to find the SSSP from vertex 0. I maintain a distance value for each vertex, initialized to a huge number; since the distance from vertex 0 to itself is 0, its distance is set to 0. I keep a counter of how many vertices have been processed, incremented once per iteration of the main loop. In the loop, we find the unprocessed vertex closest to the source and relax its edges (that is, if going through this vertex gives a smaller distance to a neighbour than the earlier one, we update that neighbour's distance).

Here is the code:
// fields assumed: vertices (vertex[]), flag (comparator switch),
// and nodeList/edgeList as built above
public void getDijkstra() {
    int processed = 0;

    while (processed < vertices.length) {
        // flag switches the comparator on the vertex objects:
        // 1 sorts by current distance, 0 sorts by vertex number
        flag = 1;
        Arrays.sort(vertices);
        // after sorting by distance, the first `processed` entries are the
        // finished vertices, so vertices[processed] is the closest unprocessed one
        int currVertexNum = vertices[processed].vertexNum;
        flag = 0;
        Arrays.sort(vertices);  // back to index-by-vertex-number order
        for (int e : nodeList[currVertexNum].edges) {
            int otherEnd = edgeList[e].vertices[0] == currVertexNum ? edgeList[e].vertices[1] : edgeList[e].vertices[0];
            // relax: update the neighbour if going through currVertexNum is cheaper
            if (vertices[currVertexNum].dist != Integer.MAX_VALUE
                    && vertices[currVertexNum].dist + edgeList[e].weight < vertices[otherEnd].dist)
                vertices[otherEnd].dist = vertices[currVertexNum].dist + edgeList[e].weight;
        }
        processed++;
    }
}
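
The vertex class is not shown in this post; here is my guess at what the code above assumes, a Comparable whose sort key is switched by the flag that getDijkstra toggles (flag is shown as a static field here purely for illustration):

class vertex implements Comparable<vertex> {
    static int flag = 1;            // 1: order by dist, 0: order by vertexNum
    int vertexNum;
    int dist = Integer.MAX_VALUE;   // distance from the source, initially "infinity"

    public int compareTo(vertex other) {
        if (flag == 1)
            return Integer.compare(this.dist, other.dist);
        return Integer.compare(this.vertexNum, other.vertexNum);
    }
}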
Dijkstra's relies on the fact that when we pick the closest unprocessed vertex, its distance cannot get any cheaper. But what if we have negative edge costs? Say vertex v has neighbours w and x, and x is the cheaper one; then the cost from the source to x is the cost to v plus the cost of edge v-x. With negative edges, however, there might be a path v -> w -> (some negative edges) -> x that is cheaper than the one we already committed to. Hence, Dijkstra babu gives up on negative edges.

//more analysis on negative edges to be filled here.

So, the need for another algorithm for graphs with negative edge costs!! (The follow-up on Bellman-Ford needs to be written up, and some heapifying stuff too.)

The Bellman-Ford algorithm, like other DP solutions, relies on breaking the SSSP problem into subproblems. The main recurrence counts the number of edges allowed on a path from the source to the vertex being considered. A shortest path, at worst, goes through all the vertices, so its length is at most (vertex count - 1) edges. The shortest distance to a destination using at most k edges is then either the shortest distance using at most k-1 edges, or the shortest distance to some vertex with an edge into the destination, plus that edge's weight:

dist(v, k) = min( dist(v, k-1), min over edges (u, v) of dist(u, k-1) + weight(u, v) )

Using this recurrence, we solve Bellman-Ford. A subtle point to note: with Dijkstra's we look at outgoing edges, with Bellman-Ford we want incoming ones, so it is better to shape the data structure according to the algorithm.


// cost[v][k] caches the shortest distance from the source to v using at most k edges
private int manford(int destination, int maxEdges) {
    // System.out.println("destination " + destination + " Maxedges " + maxEdges);
    // MAX_VALUE doubles as "not computed yet"; maxEdges == 0 is the base case
    if (cost[destination][maxEdges] != Integer.MAX_VALUE || maxEdges == 0)
        return cost[destination][maxEdges];
    node dest = nodeList[destination];

    // option 1: don't use the extra edge budget at all
    int tempMin = manford(destination, maxEdges - 1);

    // option 2: come in over one of the edges incident on the destination
    for (int e : dest.edges) {
        int restPlace = edgeList[e].vertices[0] == destination ? edgeList[e].vertices[1] : edgeList[e].vertices[0];
        int coster = manford(restPlace, maxEdges - 1);
        if (coster < Integer.MAX_VALUE)
            tempMin = tempMin < coster + edgeList[e].weight ? tempMin : coster + edgeList[e].weight;
    }

    cost[destination][maxEdges] = tempMin;
    return tempMin;
}
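
manford expects the cost table to be seeded before the first call: MAX_VALUE means unknown, and only the source costs 0 with 0 edges. A hedged setup sketch (the target vertex is just an example of mine):

// initialize: everything "infinite" except the source (vertex 0) with 0 edges
int n = nodeList.length;
cost = new int[n][n];
for (int[] row : cost)
    java.util.Arrays.fill(row, Integer.MAX_VALUE);
cost[0][0] = 0;

// a shortest path uses at most n - 1 edges
int target = 2;  // example destination
System.out.println(manford(target, n - 1));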


Here is a link to my git repo for the full code.

Sunday, February 03, 2013

Optimal Binary Search Tree Construction

Let us assume we somehow have the solution to the question in the last post. How can this solution have come from smaller subproblems?

Our understanding of an optimal BST is a tree with minimum average search cost. How do we find a BST with a minimum cost?

My naive understanding would say: let us build all the binary trees we can and then find which of them has the minimum average search cost. The interesting thing to observe here is the overlapping subproblems. Suppose we have found the minimum cost C[1,k] for a subtree containing keys key1, key2, ..., keyk of the original tree, and we now attempt to build a tree rooted at key(k+1): the cost of its left subtree is C[1,k], and the cost of its right subtree is C[k+2, rest of tree]. This is what forms the main recurrence. For the key range i..j:

C[i, j] = min over roots r in [i, j] of ( sum of freq[i..j] + C[i, r-1] + C[r+1, j] )

where the sum of frequencies accounts for every key in the range sitting one level deeper than the root.


Hokay then! So for every subtree range in our huge tree, we have to search for an optimal left subtree and an optimal right subtree by trying all possible roots.

Let us try to write a recursive method for the same:
// freq[i] is the search frequency of key i; sum(start, end) adds up freq[start..end]
int minAvgCost(int start, int end) {
    if (start > end)          // empty range: no tree, no cost
        return 0;
    if (start == end)         // single key: it is the root of its own subtree
        return freq[start];
    int min = Integer.MAX_VALUE;
    int sumOfSubP = sum(start, end);  // every key in the range sits one level below the root
    for (int i = start; i <= end; i++) {
        int temp = sumOfSubP + minAvgCost(start, i - 1) + minAvgCost(i + 1, end);
        min = min > temp ? temp : min;
    }
    return min;
}
Some memoization later (recArray must be initialized to -1 everywhere before the first call):

int minAvgCost(int start, int end) {
    if (start > end)
        return 0;
    if (recArray[start][end] != -1)   // already solved this subproblem
        return recArray[start][end];
    if (start == end)
        return freq[start];

    int min = Integer.MAX_VALUE;
    int sumOfSubP = sum(start, end);

    for (int i = start; i <= end; i++) {
        int temp = sumOfSubP + minAvgCost(start, i - 1) + minAvgCost(i + 1, end);
        min = min > temp ? temp : min;
    }

    recArray[start][end] = min;
    return min;
}
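
As a hedged sanity check tying this to the good/bad/ugly example from the previous post (the field setup is my assumption, and sum(start, end) is the helper adding up freq[start..end]): with frequencies {40, 30, 30}, the method should return 170, the 1.7 average search cost scaled by 100.

// hypothetical setup around the memoized method above
freq = new int[] {40, 30, 30};     // good, bad, ugly
recArray = new int[freq.length][freq.length];
for (int[] row : recArray)
    java.util.Arrays.fill(row, -1);
System.out.println(minAvgCost(0, freq.length - 1));  // expect 170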

Saturday, February 02, 2013

The Optimal Binary Search Tree Construction Problem


Again excerpts and knowledge gained from Tim's lecture on dynamic programming; this time I will attempt to elaborate on the problem of the optimal binary search tree. A binary search tree is a tree in which each node has at most two children, and all keys less than the key of the currently selected node are always found in its left subtree. So, what might an optimal binary search tree be? It can be thought of as a BST with minimal search time, where search time is the number of nodes traversed to reach the key being searched. The question then arises: how can we know which keys are going to be searched? This amounts to gathering statistics, or having data about the searches to be made, and in some settings this is a valid assumption. For example: suppose I have a binary tree with keys good (key=1), bad (key=2) and ugly (key=3). (For some unthinkable reason, each word is associated with a key.) People generally search for the word bad, say 80% of the time, and good and ugly follow with a meek 10% each. Let us see how a binary tree can be arranged with these three elements.
Tree 1: The 10% searching for good find it in the first hit, the 80% searching for bad find it in 2 steps, and the rest find their ugly in 3 steps. The average search time can be defined as the sum over nodes of (probability of the node * number of nodes traversed to reach it). For tree 1: 0.1*1 + 0.8*2 + 0.1*3 = 2.0

Tree 2: The good guys still get their word in the first hit, but now ugly takes two steps and bad takes three. So, 0.1*1 + 0.8*3 + 0.1*2 = 2.7

Tree 3: 0.8*1 + 0.1*2 + 0.1*2 = 1.2

Tree 4: 0.1*1 + 0.8*2 + 0.1*3 = 2.0

Tree 5: 0.1*1 + 0.1*2 + 0.8*3 = 2.7

Yay!! We got an optimal BST: the average search time for tree 3 is minimal. A quick observation might lead to the wrong inference that a greedy algorithm, putting the highest-probability node at the root and continuing greedily, would lead us to a solution.

Let's try to prove otherwise. What if the good-guy searches rose to 40%, the baddies fell to 30%, and the uglies climbed to the remaining 30%? A simple greedy approach might tempt us to root the tree at good, giving us tree 1 or tree 2, which score 1.9 (0.4*1 + 0.3*2 + 0.3*3). But hey! Tree 3 gets just 0.3*1 + 0.4*2 + 0.3*2 = 1.7.

Greedy boy doesn't get to help us here!! So that is the problem we are dealing with. I will extend this post with an elaborate one on the solution, which, despite the ample assistance provided by Tim and big bro Google, took ages to wrap my head around. So, I will attempt a clearer path towards a solution rather than a brain dump.


Please find the detailed solution here