
Towards A Deurbanized Future

Rapid urbanization has been a constant theme in forecasts made by management consultants over the last couple of decades. Projections often stretch out to 2030 and beyond, forecasting an increasing number of mega-cities with populations of 10 million people or more. A variety of trends across work, health, education, energy and food, however, are rapidly paving the way to a very different future.

How We Got Here

In the past, several factors favored denser and larger urban agglomerations. It was easier both to create and to find jobs in bigger cities. The centralized, mass-production nature of factories fueled this tendency: large factories attracted smaller supplier businesses, which in turn attracted ever more labour specialized in that industry. The same process continued even with the transition to a service economy. Large companies dominated the landscape and bred townships around them.

Bigger and denser populations also made it more economical to provide better healthcare, education, electricity and other utilities. Goods were cheaper to supply too, with supply chains benefiting from economies of scale. All this made bigger cities even more attractive to migrants, causing a further swell.

The Problem With Mega-Cities

Of course, these high population densities and vertically growing cities came with attendant problems. Pollution, sanitation and waste disposal are well-known challenges for large cities. These are compounded by softer issues like heavy traffic, smog and social isolation, and there is increasing evidence of mental health problems associated with urban life. Not to mention the inability to look up and see the stars.

Few people in large cities today would live there if it were not for employment opportunities and greater amenities.

The Reversing Trend

The move away from cities may have already started. New York City apparently saw a net 900,000 people move to other places in the U.S. between 2010 and 2016. This may not be material at this stage, but at least on the margins it could point to an increased tendency to move to less crowded areas.

More fundamentally, though, there are structural changes facilitating such a shift. A rapidly increasing number of people now freelance: fully 35% of the US workforce was believed to be freelancing as of 2016, a share projected to grow to 50% by 2020. With an increasing percentage of work being done online, these freelancers are, in principle, unconstrained by proximity to their employer. And while telepresence technologies are still not quite perfect, they will inexorably get better, further reducing dependence on physical proximity to the workplace.

Off-grid power in the form of rooftop solar installations will reduce dependence on grid connectivity. AI-powered diagnostics and robotic surgery will reduce dependence on proximity to large hospitals. Education is increasingly being delivered online through MOOCs (massive open online courses) and self-learning platforms. And innovations in agriculture could bring self-sufficiency to reasonably sized clusters of people. Even social media, reviled as it is today, allows people to stay connected despite being geographically very far apart.

At this point, none of these technologies is quite refined enough to trigger wholesale de-urbanization. But the trend is certainly in that direction. We may not go back to village life, but the density of our cities will certainly fall.

I intend to examine this idea in much more depth in the months to come. Do you live in a big city? What do you like about it? If you wanted to move to a smaller city, what would keep you from doing that? I’d love to hear your point of view.

Where are we headed? (Part III)

Originally posted in November 2006.


I finished my last post with the observation that the chaotic changes taking place around us are in fact driven by three distinct forces: our ability to move information quickly over vast distances (networking), our ability to process tremendous amounts of information quickly (computing), and our ability to store tremendous amounts of information (storage). It is in the interplay of these potent forces that we can understand the bewildering changes taking place around us and where they are leading us. It is also important to understand that developments in these areas do not necessarily point in the same direction. We will discuss this as we go along.

Having said that, we must naturally begin with an examination of the state of the art in each of these areas. Let’s start with storage.

In the desktop drive market, Seagate released a 750 GB model based on perpendicular recording technology in April 2006. Perpendicular recording packs much more information onto the same disk area, allowing disks of much higher capacity. While these drives will take some time to become mainstream, 160 GB disks are extremely common these days, with 80 GB disks being the entry level across most segments. For recent entrants into the computer-user world, this might not seem like a lot. However, at a time not long ago (a time which even I can remember distinctly, even though I’m only 22), 4 GB disks sounded like overkill to most people.

Today, 4+ GB DVD disks are available for as little as Rs. 15 – and DVDs are old technology. The latest entrants fighting for supremacy are Blu-ray and HD DVD disks, offering, at the higher end, storage capacities of 50 GB. And even as these technologies struggle to reach the retail market, scientists around the world are trying to fit more and more data into disks of similar sizes. On the flash memory side, the latest entrant is a 16 GB pen drive that you can carry around your neck – and this was in March 2006. These are all products available in the retail end-user segment.

On the server side, the story is even more mind-boggling. Companies like SGI are offering storage capacities in excess of 400 terabytes (a terabyte is 1,000 gigabytes) in a single system, with access speeds of 2.5 GB/s and higher. The Internet Archive is on its way to creating a machine capable of storing and managing 1 petabyte (a million gigabytes). To put this in perspective, the entire text within the Library of Congress takes up only about 20 TB of space. Yet several organisations around the world have space requirements running into several petabytes, and this requirement will only grow. We’ll come back to this point later.
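
To make those numbers concrete, here is a rough back-of-the-envelope calculation using the round figures above: 1 petabyte = 1,000 terabytes, and 1,000 TB / 20 TB ≈ 50, so a single petabyte-scale machine could hold the entire text of the Library of Congress roughly fifty times over. And at the quoted access speed of 2.5 GB/s, reading a 400 TB system end to end would take about 400,000 GB / 2.5 GB/s = 160,000 seconds – a little under two days of continuous streaming.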

Moving on to networking, several things are happening here too. A very visible effort is the covering of entire cities with wireless network access (this is planned for the city of Pune, for example). The idea is that wherever you are in the city, you should have access to the internet. Ubiquitous networking. The hope is that this will spread to more and more cities until most populated places on earth are network-enabled. The other process is, of course, the increasing availability of wired broadband access in more and more cities. Broadband penetration is rising rapidly and more and more people are getting onto the internet. In parallel, optical bandwidths have gone through the roof. In a recent breakthrough, researchers at NTT were able to pump 14 terabits of data per second over a distance of 160 kilometers. With a still-in-research technology called all-optical networking, even these speeds are set to be dwarfed. We’ll discuss the possibilities opened up by such speeds in another post.
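
To get a feel for what 14 terabits per second means, here is a rough calculation using the Library of Congress figure from the storage discussion (round numbers, order of magnitude only): 20 TB of text is 20 x 8 = 160 terabits, and 160 terabits / 14 terabits per second ≈ 11.4 seconds. In other words, a single such link could in principle move the entire text of the Library of Congress in about the time it takes to read this paragraph.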

The story on the computing side is also very exciting. 4 GHz processors are common on desktop PCs today – almost 400 times faster than an average PC only 12 years ago (when I got my first one). 3D gaming freaks are pushing even this to its limits, and high-performance graphics cards are almost a necessity for today’s increasingly lifelike 3D simulation games. The sheer computing power available to an ordinary user today is mind-boggling by even very recent standards. As if PCs were not enough, mobile devices are pushing the computing envelope very rapidly. The Nokia N-Gage phone sports a 104 MHz processor in a package weighing a total of 5 ounces (140 grams), and the Palm Treo 700p flaunts 312 MHz of processing power in a 111x58x23 mm package. Many of us reading this probably remember owning and using PCs (the famed Intel MMX 200 MHz machine) with less computing power. At the higher end of the spectrum, companies around the world are trying to smash through the 1 petaflop barrier. With brains like that of Dr. Narendra Karmarkar (creator of the famous Karmarkar’s algorithm for optimization problems) at work on this problem at Computational Research Laboratories (CRL), such a machine may be available sooner than we expect, opening up tremendous opportunities for research into biotechnology, cosmology, defence and a plethora of other fields. Companies like Google and Microsoft, too, are building up huge computational capabilities to support the paradigm of software as a service. In this way, compute power is increasing at all ends – mobile and embedded devices, PCs, and the compute clusters operated by corporations.
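
For a sense of scale (a rough comparison, assuming typical figures): a petaflop is 10^15 floating-point operations per second, while a desktop processor of today manages on the order of 10 gigaflops, or 10^10. Dividing the two, 10^15 / 10^10 = 10^5, so a petaflop machine would pack roughly the punch of 100,000 ordinary desktops working in concert.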

This increase in capabilities at both ends of the spectrum seems counter-intuitive to many. On the one hand, SaaS (software as a service) is gaining currency, and it seems that all heavy-duty data processing will take place on centralized servers, obviating the need for high-performance systems at the user end. On the other hand, more and more applications are being pushed onto mobile devices, raising the requirements for low-power, high-performance computing in small packages. In this tussle, what is the fate of software and the end-user? What will happen to the PC? Will my refrigerator actually speak to the supermarket? I’ll try to examine these questions in posts to come.

Where are we headed? (Part II)

This post originally appeared in October 2006.


In the last post, I seem to have given all the credit to wireless communication, saying that it led to the ultimate speedup. Yet I contend that computers are a key ingredient of what we are observing today. Here I will explain why I think so.

While the invention of wireless communication (and indeed, electronic communication in general) was a quantum leap, it speeded up only part of the process of change. Change is the result of two very distinct activities. One is, of course, the dissemination of ideas/information. The other, equally critical, is the creation of ideas/information. Speed-of-light communication allowed the quick transportation of information; the bottleneck in the process of change therefore shifted from the transport of information to its generation. This is where computers initially created the greatest impact.

Computers were mammoth machines crunching numbers at phenomenal speed. Suddenly it became possible to make sense of a much larger amount of information, which naturally led to a greater speed of generating ideas – ideas that could relate to particle physics or to national demographics. Calculations which would once take days could now be done in a matter of hours, allowing new ideas to be tested much more quickly. Printing presses and electronic communication sent data and research papers around the world at tremendous speeds, while computers allowed the validation and generation of new ideas. Part of this research fed back into developing better and faster computers, better storage technologies and more reliable communication technologies. Now both components of change were operating at superhuman speeds. The stage was set. What we are observing today is the result of the interplay of this tremendous computing power and communication speed, developed by man over the millennia in a series of steps that each seemed only natural.

At this point I would like to reiterate something that is oft forgotten in any discussion of technology: the communication speed-up is a phenomenon quite distinct from the increase in raw computing power, which is in turn quite distinct from the increase in our capacity to store information. It is true that these three forces affect each other dramatically and often depend on each other for their own growth. Keeping their distinctness in mind, however, can help us understand much of the apparently chaotic change around us far more easily.

Where are we headed? (Part I)

The following was posted on my personal blog back in October 2006. I’m re-posting it here as a sort of baseline upon which we will build, going forward. This was the first of three blog posts, each of which I will post here.


I have this habit of giving “gyaan” to people… to the extent of being called a “Compulsive Gyaan Giver” even by some rather tolerant people. Usually this gyaan can be about any of the numerous aspects of life or the world. One of my favorite subjects is technology. I sometimes end up spending hours talking about the kind of things happening in networking, computer hardware, storage, the web, etc. Recently, I was asked a very pointed question: “You keep talking about all the things that are happening. But do you really know what this will result in eventually?”. There are two problems with this question. One, there is hardly a notion of an “eventually” in this matter; and two, where this will lead over the next 8-10 years is anybody’s guess. If I were to hazard a guess today and I happen to become an eminent person someday, I will be jibed at for being either very outlandish in my expectations or too conservative. I’ll take that risk nonetheless, if for nothing else than to make nature want me to be an eminent person. To explain where we are headed today, I’ll start from distant history, borrowing ideas freely from several sources (not all of which I might remember), but especially from two people I consider to be real visionaries, Alvin Toffler and Bill Gates.

People often attribute the rapid change they are witnessing today to some inventions that took place a couple of decades ago. They are partly right: computers (and communication networks) helped accelerate the process of change to a speed where it is noticeable within a single generation. This speed-up has led people to pay attention to the process of change and to attribute the change itself to computers. (I will write about why computers are not the only means by which this speedup would have been possible, but more on that later.) The speed-up in change over the past few decades has also caused uneasiness, resulting in the kind of question I was asked – “where is all this leading?”.

Toffler explains that the uneasiness in society today stems not from change itself but from the pace of change; the pace of change affects people irrespective of what is actually changing. The kind of change that in earlier ages took place over several generations – with the situation not shifting significantly within one lifetime – now happens in 20 years or less. The amount of change people had to cope with in previous ages was negligible compared with what an average individual copes with today: computer systems become obsolete days after they are purchased, telephone tariffs plummet by the day, airplane fares fluctuate by the minute, and new ticket-booking mechanisms spring up every few months. What Toffler fails to explain, however, is why these changes take place at an ever-increasing pace. I will first make my case for why this change necessarily accelerates.

All change is driven by information/knowledge. In that sense, the first seeds of change were sown when man developed language. As soon as man found a mechanism for communicating ideas, he had cleared the first hurdle. It was no longer necessary for every individual to discover a better way of doing things: once an idea came into being, it was not lost with the individual but preserved by being passed on to the next generation. This was the foundation stone of the huge edifice of knowledge we stand in awe of today. With only a spoken language, however – one that could not effectively overcome the barriers of time and distance – the diffusion of ideas was slow. Among the many ideas built on top of those passed down by word of mouth was the idea of writing. Writing gave ideas a new force, a new permanence. It added speed to the dispersal of knowledge.

Somewhere in this stream, the wheel was invented. Not only did the wheel provide mobility to men, it provided, more importantly, mobility to knowledge. Thus the wheel, itself a product of knowledge, helped in the propagation of knowledge. With the wheel to cover distance and writing to carry knowledge over time, the basic requirements were in place, and the first river valley civilizations were born. Ships, a much later invention, allowed man and his ideas to travel all over the globe. Now an idea born anywhere on the planet did not need to remain confined to that part of the planet. It still took months or years for an idea to travel to other continents, but suddenly the world was one, for the first time in history.

The printing press, as is widely accepted, was the next stepping stone. Now more people had access to knowledge than had ever been imagined possible. Knowledge started slipping out of the hands of the few elite in society and trickling down to all its members, and even more minds became available to build on previous knowledge. The Renaissance was a direct result of this access to knowledge. Where there were once 10 minds creating ideas, there were now 10,000. The speed of generation of ideas increased, and so did the speed of change. At this point, change reached a pace that was visible to people. And it led to the first revolution (the Industrial Revolution), when man invented the steam engine and railways. Faster dispersal of ideas resulted, accelerating change further. Next came a quantum leap: wireless communication. This single invention speeded up the flow of ideas/information by a factor of several million. Messages that once took days or weeks aboard a ship or a steam train were now transmitted instantaneously.

An examination of this growth, from the invention of language to the invention of wireless communication, shows that the acceleration in the creation of ideas (synonymous with change) is itself the result of created ideas. Writing, ships, the printing press and wireless communication were creations of the very ideas which they then helped propagate, at ever higher speeds and with ever greater reliability.