In this second part of my two-part reflection on the internet at its 50th birthday, I turn my eyes toward three challenges the medium has to solve. I then list, from a bird's eye view, some of the most important solutions being investigated in academia. With customary braggadocio, I include some research efforts at Purdue aimed at fixing these failings. The hills right around the corner that the internet has to scale are:
- How to make it more reliable
- How to make it more friendly to the things in the “Internet of Things”
- How to make it safe for our personal and our intellectual information
Reliable internet
The internet is surprisingly easy to disrupt at the level of a city or even a country. This is often done with intent, such as by a government wanting to quash dissent; it is sometimes the unintentional result of a bungled technology update; and in rare cases it is the deliberate work of a non-state actor. That may have been acceptable decades ago, when the internet was a tool of convenience for a band of researchers in their ivory towers. But it is unacceptable today, when life and liberty depend on the internet, and I say that without hyperbole. Life is surely at stake when the internet is used to coordinate relief and rescue operations after a natural disaster, by a surgeon to access the medical history of someone about to go under the scalpel on the operating table, or to dispatch blood supplies by drone to the critically injured in places where the road network is poor.
On the matter of making the internet reliable, there are reams of papers in Computer Science conferences and journals, to the point where the standard advice given to an academic venturing into this field is that new publications on the topic will take a while to materialize. A small fraction of this academic work is beginning to see adoption. They say, "If you cannot measure it, you cannot change it." Several researchers have taken this to heart in our context and developed clever methods to measure internet-scale outages, such as those caused by the disruption of internet exchange points [ Paper-Sigcomm17 ] [ Paper-IMC18 ]. These works build monitoring infrastructure that can locate the epicenter of an outage and track its effects as they ripple outward. Policy makers in some countries are taking notice and mandating greater transparency when internet outages of sufficient magnitude occur.
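To give a flavor of what "measuring it" means in practice, here is a deliberately tiny Python sketch of my own, nowhere near the scale or sophistication of the measurement systems in the papers above: probe a few well-known public resolvers over TCP and raise a flag when most of them become unreachable at once. The probe targets and the 50% threshold are arbitrary assumptions for illustration.

```python
# A toy outage detector (illustrative only): probe a handful of public
# endpoints over TCP and flag a suspected outage when most probes fail.
import socket

# Hypothetical probe targets; a real system probes thousands of vantage points.
TARGETS = [("1.1.1.1", 53), ("8.8.8.8", 53), ("9.9.9.9", 53)]

def reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to (host, port) succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def suspected_outage(targets, threshold: float = 0.5) -> bool:
    """Flag a suspected outage when more than `threshold` of the probes fail."""
    failures = sum(not reachable(host, port) for host, port in targets)
    return failures / len(targets) > threshold

if __name__ == "__main__":
    print("Suspected outage:", suspected_outage(TARGETS))
```

A real system, like those in the cited papers, probes from many vantage points and correlates the failures to locate the epicenter of an outage, rather than relying on a single host's view of the network.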
But much more needs to be done to give the internet the degree of resilience it demands as a piece of critical infrastructure. There need to be backup paths for traffic to travel through in case of failures, natural or induced, at critical points like an internet exchange point. This means working with the network providers (the AT&Ts of the world) to put the right level of redundancy in the right places, a task that academic researchers have mapped out in countless publications. And there needs to be nation state-level monitoring and root cause diagnosis of outages. This again is not a big hill to climb, considering how many usable tools already exist in open source; they need to be deployed and maintained at the scale of the internet.

Credit: Tom Cheney, New Yorker Cartoons, October 8, 2012.
Scale to connect billions of “things”
Three trends have combined to create the deluge of things in the "Internet of Things", which now number in the zillions (use your favorite jaw-dropping number unit, or go by the latest marketing hype). The first factor is the reduced cost of sensors, in keeping with the reduced cost of microelectronics as predicted by the über-famous Moore's law. The second is the increased ubiquity of wireless networks. And the third factor is the growth in algorithms that can make use of the data being pumped out by these things.
So we have these zillions of smart things (this must make the makers of things past cringe, since the antonym of "smart" is "dumb"). What happens when we connect them to the internet? We may want to do that to collect the sensed data from a distance, such as measuring the efficiency of automated irrigation on a dusty farm from the comfort of a plush office. Or to control some process from a distance, such as changing the running parameters of a set of machines on the factory floor in response to some data analytics we have run. But we have not dared to connect all of them to the internet yet. Why? Because we believe, with good technical reason, that its backbone is not strong enough to handle all the data zipping back and forth among the smart things.
There are three aspects that need work for the above vision to come true. First, the wireless networks must support higher bandwidth, but in a subtle way. There are some streaming applications, like data analytics on streaming video, that demand bandwidths of a few Mb/s (YouTube 720p HD content, for example, requires 2.5 Mb/s) if all of the raw data has to be ferried back to a data center. There are several other applications, importantly machine-to-machine interaction, that require much lower bandwidth. For example, a cellular network variant proposed for this space, called NB-IoT or LTE-M, supports 120 Kb/s. There is work going on in academic circles that provisions the network according to the requirement [ Paper-SRDS16 (from our lab) ] [ Paper-Sigcomm05 ]. There is another stream of work that does some processing of the raw data close to the sensors generating it, and sends only a reduced-bandwidth data stream to the backend data centers [ Paper-Mobicom19 ] [ Paper-arXiv19 (from our lab) ], as the sketch below illustrates.
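Here is a minimal back-of-the-envelope sketch in Python of why processing near the sensor slashes the uplink requirement. The event size and detection rate are made-up assumptions; only the 2.5 Mb/s video figure comes from the text above.

```python
# Compare shipping raw video to a data center versus shipping only compact
# event summaries produced near the sensor. Numbers are illustrative.

RAW_VIDEO_KBPS = 2500          # ~720p HD stream, per the YouTube figure above
EVENT_BYTES = 200              # assumed size of one "object detected" message
EVENTS_PER_MINUTE = 12         # assumed detection rate for a quiet camera feed

def edge_summary_kbps(event_bytes: int, events_per_minute: float) -> float:
    """Average uplink bandwidth if only event summaries leave the edge."""
    bits_per_second = event_bytes * 8 * events_per_minute / 60
    return bits_per_second / 1000

if __name__ == "__main__":
    summary_kbps = edge_summary_kbps(EVENT_BYTES, EVENTS_PER_MINUTE)
    print(f"Raw video uplink:       {RAW_VIDEO_KBPS:.1f} kb/s")
    print(f"Edge-summarized uplink: {summary_kbps:.3f} kb/s")
    print(f"Reduction factor:       ~{RAW_VIDEO_KBPS / summary_kbps:,.0f}x")
```

The specific numbers are invented, but the orders-of-magnitude gap is the point: a trickle of summaries fits comfortably within an NB-IoT-class uplink, while raw video does not.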
Second, the internet needs to be more customized for machine-to-machine interactions, rather than built around the quaint notion that everyone (nay, everything) interacting with the internet is doing so at human speeds. There is work in this space on supporting machine-to-machine protocols, such as by changing the network protocols accordingly [ Paper-IoTJournal15 ], and, perhaps more pressingly, on standardization so that my super-smart fridge can talk to my only-moderately-smart wallet [ Paper-IEEECommTutorials17 ].
Third, the protocols must be made much more energy-efficient so that the smart things can run for months on end without us hapless humans having to run around changing batteries. Consider that if we run a standard video analytics neural network on an embedded processor with a GPU, it will drain a device with two AA batteries in 3-5 hours [1]. There is significant work going on in making the power-hungry neural networks a little less so. These efforts try various tricks like reducing the depth of the neural networks, reducing the number of edges, or reducing the weights of the edges [ Paper-arXiv17 ] [ Paper-ICLR16 ].
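To make those tricks slightly more concrete, here is an illustrative Python sketch, not any particular paper's method, of two of them: magnitude-based pruning (fewer edges to compute) and coarse quantization of the remaining weights (fewer bits per weight). The "network" is just a random weight vector standing in for one layer.

```python
# Toy model compression: prune small-magnitude weights, then snap the
# survivors to a coarse grid. Illustrative only.
import random

def prune_smallest(weights, keep_fraction=0.3):
    """Zero out all but the largest-magnitude weights (fewer edges to compute)."""
    cutoff_index = int(len(weights) * (1 - keep_fraction))
    threshold = sorted(abs(w) for w in weights)[cutoff_index]
    return [w if abs(w) >= threshold else 0.0 for w in weights]

def quantize(weights, step=0.25):
    """Snap weights to a coarse grid (fewer bits needed per weight)."""
    return [round(w / step) * step for w in weights]

if __name__ == "__main__":
    layer = [random.uniform(-1, 1) for _ in range(1000)]
    pruned = prune_smallest(layer, keep_fraction=0.3)
    compact = quantize(pruned)
    nonzero = sum(1 for w in compact if w != 0.0)
    print(f"Edges kept after pruning: {nonzero} of {len(layer)}")
```

Real compression pipelines then fine-tune the pruned network to recover the accuracy lost, a step this toy skips entirely.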

Credit: Michael Crawford, New Yorker Cartoons, September 27, 1993.
Protecting our personal and intellectual information
This is perhaps the most pressing of the hills the internet needs to climb. First, on the personal front: data about us, including sensitive personal information, is out there. This is the fuel that runs the largest internet companies. This is the shape of things as they are and as they will be for the foreseeable future. However, we can mitigate the attendant concerns while living within the boundaries of this economic reality. For example, we can develop the internet to enable secure sharing of information, with a leash attached to the information. Just as a leash constrains how far my dog can venture away from me, an application protocol built to work over today's internet could let me put limits on how long my personal information stays out there and with whom [ Paper-SOUPS16 (from my Purdue colleague, Aniket Kate) ]. In addition to the act of sharing, there is also the need to protect the sensitive data once it is sitting in the coffers of one of these organizations. We have become blasé about news of data breaches only because they happen with such disappointing regularity. However, it is possible, through various forms of encryption and two-factor authentication protocols (well researched in the academic literature), to raise the barrier to such data breaches.
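As a toy sketch of the "leash" idea (my own illustration, not the protocol in the cited paper), imagine that a shared record carries an expiry time and an allowed-recipient list, and that a gatekeeping service refuses to release it outside those bounds. All names and fields below are hypothetical.

```python
# A leashed record: data that can only be released to listed recipients,
# and only before its expiry time. Illustrative policy sketch, not a protocol.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class LeashedRecord:
    payload: str
    allowed_recipients: set[str]
    expires_at: datetime

    def release_to(self, recipient: str) -> str:
        """Hand over the payload only to an allowed recipient, before expiry."""
        now = datetime.now(timezone.utc)
        if now >= self.expires_at:
            raise PermissionError("leash expired: record is no longer shareable")
        if recipient not in self.allowed_recipients:
            raise PermissionError(f"{recipient} is not on the leash")
        return self.payload

if __name__ == "__main__":
    record = LeashedRecord(
        payload="blood type: O-negative",
        allowed_recipients={"dr_smith@hospital.example"},
        expires_at=datetime.now(timezone.utc) + timedelta(days=7),
    )
    print(record.release_to("dr_smith@hospital.example"))
```

Of course, in a real deployment the limits have to be enforced cryptographically or by trusted infrastructure rather than by a polite check like this one; the sketch only shows the shape of the policy a leash would express.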
On the matter of intellectual property theft: it affects the broad public less directly, and thus we see fewer headlines about it. But ask anyone who has been involved in developing new technology in the commercial space, either as an entrepreneur (you can skip the line and just ask me) or in a senior management role at an established company. They will tell you that putting intellectual property even on the intranet (not the internet), behind multiple levels of protection, is akin to putting a "Come hit me" sign on your forehead. And yet, to move our technological creations forward faster in the commercial space, we feel the need to use the intranet or the internet. Today's digital economy means that IP theft over the internet is easier, and the double whammy is that much of a company's value lies in its digital IP assets.
We can raise the barrier for intellectual property theft through technological means, though policy (and diplomatic) means will continue to play the leading role in mitigating the threat. On the technology front, there is the well-researched branch of secure multi-party computation, which enables us to share secrets by distributing shares among multiple parties. Each party individually can reveal only limited (or no) information and can perform only a restricted set of operations on the data; only by pooling together the shares of multiple parties can the more powerful operations be done [ Paper-NSPW01 (from my Purdue colleague, Mike Atallah) ] [ Paper-CCS08 ]. Another threat in this space is posed by ransomware, where digital assets containing your IP are encrypted by malicious actors and released only once you pay a ransom, in digital currency. Technological solutions have been developed here too. One thread of work analyzes the behavior of the ransomware when it starts its malevolent actions and stops it from running before it can do its damage [ Paper ]. Another thread (done in our lab) tricks the ransomware into believing it has succeeded while, under the covers, creating backups of the files, so that they are available even after the ransomware is done with its execution [ Paper ].
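To illustrate the secret-sharing building block behind secure multi-party computation, here is a toy Python example of my own (far simpler than the schemes in the cited papers): additive secret sharing, where each share alone looks random, yet parties can jointly add their private inputs without revealing them.

```python
# Additive secret sharing over a prime field: each share alone reveals
# nothing; the sum of all shares reconstructs the secret. Illustrative only.
import secrets

PRIME = 2**61 - 1  # all arithmetic is done modulo a large prime

def split(secret: int, n_parties: int) -> list[int]:
    """Split a secret into n additive shares; any n-1 of them reveal nothing."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares: list[int]) -> int:
    return sum(shares) % PRIME

if __name__ == "__main__":
    # Two companies each split a private sales figure among three servers.
    a_shares = split(1_250_000, 3)
    b_shares = split(980_000, 3)
    # Each server adds only the shares it holds; no server sees either figure.
    summed_shares = [(a + b) % PRIME for a, b in zip(a_shares, b_shares)]
    print("Joint total:", reconstruct(summed_shares))  # prints 2230000
```

Each server sees only random-looking numbers, yet the reconstructed sum is exact; the schemes in the cited papers support far richer operations than addition.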

Credit: Kaamran Hafeez, New Yorker Cartoons, February 23, 2005.
In Conclusion
All things considered, the internet has been transforming the way we live, work, and play over the last 50 years, and the pace has not flagged. I hope that it will continue to play as impactful a role in the next 50 years. For that, I believe, we need to tackle some challenging technological problems and put the solutions into practice. Three of the most important ones are:
- How to make it reliable, and at scale
- How to have it connect the gigazillion “Internet of Things” things and not collapse under their collective weight
- How to ensure the privacy of our personal information and that our hard-earned intellectual victories do not become easy pickings
I know our breed of fearless Computer Scientists is tackling, and will continue to tackle, these challenging problems till they are tamed.
[1] Using the SqueezeNet neural network architecture, running at 30 frames per second, on an NVIDIA Jetson TX2 board.