Edge Computing Workshop @ ASPLOS 2019

Unlocking the Power of Edge Computing


Date: April 14, 2019, 8:45 AM – 3:30 PM


Technological forces and novel applications are the drivers that move the needle in systems and networking research, and both have reached an inflection point. On the technology side, the spaces in which humans live are seeing a proliferation of sensors that become more intelligent with each new generation. This opens immense possibilities to harness inherently distributed, multimodal networked sensor platforms (aka Internet of Things, or IoT, platforms) for societal benefit. On the application side, large-scale situation-awareness applications (spanning healthcare, transportation, disaster recovery, and the like) are envisioned to use these platforms to convert sensed information into actionable knowledge. The sensors, meanwhile, produce data 24/7.

Sending continuous data streams to the cloud for processing is sub-optimal for several reasons. First, the data streams often contain no actionable knowledge (e.g., no activity in front of a camera), so shipping them wastes limited backhaul bandwidth to the core network. Second, situation awareness usually imposes a tight latency bound between sensing and actuation to ensure a timely response. Lastly, there may be non-technical concerns, such as the sensitivity of collected data leaving the locale. Moreover, sensor sources themselves are increasingly mobile (e.g., self-driving cars), which suggests that the provisioning of application components that process sensor streams cannot be determined statically but may have to occur dynamically. All of these reasons point to processing taking place in a geo-distributed manner, near the sensors.

Fog/Edge computing envisions extending the utility computing model of the cloud to the edge of the network.

Purpose and Format.

The purpose of this workshop is to bring together academic researchers and industry practitioners to foster an interactive discussion about the state of the art and open research challenges in edge computing across the system stack, from applications down to infrastructure. The workshop will comprise invited talks by leading academic and industry researchers working on edge-computing applications and infrastructure across cyber-physical systems, telecommunications, HPC, and health.

8:45 – 9:00 AM  Welcome and Opening Remarks [slides]
9:00 – 9:45 AM  Keynote #1: Towards Special-purpose Edge Computing [slides]
Prashant Shenoy (Univ of Massachusetts, Amherst)
9:45 – 10:00 AM  Break
10:00 – 10:30 AM  Live Video Analytics – the “killer app” for edge computing! [slides]
Ganesh Ananthanarayanan (Microsoft Research)
10:30 – 11:00 AM  Edge computing in the extreme and its applications [slides]
Suman Banerjee (Univ of Wisconsin, Madison)
11:00 – 11:30 AM  Edge-to-cloud computing infrastructure inspired by the emerging needs of Telco applications [slides]
Kandan Kathirvel (AT&T)
11:30 AM – 12:00 PM  Enabling distributed, compute-intensive FaaS on the edge with COMPSs [slides]
Francesc Lordan (Barcelona Supercomputing Center)
12:00 – 1:30 PM  Lunch
1:30 – 2:30 PM  Keynote #2: Computing at the Edge: Sensors, Learning, and Adaptation [slides]
Daniel Reed (University of Utah)
2:30 – 3:30 PM  Panel Discussion


Panelists:

Tushar Krishna (Assistant Professor, Georgia Tech)
Kishore Ramachandran (Professor, Georgia Tech)
Anish Arora (Ohio State University)

Abstracts and Speaker Bios

Title: Towards Special-purpose Edge Computing

Abstract: In this talk, I will argue that the era of general-purpose computing is rapidly evolving into one of special-purpose computing, driven by technological advances that allow inexpensive hardware devices and accelerators to optimize specific classes of application workloads. Edge computing has not been immune to these trends, and it is now feasible to specialize edge deployments for workloads such as machine-learning analytics, speech, and augmented reality using low-cost specialized hardware. I will discuss the implications of these technology trends for future mobile and IoT-based edge applications and present new challenges that will need to be addressed to fully exploit them.

Bio: Prashant Shenoy is currently a Professor and Associate Dean in the College of Information and Computer Sciences at the University of Massachusetts Amherst. He received the B.Tech. degree in Computer Science and Engineering from the Indian Institute of Technology, Bombay, and the M.S. and Ph.D. degrees in Computer Science from the University of Texas, Austin. His research interests lie in distributed systems and networking, with a recent emphasis on cloud and green computing. He has received several best-paper awards at leading conferences, including a SIGMETRICS Test of Time Award. He serves on the editorial boards of several journals and has served as program chair of over a dozen ACM and IEEE conferences. He is a fellow of the IEEE and the AAAS and a distinguished member of the ACM.

Title: Live Video Analytics – the “killer app” for edge computing!

Abstract: Video cameras are pervasively deployed for security and smart city scenarios, with millions of them in large cities worldwide. Our position is that a hierarchical architecture of public clouds, private clusters and edges extending all the way down to compute at the cameras is the only viable approach that can meet the strict requirements of live and large-scale video analytics. We believe that cameras represent the most challenging of “things” in Internet-of-Things, and live video analytics may well represent the killer application for edge computing. In this talk, I’ll describe our video analytics platform Rocket that optimizes queries on live videos by carefully selecting their “query plan” – implementations (and their knobs) – and placing them across the hierarchy of clusters to maximize average query accuracy. The system also intelligently uses the live video analytics to generate an index for interactive after-the-fact querying on stored videos.

Bio: Ganesh Ananthanarayanan is a Researcher at Microsoft Research. His research interests are broadly in systems & networking, with recent focus on live video analytics, cloud computing & large scale data analytics systems, and Internet performance. He has published over 30 papers in systems & networking conferences such as USENIX OSDI, ACM SIGCOMM and USENIX NSDI. His work on “Video Analytics for Vision Zero” on analyzing traffic camera feeds won the Institute of Transportation Engineers 2017 Achievement Award as well as the “Safer Cities, Safer People” US Department of Transportation Award. He has collaborated with and shipped technology to Microsoft’s cloud and online products like the Azure Cloud, Cosmos (Microsoft’s big data system) and Skype. He is a member of the ACM Future of Computing Academy. Prior to joining Microsoft Research, he completed his Ph.D. at UC Berkeley in Dec 2013, where he was also a recipient of the UC Berkeley Regents Fellowship.

Title: Edge-to-cloud computing infrastructure inspired by the emerging needs of Telco applications.

Abstract: The advent of 5G will change the momentum of the Telco industry and touch millions of end users. 5G brings the possibility of virtualizing the Radio Access Network (RAN) and running it in an edge cloud. Applications that will ride on 5G, such as IoT, augmented reality (AR), and virtual reality (VR), bring additional use cases for hosting applications at the edge.

These use cases demand real-time processing and communication between distributed endpoints, creating the need for efficient processing at the network edge using “Edge Computing”.

Edge Computing requires several innovations to make it cost-effective for the industry to adopt. This will not happen without the right contributions from academia and open-source efforts.

In this talk, I will cover:

  1. The dynamics and use cases of 5G and RAN infrastructure, along with edge applications that run on top of it, such as IoT and AR/VR.
  2. RAN virtualization and placement in an edge cloud, running the RAN user and control planes separately.
  3. Leading open-source efforts on edge computing, such as Linux Foundation Edge, the Akraino Edge Stack, and the O-RAN Alliance.
  4. Edge-computing technology gaps that academia could help close.

Bio: Kandan Kathirvel is a Director at AT&T, responsible for AT&T’s cloud strategy and architecture. Kandan is the Technical Steering Committee (TSC) Chair of the Akraino Edge Stack and a member of the Technical Advisory Council (TAC) at the Linux Foundation. He served on the OpenStack board of directors from 2017 to 2018. Kandan leads AT&T’s technology efforts around 5G, the Radio Access Network (RAN), Edge Computing, NFV, and SDN. He has led several open-source initiatives and is a strong open-source advocate. He has been vocal about the potential of 5G and Edge Computing and is a frequent speaker at IEEE, OpenStack, Linux Foundation, and other open-source forums. He has more than 16 years of experience in the telecommunications industry.

Title: Enabling distributed, compute-intensive FaaS on the edge with COMPSs

Abstract: Transformations in computing infrastructure have forced application developers to adopt new execution paradigms. Distributed systems displaced the traditional monolithic model in favor of service-oriented applications. To exploit the cloud and offer SaaS, developers embraced the microservices model so that the number of instances of each microservice could be adjusted to the current workload. Bringing computation down from the cloud to the edge mitigates network issues – latency and bandwidth – and enables new opportunities. On fog infrastructures, computing devices can join or leave at will; to deal with such dynamicity, microservices became serverless and stateless.

IoT devices have sensors that continuously produce data. These devices can monitor the data themselves and, when a certain condition is met, trigger a response that may require heavy computational capabilities; or they can provide other devices with this information so that those devices process it continuously. To support the former scenario, developers can turn to Function as a Service (FaaS): functions executed on the underlying platform in a serverless, stateless fashion. To support the latter, developers need to code against stream-processing frameworks such as Kafka Streams or Spark Streaming.
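The sense-and-trigger (FaaS) scenario above can be sketched as a stateless handler that a platform would invoke for each sensor reading. This is a minimal illustration of the pattern, not any particular framework's API; the event fields and threshold are hypothetical.

```python
# Hypothetical sketch of the sense-process-actuate pattern described above:
# a stateless FaaS-style handler invoked per sensor reading. All names and
# thresholds are illustrative, not taken from a specific framework.

TEMP_THRESHOLD_C = 80.0  # illustrative trigger condition

def handle_reading(event: dict) -> dict:
    """Stateless handler: all needed context arrives in the event itself."""
    temp = event["temperature_c"]
    if temp <= TEMP_THRESHOLD_C:
        return {"action": "none"}
    # The heavy computation would run here, on an edge node rather than
    # on the constrained sensor device itself.
    severity = min((temp - TEMP_THRESHOLD_C) / 20.0, 1.0)
    return {"action": "throttle", "severity": round(severity, 2)}

print(handle_reading({"temperature_c": 95.0}))
```

Because the handler keeps no state between invocations, the platform is free to spin instances up or down (or move them between edge nodes) as devices join and leave.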

COMPSs is a task-based programming model that aims to ease the development of applications on the edge by offering a common approach for developing sense-process-actuate, stream-processing, and data-analytics applications. By automatically detecting the application’s parallelism and orchestrating the execution of its tasks on the available nodes, it can exploit a distributed, heterogeneous, and highly dynamic infrastructure while keeping the application code entirely unaware of infrastructure and parallelism details.
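Purely as an illustration of the task-based idea (this is not the COMPSs API), here is a sketch using Python's standard library: independent tasks are handed to a runtime that schedules them on available workers, while the application code stays unaware of where each task runs. A local thread pool stands in for the distributed COMPSs runtime.

```python
# Generic illustration of task-based parallelism (NOT the COMPSs API):
# independent tasks are submitted to a runtime that schedules them on
# available workers; the application code never says where they run.
from concurrent.futures import ThreadPoolExecutor

def analyze(chunk):
    # Stand-in for a compute-intensive task on one data partition.
    return sum(x * x for x in chunk)

chunks = [[1, 2], [3, 4], [5, 6]]

# A thread pool stands in for a distributed runtime; in a model like
# COMPSs, a scheduler would place these tasks on edge or cloud nodes.
with ThreadPoolExecutor(max_workers=3) as pool:
    partials = list(pool.map(analyze, chunks))

total = sum(partials)  # results are gathered only when actually needed
print(total)
```

The key property the abstract describes is that `analyze` and the code calling it contain no scheduling or placement logic; swapping the local pool for a distributed backend would not change the application code.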

Bio: Francesc Lordan obtained his Ph.D. in Computer Architecture from the Universitat Politecnica de Catalunya in 2018 after defending his thesis, “Programming Models for Mobile Environments”. Since 2010, Francesc has been part of the Workflows and Distributed Computing group of the Barcelona Supercomputing Center. His efforts have focused on the COMPSs programming model: a task-based model for developing parallel applications running on large distributed infrastructures such as clusters, supercomputers, grids, and clouds. Francesc has published more than 20 articles in international conferences and journals, and he has been directly involved in the European projects mF2C, ASCETIC, and OPTIMIS. He has also provided support to other collaborative projects such as Venus-C, EU-Brasil OpenBio, Transplant, and the Human Brain Project. His research focuses on programming models that ease the development of parallel applications by hiding the technical concerns of heterogeneous and distributed infrastructures.

Title: Computing at the Edge: Sensors, Learning, and Adaptation

Abstract: Big data and deep learning are the memes of the day, as we shift from a world where data was rare, precious, and expensive to one where it is ubiquitous, commonplace, and inexpensive. Massive digital data (from instruments and IoT devices), powerful multilayer classification networks, and inexpensive hardware accelerators are bringing new data-driven approaches, challenging some long-held beliefs and illuminating old questions in new ways. Like any new tool or technology, big data and edge computing challenge and reshape both our social and technical expectations. Likewise, the end of semiconductor Dennard scaling poses new technology challenges in designing domain-specific systems with low power budgets. This talk will examine the challenges of continuum computing, fusing edge sensors and machine learning with large-scale computing and big-data analytics when computations must increasingly respond to real-time events. As an example, consider the research and scholarship questions that might be explored via powerful analytics applied to data streaming from thousands of sensors placed on human structures (buildings, public utility poles, automobiles) and the environment (air, water, soil, …).

Bio: Daniel A. Reed is the Senior Vice President for Academic Affairs (Provost) at the University of Utah. Previously, he was the University Chair in Computational Science and Bioinformatics and Professor of Computer Science, Electrical and Computer Engineering, and Medicine at the University of Iowa, where he also served as Vice President for Research and Economic Development. Before that, he was Microsoft’s Corporate Vice President for Technology Policy and Extreme Computing, where he helped shape Microsoft’s long-term vision for technology innovations in cloud computing and the company’s policy engagement with governments and institutions worldwide.

Before joining Microsoft, he was the founding director of the Renaissance Computing Institute (RENCI) at the University of North Carolina at Chapel Hill, where he also served as Chancellor’s Eminent Professor and Vice Chancellor for Information Technology. Prior to that, he was Gutgsell Professor and Head of the Department of Computer Science at the University of Illinois at Urbana-Champaign (UIUC) and Director of the National Center for Supercomputing Applications (NCSA). He was also one of the principal investigators and chief architect for the NSF TeraGrid.

Dr. Reed has served as a member of the U.S. President’s Council of Advisors on Science and Technology (PCAST) and the President’s Information Technology Advisory Committee (PITAC). He has served on the National Academies Board on Global Science and Technology, the International Telecommunications Union CTO Council, and the ICANN Generic Names Supporting Organization Council. He is the past chair of the Board of Directors of the Computing Research Association (CRA), which represents PhD-granting computer science departments in North America, and currently serves on its government affairs committee. He is the incoming chair of Section T (Informatics) of the AAAS. He currently chairs the Department of Energy’s Advanced Scientific Computing Advisory Committee (ASCAC), chairs the NAS Panel on Computational Sciences at the Army Research Laboratory, serves on the board of directors for the Institute for Research on Innovation and Science (IRIS), and serves on the National Center for Optical Astronomy Management Oversight Council.

Dr. Reed is a Fellow of the ACM, the IEEE and the AAAS. He received his B.S. from the University of Missouri-Rolla and his M.S. and Ph.D. from Purdue University, all in computer science.
