Research Talk



RESEARCH KEYNOTE SERIES


Prof. Eric Paulos 

(University of California, Berkeley)

Bio: Dr. Eric Paulos is Director of the Hybrid Ecologies Lab, an Associate Professor in Electrical Engineering and Computer Sciences at UC Berkeley, Director of the CITRIS Invention Lab, Chief Learning Officer for the Jacobs Institute for Design Innovation, Co-Director of the Swarm Lab, and faculty within the Berkeley Center for New Media (BCNM). Previously, Eric held the Cooper-Siegel Associate Professor Chair in the School of Computer Science at Carnegie Mellon University in the Human-Computer Interaction Institute, and earlier was a Senior Research Scientist at Intel Research. His research interests include cosmetic computing, critical making, citizen science, urban computing, telerobotics, and new media. Eric received his PhD in Electrical Engineering and Computer Science from UC Berkeley, but his real apprenticeship was earned through three decades of explosive, excruciatingly loud, and quasi-legal activities with a band of misfits at Survival Research Laboratories.

Title of the talk: Plastic Dynamism: From Disobedient Objects to Poetic Wearables

Abstract: This talk will present and critique a new body of evolving collaborative work at the intersection of art, computer science, and design research. It will present an argument for hybrid materials, methods, and artifacts as strategic tools for insight and innovation within computing culture. The narrative will explore work across two primary themes – Emancipation Fabrication and Cosmetic Computing. 
Cosmetic Computing is a new vision for wearable technologies that serves as a catalyst for open, playful, and creative expression of individuality. It is a call for liberation across gender, race, and body types. Leveraging the term "cosmetics", originally meaning "technique of dress", we envision how intentionally designed new wearables, specifically those that integrate with fashionable materials and overlays applied directly atop the skin or body, can (and should) empower individuals toward novel explorations of body and self-expression. Unlike many modern cosmetics, which are culturally laden with prescriptive norms of required usage that are restrictive, sexually binary, and oppressive, we desire a new attitude and creative engagement with wearable technologies that can empower individuals with a more personal, playful, performative, and meaningful "technique of dress": Cosmetic Computing.


Prof. Kevin Leyton-Brown

(University of British Columbia, Canada)

Bio: Dr. Leyton-Brown is a professor of Computer Science at the University of British Columbia and an associate member of the Vancouver School of Economics. He holds a PhD and M.Sc. from Stanford University (2003; 2001) and a B.Sc. from McMaster University (1998). He studies the intersection of computer science and microeconomics, addressing computational problems in economic contexts and incentive issues in multiagent systems. He also applies machine learning to various problems in artificial intelligence, notably the automated design and analysis of algorithms for solving hard computational problems.

Title of the talk:  Economics and Computer Science of a Radio Spectrum Reallocation

Abstract: Over 13 months in 2016–17, the US Federal Communications Commission conducted an "incentive auction" to repurpose radio spectrum from broadcast television to wireless internet. In the end, the auction yielded $19.8 billion USD, $10.05 billion USD of which was paid to 175 broadcasters for voluntarily relinquishing their licenses across 14 UHF channels. Stations that continued broadcasting were reassigned channels, potentially different from their original ones, packed as densely as possible into the channels that remained. The government netted more than $7 billion USD (used to pay down the national debt) after covering costs (including retuning). A crucial element of the auction design was the construction of a solver, dubbed SATFC, that determined whether sets of stations could be "repacked" in this way; it needed to run every time a station was given a price quote.

This talk describes the design of both the auction and of SATFC. Compared to typical market design settings, the auction design was particularly unconstrained, with flexibility in the definitions of participants' property rights, the goods to be traded, their quantities, and the outcomes the market should seek to achieve. Computational tractability was also a first-order concern. The design of SATFC was achieved via a data-driven, highly parametric, and computationally intensive approach we dub "deep optimization". More specifically, to build SATFC we designed software that could pair both complete and local-search SAT-encoded feasibility checking with a wide range of domain-specific techniques, such as constraint graph decomposition and novel caching mechanisms that allow for reuse of partial solutions from related, solved problems. We then used automatic algorithm configuration techniques to construct a portfolio of eight complementary algorithms to be run in parallel, aiming to achieve good performance on instances that arose in proprietary auction simulations.
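The core feasibility question SATFC answers can be illustrated in miniature: each station must receive a channel from its allowed domain, and interfering stations may not share a channel. Below is a toy backtracking checker in Python with invented station data; the real problem is SAT-encoded and solved with a tuned portfolio of solvers, and the pairwise-interference model here is a simplification:

```python
def repack_feasible(domains, interference):
    """Return a channel assignment {station: channel} satisfying all
    constraints, or None if no repacking exists.

    domains: dict mapping each station to its set of allowed channels.
    interference: set of frozensets {a, b} of station pairs that would
    interfere if assigned the same channel.
    """
    # Assign the most constrained stations first (smallest domains).
    stations = sorted(domains, key=lambda s: len(domains[s]))
    assignment = {}

    def backtrack(i):
        if i == len(stations):
            return True
        s = stations[i]
        for ch in sorted(domains[s]):
            # Reject ch if any already-assigned interfering station uses it.
            if all(assignment.get(t) != ch
                   for t in domains
                   if frozenset((s, t)) in interference):
                assignment[s] = ch
                if backtrack(i + 1):
                    return True
                del assignment[s]
        return False

    return dict(assignment) if backtrack(0) else None


# Invented example: three mutually interfering stations, two channels.
domains = {"A": {14, 15}, "B": {14, 15}, "C": {14, 15}}
pairs = {frozenset(p) for p in [("A", "B"), ("B", "C"), ("A", "C")]}
print(repack_feasible(domains, pairs))  # None: a 3-clique needs 3 channels
```

A real instance involves thousands of stations, so brute-force backtracking like this would not scale; hence the SAT encoding, domain-specific techniques, and algorithm portfolio described above.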

Experiments on realistic problems showed that within the short time budget required in practice, SATFC solved more than 96% of the problems it encountered. Furthermore, simulations showed that the incentive auction paired with SATFC produced nearly optimal allocations in a restricted setting and achieved substantially better economic outcomes than other alternatives at national scale.


Prof. Cristina Conati

(University of British Columbia, Canada)

Bio: Dr. Conati is a Professor of Computer Science at the University of British Columbia, Vancouver, Canada. She received an M.Sc. in Computer Science from the University of Milan, as well as an M.Sc. and Ph.D. in Intelligent Systems from the University of Pittsburgh. Conati’s research is at the intersection of Artificial Intelligence (AI), Human-Computer Interaction (HCI), and Cognitive Science, with the goal of creating intelligent interactive systems that can capture relevant user properties (states, skills, needs) and personalize the interaction accordingly. Her areas of interest include User Modeling, Affective Computing, Intelligent Virtual Agents, and Intelligent Tutoring Systems. Conati has over 100 peer-reviewed publications in these fields, and her research has received awards from a variety of venues, including UMUAI, the Journal of User Modeling and User-Adapted Interaction (2002); the ACM International Conference on Intelligent User Interfaces (IUI 2007); the International Conference on User Modeling, Adaptation and Personalization (UMAP 2013, 2014); TiiS, ACM Transactions on Interactive Intelligent Systems (2014); and the International Conference on Intelligent Virtual Agents (IVA 2016).
Dr. Conati is an associate editor for UMUAI, ACM TiiS, IEEE Transactions on Affective Computing, and the Journal of Artificial Intelligence in Education. She served as President of the AAAC (Association for the Advancement of Affective Computing), as well as Program or Conference Chair for several international conferences, including UMAP, ACM IUI, and AI in Education. She is a member of the Executive Committee of AAAI (Association for the Advancement of Artificial Intelligence).

Title of the talk:  Toward User-Adaptive Visualizations

Abstract: User-adaptive interaction (UAI), a field at the intersection of artificial intelligence (AI) and human-computer interaction (HCI), aims to create intelligent interactive systems that provide users with a personalized interaction experience by modeling and adapting in real-time to relevant users' needs and abilities. The benefits of UAI have been shown for a variety of tasks and applications. In this talk I will describe a new research thread in UAI: user-adaptive visualizations.
Information visualization is becoming increasingly important given the continuous growth of applications that allow users to view and manipulate complex data, not only in professional settings but also for personal usage. To date, visualizations are typically designed based on the type of tasks and data to be handled, without taking into account user differences. However, there is mounting evidence that visualization effectiveness depends on a user’s specific preferences, abilities, states, and even personality.
These findings have triggered research on user-adaptive visualizations, i.e., visualizations that can track and adapt to relevant user characteristics and specific needs. In this talk, I will present results on which user differences can impact visualization processing and on how these differences can be captured using predictive machine learning models based on eye-tracking data. I will also discuss how to leverage these models to provide personalized support that can improve the user's experience with a visualization.
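As a toy illustration of the modeling step (invented feature values and labels, and a deliberately simple model; actual work in this area uses richer gaze features and stronger learners), one could predict a hypothetical user characteristic from summary eye-tracking features with a nearest-centroid classifier:

```python
def centroid(rows):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def train(samples):
    """samples: list of (feature_vector, label). Returns label -> centroid."""
    by_label = {}
    for x, y in samples:
        by_label.setdefault(y, []).append(x)
    return {y: centroid(xs) for y, xs in by_label.items()}

def predict(model, x):
    """Assign the label whose centroid is nearest (squared Euclidean)."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda y: dist2(model[y], x))

# Invented features: [mean fixation duration (ms), mean saccade length (px)]
train_data = [
    ([180.0, 60.0], "low_need_for_support"),
    ([190.0, 55.0], "low_need_for_support"),
    ([320.0, 20.0], "high_need_for_support"),
    ([300.0, 25.0], "high_need_for_support"),
]
model = train(train_data)
print(predict(model, [310.0, 22.0]))  # high_need_for_support
```

A prediction like this could then drive an adaptive intervention, e.g., highlighting the relevant part of a visualization for users predicted to need support.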


Prof. Jenq-Neng Hwang

(University of Washington, USA)

Bio: Dr. Jenq-Neng Hwang received the BS and MS degrees, both in electrical engineering, from the National Taiwan University, Taipei, Taiwan, in 1981 and 1983, respectively. He then received his Ph.D. degree from the University of Southern California. In the summer of 1989, Dr. Hwang joined the Department of Electrical and Computer Engineering (ECE) of the University of Washington in Seattle, where he was promoted to Full Professor in 1999. He served as the Associate Chair for Research from 2003 to 2005, and from 2011 to 2015. He is currently the Associate Chair for Global Affairs and International Development in the ECE Department. He is the founder and co-director of the Information Processing Lab, which has won several AI City Challenge awards in past years. He has written more than 350 journal papers, conference papers, and book chapters in the areas of machine learning, multimedia signal processing, and multimedia system integration and networking, including an authored textbook, "Multimedia Networking: From Theory to Practice," published by Cambridge University Press. Dr. Hwang has close working relationships with industry on multimedia signal processing and multimedia networking.
Dr. Hwang received the 1995 IEEE Signal Processing Society Best Journal Paper Award. He is a founding member of the Multimedia Signal Processing Technical Committee of the IEEE Signal Processing Society and was the Society's representative to the IEEE Neural Network Council from 1996 to 2000. He is currently a member of the Multimedia Technical Committee (MMTC) of the IEEE Communications Society and a member of the Multimedia Signal Processing Technical Committee (MMSP TC) of the IEEE Signal Processing Society. He served as an associate editor for IEEE T-SP, T-NN, T-CSVT, T-IP, and the Signal Processing Magazine (SPM). He is currently on the editorial boards of the ZTE Communications, ETRI, IJDMB, and JSPS journals. He served as the Program Co-Chair of IEEE ICME 2016, ICASSP 1998, and ISCAS 2009. Dr. Hwang has been a Fellow of the IEEE since 2001.

Title of the talk: Electronic Visual Monitoring for the Smart Ocean

Abstract: Cameras are increasingly incorporated into fishery applications, such as underwater fish surveys based on bottom/midwater trawls and/or ROVs, as well as electronic monitoring (EM) for catch accounting and/or compliance with catch retention requirements. Moreover, cameras enable a non-extractive and non-lethal approach to fisheries surveys and abundance estimation. Camera-based monitoring and sampling approaches not only help conserve depleted fish stocks but also provide an effective way to analyze a greater diversity of marine animals and to assess the environment. These approaches, however, generate vast amounts of image/video data very rapidly; effective machine learning techniques to handle these big visual data are thus critically required to make such monitoring and sampling practical. Thanks to advanced deep learning and computer vision techniques, along with powerful computing resources, many of these tasks can now be performed reliably and in real time, a big step toward the smart ocean once these monitoring systems are deployed on every fishing vessel, collecting and analyzing data anytime and anywhere on the ocean. In this talk, I will report progress made jointly with NOAA on developing a system for live fish counting, catch event detection, length measurement, and species recognition, based on data collected using the CamTrawl, chute, or rail camera systems.
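As a minimal illustration of one piece of such a pipeline (the counting step only; the detections and tracks would come from deep detection and tracking models, and the track data below is invented), fish crossing a virtual counting line can be tallied from tracked centroid positions:

```python
def count_crossings(tracks, line_x):
    """Count fish that cross a virtual counting line moving left-to-right.

    tracks: dict track_id -> list of x-coordinates of the fish centroid
    per frame (assumed output of an upstream detector + tracker).
    line_x: x-coordinate of the counting line.
    """
    count = 0
    for xs in tracks.values():
        for prev, cur in zip(xs, xs[1:]):
            if prev < line_x <= cur:   # crossed the line on this frame
                count += 1
                break                  # count each track at most once
    return count

# Invented tracks for illustration.
tracks = {
    1: [10, 40, 80, 120],   # crosses x=100 -> counted
    2: [10, 30, 50, 70],    # never reaches the line
    3: [90, 105, 95, 110],  # wobbles across; counted only once
}
print(count_crossings(tracks, 100))  # 2
```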


Prof. Ryan C.N. D'Arcy

(Simon Fraser University, Canada)

Bio: Dr. Ryan C.N. D'Arcy is the co-founder and senior scientist/entrepreneur for Health Tech Connex Inc. Trained in neuroscience and medical imaging, Dr. D’Arcy holds a BC Leadership Chair in Medical Technology and is full Professor at Simon Fraser University.
He also serves as Head of Health Sciences and Innovation at Fraser Health’s Surrey Memorial Hospital and is widely recognized for founding Innovation Boulevard. Dr. D’Arcy received a B.Sc. (with distinction) from the University of Victoria along with both M.Sc. and Ph.D. degrees in neuroscience from Dalhousie University.
He did post-doctoral training in medical imaging at the National Research Council’s Institute for Biodiagnostics and spent over a decade leading the development of Atlantic Canada’s biomedical imaging cluster. He has extensive experience in translational neuro-imaging and has been the driving force in taking several biotechnology products to market.

Title of the talk: Do you know how your brain is doing? We didn't, so we undertook the technological development of the world's first brain vital sign framework

Abstract: Vital signs have been critical to improving health care across the globe. In brain care, no such concept existed. Given the basic axiom that you can't treat what you can't measure, the development of brain vital signs addresses a critical gap. This talk will provide an overview of the technological delivery of the world's first brain vital signs, along with initial clinical applications in brain injury, aging, and dementia. It will showcase the nearly 25 years of studies to identify key electroencephalography (EEG) responses and the rapid push to translate this science into a point-of-care technology. The technology is a medical-grade, deployable, automated, rapid, and easy-to-use brain vital sign monitor currently rolling out for clinical and research use across North America.


Prof. Joseph G. Peters

(Simon Fraser University, Canada)

Bio: Dr. Joseph Peters is a Professor of Computing Science at Simon Fraser University, Burnaby, Canada. He received a B.Math. from the University of Waterloo, and M.Sc. and Ph.D. degrees in Computer Science from the University of Toronto. His main research interests are in the areas of communication networks and multimedia networking, with an emphasis on the use of algorithmic techniques to enhance performance. He has published more than 70 peer-reviewed articles on these topics with funding from INRIA and CNRS in France, NATO, and Strategic Grants from NSERC and the B.C. Innovation Council.

Title of the talk: Enhancing Multimedia Performance with Algorithmic Techniques

Abstract: It is well known that advances in battery technology for mobile devices have not kept pace with advances in memory, graphics, and processing power. At the same time, increasingly complex video codecs provide better compression but also increase the complexity of decoding which translates into increased power consumption in mobile devices. One of the main challenges in mobile computing is to develop video systems that can play for longer times given the battery limitations of mobile devices.

 In this presentation, I will describe a project to develop encoding methods that maximize video quality while respecting the physical limitations of different mobile receivers. The resulting system computes the encodings in near-real time (a fraction of a second per frame in software on a commodity PC). The decodings on the mobile devices are also real-time. Almost all of the efficiency of the system is achieved by adapting, specializing, and combining standard algorithmic techniques. Using algorithmic techniques to solve systems-level engineering problems is the main focus of this presentation.
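The flavor of such an optimization can be sketched as a multiple-choice knapsack: pick one encoding per video segment to maximize total quality subject to a decoding-energy budget. This is a hypothetical simplification with invented numbers, not the actual system described in the talk:

```python
def best_encoding_plan(options, budget):
    """Pick one encoding per video segment to maximize total quality
    subject to a total decoding-energy budget (multiple-choice knapsack).

    options: list (one entry per segment) of lists of (quality, energy)
    pairs, one pair per candidate encoding. Energies are integers.
    Returns (best_quality, chosen_index_per_segment), or None if even
    the cheapest encodings exceed the budget.
    """
    # dp maps total energy used -> (best quality at that energy, choices)
    dp = {0: (0.0, [])}
    for seg in options:
        nxt = {}
        for e, (q, picks) in dp.items():
            for i, (qual, cost) in enumerate(seg):
                e2 = e + cost
                if e2 <= budget:
                    cand = (q + qual, picks + [i])
                    if e2 not in nxt or cand[0] > nxt[e2][0]:
                        nxt[e2] = cand
        dp = nxt
        if not dp:
            return None
    return max(dp.values(), key=lambda v: v[0])

# Invented per-segment (quality, energy) options: low/medium/high.
options = [
    [(1.0, 1), (2.0, 3), (3.0, 6)],
    [(1.0, 1), (2.5, 3), (4.0, 6)],
]
print(best_encoding_plan(options, 7))  # (5.0, [0, 2])
```

With a budget of 7, the best plan spends lightly on the first segment and heavily on the second, illustrating why per-segment greedy choices can be suboptimal and a global optimization pays off.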


Prof. Raouf Boutaba

(University of Waterloo, Canada)

Bio: Dr. Raouf Boutaba is a University Chair Professor of Computer Science at the University of Waterloo. He also holds an INRIA International Chair in France. He is the founding Editor-in-Chief of the IEEE Transactions on Network and Service Management (2007-2010) and the current Editor-in-Chief of the IEEE Journal on Selected Areas in Communications (JSAC). He served as the general or technical program chair for a number of international conferences including IM, NOMS and CNSM. His research interests are in the areas of network and service management. He has published extensively in these areas and received several journal and conference Best Paper Awards, such as the IEEE 2008 Fred W. Ellersick Prize Paper Award. He has also received other recognitions, including the Premier's Research Excellence Award, industry research excellence awards, fellowships of the Faculty of Mathematics and of the David R. Cheriton School of Computer Science, and several outstanding performance awards at the University of Waterloo. He has also received the IEEE Communications Society Hal Sobol Award and the IFIP Silver Core in 2007, the IEEE Communications Society Joseph LoCicero and Dan Stokesbury Awards in 2009, the Salah Aidarous Award in 2012, the IEEE Canada McNaughton Gold Medal in 2014, and the Technical Achievement Award of the IEEE Technical Committee on Information Infrastructure and Networking as well as the Donald W. McLellan Meritorious Service Award in 2016. He has served as a distinguished lecturer for the IEEE Computer and Communications Societies. He is a Fellow of the IEEE, a Fellow of the Engineering Institute of Canada, and a Fellow of the Canadian Academy of Engineering.

Title of the talk: The “Cloud” to “Things” Continuum

Abstract: A few years ago, we introduced the concept of a multi-tier cloud as part of the “Smart Applications on Virtualized Infrastructure (SAVI)” NSERC Strategic Network Project. SAVI extends the traditional cloud computing environment into a two-tier cloud comprising smart edges – small to moderate size data centers located close to the end-users (e.g., on service provider premises) – and massive-scale data centers with abundant high-performance computing resources, typically located in remote areas. We designed the smart edge as a converged infrastructure that uses virtualization, cloud computing, and network softwarization principles to support multiple network protocols, customizable network services, and high-bandwidth, low-latency applications. Since then, the concept of a multi-tier cloud has been widely adopted by telecom operators and in initiatives such as Mobile Edge Computing (MEC). In the meantime, the advent of the Internet of Things (IoT) has seen an explosive growth in the number of connected devices generating a large variety of data in high volumes at high velocities. The unique set of requirements posed by IoT data demands innovation in the information infrastructure, with the objective of minimizing latency and conserving bandwidth resources. The multi-tier cloud computing model proposed in SAVI falls short of addressing the needs of IoT applications, since most voluminous, heterogeneous, and short-lived data will have to be processed and analyzed closer to the IoT devices generating the data. Therefore, it is imperative that the future information infrastructure incorporate more tiers (e.g., IoT gateways, customer premises equipment) into the multi-tier cloud to enable true at-scale end-to-end application orchestration.
In this keynote, we will discuss the research challenges in realizing the future information infrastructure that should be massively distributed to achieve scalability; highly interoperable for seamless interaction between different enabling technologies; highly flexible for collecting, fusing, mining, and processing IoT data; and easily programmable for service orchestration and application-enablement.
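One small facet of such orchestration, placing each processing task on the nearest tier that can host it, can be sketched as follows (the tier names, capacities, and greedy policy are illustrative assumptions, not part of SAVI):

```python
def place_tasks(tasks, tiers):
    """Greedy placement of IoT processing tasks onto cloud tiers.

    tasks: list of (task_id, cpu_demand) pairs, processed in order.
    tiers: list of dicts ordered nearest-to-farthest from the devices,
    each {"name": ..., "capacity": cpu_units, "latency_ms": ...}.
    Each task goes to the nearest tier with spare capacity, reflecting
    the goal of processing IoT data close to where it is generated.
    Returns task_id -> tier name, or None if some task cannot be placed.
    """
    free = [t["capacity"] for t in tiers]
    placement = {}
    for task_id, demand in tasks:
        for i, tier in enumerate(tiers):
            if free[i] >= demand:
                free[i] -= demand
                placement[task_id] = tier["name"]
                break
        else:
            return None  # no tier can host this task
    return placement

# Invented three-tier continuum: gateway, smart edge, core cloud.
tiers = [
    {"name": "iot_gateway", "capacity": 2, "latency_ms": 1},
    {"name": "smart_edge", "capacity": 8, "latency_ms": 10},
    {"name": "core_cloud", "capacity": 1000, "latency_ms": 80},
]
tasks = [("filter", 1), ("aggregate", 1), ("train_model", 50)]
print(place_tasks(tasks, tiers))
# {'filter': 'iot_gateway', 'aggregate': 'iot_gateway', 'train_model': 'core_cloud'}
```

Light-weight filtering and aggregation land near the devices while heavy computation falls back to the core, the pattern the multi-tier continuum is meant to enable.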


Prof. Shai Ben-David 

(University of Waterloo, Canada)

Bio: Dr. Shai Ben-David earned his PhD in mathematics from the Hebrew University of Jerusalem and was a professor of computer science at the Technion (Israel Institute of Technology). Over the years, he has held visiting faculty positions at the Australian National University, Cornell University, ETH Zurich, TTI Chicago, and the Simons Institute at Berkeley. Since 2004, Shai has been a professor at the David R. Cheriton School of Computer Science at the University of Waterloo. He has also been a program committee chair for the major machine learning theory conferences (COLT and ALT) and an area chair at all major ML conferences (NIPS, ICML, and AISTATS).

Shai’s research interests span a range of topics in computer science theory, including logic, the theory of distributed computation, and complexity theory. In recent years his focus has turned to machine learning theory. Among his notable contributions in that field are pioneering steps in the analysis of domain adaptation, the learnability of real-valued functions, and change detection in streaming data.

In the domain of unsupervised learning, Shai has made fundamental contributions to the theory of clustering and to developing tools for guiding users in picking algorithms that match their domain needs. He has also published seminal works on average-case complexity, competitive analysis, and alternatives to worst-case complexity.

Title of the talk: Unsupervised learning: what can, what cannot, and what should not be done

Abstract: Unsupervised learning refers to the process of finding patterns and drawing conclusions from raw data (in contrast to supervised learning, where the training data is labeled, or scored, and the learner is expected to figure out a labeling/scoring rule for use on yet-unseen examples). Unlabeled data is, naturally, more readily available than supervised examples, and there is therefore much to gain from being able to utilize such data. However, our understanding of unsupervised learning is much less satisfactory than the established theory of supervised learning.

In this talk I will discuss several aspects of the theory of unsupervised learning and describe some recent results and insights, as well as provide my idiosyncratic advice about how the research and practice of this important task should (and should not) be carried out.
 
In particular, I will highlight joint work with Hassan Ashtiani, Nick Harvey, Chris Liaw, Abbas Mehrabian, and Yaniv Plan that won a Best Paper Award at last year's NeurIPS, and work with Shay Moran, Pavel Hrubeš, Amir Shpilka, and Amir Yehudayoff that was featured last January in Nature, as well as work with other past and current students of mine.

 



Important Deadlines

Full Paper Submission: 15th August 2019
Acceptance Notification: 29th August 2019
Final Paper Submission: 30th September 2019
Early Bird Registration: 30th September 2019
Presentation Submission: 1st October 2019
Conference: 17 - 19 October 2019

Previous Conferences

IEEE IEMCON 2017

IEEE IEMCON 2018


Announcements

• Conference Proceedings will be submitted for publication in the IEEE Xplore Digital Library

• Conference Record No. 47333

• A Best Paper Award will be given for each track

• There will be two workshops on 19th October 2019: (i) Data Analysis and (ii) IoT Workshop: Concepts to Implementation

• There will be one Ethical Hacking Workshop for Beginners

• The Conference Organising Committee also invites proposals for Workshops/Tutorials in different fields

• The Conference Organising Committee invites proposals for ‘Call for Demonstrations’. Visit the Call for Papers tab for details.