Urban Tech Workshop Speakers
SPEAKERS
Jacob Abernethy (Georgia Institute of Technology)
Jacob Abernethy is an Associate Professor in the College of Computing at Georgia Tech. In October 2011 he finished a PhD in the Division of Computer Science at the University of California, Berkeley, spent two years as a Simons postdoctoral fellow at UPenn, and held a faculty position at the University of Michigan for four years before joining Georgia Tech. Abernethy’s primary interest is in machine learning, with a particular focus on sequential decision making, online learning, online algorithms, and adversarial learning models.

Title: Machine learning and the search for lead pipes

Abstract: The Flint water crisis, which began in 2015, emerged as a significant political news story that highlighted the pervasive issue of lead pipes in the United States. This presentation delves into the collaboration between myself and Eric Schwartz (Associate Professor of Marketing, Ross School of Business), as we discovered the challenges behind determining the material of buried water service lines and the reluctance of stakeholders to address the problem. Using machine learning tools, data synthesis, and innovative inspection-targeting strategies, we were able to significantly improve the efficiency of Flint’s pipe replacement program. Out of this effort we started BlueConduit, a software analytics company dedicated to assisting municipalities across the country in their lead abatement efforts. We have had a direct impact on US EPA regulations, and our tools are currently being implemented in over 100 water systems nationwide. This presentation will provide a comprehensive account of this journey, from the statistical and algorithmic methodology to the political and logistical challenges of using these tools effectively to advance equitable access to safe drinking water.
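As a rough illustration of the kind of predictive targeting the abstract describes (a sketch under assumed inputs, not BlueConduit’s actual system), the snippet below trains a standard classifier on homes whose service lines have already been verified and ranks the remaining homes for inspection. The file name, feature names, and data are hypothetical.

```python
# Hypothetical sketch: predicting service-line material from parcel records.
# File name, feature names, and data are invented for illustration; this is
# not the BlueConduit production model. Toy data assumed complete (no NaNs).
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# One row per home; "has_lead" is known only for homes already excavated.
parcels = pd.read_csv("parcels.csv")
features = ["year_built", "assessed_value", "lot_size", "historic_district"]
labeled = parcels.dropna(subset=["has_lead"])

X_train, X_test, y_train, y_test = train_test_split(
    labeled[features], labeled["has_lead"], test_size=0.2, random_state=0
)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("Held-out AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# Rank unverified homes by predicted lead probability to target inspections.
unverified = parcels[parcels["has_lead"].isna()]
unverified = unverified.assign(p_lead=model.predict_proba(unverified[features])[:, 1])
print(unverified.sort_values("p_lead", ascending=False).head())
```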
Sara Bronin (Cornell University) & Alexander Rush (Cornell Tech)
Sara Bronin is a Professor at Cornell University and the Director of the Legal Constructs Lab, specializing in property law, land use law, historic preservation, and climate change. She founded and directs the National Zoning Atlas, a project aiming to digitize, demystify, and democratize the country’s 30,000 zoning codes. Among other publications, she is the author of the forthcoming book, Key to the City, on how zoning shapes our lives, drawing in part on her experience leading the nationally recognized efforts of the City of Hartford to draft and adopt a transformative zoning code. She received a J.D. from Yale Law School, an M.Sc. from the University of Oxford (as a Rhodes Scholar), and a B.Arch. and B.A. from the University of Texas. Currently on public service leave from her Cornell position, she was confirmed by unanimous consent of the U.S. Senate to serve as Chair of the U.S. Advisory Council on Historic Preservation.

Alexander “Sasha” Rush is an Associate Professor at Cornell Tech, where he studies natural language processing and machine learning. Sasha received his PhD from MIT, supervised by Michael Collins, and was a postdoc at Facebook NY under Yann LeCun. His research interests are in deep learning for text generation, generative modeling, and structured prediction. His group also supports open-source development, including the OpenNMT machine translation system. His research group has been recognized with an NSF CAREER Award and a Sloan Fellowship, and has won paper awards at conferences for NLP, hardware, and visualization, as well as awards for best demonstrations of open-source software. He leads the NSF-funded team exploring NLP to accelerate the process of reading zoning codes for inclusion in the National Zoning Atlas.

Title: Automating Analysis of Zoning Codes for the National Zoning Atlas

Abstract: Zoning laws adopted by around 30,000 local governments across the United States govern nearly everything that gets built in the country, but they are lengthy, complex documents whose illegibility thwarts sound policymaking. This presentation provides a progress report on an interdisciplinary, NSF-funded project, combining legal analysis and computer science, that aims to accelerate our ability to understand zoning, one of the most important, yet understudied, governmental powers impacting our environment, economy, and society. The legal side of the team is assembling hundreds of zoning codes, manually reviewing them, and incorporating specific regulatory information into the publicly accessible National Zoning Atlas. The computer science side of the team is developing natural language processing methods to transform these zoning codes into structured data and to ensure data accuracy by comparing the outputs of the automated process to the manual results. The compiled data and developed tools can facilitate concrete, actionable insights and unlock secondary research about zoning’s impact on urban housing availability, transportation systems, the environment, economic opportunity, educational opportunity, and our food supply. Moreover, the approach can guide future researchers interested in creating natural language processing models for other types of administrative law texts, including highway safety manuals, building codes, city plans, and more.
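To make the extraction task concrete, here is a minimal, hypothetical sketch of pulling a few structured fields out of raw zoning text with an off-the-shelf extractive question-answering model. The district text, the questions, and the choice of model are illustrative assumptions; the National Zoning Atlas pipeline itself is more involved than this.

```python
# Illustrative sketch only: extracting structured zoning fields with a
# generic extractive QA model. The snippet of zoning text, the field names,
# and the questions are all invented for this example.
from transformers import pipeline

zoning_text = (
    "R-1 Single-Family Residence District. Minimum lot area: 7,500 square feet. "
    "Maximum building height: 35 feet. Accessory dwelling units permitted by special permit."
)

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")
fields = {
    "min_lot_area": "What is the minimum lot area?",
    "max_height": "What is the maximum building height?",
    "adu_allowed": "Are accessory dwelling units permitted?",
}

# Build one structured record per district; answers still need validation
# against the manual review, as described in the abstract.
record = {name: qa(question=q, context=zoning_text)["answer"] for name, q in fields.items()}
print(record)  # e.g. {'min_lot_area': '7,500 square feet', ...}
```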
Rachel Cummings (Columbia University)
Dr. Rachel Cummings is an Associate Professor of Industrial Engineering and Operations Research and (by courtesy) Computer Science at Columbia University. Before joining Columbia, she was an Assistant Professor of Industrial and Systems Engineering and (by courtesy) Computer Science at the Georgia Institute of Technology. Her work uses an interdisciplinary approach to address the technical, legal, and social challenges of bringing differentially private tools to bear in practice and at scale. She has received numerous awards, including an NSF CAREER Award, a DARPA YFA, two doctoral dissertation awards, and Best Paper Awards at DISC, CCS, and SaTML. Dr. Cummings also serves on the ACM U.S. Technology Policy Committee and the Future of Privacy Forum’s Advisory Board.

Title: Improving Communication with End Users and Stakeholders about Privacy Guarantees

Abstract: Differential privacy (DP) is widely regarded as a gold standard for privacy-preserving computation over users’ data. A key challenge with DP is that its mathematical sophistication makes its privacy guarantees difficult to communicate to users, leaving them uncertain about how and whether they are protected. Despite the recent widespread deployment of differential privacy, relatively little is known about what users think of differential privacy and how to effectively communicate the practical privacy guarantees it offers. This talk will cover a series of recent and ongoing user studies aimed at measuring and improving communication with non-technical end users about differential privacy. The first set explores users’ privacy expectations related to differential privacy and measures the efficacy of existing methods for communicating the privacy guarantees of DP systems. We find that users care about the kinds of information leaks against which differential privacy protects and are more willing to share their private information when the risk of these leaks is reduced. Additionally, we find that the ways in which differential privacy is described in the wild set users’ privacy expectations haphazardly, which can be misleading depending on the deployment. Motivated by these findings, the second set of user studies develops and evaluates prototype descriptions designed to help end users understand DP guarantees. These descriptions target two important technical details in DP deployments that are often poorly communicated to end users: the privacy parameter epsilon (which governs the level of privacy protection) and the distinction between the local and central models of DP (which governs who can access exact user data). Based on joint work with Gabriel Kaptchuk, Priyanka Nanayakkara, Elissa Redmiles, and Mary Anne Smart, including https://arxiv.org/abs/2110.06452, https://arxiv.org/abs/2303.00738, and other ongoing work.
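For readers unfamiliar with the two deployment models mentioned in the abstract, the toy sketch below contrasts them numerically using the standard Laplace mechanism: a central curator noises one aggregate statistic, while in the local model each user noises their own value before sharing. The data, value bounds, and epsilon are invented for illustration.

```python
# Toy numerical illustration of central vs. local differential privacy with
# the Laplace mechanism. All numbers here are made up for demonstration.
import numpy as np

rng = np.random.default_rng(0)
incomes = rng.integers(20_000, 200_000, size=1_000)   # hypothetical sensitive values
epsilon = 1.0

# Central model: a trusted curator sees raw data and noises the aggregate.
# For bounded values in [a, b], the mean has sensitivity (b - a) / n.
sens_central = (200_000 - 20_000) / len(incomes)
central_estimate = incomes.mean() + rng.laplace(scale=sens_central / epsilon)

# Local model: each user noises their own value; the curator never sees exact
# data, at the cost of much more noise per report.
sens_local = 200_000 - 20_000
noisy_reports = incomes + rng.laplace(scale=sens_local / epsilon, size=len(incomes))
local_estimate = noisy_reports.mean()

print(f"true mean {incomes.mean():.0f}, "
      f"central-DP estimate {central_estimate:.0f}, "
      f"local-DP estimate {local_estimate:.0f}")
```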
Vanessa Frias-Martinez (University of Maryland)
Vanessa Frías-Martínez is an associate professor in the College of Information and UMIACS at the University of Maryland, where she leads the Urban Computing Lab. Her research focuses on the development of computational models to understand and predict human behaviors in the built environment, as well as on the design of tools to help decision makers build more equitable and resilient communities. Frías-Martínez is the recipient of multiple National Science Foundation awards, including the prestigious CAREER award. Prior to academia, she spent five years at Telefonica Research. She received her PhD in Computer Science from Columbia University.

Title: Data-driven decision making for resilient and equitable cities

Abstract: The pervasiveness of cell phones and mobile applications generates vast amounts of digital traces that can reveal a wide range of human behavior. From mobility patterns to social networks, these signals expose insights about human behaviors in the built environment that could assist decision makers in the design of novel policies. However, due to technology barriers (e.g., data plans can be costly for low-income groups, or seniors might not be comfortable with smartphones), certain socio-economic and demographic groups might be under-represented in the digital traces, resulting in biased behavioral insights that could lead to potentially unfair policies. This talk will highlight research that my lab is doing in two areas: (1) the design of computational models to extract behavioral insights from spatio-temporal data and (2) the design of methods to audit and mitigate the impact of data bias on the output of these computational models. I will first discuss computational approaches that can help local governments and non-profit organizations better understand the spatial dynamics of cities and communities, offering additional behavioral insights beyond more traditional sources of information and assisting them in the design of more accessible, resilient, and humane cities. After that, I will present our work on auditing and mitigating the impact of data bias on the computational approaches we develop, so as to promote the design of fair and equitable policies across all socio-economic and demographic groups.
Umberto Fugiglando (Senseable City Lab, MIT)
Umberto Fugiglando is a Research Manager at the Senseable City Lab at the Massachusetts Institute of Technology (MIT), a multidisciplinary research group that studies the interface between cities, people, and technologies. He has been leading and managing multi-stakeholder research projects on data science applied to urban technology initiatives, and has conducted research focusing on human driving behavior and mobility patterns in cities. Moreover, he is in charge of developing and maintaining partnerships with the cities, companies, and foundations that support the group’s research agenda. Additionally, Umberto serves as an External Expert for the European Commission, working with policy makers on the future of mobility. Umberto’s background is in Applied Mathematics and Engineering, and he has studied in Italy, Sweden, Canada, and the US.

Title: Bits, Bricks, and People: Senseable Cities

Abstract: The real-time city is now real! The increasing deployment of sensors and hand-held electronics in recent years is opening a new approach to the study of the built environment. Digital technologies are radically changing the way we understand, design, and ultimately live in cities. This is having an impact at different scales, from the single building to the scale of the metropolis. On the occasion of the ML+UrbanTech Workshop, we will address these issues from a critical point of view through projects by the Senseable City Laboratory, a research initiative at MIT.
Nikhil Garg (Cornell Tech)
Nikhil Garg is an Assistant Professor of Operations Research and Information Engineering at Cornell Tech as part of the Jacobs Technion-Cornell Institute. His research interest is the application of algorithms, data science, and mechanism design to the study of democracy, markets, and societal systems at large. He received his PhD from Stanford University and has spent considerable time in industry; most recently, he was the Principal Data Scientist at PredictWise, which provides election analytics for political campaigns. Nikhil received the INFORMS George Dantzig Dissertation Award and an honorable mention for the ACM SIGecom Doctoral Dissertation Award.

Title: Efficiency and equity design and engineering in resident crowdsourcing

Abstract: Modern city governance relies heavily on crowdsourcing to identify problems such as flooding, damaged trees, and downed power lines. Two major concerns are that (1) residents do not report problems at the same rates and (2) agencies respond differentially to reports, leading to an inefficient and inequitable allocation of government resources. However, measuring such under-reporting and differential responses is a challenging statistical task: ground truth incident and risk distributions may differ by area, and, almost by definition, we do not observe incidents that are not reported. First, we develop a method to identify (heterogeneous) reporting rates without using external (proxy) ground truth data. We apply our method to over 100,000 resident reports made to the New York City Department of Parks and Recreation, finding that there are substantial spatial and socio-economic disparities in reporting rates, even after controlling for incident characteristics. Second, we develop a method to audit differential response rates, even when incident occurrence varies spatio-temporally and the agency faces capacity constraints. Finally, we show how to design service level agreements in a data-driven manner, to optimize both efficiency and spatial equity.
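As a purely illustrative sketch of why raw report counts can mislead (a toy simulation, not the estimator from the talk), the snippet below shows that two areas with identical incident counts but different reporting behavior surface very different numbers of incidents, while the rate of duplicate reports per surfaced incident carries some information about reporting behavior itself. All numbers are invented.

```python
# Toy illustration (not the authors' method): observed report volumes confound
# how many incidents occur with how likely residents are to report them.
import numpy as np

rng = np.random.default_rng(1)

def simulate(n_incidents, reports_per_incident_rate):
    # Each incident draws a Poisson number of reports; incidents with zero
    # reports are never observed by the agency.
    reports = rng.poisson(reports_per_incident_rate, size=n_incidents)
    observed = reports[reports > 0]
    return len(observed), observed.mean()

# Two neighborhoods with the SAME number of downed-tree incidents but
# different reporting rates look very different in the raw report data.
for name, rate in [("high-reporting area", 2.0), ("low-reporting area", 0.5)]:
    n_obs, dup = simulate(n_incidents=1_000, reports_per_incident_rate=rate)
    print(f"{name}: {n_obs} incidents surfaced, "
          f"{dup:.2f} reports per surfaced incident")
```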
Eugenia Giraudy (Meta)
Eugenia Giraudy is a Research Scientist Manager on the Core Data Science team at Meta. She manages a team of researchers dedicated to leveraging big data to help humanitarian organizations, governments, and academics through the Data for Good program at Meta. Her team’s work focuses on understanding how big data can help organizations on a wide array of topics, such as migration, natural disasters, economic development, and climate change. She previously earned her Ph.D. in Political Science from the University of California at Berkeley.

Title: Using big data to support humanitarian organizations during major world crises

Abstract: Meta’s Data for Good (D4G) program focuses on leveraging social media data and tools to enable humanitarian organizations and NGOs to better respond to natural disasters, public health crises, and climate change. This talk focuses on the methodology behind three of D4G’s most popular datasets. First, the High Resolution Settlement Layer leverages machine learning techniques to process satellite data with the goal of producing the most granular micro-estimates of population. Second, Disaster Maps combine location data and privacy-preserving mechanisms to help disaster response organizations better understand, in near real time, which parts of a city might have been affected the most by hurricanes, earthquakes, cyclones, or floods. Last, the Relative Wealth Index uses machine learning models and novel data sources to produce granular estimates of relative standard of living within low- and middle-income countries.
Alexandre Jacquillat (Massachusetts Institute of Technology)
Alexandre Jacquillat is an Assistant Professor of Operations Research and Statistics at the MIT Sloan School of Management. His research focuses on data-driven decision-making, spanning integer optimization, stochastic optimization, and machine learning. His primary focus is on scheduling, operations, and pricing in transportation and logistics, with the goal of promoting efficient, reliable, and sustainable mobility of people and goods. Alexandre is the recipient of several awards, including the Dantzig Dissertation Award from INFORMS, the Best Paper Prize from INFORMS Transportation Science and Logistics, the Pierskalla Best Paper Award from INFORMS Health Applications, and the Best Paper Award from the INFORMS Workshop on Data Mining and Decision Analytics. Prior to joining MIT, Alexandre was an Assistant Professor at Carnegie Mellon University. He received a Master of Science in Applied Mathematics from the École Polytechnique and a PhD in Engineering Systems from MIT.

Title: Design and optimization of hybrid microtransit systems

Abstract: Urban mobility is being transformed by the emergence of hybrid microtransit systems, which combine advance-planning elements at the core of public transit with flexible operations at the core of on-demand mobility. This talk outlines two of these hybrid systems, paratransit and demand-responsive microtransit, and develops two-stage stochastic optimization methods to tackle the surrounding design, planning, and operating questions. The first part of this talk focuses on day-ahead itinerary planning in paratransit, a reservation-based system subject to uncertainty from trip cancellations and driver no-shows. Using a shareability network representation of routing operations, we formalize the Stochastic Itinerary Planning Problem with Advance Requests (SIPPAR) via two-stage stochastic optimization with a tight recourse model. This formulation, however, involves exponentially many variables and constraints with column-dependent rows. We develop an activated Benders decomposition algorithm that exploits linking relationships between the first-stage and second-stage problems to (i) accelerate the generation of Benders cuts by lifting the solution of a restricted subproblem into global optimality and feasibility cuts, and (ii) strengthen the Benders cuts with locally Pareto-optimal cuts. Using data from a major paratransit platform, we show that our algorithm scales to real-world instances, outperforming several benchmarks in terms of computational times, solution quality, and solution guarantees. From a practical standpoint, the SIPPAR model mitigates operating costs by strategically adding slack to driver itineraries in order to create flexibility and robustness against operating disruptions. The second part of this talk focuses on demand-responsive microtransit, a hybrid mobility system that relies on reference lines and performs on-demand deviations in response to passenger requests. We formulate a Microtransit Network Design model for Vehicle Routing (MiND-VRP) via two-stage stochastic optimization with a first-stage line planning structure and a second-stage time-space-load network flow structure that exploits a novel subpath representation of microtransit operations between reference stops. We develop a solution algorithm combining Benders decomposition, column generation, and a tailored label-setting algorithm. Using real-world data from Manhattan, our method scales to large practical instances, with dozens of lines and hundreds of reference stops. Comparisons with transit and ride-sharing offerings suggest that demand-responsive microtransit can provide win-win outcomes toward efficient and sustainable mobility: higher demand coverage, better level of service, and a smaller environmental footprint.
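For orientation, both models in the abstract follow the generic two-stage stochastic programming template below; the notation is generic and does not reproduce the exact SIPPAR or MiND-VRP formulations. A first-stage plan x is chosen before uncertainty resolves, and a recourse problem is solved in each scenario s; Benders decomposition then alternates between a master problem over x and the scenario subproblems, which return optimality and feasibility cuts.

```latex
% Generic two-stage stochastic program (illustrative notation only):
% first-stage plan x, scenario probabilities p_s, recourse decisions y_s.
\begin{align*}
\min_{x \in \mathcal{X}} \quad & c^\top x \;+\; \sum_{s \in S} p_s \, Q_s(x), \\
\text{where} \quad Q_s(x) \;=\; \min_{y_s \ge 0} \;\; & q_s^\top y_s
\quad \text{s.t.} \quad W_s y_s \;\ge\; h_s - T_s x .
\end{align*}
```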
Cristiana Lara & Xiaoyan Si (Amazon)
Cristiana Lara is a Senior Research Scientist at Amazon in the Modeling and Optimization group. She has a Ph.D. in Process Systems Engineering from the Department of Chemical Engineering at Carnegie Mellon University. Her research focuses on modeling and solution algorithms for large-scale discrete optimization problems. She has a particular interest in applications related to supply chain and logistics, including timing-aware transportation network design and planning, optimal inventory placement, and integrated financial and operational planning. Cristiana was selected by the National Academy of Engineering as one of the nation’s outstanding early-career engineers invited to attend the 2021 U.S. Frontiers of Engineering symposium.

Xiaoyan Si is a Senior Research Scientist on Amazon’s Modeling and Optimization team. The team is responsible for building models and systems that support Amazon’s customer fulfillment network design at scale. Before joining Amazon, Xiaoyan worked in the railroad industry for 10 years on problems spanning crew scheduling, data analytics, and computer vision. Xiaoyan received her Ph.D. in Operations Research and Industrial Engineering from the University of Texas at Austin. Her research interest is solving large-scale optimization problems encountered in industry.

Title: Re-engineering Amazon’s logistics network to optimize for speed, cost and selection

Abstract: Omnichannel retail today requires fulfillment of customer orders through a transportation network that is both low-cost and fast. In addition, to provide access to the retailer’s entire selection (which may include third parties selling through the retailer’s marketplace), the network needs to connect thousands of nodes to end customers, on a continental scale, using large fleets of vehicles. Designing and executing this complex transportation network is challenging and can lead to undesired inefficiencies. We address this problem by adding structure to the network to encourage flow concentration and consolidation. This is done by decomposing the solution space into loosely coupled pieces, reducing the search space by removing decision-variable choices that do not make business sense, and accepting different optimality tolerances for different business decisions.
Neal Parikh (Columbia University)
Neal Parikh is a computer scientist who most recently served as Director of AI for New York City. He is also currently an Adjunct Associate Professor in the School of International & Public Affairs at Columbia University, teaching a new class called “AI: A Survey for Policymakers.” Previously, he co-founded a technology startup, which was acquired after 10 years in operation; was an Inaugural Fellow at the Aspen Tech Policy Hub at the Aspen Institute; and worked as a senior quant at Goldman Sachs. He received his Ph.D. in computer science from Stanford University, focusing on large-scale machine learning and convex optimization, and his research has received over 20,000 citations in the literature and is widely used in industry.

Title: Reflections on AI in NYC government

Abstract: AI and machine learning have emerged as increasingly ubiquitous technologies in a wide range of areas in both the private sector and in government. In the past several years, ethical and other questions around how and whether to use AI for particular tasks have become much more prominent, partly due to its widespread use and partly due to publicly documented failures or shortcomings of a number of systems that can negatively impact people in sometimes serious ways. The speaker recently served as the first Director of Artificial Intelligence for New York City, a then-newly created position in the NYC Mayor’s Office with broad responsibilities relating to policy and legislation, technical advisory work, and collaborations or partnerships with universities and other governments. This included publishing the first comprehensive AI Strategy for NYC. This talk will be an informal survey of reflections on this experience from the perspective of a computer scientist new to government. It will emphasize aspects of government in general, and of this experience in particular, that those who have not directly served in a government or policymaking position but are interested in ethical AI or AI policy might find surprising, interesting, or helpful.
Maria João Sousa (Cornell Tech)
Maria João Sousa is a PiTech Startup Postdoc at Cornell Tech and Incoming Executive Director at Climate Change AI (CCAI), a global non-profit that catalyzes impactful work at the intersection of climate change and machine learning. She will defend her Ph.D. thesis in Mechanical Engineering at Instituto Superior Técnico (IST), Universidade de Lisboa in 2023. Her doctoral thesis focused on cooperative aerial robotics and artificial intelligence for wildfire detection and monitoring systems, and was developed while she was a research fellow at both IDMEC, in the Center of Intelligent Systems, and ADAI, in the Forest Fire Research Center. Her research interests are in the areas of computational intelligence, robotics, and networked systems. She was nominated for the UN Environment Young Champions of the Earth 2018 Prize for her project on decentralized intelligent sensor networks for fire detection and monitoring. She has co-organized several entrepreneurship events, including for international programs such as 3 Day Startup. More recently, she collaborated on the NeurIPS 2020 workshop side-event “Monitoring the Climate Crisis with AI, Satellites and Drones” and was on the organizing team of the “Tackling Climate Change with Machine Learning” workshops at ICML 2021, NeurIPS 2021 (lead organizer), and NeurIPS 2022.

Title: Tackling Climate Change with Machine Learning

Abstract: Climate change is one of the greatest challenges that society faces today, requiring rapid action from all corners. In this talk, I will first describe how machine learning can be a potentially powerful tool for addressing climate change, when applied in coordination with policy, engineering, and other areas of action. From energy to agriculture to disaster response, I will describe high-impact problems where machine learning can help through avenues such as distilling decision-relevant information, optimizing complex systems, and accelerating scientific experimentation. Second, I will dive into some of my own research in this area that leverages artificial intelligence and robotics to develop decentralized environmental monitoring systems for wildfire detection and monitoring. Specifically, I will briefly cover how networks of aerial robots can provide enhanced real-time wildfire intelligence at scale by leveraging cooperative robotics, remote sensing, and AI. Then, I will end by presenting important considerations for developing and deploying work in this area, as well as ways to get involved in the community working at the intersection of climate change and AI.
Milind Tambe (Harvard University)
Milind Tambe is the Gordon McKay Professor of Computer Science and Director of the Center for Research in Computation and Society at Harvard University; concurrently, he is also Principal Scientist and Director of “AI for Social Good” at Google Research. He is the recipient of the AAAI Feigenbaum Prize, the IJCAI John McCarthy Award, the AAMAS ACM Autonomous Agents Research Award, and the AAAI Robert S. Engelmore Memorial Lecture Award, and he is a fellow of AAAI and ACM. He is also a recipient of the INFORMS Wagner Prize for excellence in operations research practice and the Rist Prize from MORS (Military Operations Research Society). For his work on AI and public safety, he has received the Columbus Fellowship Foundation Homeland Security Award and commendations and certificates of appreciation from the US Coast Guard, the Federal Air Marshal Service, and the airport police of the City of Los Angeles.

Title: AI for social impact: Results from deployments for public health

Abstract: For the past 15 years, my team and I have been advancing AI and multiagent systems research towards social impact, focusing on topics of public health, conservation, and public safety. We have focused on addressing a key cross-cutting challenge: how to effectively deploy our limited intervention resources. In this talk, I will present results from work in using AI for addressing challenges in public health, such as maternal and child care interventions, HIV prevention, and TB prevention. Achieving social impact in these domains often requires methodological advances. To that end, I will highlight key research advances in multiagent reasoning and learning, in particular in restless multi-armed bandits, influence maximization in social networks, and decision-focused learning. In pushing this research agenda, our ultimate goal is to enable local communities and non-profits to directly benefit from advances in AI tools and techniques.
Angelique Taylor (Cornell Tech)
Angelique Taylor is an Assistant Professor in the Information Science Department at Cornell Tech and Director of the Artificial Intelligence and Robotics Lab (AIRLab), which focuses on research at the intersection of robotics, computer vision, and artificial intelligence. The AIRLab designs intelligent systems that work alongside groups of people in real-world, safety-critical environments. These systems are realized through multi-robot systems, robot vision systems, AI, and extended reality devices. Before joining Cornell, Angelique was a Visiting Research Scientist at Meta Reality Labs Research, working on AI to support multi-user collaboration in AR/VR. She received her Ph.D. in Computer Science and Engineering from the University of California San Diego in 2021. She has received the NSF GRFP, the Microsoft Dissertation Award, the Google Anita Borg Memorial Fellowship, the Arthur J. Schmitt Presidential Fellowship, a GEM Fellowship, and an award from the National Center for Women in Information Technology (NCWIT). More information on her research can be found at angeliquemtaylor.com.

Title: Perception and Decision-Making Systems for Human-Robot Teaming in Safety-Critical Environments

Abstract: In this talk, I will present recent work on developing perception and decision-making systems that enable robots to team with groups of people. My core focus is on problems that robots encounter in human-robot teaming, including perception of human groups and social navigation, particularly in safety-critical environments. First, I will discuss how I developed computer vision methods that enable robots to detect and track their teammates in real-world settings. Building on this, I designed a social navigation system that enables robots to deliver materials to healthcare workers, using acuity-aware image features to incorporate the severity of patients’ health while navigating in the emergency department (ED). This work will help robots avoid interrupting care delivery. My work will enable robots to operate in safety-critical, human-centered environments, and ultimately help improve patient outcomes and alleviate clinician workload.
Pascal Van Hentenryck (Georgia Institute of Technology)
Pascal Van Hentenryck is the A. Russell Chandler III Chair and Professor in the H. Milton Stewart School of Industrial and Systems Engineering at the Georgia Institute of Technology and the Associate Chair for Innovation and Entrepreneurship. He is the director of the NSF AI Institute for Advances in Optimization and the director of the Socially Aware Mobility (SAM) and Risk-Aware Market Clearing (RAMC) labs. Several of his optimization systems have been in commercial use for more than 20 years for solving logistics, supply chain, and manufacturing applications. His current research focuses on machine learning, optimization, and privacy, with applications in energy, mobility, and supply chains.

Title: On-Demand Multimodal Transit Systems: From Concepts to Pilots

Abstract: This talk is a high-level overview of research on On-Demand Multimodal Transit Systems (ODMTS), i.e., public transit systems that holistically integrate fixed routes and on-demand services. The talk covers the planning and operations of transit systems as well as MARTA Reach, a 6-month pilot program of ODMTS in Atlanta. It also discusses recent research on how to integrate mode choices into the planning of large-scale ODMTS.
Ellen Vitercik (Stanford University)
Ellen Vitercik is an Assistant Professor at Stanford University with a joint appointment between the Management Science & Engineering department and the Computer Science department. Her research revolves around machine learning theory, discrete optimization, and the interface between economics and computation. Before joining Stanford, she spent a year as a Miller Fellow at UC Berkeley after receiving a PhD in Computer Science from Carnegie Mellon University. Her thesis won the SIGecom Doctoral Dissertation Award and the CMU School of Computer Science Distinguished Dissertation Award.

Title: Leveraging Reviews: Learning to Price with Buyer and Seller Uncertainty

Abstract: On online marketplaces, customers have access to hundreds of reviews for a single product. Buyers often use reviews from other customers that share their personal attributes—such as height for clothing or skin type for skincare products—to estimate their values, which they may not know a priori. Customers with few relevant reviews may hesitate to buy a product except at a low price, so for the seller, there is a tension between setting high prices and ensuring that there are enough reviews that buyers can confidently estimate their values. In this talk, we formulate this pricing problem through the lens of online learning and provide a no-regret learning algorithm.
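To illustrate the online-learning framing of pricing (a toy bandit sketch under an invented demand model, not the algorithm from the talk), the snippet below runs UCB over a discrete price grid, treating realized revenue as the reward.

```python
# Toy sketch: learning a posted price from bandit feedback with UCB1 on
# revenue. The price grid and the buyer value distribution are invented;
# review-driven buyer uncertainty is ignored for simplicity.
import numpy as np

rng = np.random.default_rng(2)
prices = np.array([5.0, 10.0, 15.0, 20.0, 25.0])

def buyer_purchases(price):
    # Hypothetical buyer: value drawn from N(15, 5); buys if value >= price.
    return float(rng.normal(15.0, 5.0) >= price)

counts = np.zeros(len(prices))
revenue_sums = np.zeros(len(prices))
max_rev = prices.max()  # scale rewards into [0, 1] for the UCB bonus

for t in range(1, 5_001):
    if 0 in counts:
        arm = int(np.argmin(counts))                    # try each price once
    else:
        means = (revenue_sums / counts) / max_rev
        ucb = means + np.sqrt(2 * np.log(t) / counts)   # optimism bonus
        arm = int(np.argmax(ucb))
    revenue = prices[arm] * buyer_purchases(prices[arm])
    counts[arm] += 1
    revenue_sums[arm] += revenue

best = int(np.argmax(revenue_sums / counts))
print(f"learned price: {prices[best]:.0f}, "
      f"avg revenue {revenue_sums[best] / counts[best]:.2f}")
```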
Manxi Wu (Cornell University)
Manxi Wu is an Assistant Professor in Cornell University’s School of Operations Research and Information Engineering. Her research focuses on analyzing the strategic behavior of agents and its impact on societal-scale systems using game theory, optimization, and machine learning techniques. She also works on designing information and market mechanisms to enhance system efficiency and resiliency. Manxi earned her Ph.D. in Social and Engineering Systems from MIT IDSS in 2021. Prior to joining Cornell, she was a research fellow at the Simons Program on Learning and Games and a postdoctoral scholar in EECS at the University of California, Berkeley from 2021 to 2022. Manxi is a recipient of the Hammer Fellowship, the Siebel Energy Scholarship, and a Simons Fellowship.

Title: Improving Efficiency and Equity in Tolling: A Market Design Approach

Abstract: Tolling is an effective approach to mitigate traffic congestion, but it can also lead to inefficiency and equity issues if not properly designed. In this talk, we first present an empirical study of the San Francisco Bay Area, where we estimate the average willingness-to-pay of traveler populations with different income levels and re-design toll prices to account for their impact on different populations. Our new toll design improves efficiency by reducing average travel time and improves equity by minimizing differences in travel time among income groups. Additionally, we propose a novel market design approach for incentivizing carpooling to efficiently share road capacity. Our approach combines ideas from combinatorial auction theory and dynamic network flows to analyze the existence, computation, and implementation of market equilibrium, addressing challenges such as integer and network constraints on dynamic trip organization and riders’ heterogeneous and private preferences.
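A stylized way to write the trade-off described in the first part of the abstract (illustrative notation only, not the paper’s model): travelers in income group g have value of time alpha_g, and tolls tau are chosen to weigh average travel time against the spread of travel times across groups, given that flows respond to tolls in equilibrium.

```latex
% Stylized toll-design objective (illustrative only): T_g is the average
% travel time of income group g under the equilibrium flow x(\tau) induced
% by tolls \tau; \gamma trades off efficiency against equity.
\begin{align*}
\min_{\tau \ge 0} \;\; & \sum_{g} \lambda_g \, T_g\bigl(x(\tau)\bigr)
\;+\; \gamma \, \max_{g, g'} \Bigl| T_g\bigl(x(\tau)\bigr) - T_{g'}\bigl(x(\tau)\bigr) \Bigr| \\
\text{s.t.}\;\; & x(\tau) \text{ is an equilibrium in which each group-}g\text{ traveler minimizes }
\sum_{e \in \text{route}} \Bigl( t_e(x_e) + \tfrac{\tau_e}{\alpha_g} \Bigr).
\end{align*}
```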
COMPANIES/ORGANIZATIONS
BetaNYC – Kate Nicholson, Director of Programs and Partnerships

A partner project of the Fund for the City of New York, BetaNYC helps New Yorkers access information and use technology. As a civic tech organization, we seek to improve New Yorkers’ lives through open data, technology, and design. We want New York’s governments to work for the people, by the people, for the digital era. Our work equips individuals and local communities to build a civically engaged technology ecosystem and provides for an honest and inclusive government.
Title: Stories from the streets, making NYC open data accessible, understandable, and comfortable to sit on.
Replica – Steven Turell, Chief of Staff

Replica is a data platform that provides insights about the built environment to public and private sector customers around the country. Replica spun out of Google’s Sidewalk Labs in 2019 and now provides information about mobility, economic activity, people, and infrastructure to more than 150 customers, including the MTA, the Illinois Department of Transportation, and Waymo.

Title: Data and Urban Decision Making

Abstract: Steven will talk about Replica’s approach to making data more accessible, valuable, and actionable for the public sector, and the push to make data a fixed rather than variable cost for cities and states. The discussion will also cover the key role academia can play in getting government to think more creatively about its policy choices.

Mr. Turell is the Chief of Staff at Replica, where his portfolio includes partnerships, data, and corporate development. Prior to Replica, Steven worked at Sidewalk Labs on Google’s efforts to build the city of the future in Toronto, and at Deloitte, where he consulted on projects related to public sector innovation. He started his career teaching 7th grade on Oahu in the Teach for America Hawaii corps.
Empire State Realty Trust – Dana Robbins Schneider, Senior Vice President, Director of Energy, Sustainability & ESG

Empire State Realty Trust, Inc. (NYSE: ESRT) is a NYC-focused REIT that owns and manages a well-positioned property portfolio of office, retail, and multifamily assets in Manhattan and the greater New York metropolitan area. Owner of the Empire State Building – the “World’s Most Famous Building” – ESRT also owns and operates its iconic, newly reimagined Observatory Experience. Empire State Realty Trust achieves success for our tenants, brokers, investors, employees, and other stakeholders. Our fully modernized, energy-efficient spaces provide exceptional value to our current and prospective tenants and residents, and serve as a competitive advantage for us. As the leader in sustainability and energy efficiency with a focus on ROI-driven investment, ESRT’s commitment to indoor environmental quality is unmatched.

Abstract: A review of ESRT’s strategic approach to ESG, with a focus on our 2.0 Net Zero commitment. Content will include how we are using data science and data analytics to drive sustainability and decarbonization. Topics will include emissions, energy, water, waste, and health and well-being, as well as the data, benchmarking, technological, engineering, economic, policy, and science-based aspects of project development, measurement of progress against goals, and compliance.

Dana Robbins Schneider is Senior Vice President, Director of Energy, Sustainability and ESG for Empire State Realty Trust. Dana is responsible for defining, leading, and executing a comprehensive program for all company- and property-level energy and sustainability initiatives and industry-leading best practices, and for coordinating and developing the company’s ESG and wellness programs and reporting. Dana focuses on analyzing and implementing actionable measures that drive energy efficiency and performance at the whole-building, systems, and tenant levels, including proactive planning for LL97 and 80x50. We focus on measurable, actionable impacts in energy, water, waste, and indoor environmental quality to drive ROI and healthy buildings. Prior to joining ESRT, Dana led JLL’s Energy and Sustainability Projects team for the Americas, working on over 250 million square feet of impact projects over 18 years. Before this, Dana was a mechanical engineer at WSP. Dana graduated Phi Beta Kappa from the University of Virginia in 1999 and serves on the Real Estate Roundtable Sustainable Policy Advisory Committee, the Urban Green Board of Directors, the REBNY Sustainability Committee, and the USGBC LEED Steering Committee, and is a LEED Fellow. Dana also serves on the LL97 Technical Pathways for Commercial Buildings Working Group.