We’re changing the way people think about transportation. Not that long ago we were just an app to request premium black cars in a few metropolitan areas. Now we’re a part of the logistical fabric of more than 500 cities around the world. Whether it’s a ride, a sandwich, or a package, we use technology to give people what they want, when they want it.
For the people who drive with Uber, our app represents a flexible new way to earn money. For cities, we help strengthen local economies, improve access to transportation, and make streets safer.
And that’s just what we’re doing today. We’re thinking about the future, too. With teams working on autonomous trucking and self-driving cars, we’re in for the long haul. We’re reimagining how people and things move from one place to the next.
We’re bringing Uber to every major city in the world. We need your skills and passion to help make it happen! Be sure to check out the Uber Engineering Blog to learn more about the team.
What you’ll do
Own data expertise and data quality for your pipelines
Create and launch new data models that deliver intuitive analytics to your customers
Design, develop, and launch highly efficient, reliable data pipelines to move data
Design and develop new systems and tools that enable people to consume and understand data faster
Work across multiple teams in high-visibility roles and own solutions end to end
What you’ll need
Strong knowledge of SQL
Experience developing applications in a LAMP stack environment
Experience with Hadoop/Hive, Vertica, Redshift, Presto, Pinot/Scuba, and other data warehouse technologies is preferred
Bonus points if
You have a BS, MS, or PhD in Computer Science or a related field