Senior ETL Engineer
San Francisco, California, United States
The Team You’ll Work With
The data team at our client's location is becoming a central part of decision making for the company. We believe that our interesting data sets will set us apart and help us succeed as a data-driven company. Members of the data team work on understanding and making sense of data while partnering with product and business teams to help drive direction with data. The team is currently composed of professionals in Data Science/Machine Learning, advanced Data Analytics, and Data Engineering. We like to partner with each other and with Cartans across the company to get our work done, and we constantly think about how we can improve. We also like to come up with new product ideas based on data.
The Problems You’ll Solve
As a member of this team, you will be responsible for creating clean, scalable, and easy-to-consume data models and data pipelines. You will build ETLs that allow the rest of the company to answer questions in a self-service manner, and that allow analysts and data scientists to quickly analyze and prototype new ideas. You will partner with the rest of the team on prototyping those new ideas and building scalable products. Examples of responsibilities include:

Build resilient data pipelines based on internal and external data sources
Architect clean and scalable data models to support evolving business requirements
Design scalable ETLs for consistent metric definitions
Partner with ML/DS teams on building out data structures for training and productionizing predictive models
Build or evaluate tooling for data accuracy detection and alerting
Partner with the rest of the team on prototyping and building scalable products driven by the data team
Partner with teams to identify opportunities and build solutions that simplify operations while producing rich, accurate data sets for us to use
Constantly identify opportunities for providing self-service tooling to our internal partners

The Impact You’ll Have
By building scalable self-service solutions, you will enable easier and faster decision making. In addition, you will increase the productivity and accuracy of our data team, as well as of our operations and product teams.
About You
Successful candidates in this role will have at least 4 years of experience and will always look for the balance between fast delivery and building for scale. You don't follow the status quo; you look for ways to improve how we do things. You can explain your work to both technical and business users, and you are a good partner to your team and your customers. Building relationships is a priority. Even though our tool stack (Airflow, DBT, Redshift & Looker) is a good start, you will always be in the know on the latest and greatest technology we could utilize. You concentrate on automation and self-service. You are also excited to build new products, from idea all the way to execution. Examples of problems you will solve include:

Building tools to automate data anomaly detection and alerting
Scaling our data infrastructure and developing software that allows for improved data processing and automation
Evaluating build vs buy tooling
Scaling Looker as a platform to solve operational use cases as well as increase self service adoption around the company
Evaluating and rearchitecting our data model to support existing and future products
Partnering with external teams on data modeling requirements to support analytics
Understanding the needs of external teams, identifying pain points and opportunities, and proposing how the data engineering practice can help
Rearchitecting solutions such as Amplitude to allow for faster and more accurate reporting
Partnering with the rest of the data team to develop best-in-class software solutions to stand up products based on our data