- Develops and maintains scalable data pipelines and builds out new API integrations to support continuing increases in data volume and complexity.
- Collaborates with analytics and business teams to improve data models that feed business intelligence tools, increasing data accessibility and fostering data-driven decision making across the organization.
- Implements processes and systems to monitor data quality, ensuring production data is always accurate and available for key stakeholders and business processes that depend on it.
- Writes unit/integration tests, contributes to the engineering wiki, and documents datasets and processes.
- Works closely with a team of frontend and backend engineers, product managers, and analysts.
- Works with tools in the big data ecosystem, such as Hadoop and Spark.
- Designs data integrations and a data quality framework.
- Designs and evaluates open source and vendor tools for data lineage.
- Works closely with all business units and engineering teams to develop a strategy for long-term data platform architecture.
- Manages vendors effectively.
Education and Experience Requirements:
- BS or MS degree in Computer Science or a related technical field
- 4+ years of Python development experience
- 4+ years of SQL / Hadoop / Spark development experience (NoSQL experience is a plus)
- 4+ years of experience with schema design and dimensional data modeling
- Ability to manage and communicate data warehouse plans to internal clients
- Experience designing, building, and maintaining data processing systems
- Experience working with MPP systems at any size or scale
- Experience with batch and real-time systems