Location: Bengaluru, India
Industry: Oil & Energy
What we offer:
Our client believes in rewarding employees for their hard work. We offer competitive salaries, company pensions, and performance-related benefits. Our people can also take advantage of our extensive flexible benefits package and more.
How we work:
Our people are key to our success. Our core objective is to provide them with a supportive and entrepreneurial work environment that fosters collaboration. This allows our people to take responsibility and make optimal use of their skills. Together, we want to shape the future of energy.
About Us
We're a leading European energy company committed to powering the energy transition with speed, innovation, and resilience. Our Platform Engineering team is pivotal, providing the secure, scalable infrastructure, platforms, and tooling that enable rapid, high-quality software delivery for both our Advanced Solution Delivery team and Trading Citizen Developers across the trading organization.
Position Summary
As the (Senior) Data/DataOps Engineer, you will design, develop, and maintain scalable data pipelines, manage data integration, and ensure robust data quality and governance in support of our systematic trading activities. This role is critical to the efficiency, reliability, and scalability of our core data infrastructure, directly supporting our strategic trading initiatives. The ideal candidate has a passion for creating high-quality software, loves working with data in all its forms, enjoys solving complex problems, brings expertise in time series data, and is adept at working with modern data platforms and tools.
Your Responsibilities
- Gain a deep understanding of data requirements related to the energy trading business and translate them into technical designs.
- Design, develop, and maintain scalable data pipelines for both batch and streaming data, ensuring data quality and consistency to support data analytics initiatives.
- Develop and maintain robust data models that optimally support workloads of analysts and traders, ensuring data is well structured and easily accessible.
- Seamlessly integrate market and other data from various sources, including internal and external data feeds, APIs, databases, and streaming services.
- Implement data quality checks, validation processes, and monitoring solutions; maintain data governance and security standards across all data domains.
- Manage and maintain an up-to-date data catalog using tools like Collibra to ensure metadata is accurately documented and accessible.
- Develop and implement automation scripts and workflows to enhance efficiency and reduce manual intervention in data processing.
- Monitor and optimize the performance of data pipelines, ensuring efficient data processing and minimizing disruptions.
- Collaborate cross-functionally with traders, analysts, software engineers, and other stakeholders to understand data requirements and ensure that solutions are aligned with business needs.
- Leverage tools and platforms including Azure Databricks (with Unity Catalog and Delta Lake), Snowflake, PostgreSQL, TimescaleDB, Kafka, and Flink, with a strong focus on time series data; a minimal pipeline sketch follows this list.
- Stay updated on and experiment with emerging technologies like Delta Lake and Apache Iceberg to continuously enhance our Lakehouse architecture.
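By way of illustration, the following is a minimal, hypothetical sketch of the kind of pipeline described above: market-price events read from Kafka, validated, and appended to a Delta table with Spark Structured Streaming. The broker address, topic, table name, and schema fields are placeholders rather than details taken from this posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import (DoubleType, StringType, StructField,
                               StructType, TimestampType)

# Illustrative schema for incoming market-price events (field names assumed).
event_schema = StructType([
    StructField("instrument", StringType()),
    StructField("price", DoubleType()),
    StructField("event_time", TimestampType()),
])

# Requires the spark-sql-kafka connector on the classpath.
spark = SparkSession.builder.appName("market-data-ingest").getOrCreate()

# Read raw JSON events from a Kafka topic as a stream.
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "market-prices")              # placeholder topic
    .load()
)

# Parse the payload and apply a basic data-quality filter
# (non-null instrument, positive price) before landing the data.
parsed = (
    raw.select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
    .where(F.col("instrument").isNotNull() & (F.col("price") > 0))
)

# Append validated events to a Delta table, tracking progress via a checkpoint.
query = (
    parsed.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/market_prices")
    .outputMode("append")
    .toTable("trading.market_prices")
)
query.awaitTermination()
```

In practice, the same flow could run as a Databricks job, with Unity Catalog governing access to the target table.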
Your profile
Essential Qualifications:
- Bachelor's or Master's degree in Computer Science, Mathematics, Engineering, or a related quantitative discipline.
- 5+ years of proven expertise in Data/DataOps Engineering or related roles.
- Strong knowledge of software engineering best practices, object-oriented concepts, and the practicalities of data-focused development.
- Expertise in consuming and implementing APIs of various types, including REST, GraphQL, WebSocket, and gRPC.
- Proficiency in Python, SQL, and data processing frameworks like Apache Spark.
- Proficiency with Git and version control workflows.
- Experience with cloud platforms (e.g., Azure) and tools like Databricks, Snowflake, PostgreSQL, and Kafka.
- Experience with Docker and containerization technologies.
- Expertise in handling both event-based and aggregated time series data (see the sketch after this list).
- Strong understanding of modern data governance frameworks, data modeling, data architecture, and OLAP & OLTP systems and their application in dynamic environments.
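To make the event-based vs. aggregated distinction concrete, here is a small, self-contained pandas sketch (all values invented) that resamples irregular trade ticks into hourly OHLC bars with total volume:

```python
import pandas as pd

# Event-based series: irregular trade ticks (timestamps and prices made up).
ticks = pd.DataFrame(
    {
        "price": [41.2, 41.5, 41.3, 42.0, 41.8],
        "volume": [10, 5, 8, 12, 7],
    },
    index=pd.to_datetime([
        "2024-01-02 09:01:13",
        "2024-01-02 09:17:45",
        "2024-01-02 09:48:02",
        "2024-01-02 10:05:30",
        "2024-01-02 10:52:11",
    ]),
)

# Aggregated series: resample the event stream into hourly OHLC bars
# plus total traded volume per bar.
bars = ticks["price"].resample("1h").ohlc()
bars["volume"] = ticks["volume"].resample("1h").sum()
print(bars)
```

The same pattern scales to production workloads via, for example, Spark window aggregations or TimescaleDB continuous aggregates.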
Preferred Qualifications:
- Previous experience in an operational data team, preferably in energy trading or a related field.
- Experience with DevOps techniques, including CI/CD and infrastructure-as-code.
- Proficiency in at least one OOP language other than Python (e.g., Java, C#).
- Experience with Delta Live Tables (DLT) and/or dbt.
- Experience with web scraping techniques.
- Familiarity with data cataloging and quality monitoring solutions (e.g., Collibra).
- Experience in building Generative AI (GenAI) solutions, such as Retrieval-Augmented Generation (RAG) and Agentic AI.
Soft Skills and Cultural Fit:
- Excellent communication and collaboration skills to work effectively with technical teams and business stakeholders.
- Strong analytical thinking and problem-solving abilities.
- Demonstrated initiative and a proactive, self-starter mindset, with a strong drive to independently identify and solve challenges.
- A passion for continuous learning, innovation, knowledge sharing, and driving excellence in data engineering.
- Ability to work effectively in a cross-functional, fast-paced environment.