Salary: Up to INR 28 LPA
Location: Bangalore, India
Industry: Information Technology and Services
Job Description
Summary role description:
Hiring for a hands-on Senior Software Engineer – Backend (Java / Python) for an Analytics and AI/ML platform provider
Company description:
Our client has a sharp focus on cloud, data, and analytics, providing their clients with the strategies, tools, capabilities, and capacity needed to derive competitive advantage from their data. Our client has been recognized as a valued top partner by global leaders in the cloud and data ecosystem, such as Tableau, Salesforce, Snowflake, and AWS. Having been in operation for more than a decade, our client is highly valued by their customers, who place sustained repeat business, and by their competent, spirited, and well-rewarded employees.
Role details:
- Title / Designation: Senior Software Engineer
- Location: Bengaluru / Gurugram / Mohali
Role & responsibilities:
- Design and develop large-scale Analytics and Reporting solutions using Java/J2EE, Kafka, Spark, the Hadoop ecosystem, Python, PySpark, Glue, APIs, and SQL and NoSQL data sources (MySQL, MongoDB, Hadoop, AWS and Azure data stores, etc.), covering ETL/ELT, data warehouse, and integration use cases
- Create and maintain optimal data pipeline architecture, and build pipelines for on-premises, cloud data migration (e.g., Snowflake, Redshift, Synapse, Databricks), and hybrid scenarios
- Ingest data into cloud repositories from sources including flat files, SQL Server, Kafka, CDC, and Web APIs
- Use developer tools, modern IDEs (Visual Studio Code, IntelliJ), CI/CD, DevOps, GitHub, Terraform, monitoring tools, and cloud migration engineering solutions to build automation and efficiencies
Candidate requirements:
- Bachelor's or Master's degree from an accredited college/university in a business-related or technology-related field
- Relevant AWS certifications, e.g., Cloud Practitioner, Solutions Architect Associate
- Certification in, or demonstrated knowledge of, migration and data integration strategies
- Experienced in designing and developing data pipelines using PySpark in any public cloud (e.g., AWS, GCP, Azure) or hybrid environment, using AWS Glue, Glue Studio, Blueprints, etc.
- Must have 2-8 years of IT experience across engineering tech stacks, frameworks, and programming languages, including data engineering, ETL/ELT, and data analytics and reporting
- Must have 3-5 years of experience programming in a backend language (Java, J2EE, Kafka, Spark, Python, PySpark, Glue, APIs, etc.)
- Experience in building pipelines to migrate data from on-prem to cloud data repositories (e.g., Snowflake, Redshift, Synapse, Databricks)
- Experience in ingesting data into cloud repositories from sources including flat files, SQL server, Kafka, CDC, and Web APIs
- Experience working with modern IDEs (such as Visual Studio Code, IntelliJ)
- Experience with any MPP data warehouse (Snowflake, BigQuery, Redshift)
Selection process:
- 2 technical interviews
- Managerial interview
- HR discussions
It has come to our attention that clients and candidates are being contacted by individuals fraudulently posing as Antal representatives. If you receive a suspicious message (by email or WhatsApp), please do not click on any links or attachments. We never ask for credit card or bank details to purchase materials, and we do not charge fees to jobseekers.
