Posted: 2 days ago | Work from Office | Full Time
Introduction to the Role:
Are you passionate about unlocking the power of data to drive innovation and transform business outcomes? Join our cutting-edge Data Engineering team and be a key player in delivering scalable, secure, and high-performing data solutions across the enterprise. As a Data Engineer, you will play a central role in designing and developing modern data pipelines and platforms that support data-driven decision-making and AI-powered products. With a focus on Python, SQL, AWS, PySpark, and Databricks, you'll enable the transformation of raw data into valuable insights by applying engineering best practices in a cloud-first environment.
We are looking for a highly motivated professional who can work across teams to build and manage robust, efficient, and secure data ecosystems that support both analytical and operational workloads.
Accountabilities:
Design, build, and optimize scalable data pipelines using PySpark, Databricks, and SQL on AWS cloud platforms; a minimal illustrative sketch follows this list.
Collaborate with data analysts, data scientists, and business users to understand data requirements and ensure reliable, high-quality data delivery.
Implement batch and streaming data ingestion frameworks from a variety of sources (structured, semi-structured, and unstructured data).
Develop reusable, parameterized ETL/ELT components and data ingestion frameworks.
Perform data transformation, cleansing, validation, and enrichment using Python and PySpark.
Build and maintain data models, data marts, and logical/physical data structures that support BI, analytics, and AI initiatives.
Apply best practices in software engineering, version control (Git), code reviews, and agile development processes.
Ensure data pipelines are well-tested, monitored, and robust with proper logging and alerting mechanisms.
Optimize performance of distributed data processing workflows and large datasets.
Leverage AWS services (such as S3, Glue, Lambda, EMR, Redshift, Athena) for data orchestration and lakehouse architecture design.
Participate in data governance practices and ensure compliance with data privacy, security, and quality standards.
Contribute to documentation of processes, workflows, metadata, and lineage using tools such as Data Catalogs or Collibra (if applicable).
Drive continuous improvement in engineering practices, tools, and automation to increase productivity and delivery quality.
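For illustration only, here is a minimal sketch of the kind of pipeline described above: ingest semi-structured data from S3, cleanse and enrich it with PySpark, and write a curated Delta table on Databricks. The bucket path, table names, and columns (example-bucket, order_id, order_ts, curated.orders) are hypothetical.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-ingest").getOrCreate()

# Ingest raw, semi-structured order events from S3 (hypothetical bucket/prefix).
raw = spark.read.json("s3://example-bucket/raw/orders/")

# Cleanse, validate, and enrich.
cleaned = (
    raw.dropDuplicates(["order_id"])                     # de-duplicate on the business key
       .filter(F.col("order_ts").isNotNull())            # drop records failing basic validation
       .withColumn("order_date", F.to_date("order_ts"))  # derive a partition column
)

# Write a curated Delta table for downstream BI, analytics, and AI workloads.
(cleaned.write.format("delta")
        .mode("overwrite")
        .partitionBy("order_date")
        .saveAsTable("curated.orders"))

The same read-transform-write shape generalizes to the batch ingestion frameworks listed above; parameterizing the source path and target table turns it into one of the reusable ETL/ELT components the role calls for.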
Essential Skills / Experience:
4 to 6 years of professional experience in Data Engineering or a related field.
Strong programming experience with Python, including data wrangling, pipeline automation, and scripting.
Deep expertise in writing complex, optimized SQL queries on large-scale datasets.
Solid hands-on experience with PySpark and distributed data processing frameworks.
Expertise working with Databricks for developing and orchestrating data pipelines.
Experience with AWS cloud services such as S3, Glue, EMR, Athena, Redshift, and Lambda.
Practical understanding of ETL/ELT development patterns and data modeling principles (Star/Snowflake schemas); a star-schema query sketch appears after this list.
Experience with job orchestration tools like Airflow, Databricks Jobs, or AWS Step Functions.
Understanding of data lake, lakehouse, and data warehouse architectures.
Familiarity with DevOps and CI/CD tools for code deployment (e.g., Git, Jenkins, GitHub Actions).
Strong troubleshooting and performance optimization skills in large-scale data processing environments.
Excellent communication and collaboration skills, with the ability to work in cross-functional agile teams.
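To make the data modeling expectation concrete, a hedged sketch of a star-schema aggregation in PySpark SQL; fact_sales, dim_date, and dim_product are hypothetical tables assumed to be registered in the metastore.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("star-schema-demo").getOrCreate()

# Aggregate a hypothetical fact table against its date and product dimensions.
monthly_sales = spark.sql("""
    SELECT d.year,
           d.month,
           p.category,
           SUM(f.sales_amount) AS total_sales
    FROM   fact_sales f
    JOIN   dim_date    d ON f.date_key    = d.date_key
    JOIN   dim_product p ON f.product_key = p.product_key
    GROUP  BY d.year, d.month, p.category
""")
monthly_sales.show()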
Desirable Skills / Experience:
AWS or Databricks certifications (e.g., AWS Certified Data Analytics, Databricks Data Engineer Associate/Professional).
Exposure to data observability, monitoring, and alerting frameworks (e.g., Monte Carlo, Datadog, CloudWatch).
Experience working in healthcare, life sciences, finance, or another regulated industry.
Familiarity with data governance and compliance standards (GDPR, HIPAA, etc.).
Knowledge of modern data architectures (Data Mesh, Data Fabric).
Exposure to streaming data tools like Kafka, Kinesis, or Spark Structured Streaming; a brief streaming sketch follows this list.
Experience with data visualization tools such as Power BI, Tableau, or QuickSight.
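As a hedged sketch of the streaming exposure mentioned above, the following reads a hypothetical Kafka topic with Spark Structured Streaming and appends it to a Delta table. The broker address, topic, checkpoint path, and table name are all assumptions, and the spark-sql-kafka connector must be on the classpath.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-stream").getOrCreate()

# Read a hypothetical Kafka topic as a stream (requires the spark-sql-kafka package).
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "events")
          .load())

# Kafka delivers key/value as binary; decode the payload to a string column.
decoded = events.select(F.col("value").cast("string").alias("payload"))

# Append the raw payloads to a Delta table, checkpointing for fault tolerance.
query = (decoded.writeStream
         .format("delta")
         .option("checkpointLocation", "/tmp/checkpoints/events")
         .outputMode("append")
         .toTable("raw.events"))
query.awaitTermination()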
Agilisium