Sr. Associate Application Developer

2 - 5 years

30 - 35 Lacs

Posted: 1 day ago | Platform: Naukri


Work Mode

Work from Office

Job Type

Full Time

Job Description

There is a substantial pipeline of work, including market-wide initiatives, security-focused transformation, and major cloud migrations. We are seeking an experienced ETL Developer to support and evolve our enterprise data integration workflows. The ideal candidate will have deep expertise in Informatica PowerCenter, excellent hands-on experience with Azure Data Factory and Databricks, and a passion for building scalable, reliable ETL pipelines. This role is critical for both day-to-day operational reliability and long-term modernization of our data engineering stack in the Azure cloud.

What you'll be DOING

What will your essential responsibilities include?

  • Maintain, monitor, and troubleshoot existing ETL workflows to ensure operational reliability and data accuracy.
  • Enhance and extend ETL processes to support new data sources, updated business logic, and scalability improvements.
  • Develop and orchestrate PySpark notebooks in Azure Databricks for data transformation, cleansing, and enrichment.
  • Configure and manage Databricks clusters for performance optimization and cost efficiency.
  • Implement Delta Lake solutions that support ACID compliance, versioning, and time travel for reliable data lake operations (a PySpark sketch of this pattern follows this list).
  • Automate data workflows using Databricks Jobs and Azure Data Factory (ADF) pipelines.
  • Design and manage scalable ADF pipelines, including parameterized workflows and reusable integration patterns.
  • Integrate with Azure Blob Storage and ADLS Gen2 using Spark APIs for high-performance data ingestion and output.
  • Ensure data quality, consistency, and governance across legacy and cloud-based pipelines.
  • Collaborate with data analysts, engineers, and business teams to deliver clean, validated data for reporting and analytics.
  • Participate in the full Software Development Life Cycle (SDLC) from design through deployment, with an emphasis on maintainability and audit readiness.
  • Develop maintainable and efficient ETL logic and scripts following best practices in security and performance.
  • Troubleshoot pipeline issues across data infrastructure layers, identifying and resolving root causes to maintain reliability.
  • Create and maintain clear documentation of technical designs, workflows, and data processing logic for long-term maintainability and knowledge sharing.
  • Stay informed on emerging cloud and data engineering technologies to recommend improvements and drive innovation.
  • Follow internal controls, audit protocols, and secure data handling procedures to support compliance and operational standards.
  • Provide accurate time and effort estimates for assigned development tasks, accounting for complexity and risk.
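To make the Databricks items above concrete, here is a minimal PySpark sketch of the Delta Lake pattern referenced in the list: reading from ADLS Gen2 over the abfss Spark path, applying a simple cleansing step, writing a Delta table, and reading back an earlier version via time travel. The storage account, container, and column names are hypothetical placeholders, not details from this posting.

```python
# Minimal sketch of a Databricks notebook transformation (hypothetical paths and columns).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # already provided as `spark` in a Databricks notebook

# Read raw data from ADLS Gen2 using the abfss Spark API path.
raw = spark.read.format("parquet").load(
    "abfss://raw@examplestorageacct.dfs.core.windows.net/claims/"
)

# Example cleansing and enrichment step.
cleaned = (
    raw.dropDuplicates(["claim_id"])
       .withColumn("load_date", F.current_date())
)

# Write as a Delta table; Delta Lake provides ACID transactions and versioning.
cleaned.write.format("delta").mode("overwrite").save(
    "abfss://curated@examplestorageacct.dfs.core.windows.net/claims_delta/"
)

# Time travel: read a previous version of the same table for audit or rollback checks.
previous = (
    spark.read.format("delta")
         .option("versionAsOf", 0)
         .load("abfss://curated@examplestorageacct.dfs.core.windows.net/claims_delta/")
)
```

In practice a notebook like this would be scheduled through Databricks Jobs or triggered from an ADF pipeline, as described in the responsibilities above.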

You will report to the Application Manager.

What you will BRING

We're looking for someone who has these abilities and skills:

Required Skills and Abilities:

  • Exposure to Informatica PowerCenter, including mappings, workflows, session tuning, and parameterization.
  • Expertise in Azure Databricks and PySpark, including notebook development, cluster configuration and tuning, Delta Lake (ACID, versioning, time travel), job orchestration via Databricks Jobs or ADF, and integration with Azure Blob Storage and ADLS Gen2 using Spark APIs.
  • Excellent hands-on experience with Azure Data Factory, including building and managing pipelines, parameterization and dynamic datasets, notebook integration, and pipeline monitoring (a parameterized-notebook sketch follows this list).
  • Proficiency in SQL, PL/SQL, and scripting languages such as Python, Bash, or PowerShell.
  • Effective understanding of data warehousing, dimensional modeling, and data profiling.
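As a small illustration of the ADF-to-Databricks parameterization skill listed above, the sketch below shows a notebook consuming parameters passed from an ADF Notebook activity via Databricks widgets and returning a value to the pipeline. The widget names and filter logic are illustrative assumptions, not requirements from this posting.

```python
# Hedged sketch: consuming ADF pipeline parameters inside a Databricks notebook.
# `dbutils` and `spark` are available automatically in the notebook environment.
dbutils.widgets.text("source_path", "")  # populated by the ADF Notebook activity's base parameters
dbutils.widgets.text("run_date", "")

source_path = dbutils.widgets.get("source_path")
run_date = dbutils.widgets.get("run_date")

# Load the parameterized dataset and restrict it to the requested run date.
df = spark.read.format("delta").load(source_path)
daily = df.filter(df.load_date == run_date)

# Return a value to ADF so downstream activities can branch or log on it.
dbutils.notebook.exit(str(daily.count()))
```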

Desired Skills and Abilities:

  • Familiarity with Git, CI/CD pipelines, and modern DevOps practices.
  • Working knowledge of data governance, audit trails, metadata management, and compliance standards such as HIPAA and GDPR.
  • Effective problem-solving and troubleshooting skills with the ability to resolve performance bottlenecks and job failures.
  • Awareness of Azure Functions, App Services, API Management, and Application Insights.
  • Understanding of Azure Key Vault for secrets and credential management (see the sketch after this list).
  • Familiarity with Spark-based big data ecosystems (e.g., Hive, Kafka) is a plus.
  • Experience with server administration and database optimization is a plus.
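For the Key Vault item above, a common Databricks pattern is to read credentials through a Key Vault-backed secret scope rather than hard-coding them. The scope, key, server, and table names below are hypothetical assumptions for illustration only.

```python
# Hedged sketch: retrieving a credential from Azure Key Vault via a Databricks secret scope.
jdbc_password = dbutils.secrets.get(scope="example-kv-scope", key="sql-password")

# Use the secret to read from an Azure SQL source without exposing the password in code.
jdbc_url = "jdbc:sqlserver://exampleserver.database.windows.net:1433;database=exampledb"
df = (
    spark.read.format("jdbc")
         .option("url", jdbc_url)
         .option("dbtable", "dbo.policies")
         .option("user", "etl_service")
         .option("password", jdbc_password)
         .load()
)
```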

AXA XL

Insurance and Reinsurance

Greenwich
