
214 Data Engineer Jobs - Page 5

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

5 - 8 years

3 - 3 Lacs

Hyderabad

Hybrid

Source: Naukri

JD: Data Engineer

Experience: 5-8 years of in-depth, hands-on expertise with ETL tools and logic, with a strong preference for IDMC (Informatica Cloud).
Application Development/Support: Demonstrated success in either application development or support roles.
Python Proficiency: Strong understanding of Python, with practical coding experience.
AWS: Comprehensive knowledge of AWS services and their applications.
Airflow: Experience creating and managing Airflow DAG scheduling.
Unix & SQL: Solid command of Unix commands, shell scripting, and writing efficient SQL scripts.
Analytical & Troubleshooting Skills: Exceptional ability to analyze data and resolve complex issues.
Development Tasks: Proven capability to execute a variety of development activities efficiently.
Insurance Domain Knowledge: Familiarity with the insurance sector is highly advantageous.
Production Data Management: Significant experience in managing and processing production data.
Work Schedule Flexibility: Open to working in any shift, including 24/7 support, as required.
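As a rough illustration of the Airflow DAG scheduling this role describes, here is a minimal sketch; the DAG id, cron schedule, and scripts are hypothetical, not taken from the posting.

```python
# Minimal Airflow DAG sketch; dag_id, schedule, and commands are assumptions.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="nightly_policy_etl",       # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule_interval="0 2 * * *",     # run daily at 02:00
    catchup=False,
) as dag:
    extract = BashOperator(task_id="extract", bash_command="python extract.py")
    load = BashOperator(task_id="load", bash_command="python load.py")
    extract >> load  # load runs only after extract succeeds
```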

Posted 1 month ago

Apply

4 - 8 years

9 - 12 Lacs

Pune, Delhi NCR, Bengaluru

Hybrid

Source: Naukri

Job Description: Overall experience of 4 to 8 years required for the Gurugram location. Experience with SSIS and Git is a must.
• Proficient experience designing, building, and operationalizing large-scale enterprise data solutions and applications using one or more Azure data and analytics services: Azure Data Factory, Databricks, Azure Data Lake, Azure Dataflow, Azure Synapse, Azure SQL, cloud storage, cloud functions, and triggers. Hands-on experience with Snowflake.
• Working knowledge of at least one of Spark, PySpark, or Hive, with fair exposure to Big Data.
• Excellent communicator (written and verbal, formal and informal).
• Azure certification preferred.
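For a sense of the PySpark work such pipelines involve, a small batch-transform sketch follows; the storage paths and column names are assumptions, not details from the posting.

```python
# Illustrative PySpark batch transform; paths and columns are assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_agg").getOrCreate()

orders = spark.read.parquet("abfss://raw@examplelake.dfs.core.windows.net/orders")
daily = (
    orders.filter(F.col("status") == "COMPLETED")
          .groupBy(F.to_date("order_ts").alias("order_date"))
          .agg(F.sum("amount").alias("total_amount"))
)
daily.write.mode("overwrite").parquet(
    "abfss://curated@examplelake.dfs.core.windows.net/daily_orders"
)
```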

Posted 2 months ago

Apply

5 - 10 years

20 - 27 Lacs

Gurgaon

Hybrid

Source: Naukri

Data Product Engineering Specialist - 5+ Years - Gurgaon

Location: Gurgaon

We are looking for an experienced Data Product Engineering Specialist with a strong foundation in building scalable data pipelines, frameworks, and data products using cloud platforms like Azure and Databricks. If you're passionate about data engineering, governance, and automation, this is your opportunity to work on impactful projects in a highly collaborative environment.

Your Future Employer: A global leader in digital transformation and analytics, providing data-driven solutions to top-tier clients across finance, insurance, and technology sectors. The organization focuses on delivering next-gen data products while ensuring robust data governance and cloud-first engineering practices.

Responsibilities:
- Designing and maintaining scalable, governed data products for analytics, AI/ML, and business operations
- Building robust data pipelines and reusable frameworks using tools like Databricks and Spark
- Collaborating with Product Managers and SMEs to align data architecture with business goals
- Implementing cloud-native solutions while ensuring compliance with data privacy and security policies
- Developing and executing automated tests to ensure data quality and platform stability
- Optimizing data solutions for performance and cost efficiency through automation and modern engineering practices

Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related technical discipline
- 5+ years of hands-on experience in data engineering, pipeline development, and cloud technologies (Azure preferred)
- Proficiency in SQL, Python, Spark, and modern cloud platforms like Databricks
- Strong understanding of data governance, metadata management, and real-time streaming tools like Kafka
- Exposure to CI/CD, DevOps, and agile product development environments

What's in it for you:
- Work on cutting-edge data platforms and engineering tools in a cloud-first ecosystem
- Collaborate across product, technology, and business teams to deliver enterprise-level solutions
- Accelerate your career in a dynamic, high-growth environment with continuous learning opportunities

Reach us: If you think this role is aligned with your career, kindly write to shreya.mohan@crescendogroup.in along with your updated CV for a confidential discussion on the role.
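Since the role pairs Databricks/Spark with real-time streaming tools like Kafka, here is a hedged sketch of one common pattern — reading a Kafka topic with Spark Structured Streaming and landing it as a Delta table. The broker, topic, and paths are placeholders, not the employer's setup.

```python
# Sketch: Kafka topic -> Delta table via Spark Structured Streaming.
# Broker, topic, and storage paths are assumptions; Delta is available
# by default on Databricks.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events_ingest").getOrCreate()

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
         .option("subscribe", "events")                     # placeholder topic
         .load()
         .select(F.col("value").cast("string").alias("payload"),
                 F.col("timestamp"))
)

(events.writeStream
       .format("delta")
       .option("checkpointLocation", "/mnt/checkpoints/events")  # assumed path
       .start("/mnt/curated/events"))
```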

Posted 2 months ago

Apply

1 - 6 years

15 - 25 Lacs

Delhi NCR, Gurgaon, Mumbai (All Areas)

Work from Office

Source: Naukri

Job Purpose: Provide in-depth analysis and feedback to supervisors and product, policy, collections, and fraud risk teams within the Bank. Build scorecards to assess portfolio behaviors, with a strong emphasis on credit analytics and digital innovation in lending processes.

Job Responsibilities:
- Core Credit Analytics Focus: Analyze loan portfolio behaviors, build and maintain scorecards, and provide actionable insights to enhance decision-making for product, policy, collections, and fraud risk teams.
- Digital Innovation in Lending: Understand the loan processing environment for various products from an innovation perspective, identify digital opportunities, and implement projects to optimize lending journeys based on opportunities identified by the Team Lead.
- Project Management: Manage the complete lifecycle of digital projects, including requirement gathering, working with IT/BTG for vendor identification, FPN approval, UAT testing, and other support to ensure successful project go-live.
- Collaboration with IT: Work closely with IT counterparts to ensure seamless project management and execution of credit analytics and digital initiatives.
- Fintech Engagement: Explore and engage with fintechs, evaluate their solutions, and implement those that align with organizational goals, particularly in credit and lending areas.

Educational Qualifications: MBA, CA, FRM, or CFA (mandatory).

Key Skills:
- Minimum 1 year of hands-on experience with Python (non-negotiable) for data analysis, modeling, and automation.
- Strong technical and analytical skills with expertise in credit analytics.
- Proficient in Excel modeling and financial statement analysis.
- Excellent written and verbal communication skills.
- In-depth knowledge of banking products, processes, accounting principles, and regulatory frameworks.
- Awareness of competition and current trends in the financial industry.

Additional Requirements: Ability to work in a fast-paced environment and deliver actionable insights. Strong problem-solving skills and attention to detail.

Why Join Us? Opportunity to work on cutting-edge credit analytics projects and drive digital transformation in lending. Collaborate with top-tier teams in a dynamic banking environment. Grow your career with a focus on innovation and strategic impact.

How to Apply: Interested candidates with at least 1 year of Python experience and the specified qualifications are encouraged to apply. Please upload your resume and cover letter highlighting your Python experience and credit analytics background.
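To make the scorecard responsibility concrete, here is a toy Python sketch of the usual pattern: fit a default-probability model, then map the odds onto a score. The synthetic features, coefficients, base score, and points-to-double-the-odds scaling are illustrative assumptions, not the Bank's actual method.

```python
# Toy scorecard sketch: logistic regression on synthetic borrower features,
# then scaling predicted default odds to a score. All values are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))  # e.g. utilisation, DPD count, scaled income
y = (X @ [1.5, 1.0, -0.8] + rng.normal(size=1000) > 0).astype(int)  # 1 = default

model = LogisticRegression().fit(X, y)
pd_hat = model.predict_proba(X)[:, 1]  # estimated probability of default

# Common convention: higher score = lower risk (base 600, 20 points doubles the odds).
odds = (1 - pd_hat) / pd_hat
score = 600 + 20 * np.log2(odds)
print(score[:5].round(0))
```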

Posted 2 months ago

Apply

4 - 7 years

5 - 10 Lacs

Bengaluru, Hyderabad, Mumbai (All Areas)

Work from Office

Source: Naukri

Data Engineer with 5+ years of experience; Data Engineer with 4-5 years of experience. Location: Mumbai/Bangalore. Availability: candidates available within 3-4 weeks. Key requirement: excellent communication skills. Please find the job description below for more details.

Job Title: Data Engineer
Experience: 3-6 years
Location: Preference for Hyderabad; other Deloitte office locations considered
Employment Type: Full-time

Job Summary: We are seeking a skilled Data Engineer with expertise in Azure, Kubernetes, Terraform, and Ansible. The ideal candidate will be responsible for designing, deploying, and maintaining data infrastructure, ensuring scalability, security, and efficiency.

Key Responsibilities:
- Data Infrastructure Management: Design, deploy, and manage Azure-based data environments. Implement data best practices for high availability, scalability, and security.
- Data Pipeline & Orchestration: Manage and deploy data pipelines using Kubernetes. Optimize data workflows and ensure smooth orchestration.
- Infrastructure as Code: Automate infrastructure provisioning using Terraform. Maintain and improve IaC pipelines to support data engineering workflows.
- Configuration Management & Automation: Implement and manage configuration automation using Ansible. Develop automation scripts to improve data resource management.
- Monitoring & Optimization: Set up monitoring, logging, and alerting for data infrastructure. Optimize data resources for cost-efficiency and performance.
- Security & Compliance: Ensure data security best practices are followed. Implement role-based access control (RBAC) and compliance policies.
- Collaboration & Troubleshooting: Work closely with data scientists, analysts, and development teams to support data-driven applications. Diagnose and resolve data infrastructure issues efficiently.

Required Skills & Experience:
- Strong expertise in Microsoft Azure cloud services.
- Proven experience in designing and building data pipelines and ETL processes.
- Hands-on experience with Kubernetes for container orchestration.
- Proficiency in Terraform for infrastructure automation.
- Experience with Ansible for configuration management.
- Proficiency in Python and Apache Spark.
- Solid understanding of data modeling, data warehousing, and delta lakes.
- Good understanding of networking, security, and data governance.
- Strong problem-solving and troubleshooting skills.

Good to Have:
- Experience with AWS or Google Cloud Platform (GCP).
- Knowledge of CI/CD pipelines and DevOps best practices.
- Familiarity with Docker, Helm, and data-native technologies.
- Experience with Kafka streaming.
- Certifications such as Azure Data Engineer Associate.

Posted 2 months ago

Apply

5 - 10 years

10 - 20 Lacs

Bengaluru

Work from Office

Source: Naukri

Hi, greetings from Sun Technology Integrators!

This is regarding a job opening with Sun Technology Integrators, Bangalore. Please find the job description below for your reference. Kindly let me know your interest and share your updated CV to nandinis@suntechnologies.com with the following details ASAP:
- Current CTC
- Expected CTC
- Notice period
- Current location
- Are you serving notice period / immediately available?
- Experience in Snowflake
- Experience in Matillion

Shift timings: 2:00 PM-11:00 PM (free cab drop facility + food provided).

Please let me know if any of your friends are looking for a job change; kindly share references. Only serving/immediate candidates can apply.

Interview process: 2 rounds (virtual) + final round (face-to-face).
Please note: Work From Office only (no hybrid or Work From Home).

Mandatory skills: Snowflake, SQL, ETL, data ingestion, data modeling, data warehouse, Python, Matillion, AWS S3, EC2.
Preferred skills: SSRS, SSIS, Informatica, shell scripting.

Venue details:
Sun Technology Integrators Pvt Ltd
No. 496, 4th Block, 1st Stage, HBR Layout (a stop ahead of Nagawara, towards K. R. Puram)
Bangalore 560043
Company URL: www.suntechnologies.com

Thanks and regards,
Nandini S | Sr. Technical Recruiter
Sun Technology Integrators Pvt. Ltd.
nandinis@suntechnologies.com
www.suntechnologies.com
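As a quick illustration of the mandatory Snowflake + Python combination, here is a hedged loading sketch using the snowflake-connector-python client; the account, credentials, stage, and table names are placeholders, not the company's environment.

```python
# Sketch: loading S3-staged data into Snowflake from Python.
# Account, credentials, stage, and table names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345",   # placeholder account identifier
    user="ETL_USER",
    password="...",      # use a secrets manager in practice
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="RAW",
)
cur = conn.cursor()
try:
    cur.execute("COPY INTO raw.orders FROM @s3_stage/orders/ FILE_FORMAT = (TYPE = CSV)")
    cur.execute("SELECT COUNT(*) FROM raw.orders")
    print(cur.fetchone())
finally:
    cur.close()
    conn.close()
```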

Posted 2 months ago

Apply

7 - 12 years

25 - 32 Lacs

Pune, Bengaluru

Hybrid

Source: Naukri

Job Role & Responsibilities:
- Understand operational needs by collaborating with specialized teams and supporting key business operations. This involves architecting, designing, building, and deploying data systems, pipelines, etc.
- Design and implement agile, scalable, and cost-efficient solutions on cloud data services.
- Lead a team of developers; run sprint planning and execution to ensure timely deliveries.

Technical Skills, Qualification & Experience Required:
- 7-10 years of experience in Azure cloud data engineering: Azure Databricks, Data Factory, PySpark, SQL, Python.
- Hands-on experience as a data engineer with Azure Databricks, Data Factory, PySpark, and SQL.
- Proficient in Azure cloud services; architect and implement ETL and data movement solutions; migrate data from traditional database systems to the cloud.
- Strong hands-on experience working with streaming datasets.
- Building complex notebooks in Databricks to achieve business transformations.
- Hands-on expertise in data refinement using PySpark and Spark SQL.
- Familiarity with building datasets using Scala.
- Familiarity with tools such as Jira and GitHub.
- Experience leading agile scrum, sprint planning, and review sessions.
- Good communication and interpersonal skills.
- Comfortable working in a multidisciplinary team within a fast-paced environment.

* Immediate joiners will be preferred.

Posted 2 months ago

Apply

3 - 6 years

10 - 20 Lacs

Pune

Remote

Source: Naukri

Rudder Analytics is looking for Data Engineers (Data ETL/Talend/DB/Cloud) at Pune, with 3-6 years of experience. Informatica experience will not be considered for this role. Please see details at https://bit.ly/3ZLheEo for job code ED-SA-01.

Required candidate profile: Ability to lead a team and manage projects independently. An eye for detail and great problem-solving skills. Ability to thrive in the fast-paced and demanding environment of a start-up.

Posted 2 months ago

Apply

3 - 7 years

6 - 10 Lacs

Hyderabad

Work from Office

Source: Naukri

Role & Responsibilities:
- Snowflake: Expertise in building, optimizing, and managing data warehousing solutions on Snowflake.
- SQL: Strong knowledge of SQL for querying and managing relational databases, writing complex queries and stored procedures, and performance tuning.

Preferred Candidate Profile:
- Python: Proficiency in Python for scripting, automation, and integration within data pipelines.
- Experience in developing and managing ETL processes and ensuring data accuracy and performance.
- Hands-on experience with data migration and integration processes across cloud platforms.
- Familiarity with data security and governance best practices.
- Strong problem-solving skills with the ability to troubleshoot and resolve data-related issues.

Posted 2 months ago

Apply

7 - 12 years

15 - 25 Lacs

Hyderabad

Hybrid

Source: Naukri

Hiring: Data Engineer with Airflow. Need consultants with 5+ years of experience.
Location: Hyderabad, India (hybrid)
Type: Full-time

Required Skills:
- Strong Airflow or Python skills (for the platform-uplifting work), along with Informatica skills.
- Informatica PowerCenter, Power Exchange for CDC.
- Advanced SQL skills.
- A minimum of 6 years of experience as a data engineer is required.
- The candidate must have hands-on experience in Airflow for at least 4 years.
- This is not a testing role but a data developer role focused on platform uplifting.
- Python for data acquisition from different APIs.
- Unix shell scripting.
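Since the role calls out Python for data acquisition from different APIs, here is a minimal sketch of a paginated pull with the requests library; the endpoint and paging scheme are assumptions, not a real service.

```python
# Sketch of paginated API acquisition; endpoint and paging are assumptions.
import requests

def fetch_all(base_url: str, page_size: int = 100) -> list[dict]:
    """Collect every record from a hypothetical paginated JSON API."""
    records, page = [], 1
    while True:
        resp = requests.get(
            base_url,
            params={"page": page, "per_page": page_size},
            timeout=30,
        )
        resp.raise_for_status()
        batch = resp.json()
        if not batch:        # empty page -> no more data
            break
        records.extend(batch)
        page += 1
    return records

rows = fetch_all("https://api.example.com/v1/transactions")  # placeholder URL
print(len(rows))
```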

Posted 2 months ago

Apply

6 - 11 years

8 - 13 Lacs

Bengaluru

Work from Office

Source: Naukri

Data Engineer with ADF and ADB (MRF00825) - J48819

Role: Data Engineer
Location: Anywhere in India - work from home

All of the below skills/experience are mandatory (state your experience in each):
- Total experience
- Experience in/as Data Engineer
- Experience in Azure Data Factory
- Experience in Azure Databricks
- Experience in Power BI
- Experience in PySpark
- Experience in Python

Required candidate profile:
Candidate experience should be: 6 to 15 years
Candidate degree should be: BE-Comp/IT, BE-Other, BTech-Comp/IT, BTech-Other, MCA, MCS, ME-Comp/IT, ME-Other, MIS, MIT, MSc-Comp/IT, MS-Comp/IT, MSc-Other, MS-Other, MTech-Comp/IT, MTech-Other

Posted 2 months ago

Apply

4 - 7 years

6 - 9 Lacs

Bengaluru

Work from Office

Source: Naukri

Software Engineer / Sr. Software Engineer - J48459

Knowledge, Skills & Abilities:
- Develop and maintain automated test suites using Python/Scala/C#.
- Hands-on experience writing scripts in Python/Scala/C#.
- Simulate E2E scenarios across Data Factory, Databricks, and SQL Database.
- Experience in ETL testing, database testing, data quality, system testing, and test procedures.
- Proficient in SQL, Python, Azure Data Factory, and Databricks.
- Proficient in CI/CD integration.
- Familiar with different data formats: Parquet, JSON, delimited files.
- Familiar with data modeling and data pipeline concepts.
- Excellent hands-on experience in test management and defect management with tools like Azure DevOps.
- Perform manual testing and run automated test cases to verify results.
- Good to have: exposure to Power BI report testing.

Job Responsibilities:
- Good oral and written communication skills; able to work both independently and within a team framework.
- Understand business requirements and actively provide inputs from a data perspective.
- Understand the underlying data and the flow of data.
- Perform E2E test activities, working closely with data engineers and data scientists on complex and innovative analytics applications.
- Design and perform test plans for testing system back-end components, ensuring product quality and coverage.

Required candidate profile:
Candidate experience should be: 4 to 7 years
Candidate degree should be: BE-Comp/IT, BE-Other, BTech-Comp/IT, BTech-Other, MCA
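To illustrate the automated ETL/data-quality testing this role centres on, here is a minimal pytest sketch; the table, columns, and rules are assumptions, and a real suite would read from the warehouse rather than an inline frame.

```python
# Sketch of automated data-quality checks with pytest; table and rules are
# assumptions. Run with: pytest test_quality.py
import pandas as pd
import pytest

@pytest.fixture
def orders() -> pd.DataFrame:
    # In a real suite this would come from the warehouse or a Parquet extract.
    return pd.DataFrame({"order_id": [1, 2, 3], "amount": [10.0, 25.5, 7.2]})

def test_primary_key_unique(orders):
    assert orders["order_id"].is_unique

def test_no_null_amounts(orders):
    assert orders["amount"].notna().all()

def test_amounts_positive(orders):
    assert (orders["amount"] > 0).all()
```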

Posted 2 months ago

Apply

5 - 8 years

8 - 12 Lacs

Pune, Bengaluru, Hyderabad

Work from Office

Source: Naukri

Skills: Data Governance, Design, Data Engineering, Data Management, NYDFS
Notice Period: 0 - 45 days
Location: Pune / Bangalore / Hyderabad / PAN India

Posted 2 months ago

Apply

5 - 7 years

8 - 18 Lacs

Chennai, Pune, Bengaluru

Work from Office

Source: Naukri

Roles and Responsibilities:
- Design, develop, test, deploy, and maintain large-scale data pipelines using AWS services such as S3, Lambda, and Step Functions.
- Collaborate with cross-functional teams to gather requirements and design solutions for complex data processing needs.
- Develop high-quality code in Python using PySpark and PostgreSQL to extract insights from large datasets.
- Troubleshoot issues related to data quality, performance, and scalability in real time.

Job Requirements:
- 5-7 years of experience in a similar role as a Data Engineer or Software Development Engineer (SDE).
- Strong proficiency in Python, with expertise in working with AWS services like S3, Lambda, and Step Functions.
- Experience with big-data technologies like Apache Spark (PySpark) and relational databases like PostgreSQL.
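For context on the S3 + Lambda + Step Functions combination this posting names, here is a hedged sketch of a Lambda handler that reacts to an S3 object event and kicks off a state machine; the ARN and event shape follow standard AWS conventions, and all names are placeholders.

```python
# Sketch of an AWS Lambda handler: S3 put event -> Step Functions execution.
# The state machine ARN and bucket names are placeholders.
import json

import boto3

sfn = boto3.client("stepfunctions")

def handler(event, context):
    record = event["Records"][0]  # standard S3 event payload
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]
    sfn.start_execution(
        stateMachineArn="arn:aws:states:ap-south-1:111111111111:stateMachine:etl",  # placeholder
        input=json.dumps({"bucket": bucket, "key": key}),
    )
    return {"status": "started", "object": f"s3://{bucket}/{key}"}
```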

Posted 2 months ago

Apply

8 - 13 years

30 - 40 Lacs

Bengaluru, Hyderabad

Work from Office

Source: Naukri

Design, develop, and maintain data pipelines in Snowflake. Perform data transformations, mappings, and scheduling of ETL processes. Set up and manage dbt models to ensure data quality and consistency. Monitor and troubleshoot data jobs to ensure seamless operation. Collaborate with data analysts and engineers to optimize data workflows. Implement best practices for data storage, retrieval, and security.

Tech Stack - AWS Big Data: Expertise in ETL, SQL, Python, and AWS tools like Redshift, S3, Glue, Data Pipeline, Scala, Spark, and Lambda is a must. Good to have knowledge of Glue Workflows, Step Functions, QuickSight, Athena, Terraform, and Docker.

Responsibilities: Assist in the analysis, design, and development of a roadmap, design pattern, and implementation based upon a current-vs-future state from an architecture viewpoint. Participate in data-related technical and business discussions relative to a future serverless architecture. Work with our enterprise customers to migrate data into the cloud. Set up scalable ETL processes to move data into a cloud warehouse. Deep understanding of data warehousing, dimensional modelling, ETL architecture, data conversion/transformation, database design, data warehouse optimization, data mart development, etc. ETL, SSIS, SSAS, T-SQL.

Posted 2 months ago

Apply

5 - 10 years

18 - 32 Lacs

Bhubaneshwar, Nagpur, Visakhapatnam

Work from Office

Source: Naukri

Role & Responsibilities:
1. Strong experience in Azure Data Engineering
2. Experience in Databricks
3. Experience in Python/PySpark

Posted 2 months ago

Apply

5 - 10 years

15 - 22 Lacs

Pune

Work from Office

Source: Naukri

Greetings from "HCL Software" "HCL Software: - Is a Product Development Division of HCL Tech: That operates its primary Software business. At HCL Software we Develop, Market, Sell and Support over 20 Product families in the areas of Customer Experience, Digital Solutions, Secure DevOps, Security & Automation. About Unica Product: - HCL Unica is Cloud Native: Adopt the whole new HCL Unica - Cloud native integrated marketing platform that can be deployed with the choice and flexibility to scale on any cloud & any environment - public, private, or hybrid - in minutes. Note: Are you available for a F2F Interview on 12th April (Saturday) _Pune. We are looking for a Sr. Data Engineer (ETL & Python in our Unica Product team (Pune Location) with 5+ years of experience who possess the following skills: Mandate Skill: Btech/BE in Computer Science or related technical field or proof of exceptional skills in related fields with practical software engineering experience. Programming languages: Python, Spark, SQL, DBT. Database: Preferred (PostgreSQL, MongoDB), or any RDBMS, NOSQL databases. orchestration: Apache Airflow, Prefect, NIFI Cloud tech: AWS (e.g., S3, Redshift, Glue), Google Cloud Platform (e.g., BigQuery, Cloud Composer) Streaming: Apache Kafka / google cloud pub/sub Devops: Docker / Kubernetes, GIT. Role Responsibility: - Develop and manage data pipelines for extracting, loading, and transforming data from multiple sources. Work with open-source and cloud-based databases (e.g., PostgreSQL, Snowflake, BigQuery, Redshift). Automate database operations and ETL tasks using programming languages such as Python and frameworks like Spark. Implement CI/CD practices and version control to streamline deployments. Ensure efficient and reliable orchestration using tools like Apache Airflow, Prefect, or Dragster. Experience working on API integration and real time streaming. Role Description: - A Data Engineer is responsible for designing, building, and maintaining robust data pipelines and architectures that enable the collection, transformation, and storage of large datasets. Ensure data quality and reliability, support data-driven decision-making, and facilitate the integration of various data sources into centralized systems.

Posted 2 months ago

Apply

4 - 9 years

10 - 20 Lacs

Pune

Work from Office

Source: Naukri

Design, develop, and maintain efficient data processing pipelines using PySpark. Implement best practices for ETL processes, ensuring high-quality and secure data. Monitor, troubleshoot, and resolve issues related to data pipelines and infrastructure.

Required candidate profile: Experience in PySpark and Python. Experience with big data frameworks like Hadoop, Spark, or Kafka. Experience working with cloud platforms such as AWS, Azure, or GCP. Experience with data modeling and working with databases.

Posted 2 months ago

Apply

5 - 10 years

0 - 1 Lacs

Mumbai Suburbs, Mumbai, Mumbai (All Areas)

Work from Office

Source: Naukri

Experience in data stewardship, data management, data governance, data quality, and data engineering; leadership roles; Collibra; Alation; data privacy and compliance.
Location: Mumbai - Thane
Experience: 4+ years
Notice period: 1 month to 45 days
Apply / share your CV to preethi.kumar@harjai.com

Posted 2 months ago

Apply

7 - 9 years

15 - 20 Lacs

Hyderabad

Work from Office

Source: Naukri

1. Data Solution Design and Development:
- Architect, design, and implement end-to-end data pipelines and systems that handle large-scale, complex datasets.
- Ensure optimal system architecture for performance, scalability, and reliability.
- Evaluate and integrate new technologies to enhance existing solutions.
- Implement best practices in ETL/ELT processes, data integration, and data warehousing.

2. Project Leadership and Delivery:
- Lead technical project execution, ensuring timelines and deliverables are met with high quality.
- Collaborate with cross-functional teams to align business goals with technical solutions.
- Act as the primary point of contact for clients, translating business requirements into actionable technical strategies.

3. Team Leadership and Development:
- Manage, mentor, and grow a team of 5 to 7 data engineers.
- Conduct code reviews and validations, providing feedback to ensure adherence to technical standards.
- Provide technical guidance and foster an environment of continuous learning, innovation, and collaboration.

4. Optimization and Performance Tuning:
- Analyze and optimize existing data workflows for performance and cost-efficiency.
- Troubleshoot and resolve complex technical issues within data systems.

5. Adaptability and Innovation:
- Embrace a consulting mindset with the ability to quickly learn and adopt new tools, technologies, and frameworks.
- Identify opportunities for innovation and implement cutting-edge technologies in data engineering.
- Exhibit a "figure it out" attitude, taking ownership and accountability for challenges and solutions.

6. Client Collaboration:
- Engage with stakeholders to understand requirements and ensure alignment throughout the project lifecycle.
- Present technical concepts and designs to both technical and non-technical audiences.
- Communicate effectively with stakeholders to ensure alignment on project goals, timelines, and deliverables.

7. Learning and Adaptability:
- Stay updated with emerging data technologies, frameworks, and tools.
- Actively explore and integrate new technologies to improve existing workflows and solutions.

8. Internal Initiatives and Eminence Building:
- Drive internal initiatives to improve processes, frameworks, and methodologies.
- Contribute to the organization's eminence by developing thought leadership, sharing best practices, and participating in knowledge-sharing activities.

Qualifications

Education:
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
- Certifications in cloud platforms such as Snowflake SnowPro or Azure Data Engineer are a plus.

Experience:
- 7 to 10 years of experience in data engineering with hands-on expertise in data pipeline development, architecture, and system optimization.
- Proven track record in leading data engineering teams and managing end-to-end project delivery.

Technical Skills:
- Expertise in programming languages such as Python, Scala, or Java.
- Proficiency in designing and delivering data pipelines in cloud data warehouses (e.g., Snowflake, Redshift), using various ETL/ELT tools such as Matillion, dbt, Striim, etc.
- Solid understanding of database systems (relational and NoSQL) and data modeling techniques.
- Hands-on experience of 2+ years in designing and developing data integration solutions using Matillion and/or dbt.
- Strong knowledge of data engineering and integration frameworks.
- Expertise in architecting data solutions.
- Successfully implemented at least two end-to-end projects with multiple transformation layers.
- Good grasp of coding standards, with the ability to define standards and testing strategies for projects.
- Proficiency in working with cloud platforms (AWS, Azure, GCP) and associated data services.
- Enthusiastic about working in Agile methodology.
- Comprehensive understanding of the DevOps process, including GitHub integration and CI/CD pipelines.
- Experience working with containerization (Docker) and orchestration tools (such as Airflow, Control-M).

Soft Skills:
- Exceptional problem-solving and analytical skills.
- Strong communication and interpersonal skills to manage client relationships and team dynamics.
- Ability to thrive in a consulting environment, quickly adapting to new challenges and domains.

Posted 2 months ago

Apply

7 - 12 years

20 - 30 Lacs

Pune, Bengaluru, Mumbai (All Areas)

Work from Office

Source: Naukri

Technical Skills:
- Strong proficiency in Python, with an emphasis on clean, modular, and well-documented code.
- Proficient in Spark (PySpark and Spark SQL).
- Expertise in SQL, JIRA, Git, and GitHub.
- Good communication skills: able to explain complex technical concepts clearly and concisely to both technical and non-technical audiences.
- Azure cloud expertise: hands-on experience designing and implementing scalable and secure data processing pipelines using Azure cloud services and tools like Databricks or Azure Synapse Analytics.
- Azure data management: experience managing and optimizing data storage within Azure using services like Azure SQL Data Warehouse and Azure Cosmos DB.
- ML experience: experience deploying and maintaining ML models in production environments.

Posted 2 months ago

Apply

4 - 9 years

6 - 12 Lacs

Chennai

Hybrid

Source: Naukri

Data Wrangling: Query, join, and manipulate data; automate processes.
Business/Marketing Analysis: Response, sales, incremental sales, net profit, customer profiling.

Required candidate profile:
Technical Skills: SQL, Alteryx, Qlik, Hadoop, GCP, Microsoft Office.
SME in sales, business, financial reporting, customer analytics, customer experience analytics, and program analytics.
Develop

Posted 2 months ago

Apply

10 - 18 years

30 - 35 Lacs

Coimbatore

Work from Office

Source: Naukri

We are looking for people who have experience in digital implementations on cloud platforms, leading architecture design and discussions. ETL SME; SQL, Snowflake, and data engineering skills. Alert monitoring, scheduling, and auditing knowledge. Nice to have: experience with agile, working in compliance-regulated environments, and exposure to manufacturing IIoT data. 8-10 years of relevant experience.

Posted 2 months ago

Apply

5 - 8 years

6 - 9 Lacs

Pune

Work from Office

Source: Naukri

Skills Required: Strong proficiency in PySpark, Scala, and Python. Experience with AWS Glue.
Experience Required: Minimum 5 years of relevant experience.
Location: Available across all UST locations.
Notice Period: Immediate joiners.
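For a sense of the AWS Glue + PySpark combination this role asks for, here is a hedged skeleton of a Glue PySpark job; the catalog database, table, and output path are placeholders, not UST project details.

```python
# Skeleton of an AWS Glue PySpark job; database, table, and output path are
# placeholders. Glue supplies the Spark runtime when the job executes.
from awsglue.context import GlueContext
from pyspark.context import SparkContext

glue = GlueContext(SparkContext.getOrCreate())
spark = glue.spark_session

source = glue.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="orders"  # placeholder catalog entries
)
clean = source.drop_fields(["_corrupt_record"])  # example transform step

glue.write_dynamic_frame.from_options(
    frame=clean,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/orders/"},
    format="parquet",
)
```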

Posted 2 months ago

Apply

1 - 5 years

6 - 15 Lacs

Bengaluru

Work from Office

Source: Naukri

We're Hiring: AWS Data Engineer (6 months - 5 years experience)
Location: Bangalore

Are you passionate about cloud technologies and working with big data? We are looking for a skilled AWS Data Engineer to join our team in Bangalore. If you have experience in designing, building, and optimizing data pipelines using AWS cloud technologies, we want to hear from you!

Key Responsibilities:
- Design, develop, and maintain scalable ETL/ELT data pipelines using AWS technologies.
- Work with PySpark and SQL to process large datasets efficiently.
- Manage AWS services like Redshift, EMR, Airflow, CloudWatch, and S3 for data processing and orchestration.
- Implement CI/CD pipelines using Azure DevOps.
- Monitor and optimize data workflows for performance, cost, and reliability.
- Collaborate with cross-functional teams to ensure smooth data integration.

Required Skills:
- Programming & scripting: SQL, PySpark, Python.
- AWS tools & services: Apache Airflow, Redshift, EMR, CloudWatch, S3, Jupyter Notebooks.
- DevOps & CI/CD: Azure DevOps, Git, Unix commands.

Preferred Qualifications:
- Experience in performance-tuning data pipelines and SQL queries.
- Knowledge of data lake and data warehouse architecture.
- Strong problem-solving skills.
- Understanding of data security, encryption, and cloud access control.

This is a fantastic opportunity to work on cutting-edge data technologies in a dynamic, growing team. If you're a tech enthusiast who thrives in a collaborative environment, apply now!

Apply now or send your resume to career@ahanait.com / amruthavarshini.kn@ahanait.com / sandhya.yashasvi@ahanait.com, or reach out to us on 9845222775 / 9845267997 / 7760957879.

Posted 2 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
