5.0 - 8.0 years
13 - 20 Lacs
Hyderabad, Bengaluru, Mumbai (All Areas)
Work from Office
Job Description:
- Develop and implement ETL processes using Python and SQL to extract, transform, and load data from various sources into our data warehouse (see the sketch after this description).
- Optimize and maintain existing ETL workflows and data pipelines to improve performance and scalability.
- Design, develop, and maintain efficient, reusable, and reliable Python code, and support Python version-upgrade activities.
- Collaborate with cross-functional teams to understand data requirements and ensure data integrity and quality.
- Monitor and troubleshoot data processing systems to ensure timely and accurate data delivery.
- Develop and maintain documentation covering ETL processes, data models, and workflows.
- Participate in code reviews and provide constructive feedback to team members.
- Stay up to date with industry trends and emerging technologies to continuously improve our data engineering practices.

Skills & Qualifications:
- Bachelor's degree in IT, computer science, computer engineering, or similar.
- Proven experience as a Data Engineer or ETL Developer, with a focus on Python and SQL.
- Minimum 5 years of experience in ETL.
- Proficiency in Python for data engineering tasks, including support for Python version-upgrade activities.
- Strong understanding of ETL concepts and data warehousing principles.
- Proficiency in writing complex SQL queries and optimizing database performance.
- Familiarity with cloud platforms such as Azure or OCI is a plus.
- Demonstrated experience in designing and delivering data platforms for Business Intelligence and Data Warehousing.
- Experience with version control systems such as Git.
- Familiarity with Agile methodology and Agile working environments.
- Ability to work independently with POs, BAs, and Architects.
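A minimal sketch of the extract-transform-load pattern this role centres on, using Python with SQLite from the standard library so it runs anywhere; the table and column names are illustrative, not taken from the posting:

```python
import sqlite3

# Illustrative ETL: extract rows from a source table, transform them,
# and load the result into a warehouse-style target table.
SOURCE_DDL = "CREATE TABLE IF NOT EXISTS raw_orders (id INTEGER, amount TEXT, country TEXT)"
TARGET_DDL = "CREATE TABLE IF NOT EXISTS fact_orders (id INTEGER PRIMARY KEY, amount_usd REAL, country TEXT)"

def extract(conn):
    # Extract: pull raw rows from the source system.
    return conn.execute("SELECT id, amount, country FROM raw_orders").fetchall()

def transform(rows):
    # Transform: cast amounts to float, normalise country codes, skip bad rows.
    for row_id, amount, country in rows:
        try:
            yield row_id, float(amount), (country or "").strip().upper()
        except (TypeError, ValueError):
            continue  # skip malformed records

def load(conn, rows):
    # Load: idempotent upsert into the target table.
    conn.executemany(
        "INSERT OR REPLACE INTO fact_orders (id, amount_usd, country) VALUES (?, ?, ?)",
        rows,
    )
    conn.commit()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute(SOURCE_DDL)
    conn.execute(TARGET_DDL)
    conn.executemany("INSERT INTO raw_orders VALUES (?, ?, ?)",
                     [(1, "10.5", "in"), (2, "oops", "us"), (3, "7", "uk")])
    load(conn, transform(extract(conn)))
    print(conn.execute("SELECT * FROM fact_orders").fetchall())
```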
Posted 2 months ago
7.0 - 11.0 years
30 - 35 Lacs
Bengaluru
Work from Office
1. The resource should have knowledge of Data Warehouse and Data Lake concepts.
2. Should be aware of building data pipelines using PySpark (a sketch follows this list).
3. Should be strong in SQL skills.
4. Should have exposure to the AWS environment and services like S3, EC2, EMR, Athena, Redshift, etc.
5. Good to have programming skills in Python.
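A minimal PySpark sketch of the kind of pipeline this posting describes: read raw CSV from S3, clean it, and write partitioned Parquet that Athena or Redshift Spectrum can query. The bucket names and columns are placeholders:

```python
from pyspark.sql import SparkSession, functions as F

# Illustrative PySpark pipeline on AWS: raw S3 data in, curated Parquet out.
spark = SparkSession.builder.appName("orders-pipeline").getOrCreate()

raw = (spark.read
       .option("header", True)
       .csv("s3://example-raw-bucket/orders/"))

cleaned = (raw
           .withColumn("amount", F.col("amount").cast("double"))
           .filter(F.col("amount").isNotNull())       # drop unparseable rows
           .withColumn("order_date", F.to_date("order_ts")))

# Partitioned Parquet keeps Athena scans cheap and Redshift Spectrum happy.
(cleaned.write
 .mode("overwrite")
 .partitionBy("order_date")
 .parquet("s3://example-curated-bucket/orders/"))
```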
Posted 2 months ago
6.0 - 11.0 years
35 - 50 Lacs
Pune, Gurugram, Delhi / NCR
Hybrid
Role: Snowflake Data Engineer
Mandatory Skills: #Snowflake, #Azure, #DataFactory, SQL, Python, #DBT / #Databricks
Location (Hybrid): Bangalore, Hyderabad, Chennai, Pune, Gurugram & Noida
Budget: Up to 50 LPA
Notice: Immediate to 30 days serving notice
Experience: 6-11 years

Key Responsibilities:
- Design and develop ETL/ELT pipelines using Azure Data Factory, Snowflake, and DBT (a Snowflake-side sketch follows this description).
- Build and maintain data integration workflows from various data sources into Snowflake.
- Write efficient and optimized SQL queries for data extraction and transformation.
- Work with stakeholders to understand business requirements and translate them into technical solutions.
- Monitor, troubleshoot, and optimize data pipelines for performance and reliability.
- Maintain and enforce data quality, governance, and documentation standards.
- Collaborate with data analysts, architects, and DevOps teams in a cloud-native environment.

Must-Have Skills:
- Strong experience with Azure cloud platform services.
- Proven expertise in Azure Data Factory (ADF) for orchestrating and automating data pipelines.
- Proficiency in SQL for data analysis and transformation.
- Hands-on experience with Snowflake and SnowSQL for data warehousing.
- Practical knowledge of DBT (Data Build Tool) for transforming data in the warehouse.
- Experience working in cloud-based data environments with large-scale datasets.

Good-to-Have Skills:
- Experience with Azure Data Lake, Azure Synapse, or Azure Functions.
- Familiarity with Python or PySpark for custom data transformations.
- Understanding of CI/CD pipelines and DevOps for data workflows.
- Exposure to data governance, metadata management, or data catalog tools.
- Knowledge of business intelligence tools (e.g., Power BI, Tableau) is a plus.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field.
- 5+ years of experience in data engineering roles using Azure and Snowflake.
- Strong problem-solving, communication, and collaboration skills.
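An illustrative ELT step in Python of the kind ADF and DBT jobs automate in this stack: land staged files into Snowflake, then transform in-warehouse. All credentials and object names are placeholders; this is a sketch, not the posting's actual pipeline:

```python
import snowflake.connector

# Placeholder credentials; in practice these come from a secrets manager.
conn = snowflake.connector.connect(
    account="example_account",
    user="example_user",
    password="example_password",
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="STAGING",
)
try:
    cur = conn.cursor()
    # Load: COPY INTO pulls files from an external stage into a raw table.
    cur.execute("COPY INTO raw_orders FROM @orders_stage FILE_FORMAT = (TYPE = CSV)")
    # Transform in-warehouse, the step a DBT model would normally own.
    cur.execute("""
        CREATE OR REPLACE TABLE curated_orders AS
        SELECT order_id, TRY_TO_NUMBER(amount) AS amount, UPPER(country) AS country
        FROM raw_orders
        WHERE TRY_TO_NUMBER(amount) IS NOT NULL
    """)
finally:
    conn.close()
```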
Posted 2 months ago
7.0 - 12.0 years
30 - 45 Lacs
Noida, Pune, Gurugram
Hybrid
Role: Lead Data Engineer
Experience: 7-12 years

Must-Have:
- 7+ years of relevant experience in Data Engineering and delivery.
- 7+ years of relevant work experience in Big Data concepts.
- Worked on cloud implementations.
- Experience in Snowflake, SQL, and AWS (Glue, EMR, S3, Aurora, RDS, AWS architecture).
- Good experience with AWS cloud and microservices: AWS Glue, S3, Python, and PySpark (a Glue job skeleton follows this description).
- Good aptitude, strong problem-solving abilities, analytical skills, and the ability to take ownership as appropriate.
- Able to code, debug, performance-tune, and deploy apps to the production environment.
- Experience working in Agile methodology.
- Ability to learn new technologies quickly and help the team do the same.
- Excellent communication and coordination skills.

Good to have:
- Experience with DevOps tools (Jenkins, Git, etc.) and practices, continuous integration, and delivery (CI/CD) pipelines.
- Spark, Python, SQL (exposure to Snowflake), Big Data concepts, AWS Glue.
- Worked on cloud implementations (migration, development, etc.).

Role & Responsibilities:
- Be accountable for delivery of the project within the defined timelines with good quality.
- Work with clients and offshore leads to understand requirements, produce high-level designs, and complete development and unit-testing activities.
- Keep all stakeholders updated on task and project status, risks, and issues.
- Work closely with management wherever and whenever required to ensure smooth execution and delivery of the project.
- Guide the team technically and give direction on how to plan, design, implement, and deliver projects.

Education: BE/B.Tech from a reputed institute.
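A skeleton of the kind of AWS Glue PySpark job this role owns: read from the Glue Data Catalog, filter, and write curated Parquet to S3. The database, table, and bucket names are placeholders, and the script only runs inside a Glue job environment where the awsglue libraries are available:

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

# Standard Glue job bootstrap: resolve the job name and initialise contexts.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue = GlueContext(SparkContext.getOrCreate())
job = Job(glue)
job.init(args["JOB_NAME"], args)

# Read a catalogued source table as a DynamicFrame, then drop to a DataFrame.
dyf = glue.create_dynamic_frame.from_catalog(database="example_db",
                                             table_name="raw_events")
df = dyf.toDF().filter(F.col("event_type").isNotNull())

# Write curated output; downstream crawlers/Athena pick it up from here.
df.write.mode("append").parquet("s3://example-curated/events/")
job.commit()
```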
Posted 2 months ago
5.0 - 10.0 years
15 - 30 Lacs
Bengaluru
Remote
Greetings from tsworks Technologies India Pvt. Ltd. We are hiring for Sr. Data Engineer / Lead Data Engineer; if you are interested, please share your CV at mohan.kumar@tsworks.io.

About This Role:
tsworks Technologies India Private Limited is seeking driven and motivated Senior Data Engineers to join its Digital Services team. You will get hands-on experience with projects employing industry-leading technologies. The role is initially focused on operational readiness and maintenance of existing applications, transitioning into a build-and-maintain role in the long run.

Position: Senior Data Engineer / Lead Data Engineer
Experience: 5 to 11 years
Location: Bangalore, India / Remote

Mandatory Required Qualifications:
- Strong proficiency in Azure services such as Azure Data Factory, Azure Databricks, Azure Synapse Analytics, Azure Storage, etc.
- Expertise in DevOps and CI/CD implementation.
- Excellent communication skills.

Skills & Knowledge:
- Bachelor's or Master's degree in computer science, engineering, or a related field.
- 5 to 10 years of experience in Information Technology, designing, developing, and executing solutions.
- 3+ years of hands-on experience designing and executing data solutions on Azure cloud platforms as a Data Engineer.
- Familiarity with the Snowflake data platform is good to have.
- Hands-on experience in data modelling and in batch and real-time pipelines using Python, Java, or JavaScript, and experience working with RESTful APIs.
- Hands-on experience with SQL and NoSQL databases.
- Hands-on experience in data modelling, implementation, and management of OLTP and OLAP systems.
- Familiarity with data quality, governance, and security best practices.
- Knowledge of big data technologies such as Hadoop, Spark, or Kafka.
- Familiarity with machine learning concepts and integration of ML pipelines into data workflows.
- Hands-on experience working in an Agile setting.
- Self-driven, naturally curious, and able to adapt to a fast-paced work environment.
- Can articulate, create, and maintain technical and non-technical documentation.
- Public cloud certifications are desired.
Posted 2 months ago
8.0 - 12.0 years
25 - 35 Lacs
Hyderabad
Hybrid
Job Summary:
We are seeking a skilled and experienced Data Engineer with expertise in Oracle Data Integrator (ODI) and Oracle Business Intelligence (OBI) to join our dynamic team. The ideal candidate will play a crucial role in designing, developing, and maintaining data integration and business intelligence solutions.

Responsibilities:
- Collaborate with cross-functional teams to understand data requirements and implement effective data integration solutions using Oracle Data Integrator (ODI).
- Develop and optimize ETL processes to extract, transform, and load data from various sources into the data warehouse.
- Design and implement data models to support business intelligence and reporting needs using Oracle Business Intelligence (OBI).
- Ensure the reliability, scalability, and performance of data engineering solutions in a production environment.
- Troubleshoot and resolve data-related issues, ensuring data quality and integrity.
- Stay updated on industry trends and best practices in Oracle ODI/OBI and contribute to continuous improvement initiatives.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field, with 5-8 years of experience in Data Engineering.
- Proven experience as a Data Engineer with a focus on Oracle ODI/OBI.
- Strong proficiency in Oracle Data Integrator (ODI) for ETL processes.
- Hands-on experience with Oracle Business Intelligence (OBI) for designing and developing BI solutions.
- Solid understanding of data modeling concepts and techniques.
- Excellent SQL skills for data manipulation and analysis.
- Familiarity with data warehousing principles and best practices.
- Strong problem-solving and analytical skills.
- Effective communication and collaboration skills.

Preferred Qualifications:
- Oracle ODI/OBI certifications.
- Experience with performance tuning and optimization of data integration processes.
- Knowledge of other data integration and BI tools.
Posted 2 months ago
8.0 - 10.0 years
25 - 27 Lacs
Bengaluru
Work from Office
Key Skills: Data Engineer, Data Integration, Informatica, PySpark, Informatica MDM

Roles and Responsibilities:
- Utilize Informatica IDMC tools, including the Data Profiling, Data Quality, and Data Integration modules, to support data initiatives.
- Implement and maintain robust Data Quality frameworks, ensuring data accuracy, consistency, and reliability.
- Work on ETL (Extract, Transform, Load) processes to support business intelligence and analytics needs.
- Participate in agile product teams to design, develop, and deliver data solutions aligned with business requirements.
- Perform data validation, cleansing, and profiling to meet data governance standards.
- Collaborate with cross-functional teams to understand business needs and translate them into technical solutions.
- Assist in the design and execution of QA processes to maintain high standards in data pipelines.
- Support integration with cloud platforms such as MS Azure and utilize DevOps tools for deployment.
- Contribute to innovation and improvement initiatives, including the development of new features with modern data tools like Databricks.
- Maintain clear documentation and ensure alignment with data governance and data management principles.
- Optionally, develop visualizations using Power BI for data storytelling and reporting.

Experience Requirement:
- 8-10 years of experience working on Data Quality projects.
- At least 3 years of hands-on experience with Informatica Data Quality modules.
- Strong understanding of data profiling, validation, cleansing, and overall data quality concepts.
- Experience with ETL processes and QA/testing frameworks.
- Basic knowledge of the Microsoft Azure platform and services.
- Exposure to Data Governance and Management practices.
- Experience in agile environments using tools like DevOps.
- Strong analytical, problem-solving, and troubleshooting skills.
- Proficient in English with excellent communication and collaboration abilities.
- Nice to have: experience developing with Power BI.

Education: Any graduation.
Posted 2 months ago
6.0 - 8.0 years
25 - 30 Lacs
Bengaluru
Work from Office
- 6+ years of experience in information technology, with a minimum of 3-5 years managing and administering Hadoop/Cloudera environments.
- Cloudera CDP (Cloudera Data Platform), Cloudera Manager, and related tools.
- Hadoop ecosystem components (HDFS, YARN, Hive, HBase, Spark, Impala, etc.).
- Linux system administration, with experience in scripting languages (Python, Bash, etc.) and configuration management tools (Ansible, Puppet, etc.).
- Security and platform tools such as Kerberos, Ranger, Sentry, Docker, Kubernetes, and Jenkins.
- Cloudera Certified Administrator for Apache Hadoop (CCAH) or similar certification.
- Cluster management, optimization, best-practice implementation, collaboration, and support.
Posted 2 months ago
5.0 - 10.0 years
30 - 35 Lacs
Chennai, Bengaluru
Work from Office
Data Engineer: Experienced KStream + ksqlDB developer with in-depth knowledge of specific client systems: the TAHI Contract and Application and ISP Contract and Application modules. Performs data analysis and writes code to implement functional requirements per the LLD and client processes. Minimum skill levels in this specific area: current roles require 5+ years of experience plus insurance domain experience. These are technical roles, and the prime requirement is for KStream / Java / ksqlDB / Kafka (a sketch of a ksqlDB interaction follows).
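An illustrative sketch of ksqlDB work in Python: submitting a persistent query over ksqlDB's REST API to derive a filtered stream. The endpoint, stream, and column names are assumptions for illustration, and the exact header and payload shape should be checked against the ksqlDB version in use:

```python
import requests

# Assumed local ksqlDB server; real deployments would use the client host.
KSQL_URL = "http://localhost:8088/ksql"

# Hypothetical insurance-flavoured derived stream, echoing the domain above.
statement = """
    CREATE STREAM high_value_claims AS
    SELECT policy_id, claim_amount
    FROM claims_stream
    WHERE claim_amount > 100000
    EMIT CHANGES;
"""

resp = requests.post(
    KSQL_URL,
    headers={"Content-Type": "application/vnd.ksql.v1+json; charset=utf-8"},
    json={"ksql": statement, "streamsProperties": {}},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # server echoes the command status per statement
```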
Posted 2 months ago
5.0 - 10.0 years
20 - 30 Lacs
Pune
Work from Office
Role & responsibilities: Update from the client - Ideally, we are looking for a 60:40 mix, with stronger capabilities on the Data Engineering side, along with working knowledge of Machine Learning and Data Science concepts, especially candidates who can pick up tasks in Agentic AI, OpenAI, and related areas as required in the future.
Posted 2 months ago
5.0 - 10.0 years
15 - 30 Lacs
Chennai
Work from Office
Key Skills: Azure DevOps, Data Engineer, Azure Databricks, Azure

Roles and Responsibilities:
- Design and develop scalable data pipelines using Azure Data Factory, Azure Synapse Analytics, Azure Databricks, and Azure Data Lake (a Databricks-style sketch follows this description).
- Build robust ETL/ELT processes to ingest data from structured and unstructured sources.
- Implement data models and manage large-scale data warehouses and lakes.
- Optimize data processing workloads for performance and cost-efficiency.
- Work closely with Data Scientists, Analysts, and Software Engineers to meet data needs.
- Ensure data governance, quality, security, and compliance best practices.
- Monitor, troubleshoot, and enhance data workflows and environments.

Skills Required:
- Proven experience as a Data Engineer working in Azure environments.
- Strong expertise in Azure Data Factory, Synapse, Databricks, Azure SQL, and Data Lake.
- Proficient in SQL, Python, and PySpark.
- Solid understanding of ETL/ELT pipelines, data integration, and data modeling.
- Experience with CI/CD, version control (Git), and automation tools.
- Familiarity with DevOps practices and infrastructure-as-code (e.g., ARM, Bicep, Terraform).
- Excellent problem-solving, communication, and collaboration skills.

Education: Bachelor's degree in a related field.
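A minimal sketch of a Databricks-style pipeline step on Azure: read raw JSON from ADLS Gen2, deduplicate, and append to a partitioned Delta table. The storage account and container names are placeholders, and it assumes a Databricks cluster where Spark and Delta support are preconfigured:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided on Databricks clusters

# Raw zone in: landing JSON dropped by an upstream ADF copy activity.
raw = spark.read.json("abfss://raw@examplestorage.dfs.core.windows.net/events/")

curated = (raw
           .dropDuplicates(["event_id"])              # idempotent re-runs
           .withColumn("ingest_date", F.current_date()))

# Curated zone out: Delta gives ACID appends and time travel.
(curated.write
 .format("delta")
 .mode("append")
 .partitionBy("ingest_date")
 .save("abfss://curated@examplestorage.dfs.core.windows.net/events/"))
```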
Posted 2 months ago
8.0 - 13.0 years
15 - 30 Lacs
Pune
Work from Office
Hi, greetings from Peoplefy Infosolutions! We are hiring for one of our reputed MNC clients based in Pune. We are looking for candidates with 8+ years of experience working as a Data Engineer.

Job Description:
- Design, develop, implement, test, and maintain scalable and efficient data pipelines for large-scale structured and unstructured datasets, including document, image, and event data used in GenAI and ML use cases.
- Collaborate closely with data scientists, AI/ML engineers, MLOps, and Product Owners to understand data requirements and ensure data availability and quality.
- Build and optimize data architectures for both batch and real-time processing.
- Develop and maintain data warehouses and data lakes to store and manage large volumes of structured and unstructured data.
- Implement data validation and monitoring processes to ensure data integrity.
- Implement and manage vector databases (e.g., pgvector, Pinecone, FAISS) and embedding pipelines to support retrieval-augmented architectures (see the sketch after this description).
- Support data sourcing and ingestion strategies, including APIs, data lakes, and message queues.
- Enforce data quality, lineage, observability, and governance standards for AI workloads.
- Work with cross-functional IT and business teams in an Agile environment to deliver successful data solutions.
- Help foster a data-driven culture via information sharing, design for scalability, and operational efficiency.
- Stay updated with the latest trends and best practices in data engineering and big data technologies.

Interested candidates for the above position, kindly share your CV at sneh.ne@peoplefy.com with the below details:
Experience:
CTC:
Expected CTC:
Notice Period:
Location:
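An illustrative embedding-index step for the retrieval-augmented pipelines mentioned above, using FAISS. Real pipelines would index model-generated embeddings; random vectors stand in here so the sketch stays self-contained, and the dimension is an assumption:

```python
import numpy as np
import faiss  # pip install faiss-cpu

DIM = 384  # typical sentence-embedding size; an assumption for the sketch

# Build an exact L2 index and add a stand-in corpus of document vectors.
index = faiss.IndexFlatL2(DIM)
doc_vectors = np.random.rand(1000, DIM).astype("float32")
index.add(doc_vectors)

# Query: find the 5 nearest documents to a stand-in query embedding.
query = np.random.rand(1, DIM).astype("float32")
distances, ids = index.search(query, k=5)
print(ids[0], distances[0])
```

In production the same pattern holds with pgvector or Pinecone: embed documents once at ingestion, embed each query at request time, and retrieve nearest neighbours as context.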
Posted 2 months ago
10.0 - 15.0 years
55 - 60 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
Position Overview:
We are seeking an experienced Data Modeler/Lead with deep expertise in health plan data models and enterprise data warehousing to drive our healthcare analytics and reporting initiatives. The candidate should have hands-on experience with modern data platforms and a strong understanding of healthcare industry data standards.

Key Responsibilities:

Data Architecture & Modeling:
- Design and implement comprehensive data models for health plan operations, including member enrollment, claims processing, provider networks, and medical management.
- Develop logical and physical data models that support analytical and regulatory reporting requirements (HEDIS, Stars, MLR, risk adjustment).
- Create and maintain data lineage documentation and data dictionaries for healthcare datasets.
- Establish data modeling standards and best practices across the organization.

Technical Leadership:
- Lead data warehousing initiatives using modern platforms like Databricks or traditional ETL tools like Informatica.
- Architect scalable data solutions that handle large volumes of healthcare transactional data.
- Collaborate with data engineers to optimize data pipelines and ensure data quality.

Healthcare Domain Expertise:
- Apply deep knowledge of health plan operations, medical coding (ICD-10, CPT, HCPCS), and healthcare data standards (HL7, FHIR, X12 EDI).
- Design data models that support analytical, reporting, and AI/ML needs.
- Ensure compliance with healthcare regulations, including HIPAA/PHI and state insurance regulations.
- Partner with business stakeholders to translate healthcare business requirements into technical data solutions.

Data Governance & Quality:
- Implement data governance frameworks specific to healthcare data privacy and security requirements.
- Establish data quality monitoring and validation processes for critical health plan metrics.
- Lead efforts to standardize healthcare data definitions across multiple systems and data sources.

Required Qualifications:

Technical Skills:
- 10+ years of experience in data modeling, with at least 4 years focused on healthcare/health plan data.
- Expert-level proficiency in dimensional modeling, data vault methodology, or other enterprise data modeling approaches.
- Hands-on experience with Informatica PowerCenter/IICS or the Databricks platform for large-scale data processing.
- Strong SQL skills and experience with Oracle Exadata and cloud data warehouses (Databricks).
- Proficiency with data modeling tools (Hackolade, ERwin, or similar).

Healthcare Industry Knowledge:
- Deep understanding of health plan data structures, including claims, eligibility, provider data, and pharmacy data.
- Experience with healthcare data standards and medical coding systems.
- Knowledge of regulatory reporting requirements (HEDIS, Medicare Stars, MLR reporting, risk adjustment).
- Familiarity with healthcare interoperability standards (HL7 FHIR, X12 EDI).

Leadership & Communication:
- Proven track record of leading data modeling projects in complex healthcare environments.
- Strong analytical and problem-solving skills, with the ability to work with ambiguous requirements.
- Excellent communication skills, with the ability to explain technical concepts to business stakeholders.
- Experience mentoring team members and establishing technical standards.
Preferred Qualifications:
- Experience with Medicare Advantage, Medicaid, or Commercial health plan operations.
- Cloud platform certifications (AWS, Azure, or GCP).
- Experience with real-time data streaming and modern data lake architectures.
- Knowledge of machine learning applications in healthcare analytics.
- Previous experience in a lead or architect role within healthcare organizations.

Locations: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
Posted 2 months ago
3.0 - 8.0 years
11 - 21 Lacs
Hyderabad
Work from Office
About Position:
We are conducting an in-person hiring drive on 28th June 2025 for Azure Data Engineers in Hyderabad.

In-Person Drive Location: Persistent Systems (6th Floor), Gate 11, SALARPURIA SATTVA ARGUS, SALARPURIA SATTVA KNOWLEDGE CITY, beside T-Hub, Shilpa Gram Craft Village, Madhapur, Rai Durg, Hyderabad, Telangana 500081

We are hiring Azure Data Engineers with skills in Azure Databricks, Azure Data Factory, PySpark, and SQL.

Role: Azure Data Engineer
Location: Hyderabad
Experience: 3-8 years
Job Type: Full-Time Employment

What You'll Do:
- Design and implement robust ETL/ELT pipelines using PySpark on Databricks.
- Collaborate with data scientists, analysts, and business stakeholders to understand data requirements.
- Optimize data workflows for performance and scalability.
- Manage and monitor data pipelines in production environments.
- Ensure data quality, integrity, and security across all stages of data processing.
- Integrate data from various sources, including APIs, databases, and cloud storage.
- Develop reusable components and frameworks for data processing.
- Document technical solutions and maintain code repositories.

Expertise You'll Bring:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 2+ years of experience in data engineering or software development.
- Strong proficiency in PySpark and Apache Spark.
- Hands-on experience with the Databricks platform.
- Proficiency in SQL and working with relational databases.
- Experience with cloud platforms (Azure, AWS, or GCP).
- Familiarity with Delta Lake, MLflow, and other Databricks ecosystem tools.
- Strong problem-solving and communication skills.

Benefits:
- Competitive salary and benefits package.
- Culture focused on talent development, with quarterly promotion cycles and company-sponsored higher education and certifications.
- Opportunity to work with cutting-edge technologies.
- Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards.
- Annual health check-ups.
- Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents.

Inclusive Environment:
Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. We offer hybrid work options and flexible working hours to accommodate various needs and preferences. Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities. If you are a person with disabilities and have specific requirements, please inform us during the application process or at any time during your employment. We are committed to creating an inclusive environment where all employees can thrive.

Our company fosters a values-driven and people-centric work environment that enables our employees to:
- Accelerate growth, both professionally and personally.
- Impact the world in powerful, positive ways, using the latest technologies.
- Enjoy collaborative innovation, with diversity and work-life wellbeing at the core.
- Unlock global opportunities to work and learn with the industry's best.

Let's unleash your full potential at Persistent. "Persistent is an Equal Opportunity Employer and prohibits discrimination and harassment of any kind."
Posted 2 months ago
8.0 - 12.0 years
12 - 22 Lacs
Hyderabad
Work from Office
We are seeking a highly experienced and self-driven Senior Data Engineer to design, build, and optimize modern data pipelines and infrastructure. This role requires deep expertise in Snowflake, DBT, Python, and cloud data ecosystems. You will play a critical role in enabling data-driven decision-making across the organization by ensuring the availability, quality, and integrity of data.

Key Responsibilities:
- Design and implement robust, scalable, and efficient data pipelines using ETL/ELT frameworks.
- Develop and manage data models and data warehouse architecture within Snowflake.
- Create and maintain DBT models for transformation, lineage tracking, and documentation (a sketch of this kind of automation follows this description).
- Write modular, reusable, and optimized Python scripts for data ingestion, transformation, and automation.
- Collaborate closely with data analysts, data scientists, and business teams to gather and fulfill data requirements.
- Ensure data integrity, consistency, and governance across all stages of the data lifecycle.
- Monitor pipeline performance and implement optimization strategies for queries and storage.
- Follow best practices for data engineering, including version control (Git), testing, and CI/CD integration.

Required Skills and Qualifications:
- 8+ years of experience in Data Engineering or related roles.
- Deep expertise in Snowflake: schema design, performance tuning, security, and access controls.
- Proficiency in Python, particularly for scripting, data transformation, and workflow automation.
- Strong understanding of data modeling techniques (e.g., star/snowflake schema, normalization).
- Proven experience with DBT for building modular, tested, and documented data pipelines.
- Familiarity with ETL/ELT tools and orchestration platforms like Apache Airflow or Prefect.
- Advanced SQL skills, with experience handling large and complex data sets.
- Exposure to cloud platforms such as AWS, Azure, or GCP and their data services.

Preferred Qualifications:
- Experience implementing data quality checks and governance frameworks.
- Understanding of the modern data stack and CI/CD pipelines for data workflows.
- Contributions to data engineering best practices, open-source projects, or thought leadership.
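A minimal sketch of the Python-driven DBT automation this role describes: invoke `dbt run` and `dbt test` from a wrapper script and fail loudly if models or data-quality tests break. The selector and project layout are assumptions; it presumes a configured dbt project with a Snowflake profile:

```python
import subprocess
import sys

def run_dbt(*args: str) -> None:
    """Run a dbt command and abort the pipeline on any failure."""
    cmd = ["dbt", *args]
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.stdout)
    if result.returncode != 0:
        print(result.stderr, file=sys.stderr)
        raise SystemExit(f"dbt step failed: {' '.join(cmd)}")

if __name__ == "__main__":
    run_dbt("run", "--select", "staging+")   # build models downstream of staging
    run_dbt("test", "--select", "staging+")  # enforce schema and data tests
```

The same two steps typically run in CI on every pull request, which is how version control and testing best practices reach the warehouse.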
Posted 2 months ago
8.0 - 12.0 years
12 - 22 Lacs
Pune
Work from Office
Role & responsibilities:
- 8+ years of experience, with lead experience.
- 2 open positions - Data Engineer.
- Relevant experience for the Data Engineer role should be 5+ years in data engineering.
- Data Engineer required skill set: SQL, Python, ETL, PySpark, Databricks.
- Excellent communication skills (no compromise); the focus is SQL - 90% strong exposure.
- Location: Pune
- Notice period: 0 to 15 days (joiners only)

Thanks & Regards,
Sushma Patil
HR Coordinator
sushma.patil@in.experis.com
Posted 2 months ago
3.0 - 8.0 years
5 - 11 Lacs
Pune, Mumbai (All Areas)
Hybrid
Overview:
TresVista is looking to hire an Associate in its Data Intelligence Group team, who will be primarily responsible for managing clients as well as monitoring and executing projects both for clients and internal teams. The Associate may directly manage a team of up to 3-4 Data Engineers and Analysts across multiple data engineering efforts for our clients with varied technologies. They would be joining the current team of 70+ members, which is a mix of Data Engineers, Data Visualization Experts, and Data Scientists.

Roles and Responsibilities:
- Interacting with the client (internal or external) to understand their problems and work on solutions that address their needs.
- Driving projects and working closely with a team of individuals to ensure proper requirements are identified, useful user stories are created, and work is planned logically and efficiently to deliver solutions that support changing business requirements.
- Managing the various activities within the team, strategizing how to approach tasks, creating timelines and goals, and distributing information/tasks to the various team members.
- Conducting meetings, documenting, and communicating findings effectively to clients, management, and cross-functional teams.
- Creating ad-hoc reports for multiple internal requests across departments.
- Automating processes using data transformation tools.

Prerequisites:
- Strong analytical, problem-solving, interpersonal, and communication skills.
- Advanced knowledge of DBMS and data modelling, along with advanced querying capabilities using SQL.
- Working experience in cloud technologies (GCP/AWS/Azure/Snowflake).
- Prior experience in building and deploying ETL/ELT pipelines using CI/CD and orchestration tools such as Apache Airflow, GCP Workflows, etc. (a minimal Airflow sketch follows this description).
- Proficiency in Python for building ETL/ELT processes and data modeling.
- Proficiency in reporting and dashboard creation using Power BI/Tableau.
- Knowledge of building ML models and leveraging Gen AI for modern architectures.
- Experience working with version control platforms like GitHub.
- Familiarity with IaC tools like Terraform and Ansible is good to have.
- Stakeholder management and client communication experience would be preferred.
- Experience in the Financial Services domain will be an added plus.
- Experience with Machine Learning tools and techniques is good to have.

Experience: 3-7 years
Education: BTech/MTech/BE/ME/MBA in Analytics
Compensation: The compensation structure will be as per industry standards.
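A minimal Airflow DAG sketch of the ELT orchestration named above, assuming Airflow 2.4+ (for the `schedule` argument). The task bodies are placeholders; real tasks would call warehouse or cloud-storage clients instead of printing:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pulling data from source APIs")  # placeholder extract step

def load():
    print("loading curated data into the warehouse")  # placeholder load step

with DAG(
    dag_id="client_reporting_elt",   # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # load runs only after extract succeeds
```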
Posted 2 months ago
8.0 - 13.0 years
15 - 30 Lacs
Bengaluru
Work from Office
Role: Senior Data Engineer
Location: Bangalore - Hybrid
Experience: 10+ years

Job Requirements:

ETL & Data Pipelines:
- Experience building and maintaining ETL pipelines with large data sets using AWS Glue, EMR, Kinesis, Kafka, and CloudWatch.

Programming & Data Processing:
- Strong Python development experience, with proficiency in Spark or PySpark.
- Experience in using APIs.

Database Management:
- Strong skills in writing SQL queries and performance tuning in AWS Redshift.
- Proficient with other industry-leading RDBMS such as MS SQL Server and PostgreSQL.

AWS Services:
- Proficient in working with AWS services including AWS Lambda, EventBridge, Step Functions, SNS, SQS, S3, and ML models.

Interested candidates can share their resume at Neesha1@damcogroup.com
Posted 2 months ago
5.0 - 8.0 years
20 - 30 Lacs
Bengaluru
Work from Office
Job Description:
Skill/Tech Stack: Data Engineer
Location: Bangalore
Experience: 5 to 8 years
Work from the office in a hybrid mode (thrice a week).

Job Overview: The ideal candidate will:
- Work with the team to define high-level technical requirements and architecture for the back-end services, data components, and data monetization components.
- Develop new application features and enhance existing ones.
- Develop relevant documentation and diagrams.
- Work with other teams for deployment, testing, training, and production support.
- Integrate with Data Engineering teams.
- Ensure that development, coding, privacy, and security standards are adhered to.
- Write clean, quality code.
- Be ready to work on new technologies as business demands.
- Bring strong communication skills and work ethics.

Core/Must-have skills:
- Of the total years of experience, a minimum of 5+ years of professional experience in Python development, with a focus on data-intensive applications.
- Proven experience with Apache Spark and PySpark for large-scale data processing.
- Solid understanding of SQL and experience working with relational databases (e.g., Oracle, Spark SQL) and query optimization.
- Experience in the SDLC, particularly in applying software development best practices and methodologies.
- Experience in creating and maintaining unit tests, integration tests, and performance tests for data pipelines and systems (see the sketch after this description).
- Experience with the Databricks big data platform.
- Experience in building data-intensive applications and data products, and a good understanding of data pipelines (feature data engineering, data transformation, data lineage, data quality).
- Experience with cloud platforms such as AWS for data infrastructure and services is preferred.
- This is a hands-on developer position within a small elite development team that moves very fast.
- The role will evolve into tech leadership for the Data Initiative.

Good-to-have skills:
- Knowledge of the FX business / capital markets domain is a plus.
- Knowledge of data formats like Avro and Parquet, and working with complex data types.
- Experience with Apache Kafka for real-time data streaming and Kafka Streams for processing data streams.
- Experience with Airflow for orchestrating complex data workflows and pipelines.
- Expertise or interest in Linux.
- Exposure to data governance and security best practices in data management.
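A sketch of the unit-testing practice this posting asks for: a PySpark transformation exercised with pytest against a local SparkSession. The transformation and column names are illustrative, echoing the capital-markets domain mentioned above:

```python
import pytest
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

def dedupe_latest(df):
    """Keep only the most recent row per trade_id, ordered by timestamp."""
    w = Window.partitionBy("trade_id").orderBy(F.col("ts").desc())
    return (df.withColumn("rn", F.row_number().over(w))
              .filter(F.col("rn") == 1)
              .drop("rn"))

@pytest.fixture(scope="module")
def spark():
    # A single-threaded local session is enough for fast unit tests.
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()

def test_dedupe_latest_keeps_newest(spark):
    df = spark.createDataFrame(
        [("T1", 1, 100.0), ("T1", 2, 101.0), ("T2", 1, 50.0)],
        ["trade_id", "ts", "price"],
    )
    out = {r["trade_id"]: r["price"] for r in dedupe_latest(df).collect()}
    assert out == {"T1": 101.0, "T2": 50.0}
```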
Posted 2 months ago
7.0 - 10.0 years
14 - 24 Lacs
Chennai
Hybrid
Key Skills: Database, Data Engineer, MS SQL Server, MySQL, Database Design

Roles and Responsibilities:

Database Design & Architecture:
- Design, develop, and maintain complex SQL Server databases.
- Define and implement efficient database models, schemas, and indexing strategies based on business requirements.

Performance Tuning & Optimization:
- Analyze and optimize SQL queries, stored procedures, and indexing strategies.
- Monitor database performance using tools like SQL Profiler, Extended Events, and DMVs (a DMV-based sketch follows this description).
- Implement performance enhancements such as partitioning, caching, and execution plan optimization.

Database Deployment and Integration:
- Build and deploy database systems for new applications.
- Ensure seamless integration with front-end and back-end systems.
- Collaborate with developers to implement APIs for database access.

Database Maintenance and Monitoring:
- Monitor database systems for performance, uptime, and availability.
- Address incidents and alerts related to database health.
- Perform routine maintenance tasks, including backups and recovery testing.

Collaboration & Support:
- Work closely with application developers to optimize database interactions.
- Provide production support, troubleshoot database issues, and participate in on-call rotations.
- Document database processes, architecture, and troubleshooting guidelines.

Database Administration & Maintenance:
- Install, configure, and upgrade SQL Server instances in both on-premises and cloud environments.
- Manage database security, user access controls, and compliance policies, including RBAC, encryption, and auditing.
- Develop and enforce database backup, recovery, and retention policies.

Experience Requirement:
- 7+ years of experience as a Database Engineer or Administrator specializing in SQL Server.
- Expertise in SQL Server 2016/2019/2022, including T-SQL and advanced query optimization.
- Strong knowledge of indexing, partitioning, and performance tuning techniques.
- Experience with PowerShell, T-SQL scripting, or other automation tools.
- Familiarity with CI/CD pipelines for database deployments (e.g., Redgate).
- Hands-on experience with high-availability (HA) and disaster recovery (DR) solutions.
- Strong analytical and problem-solving skills.
- Excellent communication and documentation abilities.
- Experience with cloud-based SQL Server solutions (AWS RDS) is preferred.
- Familiarity with NoSQL databases such as MongoDB and Redis is a plus.

Education: B.Tech/M.Tech (Dual), B.E., B.Tech.
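An illustrative performance-tuning helper of the DMV-based kind mentioned above: query SQL Server's dynamic management views for the statements with the highest average elapsed time. The connection string is a placeholder, and the query requires VIEW SERVER STATE permission:

```python
import pyodbc  # pip install pyodbc

# Placeholder connection; real deployments would pull this from config.
CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=example-server;DATABASE=master;Trusted_Connection=yes;"
)

# Rank cached statements by average elapsed time per execution.
TOP_QUERIES = """
SELECT TOP 10
    qs.total_elapsed_time / qs.execution_count AS avg_elapsed_us,
    qs.execution_count,
    SUBSTRING(st.text, 1, 200) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY avg_elapsed_us DESC;
"""

conn = pyodbc.connect(CONN_STR)
try:
    for row in conn.cursor().execute(TOP_QUERIES):
        print(row.avg_elapsed_us, row.execution_count, row.statement_text)
finally:
    conn.close()
```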
Posted 2 months ago
3.0 - 8.0 years
12 - 19 Lacs
Pune
Hybrid
This role is only for Pune-based candidates (not for relocation candidates).

Role: Data Engineer (C2H)
Experience: 3-8 years
Location: Kharadi, Pune
Excellent communication skills required.
Notice period: Immediate joiner to 1 month.

Primary Skills: Python, document intelligence, NLP, unstructured data extraction (OpenAI and prompt engineering desirable).
Secondary Skills: Azure infrastructure experience and Databricks.

Mandatory Skills:
1. Data Infrastructure & Engineering: Designing, building, productionizing, and maintaining scalable and reliable data infrastructure and data products. Experience with data modeling, pipeline idempotency, and operational observability.
2. Programming Languages: Proficiency in one or more object-oriented programming languages such as Python, Scala, Java, or C#.
3. Database Technology: Strong experience with SQL and NoSQL databases, query structures and design best practices, and scalability, readability, and reliability in database design.
4. Distributed Systems: Experience implementing large-scale distributed systems in collaboration with senior team members.
5. Software Engineering Best Practices: Technical design and reviews; unit testing, monitoring, and alerting; code versioning, code reviews, and documentation; CI/CD pipeline development and maintenance.
6. Security & Compliance: Deploying secure and well-tested software and data assets; meeting privacy and compliance requirements.
7. Site Reliability Engineering: Service reliability, on-call rotations, defining and maintaining SLAs; infrastructure as code and containerized deployments.

Job Description:
- Able to enrich data through data transformation and joining with other datasets.
- Able to analyze data and derive statistical insights.
- Able to convey a story through data visualization.
- Ability to build data pipelines for diverse interfaces.
- Good understanding of API workflows.

Technical Skills: AWS Data Lake, AWS Data Hub, and the AWS cloud platform.

Interested candidates, share your resume at dipti.bhaisare@in.experis.com
Posted 2 months ago
3.0 - 6.0 years
20 - 30 Lacs
Bengaluru
Work from Office
Job Title: Data Engineer II (Python, SQL)
Experience: 3 to 6 years
Location: Bangalore, Karnataka (work from office, 5 days a week)

Role: Data Engineer II (Python, SQL)
As a Data Engineer II, you will work on designing, building, and maintaining scalable data pipelines. You'll collaborate across data analytics, marketing, data science, and product teams to drive insights and AI/ML integration using robust and efficient data infrastructure.

Key Responsibilities:
- Design, develop, and maintain end-to-end data pipelines (ETL/ELT).
- Ingest, clean, transform, and curate data for analytics and ML usage.
- Work with orchestration tools like Airflow to schedule and manage workflows.
- Implement data extraction using batch, CDC, and real-time tools (e.g., Debezium, Kafka Connect); a CDC consumer sketch follows this description.
- Build data models and enable real-time and batch processing using Spark and AWS services.
- Collaborate with DevOps and architects for system scalability and performance.
- Optimize Redshift-based data solutions for performance and reliability.

Must-Have Skills & Experience:
- 3+ years in Data Engineering or Data Science with strong ETL and pipeline experience.
- Expertise in Python and SQL.
- Strong experience in data warehousing, data lakes, data modeling, and ingestion.
- Working knowledge of Airflow or similar orchestration tools.
- Hands-on experience with data extraction techniques such as CDC and batch extraction using Debezium, Kafka Connect, or AWS DMS.
- Experience with AWS services: Glue, Redshift, Lambda, EMR, Athena, MWAA, SQS, etc.
- Knowledge of Spark or similar distributed systems.
- Experience with queuing/messaging systems like SQS, Kinesis, and RabbitMQ.
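An illustrative CDC consumer for the Debezium pattern named above: read change events from a Kafka topic and route inserts, updates, and deletes. The broker address is a placeholder, and the topic follows Debezium's server.schema.table naming convention but is hypothetical:

```python
import json
from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "pg-server.public.orders",            # hypothetical Debezium topic
    bootstrap_servers="localhost:9092",   # placeholder broker
    value_deserializer=lambda b: json.loads(b.decode("utf-8")) if b else None,
    auto_offset_reset="earliest",
)

for message in consumer:
    event = message.value
    if event is None:
        continue  # tombstone record emitted after a delete
    payload = event.get("payload", event)
    op = payload.get("op")  # c=create, u=update, d=delete, r=snapshot read
    if op in ("c", "r"):
        print("upsert:", payload["after"])
    elif op == "u":
        print("update:", payload["before"], "->", payload["after"])
    elif op == "d":
        print("delete:", payload["before"])
```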
Posted 2 months ago
7.0 - 12.0 years
25 - 40 Lacs
Gurugram
Remote
Job Title: Senior Data Engineer
Location: Remote
Job Type: Full-time
Years of Experience: 7 to 10 years of relevant experience
Shift: 6.30pm to 2.30am IST

Job Purpose:
The Senior Data Engineer designs, builds, and maintains scalable data pipelines and architectures to support the Denials AI workflow under the guidance of the Team Lead, Data Management. This role ensures data is reliable, compliant with HIPAA, and optimized.

Duties & Responsibilities:
- Collaborate with the Team Lead and cross-functional teams to gather and refine data requirements for Denials AI solutions.
- Design, implement, and optimize ETL/ELT pipelines using Python, Dagster, DBT, and AWS data services (Athena, Glue, SQS); a Dagster sketch follows this description.
- Develop and maintain data models in PostgreSQL; write efficient SQL for querying and performance tuning.
- Monitor pipeline health and performance; troubleshoot data incidents and implement preventive measures.
- Enforce data quality and governance standards, including HIPAA compliance for PHI handling.
- Conduct code reviews, share best practices, and mentor junior data engineers.
- Automate deployment and monitoring tasks using infrastructure-as-code and AWS CloudWatch metrics and alarms.
- Document data workflows, schemas, and operational runbooks to support team knowledge transfer.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
- 5+ years of hands-on experience building and operating production-grade data pipelines.
- Solid experience with workflow orchestration tools (Dagster) and transformation frameworks (DBT), or other similar tools such as Microsoft SSIS, AWS Glue, and Airflow.
- Strong SQL skills on PostgreSQL for data modeling and query optimization, or other similar technologies (Microsoft SQL Server, Oracle, AWS RDS).
- Working knowledge of AWS data services: Athena, Glue, SQS, SNS, IAM, and CloudWatch.
- Basic proficiency in Python and Python data frameworks (Pandas, PySpark).
- Experience with version control (GitHub) and CI/CD for data projects.
- Familiarity with healthcare data standards and HIPAA compliance.
- Excellent problem-solving skills, attention to detail, and ability to work independently.
- Strong communication skills, with experience mentoring or leading small technical efforts.
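A minimal Dagster sketch of the orchestration style this posting names: two software-defined assets where the curated asset depends on the raw one, so Dagster tracks lineage and scheduling. The asset bodies and the claims-denial data are placeholders for Athena/Glue/DBT-backed steps:

```python
from dagster import asset, materialize

@asset
def raw_denials() -> list[dict]:
    # Placeholder: in production this might query Athena or read from S3.
    return [{"claim_id": "C1", "denied": True}, {"claim_id": "C2", "denied": False}]

@asset
def denied_claims(raw_denials: list[dict]) -> list[dict]:
    # Downstream transformation; a DBT model often slots in here instead.
    return [row for row in raw_denials if row["denied"]]

if __name__ == "__main__":
    # Materialize both assets in dependency order for a local smoke test.
    result = materialize([raw_denials, denied_claims])
    assert result.success
```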
Posted 2 months ago
5.0 - 10.0 years
18 - 30 Lacs
Pune, Bengaluru
Hybrid
Job role & responsibilities:
- Understanding operational needs by collaborating with specialized teams.
- Supporting business operations. This involves architecting, designing, building, and deploying data systems, pipelines, etc.
- Designing and implementing agile, scalable, and cost-efficient solutions on cloud data services.
- Building ETL and data movement solutions.
- Migrating data from traditional database systems to the cloud environment.

Technical skills, experience & qualifications required:
- Experience in Cloud Data Engineering.
- Proficient in using Informatica PowerExchange for data integration and real-time data capture from various sources, including databases, applications, and cloud environments.
- Hands-on experience with Informatica PowerExchange.
- Bachelor's degree in Computer Science or a related field.
- Proficient in Azure cloud services.
- Strong hands-on experience working with streaming datasets.
- Ability to integrate data from heterogeneous sources such as relational databases, NoSQL databases, and flat files using PowerExchange.
- Knowledge of CDC (Change Data Capture) methodologies and implementation using PowerExchange for real-time data updates.
- Enthusiasm for staying updated with the latest trends in data engineering and Informatica technologies.
- Willingness to participate in training and certification programs related to Informatica and data engineering.
- Familiarity with tools such as Jira and GitHub.
- Experience leading agile scrum, sprint planning, and review sessions.
- Immediate joiners will be preferred.
Posted 2 months ago