4.0 - 6.0 years
10 - 13 Lacs
Pune, Gurugram, Bengaluru
Work from Office
Role & responsibilities:
Data tester (functional testing)
Automation experience, preferably in Python/PySpark (see the sketch after this listing)
Working knowledge of NoSQL, preferably MongoDB (or JSON format)
Working knowledge of AWS S3
Working knowledge of JIRA/Confluence and defect management
Understands Agile ways of working
Years of experience: minimum 3 years
Preferred candidate profile
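For illustration, a minimal sketch of the kind of automated functional data check this role describes, using pytest with PySpark. The file name "orders.json" and the column names are hypothetical placeholders, not part of the posting:

import pytest
from pyspark.sql import SparkSession

@pytest.fixture(scope="session")
def spark():
    # Local Spark session for test runs; cluster configuration would differ in practice.
    return SparkSession.builder.master("local[2]").appName("data-tests").getOrCreate()

def test_no_duplicate_order_ids(spark):
    # "orders.json" and "order_id" are placeholders for illustration.
    df = spark.read.json("orders.json")
    assert df.count() == df.dropDuplicates(["order_id"]).count()

def test_mandatory_columns_present(spark):
    df = spark.read.json("orders.json")
    expected = {"order_id", "customer_id", "amount"}
    assert expected.issubset(set(df.columns))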
Posted 2 months ago
7.0 - 12.0 years
25 - 40 Lacs
Bengaluru
Hybrid
Role: Data Engineer
Experience: 7+ years
Notice: Immediate
Skills: AWS (S3, Glue, Lambda, EC2), Spark, PySpark, Python, Airflow
Posted 2 months ago
6.0 - 8.0 years
5 - 6 Lacs
Navi Mumbai, SBI Belapur
Work from Office
Relevant Years of Experience: 6-8 years
Mandatory Skills: Oracle GoldenGate SME
Detailed JD:
1. Oracle GoldenGate Architecture & Implementation
Design and implement high-availability and fault-tolerant GoldenGate solutions across multiple environments (on-prem, cloud, hybrid). Install, configure, and optimize Oracle GoldenGate 21c/23ai for heterogeneous databases (Oracle, MSSQL, MySQL, PostgreSQL). Set up Extract, Pump, and Replicat processes for source-to-target data replication. Implement downstream mining for redo log-based replication in Active Data Guard environments. Configure GoldenGate for Big Data to integrate with platforms like Kafka and AWS S3.
2. Performance Tuning & Optimization
Fine-tune GoldenGate Extract, Pump, and Replicat processes to handle high-transaction loads (~20TB logs/day). Optimize ACFS, XAG, and RAC configurations for high availability. Implement multi-threaded Replicat (MTR) for parallel processing and improved performance. Configure compression techniques for efficient data transfer between hubs.
3. Monitoring & Troubleshooting
Set up the OEM GoldenGate Plugin for real-time monitoring of replication health and performance. Troubleshoot latency issues, data integrity errors, and replication lag. Monitor Kafka offsets to ensure efficient data consumption by downstream systems. Validate data integrity using Oracle Veridata or manual comparison techniques.
4. Cloud & Big Data Integration
Implement Oracle GoldenGate for Big Data (OGG-BD) for streaming replication to Kafka and AWS S3. Design data lakehouse architectures for real-time data ingestion into cloud platforms. Configure Parquet and Avro file formats for efficient storage in AWS S3.
5. Security & Compliance
Implement TLS encryption and secure log transport between source, downstream, and target systems. Ensure compliance with enterprise data governance policies.
Posted 2 months ago
0.0 - 3.0 years
3 - 7 Lacs
Thane
Hybrid
Responsibilities:
Provide first-line and second-line technical support to customers via email, phone, or chat.
Diagnose and resolve software issues, bugs, and technical queries efficiently and effectively.
Create and maintain knowledge base articles, documentation, and FAQs for both internal and customer use.
Assist with system monitoring and performance tuning to ensure software stability.
Assist customers with product feature usage, configurations, and best practices.
Provide training to end-users or internal teams on new features and functionalities.
Log and track incidents in the support management system, ensuring that all issues are addressed promptly.
Stay up-to-date with the latest software releases, patches, and updates.
Posted 2 months ago
7.0 - 9.0 years
27 - 30 Lacs
Bengaluru
Work from Office
We are seeking experienced Data Engineers with over 7 years of experience to join our team at Intuit. The selected candidates will be responsible for developing and maintaining scalable data pipelines, managing data warehousing solutions, and working with advanced cloud environments. The role requires strong technical proficiency and the ability to work onsite in Bangalore.
Key Responsibilities:
Design, build, and maintain data pipelines to ingest, process, and analyze large datasets using PySpark (see the sketch after this listing).
Work on Data Warehouse and Data Lake solutions to manage structured and unstructured data.
Develop and optimize complex SQL queries for data extraction and reporting.
Leverage AWS cloud services such as S3, EC2, EMR, Athena, and Redshift for data storage, processing, and analytics.
Collaborate with cross-functional teams to ensure the successful delivery of data solutions that meet business needs.
Monitor data pipelines and troubleshoot any issues related to data integrity or system performance.
Required Skills:
7+ years of experience in data engineering or related fields.
In-depth knowledge of Data Warehouses and Data Lakes.
Proven experience in building data pipelines using PySpark.
Strong expertise in SQL for data manipulation and extraction.
Familiarity with AWS cloud services, including S3, EC2, EMR, Athena, Redshift, and other cloud computing platforms.
Preferred Skills:
Python programming experience is a plus.
Experience working in Agile environments with tools like JIRA and GitHub.
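For illustration, a minimal PySpark sketch of the kind of S3-to-Parquet pipeline this listing describes. Bucket names, paths, and column names are hypothetical placeholders:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Bucket names, paths, and column names below are placeholders.
spark = SparkSession.builder.appName("events-ingest").getOrCreate()

# Ingest raw JSON events landed in S3 by an upstream producer.
raw = spark.read.json("s3://example-raw-bucket/events/2024-01-01/")

# Light transformation: type casting, deduplication, and a derived partition column.
cleaned = (raw
           .withColumn("event_ts", F.to_timestamp("event_ts"))
           .dropDuplicates(["event_id"])
           .withColumn("event_date", F.to_date("event_ts")))

# Write partitioned Parquet to a curated zone; Athena or Redshift Spectrum can query this layout.
(cleaned.write
        .mode("overwrite")
        .partitionBy("event_date")
        .parquet("s3://example-curated-bucket/events/"))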
Posted 2 months ago
6.0 - 11.0 years
15 - 30 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Interested candidates can also apply with Sanjeevan Natarajan - 94866 21923 - sanjeevan.natarajan@careernet.in
Role & responsibilities:
Technical Leadership: Lead a team of data engineers and developers; define technical strategy, best practices, and architecture for data platforms.
End-to-End Solution Ownership: Architect, develop, and manage scalable, secure, and high-performing data solutions on AWS and Databricks.
Data Pipeline Strategy: Oversee the design and development of robust data pipelines for ingestion, transformation, and storage of large-scale datasets.
Data Governance & Quality: Enforce data validation, lineage, and quality checks across the data lifecycle. Define standards for metadata, cataloging, and governance.
Orchestration & Automation: Design automated workflows using Airflow, Databricks Jobs/APIs, and other orchestration tools for end-to-end data operations (see the sketch after this listing).
Cloud Cost & Performance Optimization: Implement performance tuning strategies, cost optimization best practices, and efficient cluster configurations on AWS/Databricks.
Security & Compliance: Define and enforce data security standards, IAM policies, and compliance with industry-specific regulatory frameworks.
Collaboration & Stakeholder Engagement: Work closely with business users, analysts, and data scientists to translate requirements into scalable technical solutions.
Migration Leadership: Drive strategic data migrations from on-prem/legacy systems to cloud-native platforms with minimal risk and downtime.
Mentorship & Growth: Mentor junior engineers, contribute to talent development, and ensure continuous learning within the team.
Preferred candidate profile:
Python, SQL, PySpark, Databricks, AWS (mandatory)
Leadership experience in Data Engineering/Architecture
Added advantage: experience in Life Sciences / Pharma
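For illustration, a minimal sketch of triggering an existing Databricks job through the Jobs 2.1 REST API, one way to automate workflows of the kind mentioned under Orchestration & Automation. The environment variables, job_id, and parameters are hypothetical placeholders:

import os
import requests

# Workspace URL and token are placeholders; real values come from your environment.
host = os.environ["DATABRICKS_HOST"]    # e.g. https://<workspace>.cloud.databricks.com
token = os.environ["DATABRICKS_TOKEN"]

def trigger_job(job_id: int, params: dict) -> int:
    """Trigger an existing Databricks job and return its run_id."""
    resp = requests.post(
        f"{host}/api/2.1/jobs/run-now",
        headers={"Authorization": f"Bearer {token}"},
        json={"job_id": job_id, "notebook_params": params},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["run_id"]

if __name__ == "__main__":
    # job_id 123 is a placeholder for an existing job in the workspace.
    run_id = trigger_job(job_id=123, params={"run_date": "2024-01-01"})
    print(f"Started run {run_id}")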
Posted 2 months ago
5.0 - 10.0 years
40 - 45 Lacs
Pune, Gurugram, Bengaluru
Work from Office
Notice: Immediate joiners only
Design, develop, and maintain SQL Server Analysis Services (SSAS) models
Create and manage OLAP cubes to support business intelligence reporting
Develop and implement multidimensional and tabular data models
Optimize the performance of SSAS solutions for efficient query processing
Integrate data from various sources into SQL Server databases and SSAS models
Knowledge of AWS S3 and SQL Server PolyBase preferred
Location: Bangalore, Pune, Gurgaon, Noida, Hyderabad
Posted 2 months ago
2.0 - 7.0 years
4 - 8 Lacs
Ahmedabad
Work from Office
Travel Designer Group
Founded in 1999, Travel Designer Group has consistently achieved remarkable milestones in a relatively short span of time. While we embody the agility, growth mindset, and entrepreneurial energy typical of start-ups, we bring with us over 24 years of deep-rooted expertise in the travel trade industry.
As a leading global travel wholesaler, we serve as a vital bridge connecting hotels, travel service providers, and an expansive network of travel agents worldwide. Our core strength lies in sourcing, curating, and distributing high-quality travel inventory through our award-winning B2B reservation platform, RezLive.com. This enables travel trade professionals to access real-time availability and competitive pricing to meet the diverse needs of travelers globally.
Our expanding portfolio includes innovative products such as:
* Rez.Tez
* Affiliate.Travel
* Designer Voyages
* Designer Indya
* RezRewards
* RezVault
With a presence in 32+ countries and a growing team of 300+ professionals, we continue to redefine travel distribution through technology, innovation, and a partner-first approach.
Website: https://www.traveldesignergroup.com/
Profile: ETL Developer
ETL Tools (any one): Talend / Apache NiFi / Pentaho / AWS Glue / Azure Data Factory / Google Dataflow
Workflow & Orchestration (any one; good to have, not mandatory): Apache Airflow / dbt (Data Build Tool) / Luigi / Dagster / Prefect / Control-M
Programming & Scripting: SQL (advanced), Python (mandatory), Bash/Shell (mandatory), Java or Scala (optional, for Spark)
Databases & Data Warehousing: MySQL / PostgreSQL / SQL Server / Oracle (mandatory); Snowflake, Amazon Redshift, Google BigQuery, Azure Synapse Analytics, MongoDB / Cassandra (good to have)
Cloud & Data Storage (any one or two): AWS S3 / Azure Blob Storage / Google Cloud Storage (mandatory); Kafka / Kinesis / Pub/Sub
Interested candidates can also share their resume at shivani.p@rezlive.com
Posted 2 months ago
5.0 - 8.0 years
18 - 25 Lacs
Pune
Work from Office
We are seeking a talented and passionate Senior Data Engineer to join our growing data team. In this role, you will play a key part in building and scaling our data infrastructure, enabling data-driven decision-making across the organization. You will be responsible for designing, developing, and maintaining efficient and reliable data pipelines for both ELT (Extract, Load, Transform) and ETL (Extract, Transform, Load) processes.
Responsibilities:
Design, develop, and maintain robust and scalable data pipelines for ELT and ETL processes, ensuring data accuracy, completeness, and timeliness (a minimal orchestration sketch follows this listing).
Work with stakeholders to understand data requirements and translate them into efficient data models and pipelines.
Build and optimize data pipelines using a variety of technologies, including Elastic Search, AWS S3, Snowflake, and NFS.
Develop and maintain data warehouse schemas and ETL/ELT processes to support business intelligence and analytics needs.
Implement data quality checks and monitoring to ensure data integrity and identify potential issues.
Collaborate with data scientists and analysts to ensure data accessibility and usability for various analytical purposes.
Stay current with industry best practices, CI/CD/DevSecFinOps, Scrum, and emerging technologies in data engineering.
Contribute to the development and enhancement of our data warehouse architecture.
Requirements (mandatory):
Bachelor's degree in Computer Science, Engineering, or a related field.
5+ years of experience as a Data Engineer with a strong focus on ELT/ETL processes.
At least 3 years of experience with Snowflake data warehousing technologies.
At least 3 years of experience creating and maintaining Airflow ETL pipelines.
Minimum of 3 years of professional experience with Python for data manipulation and automation.
Working experience with Elastic Search and its application in data pipelines.
Proficiency in SQL and experience with data modelling techniques.
Strong understanding of cloud-based data storage solutions such as AWS S3.
Experience working with NFS and other file storage systems.
Excellent problem-solving and analytical skills.
Strong communication and collaboration skills.
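For illustration, a minimal Airflow DAG sketch of the ELT orchestration described above. The task bodies are placeholders; a real pipeline would likely use provider operators (for example, a Snowflake operator) rather than plain Python callables:

from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

# Task bodies are placeholders; in practice they might call boto3, the Snowflake connector, etc.
def extract_to_s3(**context):
    print("pull data from the source system and land files in S3")

def load_into_snowflake(**context):
    print("COPY INTO staging tables from the S3 stage")

def transform_in_warehouse(**context):
    print("run SQL transformations on the staged data")

with DAG(
    dag_id="elt_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+ style; older versions use schedule_interval
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_to_s3", python_callable=extract_to_s3)
    load = PythonOperator(task_id="load_into_snowflake", python_callable=load_into_snowflake)
    transform = PythonOperator(task_id="transform_in_warehouse", python_callable=transform_in_warehouse)

    extract >> load >> transform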
Posted 2 months ago
4.0 - 8.0 years
10 - 20 Lacs
Hyderabad, Chennai
Work from Office
Roles & Responsibilities:
• We are looking for a strong Senior Data Engineer who will be primarily responsible for designing, building, and maintaining ETL/ELT pipelines.
• Integration of data from multiple sources or vendors to provide holistic insights from data.
• You are expected to build and manage Data Lake and Data Warehouse solutions, design data models, create ETL processes, and implement data quality mechanisms.
• Perform EDA (exploratory data analysis) required to troubleshoot data-related issues and assist in the resolution of data issues.
• Should have experience in client interaction, both oral and written.
• Experience in mentoring juniors and providing required guidance to the team.
Required Technical Skills:
• Extensive experience in languages such as Python, PySpark, and SQL (basics and advanced).
• Strong experience in Data Warehouse, ETL, Data Modelling, building ETL pipelines, and Data Architecture.
• Must be proficient in Redshift, Azure Data Factory, Snowflake, etc.
• Hands-on experience with cloud services like AWS S3, Glue, Lambda, CloudWatch, Athena, etc.
• Knowledge of Dataiku and Big Data technologies is good to have; basic knowledge of BI tools like Power BI or Tableau will be a plus.
• Sound knowledge of data management, data operations, data quality, and data governance.
• Knowledge of SFDC and Waterfall/Agile methodology.
• Strong knowledge of the Pharma domain / life sciences commercial data operations.
Qualifications:
• Bachelors or Masters in Engineering/MCA or equivalent degree.
• 4-6 years of relevant industry experience as a Data Engineer.
• Experience working on Pharma syndicated data such as IQVIA, Veeva, Symphony; Claims, CRM, Sales, Open Data, etc.
• High motivation, good work ethic, maturity, self-organization, and personal initiative.
• Ability to work collaboratively and provide support to the team.
• Excellent written and verbal communication skills.
• Strong analytical and problem-solving skills.
Location:
• Preferably Hyderabad/Chennai, India
Posted 3 months ago
6.0 - 8.0 years
8 - 12 Lacs
Gurugram
Hybrid
Interview Mode: Virtual (2 rounds)
Type: Contract-to-Hire (C2H)
Job Summary
We are looking for a skilled PySpark Developer with hands-on experience in building scalable data pipelines and processing large datasets. The ideal candidate will have deep expertise in Apache Spark, Python, and modern data engineering tools in cloud environments such as AWS.
Key Skills & Responsibilities
Strong expertise in PySpark and Apache Spark for batch and real-time data processing.
Experience in designing and implementing ETL pipelines, including data ingestion, transformation, and validation.
Proficiency in Python for scripting, automation, and building reusable components.
Hands-on experience with scheduling tools like Airflow or Control-M to orchestrate workflows.
Familiarity with the AWS ecosystem, especially S3 and related file system operations.
Strong understanding of Unix/Linux environments and shell scripting.
Experience with Hadoop, Hive, and platforms like Cloudera or Hortonworks.
Ability to handle CDC (Change Data Capture) operations on large datasets (see the sketch after this listing).
Experience in performance tuning, optimizing Spark jobs, and troubleshooting.
Strong knowledge of data modeling, data validation, and writing unit test cases.
Exposure to real-time and batch integration with downstream/upstream systems.
Working knowledge of Jupyter Notebook, Zeppelin, or PyCharm for development and debugging.
Understanding of Agile methodologies, with experience in CI/CD tools (e.g., Jenkins, Git).
Preferred Skills
Experience in building or integrating APIs for data provisioning.
Exposure to ETL or reporting tools such as Informatica, Tableau, Jasper, or QlikView.
Familiarity with AI/ML model development using PySpark in cloud environments.
Skills: PySpark, Apache Spark, Python, AWS S3, Airflow/Control-M, SQL, Unix/Linux, shell scripting, Hive, Hadoop, Cloudera, Hortonworks, CDC, ETL pipelines, data modeling, data validation, performance tuning, unit test cases, batch and real-time integration, API integration, Jupyter Notebook, Zeppelin, PyCharm, Agile methodologies, CI/CD, Jenkins, Git, Informatica, Tableau, Jasper, QlikView, AI/ML model development
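For illustration, a minimal PySpark sketch of applying CDC change records to produce a current snapshot, as mentioned above. The paths and column names ("customer_id", "op", "change_ts") are hypothetical placeholders:

from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

# Paths and column names are illustrative placeholders.
spark = SparkSession.builder.appName("cdc-apply").getOrCreate()

changes = spark.read.parquet("s3://example-bucket/cdc/customers/")

# Keep only the latest change per key, then drop deletes to get the current snapshot.
latest = Window.partitionBy("customer_id").orderBy(F.col("change_ts").desc())

current = (changes
           .withColumn("rn", F.row_number().over(latest))
           .filter(F.col("rn") == 1)
           .filter(F.col("op") != "D")
           .drop("rn"))

current.write.mode("overwrite").parquet("s3://example-bucket/curated/customers/")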
Posted 3 months ago
6.0 - 8.0 years
8 - 12 Lacs
Hyderabad
Hybrid
Interview Mode: Virtual (2 rounds)
Type: Contract-to-Hire (C2H)
Job Summary
We are looking for a skilled PySpark Developer with hands-on experience in building scalable data pipelines and processing large datasets. The ideal candidate will have deep expertise in Apache Spark, Python, and modern data engineering tools in cloud environments such as AWS.
Key Skills & Responsibilities
Strong expertise in PySpark and Apache Spark for batch and real-time data processing.
Experience in designing and implementing ETL pipelines, including data ingestion, transformation, and validation.
Proficiency in Python for scripting, automation, and building reusable components.
Hands-on experience with scheduling tools like Airflow or Control-M to orchestrate workflows.
Familiarity with the AWS ecosystem, especially S3 and related file system operations.
Strong understanding of Unix/Linux environments and shell scripting.
Experience with Hadoop, Hive, and platforms like Cloudera or Hortonworks.
Ability to handle CDC (Change Data Capture) operations on large datasets.
Experience in performance tuning, optimizing Spark jobs, and troubleshooting.
Strong knowledge of data modeling, data validation, and writing unit test cases.
Exposure to real-time and batch integration with downstream/upstream systems.
Working knowledge of Jupyter Notebook, Zeppelin, or PyCharm for development and debugging.
Understanding of Agile methodologies, with experience in CI/CD tools (e.g., Jenkins, Git).
Preferred Skills
Experience in building or integrating APIs for data provisioning.
Exposure to ETL or reporting tools such as Informatica, Tableau, Jasper, or QlikView.
Familiarity with AI/ML model development using PySpark in cloud environments.
Skills: PySpark, Apache Spark, Python, AWS S3, Airflow/Control-M, SQL, Unix/Linux, shell scripting, Hive, Hadoop, Cloudera, Hortonworks, CDC, ETL pipelines, data modeling, data validation, performance tuning, unit test cases, batch and real-time integration, API integration, Jupyter Notebook, Zeppelin, PyCharm, Agile methodologies, CI/CD, Jenkins, Git, Informatica, Tableau, Jasper, QlikView, AI/ML model development
Posted 3 months ago
6.0 - 11.0 years
17 - 30 Lacs
Kolkata, Hyderabad/Secunderabad, Bangalore/Bengaluru
Hybrid
Inviting applications for the role of Lead Consultant - Snowflake Data Engineer (Snowflake + Python + Cloud)!
In this role, the Snowflake Data Engineer is responsible for providing technical direction and leading a group of one or more developers to address a goal.
Job Description:
Experience in the IT industry.
Working experience building productionized data ingestion and processing pipelines in Snowflake.
Strong understanding of Snowflake architecture.
Fully well-versed in data warehousing concepts.
Expertise and excellent understanding of Snowflake features and integration of Snowflake with other data processing tools.
Able to create data pipelines for ETL/ELT.
Excellent presentation and communication skills, both written and verbal.
Ability to problem-solve and architect in an environment with unclear requirements.
Able to create high-level and low-level design documents based on requirements.
Hands-on experience in configuration, troubleshooting, testing, and managing data platforms, on premises or in the cloud.
Awareness of data visualisation tools and methodologies.
Work independently on business problems and generate meaningful insights.
Good to have some experience/knowledge of Snowpark, Streamlit, or GenAI, but not mandatory.
Should have experience implementing Snowflake best practices.
Snowflake SnowPro Core Certification will be an added advantage.
Roles and Responsibilities:
Requirement gathering, creating design documents, providing solutions to customers, working with offshore teams, etc.
Writing SQL queries against Snowflake and developing scripts to Extract, Load, and Transform data (a minimal loading sketch follows this listing).
Hands-on experience with Snowflake utilities such as SnowSQL, bulk copy, Snowpipe, Tasks, Streams, Time Travel, Cloning, Optimizer, Metadata Manager, data sharing, stored procedures and UDFs, Snowsight, and Streamlit.
Experience with the Snowflake cloud data warehouse and AWS S3 buckets or Azure Blob Storage containers for integrating data from multiple source systems.
Should have some experience with AWS services (S3, Glue, Lambda) or Azure services (Blob Storage, ADLS Gen2, ADF).
Should have good experience in Python/PySpark integration with Snowflake and cloud (AWS/Azure), with the ability to leverage cloud services for data processing and storage.
Proficiency in the Python programming language, including knowledge of data types, variables, functions, loops, conditionals, and other Python-specific concepts.
Knowledge of ETL (Extract, Transform, Load) processes and tools, and the ability to design and develop efficient ETL jobs using Python or PySpark.
Should have some experience with Snowflake RBAC and data security.
Should have good experience implementing CDC or SCD Type 2.
Should have good experience implementing Snowflake best practices.
In-depth understanding of Data Warehouse and ETL concepts and Data Modelling.
Experience in requirement gathering, analysis, design, development, and deployment.
Should have experience building data ingestion pipelines.
Optimize and tune data pipelines for performance and scalability.
Able to communicate with clients and lead a team.
Proficiency in working with Airflow or other workflow management tools for scheduling and managing ETL jobs.
Good to have experience in deployment using CI/CD tools and experience with repositories like Azure Repos, GitHub, etc.
Qualifications we seek in you!
Minimum qualifications: B.E./Masters in Computer Science, Information Technology, or Computer Engineering, or any equivalent degree with good IT experience and relevant experience as a Snowflake Data Engineer.
Skill Matrix: Snowflake, Python/PySpark, AWS/Azure, ETL concepts, and Data Warehousing concepts
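For illustration, a minimal sketch of loading S3-staged files into Snowflake with the Python connector and a COPY INTO statement, the kind of scripting this role describes. The connection parameters, external stage, and table names are hypothetical placeholders:

import snowflake.connector

# Connection parameters, stage name, and table are placeholders for illustration.
conn = snowflake.connector.connect(
    account="xy12345",
    user="ETL_USER",
    password="********",
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="STAGING",
)

try:
    cur = conn.cursor()
    # Load Parquet files from an external S3 stage into a staging table.
    cur.execute("""
        COPY INTO STAGING.ORDERS
        FROM @EXT_S3_STAGE/orders/
        FILE_FORMAT = (TYPE = PARQUET)
        MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
    """)
    print(cur.fetchall())   # per-file load results
finally:
    conn.close()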
Posted 3 months ago
4.0 - 9.0 years
22 - 27 Lacs
Bengaluru
Work from Office
We are looking for a skilled Veeam Backup Administrator to manage and maintain backup, replication, and disaster recovery solutions using Veeam Backup & Replication. The ideal candidate should have hands-on experience configuring backup solutions across on-premises and cloud environments, with a focus on automation, reporting, and BCP/DR planning.
Key Responsibilities:
Manage and configure Veeam Backup & Replication infrastructure
Schedule and monitor backup, backup copy, and replication jobs
Set up backup and copy jobs from on-prem to AWS S3
Configure and manage Veeam ONE for performance monitoring and reporting
Automate and schedule reports for backup and replication job statuses
Configure Veeam Enterprise Manager for centralized backup administration
Set up tape backups within the Veeam environment
Implement immutable repositories for enhanced data security
Configure storage snapshots in DD Boost and Unity storage
Design and execute BCP/DR strategies and perform server-level testing for recovery readiness
Required Skills:
Hands-on experience with Veeam Backup & Replication
Proficiency in Veeam ONE, Enterprise Manager, and tape backup configuration
Experience with backup to cloud storage (AWS S3)
Strong understanding of immutable backups and snapshot technology
Knowledge of DD Boost, Unity storage, and storage replication
Experience in BCP/DR planning and execution
Good troubleshooting and documentation skills
Technical Key Skills: Veeam Backup, Replication, AWS S3, Veeam ONE, Enterprise Manager, Tape Backup, Immutable Backup, DD Boost, Unity Storage, BCP/DR
Posted 3 months ago
5.0 - 8.0 years
15 - 27 Lacs
Bengaluru
Work from Office
Strong experience with Python, SQL, PySpark, and AWS Glue (a minimal Glue job sketch follows this listing).
Good to have: shell scripting, Kafka.
Good knowledge of DevOps pipeline usage (Jenkins, Bitbucket, EKS, Lightspeed).
Experience with AWS tools (AWS S3, EC2, Athena, Redshift, Glue, EMR, Lambda, RDS, Kinesis, DynamoDB, QuickSight, etc.).
Orchestration using Airflow.
Good to have: streaming technologies and processing engines such as Kinesis, Kafka, Pub/Sub, and Spark Streaming.
Good debugging skills.
Should have a strong hands-on design and engineering background in AWS across a wide range of AWS services, with the ability to demonstrate working on large engagements.
Strong experience with and implementation of data lakes, data warehousing, and data lakehouse architectures.
Ensure data accuracy, integrity, privacy, security, and compliance through quality control procedures.
Monitor data systems performance and implement optimization strategies.
Leverage data controls to maintain data privacy, security, compliance, and quality for allocated areas of ownership.
Demonstrable knowledge of applying data engineering best practices (coding practices for DS, unit testing, version control, code review).
Experience in the insurance domain preferred.
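For illustration, a minimal AWS Glue (PySpark) job skeleton of the kind this role describes. The S3 paths and column names are hypothetical placeholders:

import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

# Job parameters and S3 paths are placeholders for illustration.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])

sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read raw CSV from S3, standardise a date column, and publish Parquet for Athena or Redshift Spectrum.
raw = spark.read.option("header", "true").csv("s3://example-raw-bucket/policies/")
out = raw.withColumn("policy_start", F.to_date("policy_start", "yyyy-MM-dd"))
out.write.mode("overwrite").parquet("s3://example-curated-bucket/policies/")

job.commit()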
Posted 3 months ago
6.0 - 8.0 years
10 - 20 Lacs
Noida, Hyderabad, Pune
Work from Office
3-4 years of hands-on experience with the Snowflake database
Strong SQL, PL/SQL, and Snowflake functionality experience
Strong exposure to Oracle, SQL Server, etc.
Exposure to cloud storage services like AWS S3
2-3 years of experience with Informatica PowerCenter
Posted 3 months ago
4.0 - 8.0 years
5 - 15 Lacs
Pune
Hybrid
Databuzz is hiring for a Python Developer - 4+ yrs - (Pune) - Hybrid
Please mail your profile to haritha.jaddu@databuzzltd.com with the below details if you are interested.
About DatabuzzLTD:
Databuzz is a one-stop shop for data analytics, specialized in Data Science, Big Data, Data Engineering, AI & ML, Cloud Infrastructure, and DevOps. We are an MNC based in both the UK and India. We are an ISO 27001 & GDPR compliant company.
CTC -
ECTC -
Notice Period/LWD - (Candidates serving their notice period will be preferred)
Position: Python Developer
Location: Pune
Exp: 4+ yrs
Mandatory skills:
Candidate should have 4-8 years of Python web development experience
Should have good knowledge of AWS serverless services like AWS Lambda, AWS S3, and AWS Step Functions (a minimal Lambda sketch follows this listing)
Should have good working experience with Flask, NumPy, pandas, JSON, unit testing, Mongo, and SQL
Hands-on experience with the AWS cloud, with an understanding of various cloud services and offerings for the development and deployment of applications
Good experience with Amazon RDS, MongoDB, and PostgreSQL
Should have good security fundamentals knowledge
Should be able to code the AWS platform as a service using Terraform
Regards,
Haritha
Talent Acquisition Specialist
haritha.jaddu@databuzzltd.com
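For illustration, a minimal sketch of an AWS Lambda handler using boto3 to process an object landed in S3, in line with the serverless skills listed above. The bucket names and the processing step are hypothetical placeholders:

import json
import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    """Triggered by an S3 put event; reads the new object and writes a processed copy.
    Bucket names and the processing step are placeholders for illustration."""
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    payload = json.loads(body)
    payload["processed"] = True

    s3.put_object(
        Bucket="example-processed-bucket",
        Key=key,
        Body=json.dumps(payload).encode("utf-8"),
    )
    return {"statusCode": 200, "body": f"processed {key}"}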
Posted 3 months ago
4.0 - 9.0 years
12 - 22 Lacs
Hyderabad, Chennai
Work from Office
Interested candidates can also apply with Sanjeevan Natarajan - sanjeevan.natarajan@careernet.in
Role & responsibilities:
Technical Leadership: Lead a team of data engineers and developers; define technical strategy, best practices, and architecture for data platforms.
End-to-End Solution Ownership: Architect, develop, and manage scalable, secure, and high-performing data solutions on AWS and Databricks.
Data Pipeline Strategy: Oversee the design and development of robust data pipelines for ingestion, transformation, and storage of large-scale datasets.
Data Governance & Quality: Enforce data validation, lineage, and quality checks across the data lifecycle. Define standards for metadata, cataloging, and governance.
Orchestration & Automation: Design automated workflows using Airflow, Databricks Jobs/APIs, and other orchestration tools for end-to-end data operations.
Cloud Cost & Performance Optimization: Implement performance tuning strategies, cost optimization best practices, and efficient cluster configurations on AWS/Databricks.
Security & Compliance: Define and enforce data security standards, IAM policies, and compliance with industry-specific regulatory frameworks.
Collaboration & Stakeholder Engagement: Work closely with business users, analysts, and data scientists to translate requirements into scalable technical solutions.
Migration Leadership: Drive strategic data migrations from on-prem/legacy systems to cloud-native platforms with minimal risk and downtime.
Mentorship & Growth: Mentor junior engineers, contribute to talent development, and ensure continuous learning within the team.
Preferred candidate profile:
Python, SQL, PySpark, Databricks, AWS (mandatory)
Leadership experience in Data Engineering/Architecture
Added advantage: experience in Life Sciences / Pharma
Posted 3 months ago
5.0 - 7.0 years
18 - 20 Lacs
Hyderabad, Bengaluru
Hybrid
Type: Contract-to-Hire (C2H)
Job Summary
We are looking for a skilled PySpark Developer with at least 4 years of mandatory hands-on experience in building scalable data pipelines and processing large datasets. The ideal candidate will have deep expertise in Apache Spark, Python, and modern data engineering tools in cloud environments such as AWS.
Key Skills & Responsibilities
Strong expertise in PySpark and Apache Spark for batch and real-time data processing.
Experience in designing and implementing ETL pipelines, including data ingestion, transformation, and validation.
Proficiency in Python for scripting, automation, and building reusable components.
Hands-on experience with scheduling tools like Airflow or Control-M to orchestrate workflows.
Familiarity with the AWS ecosystem, especially S3 and related file system operations.
Strong understanding of Unix/Linux environments and shell scripting.
Experience with Hadoop, Hive, and platforms like Cloudera or Hortonworks.
Ability to handle CDC (Change Data Capture) operations on large datasets.
Experience in performance tuning, optimizing Spark jobs, and troubleshooting.
Strong knowledge of data modeling, data validation, and writing unit test cases.
Exposure to real-time and batch integration with downstream/upstream systems.
Working knowledge of Jupyter Notebook, Zeppelin, or PyCharm for development and debugging.
Understanding of Agile methodologies, with experience in CI/CD tools (e.g., Jenkins, Git).
Preferred Skills
Experience in building or integrating APIs for data provisioning.
Exposure to ETL or reporting tools such as Informatica, Tableau, Jasper, or QlikView.
Familiarity with AI/ML model development using PySpark in cloud environments.
Posted 3 months ago
5.0 - 10.0 years
15 - 30 Lacs
Pune, Bengaluru, Delhi / NCR
Work from Office
Job Summary:
We are seeking an experienced Informatica Developer with a strong background in data integration, cloud data platforms, and modern ETL tools. The ideal candidate will have hands-on expertise in Informatica Intelligent Cloud Services (IICS/CDI/IDMC), Snowflake, and cloud storage platforms such as AWS S3. You will be responsible for building scalable data pipelines, designing integration solutions, and resolving complex data issues across cloud and on-premises environments.
Key Responsibilities:
Design, develop, and maintain robust data integration pipelines using Informatica PowerCenter and Informatica CDI/IDMC.
Create and optimize mappings and workflows to load data into Snowflake, ensuring performance and accuracy.
Develop and manage shell scripts to automate data processing and integration workflows.
Implement data exchange processes between Snowflake and external systems, including AWS S3.
Write complex SQL and SnowSQL queries for data validation, transformation, and reporting.
Collaborate with business and technical teams to gather requirements and deliver integration solutions.
Troubleshoot and resolve performance, data quality, and integration issues in a timely manner.
Work on integrations with third-party applications like Salesforce and NetSuite (preferred).
Required Skills and Qualifications:
5+ years of hands-on experience in Informatica PowerCenter and Informatica CDI/IDMC.
Minimum 3-4 years of experience with the Snowflake database and SnowSQL commands.
Strong SQL development skills.
Solid experience with AWS S3 and understanding of cloud data integration architecture.
Proficiency in Unix/Linux shell scripting.
Ability to independently design and implement end-to-end ETL workflows.
Strong problem-solving skills and attention to detail.
Experience working in Agile/Scrum environments.
Preferred Qualifications (Nice to Have):
Experience integrating with Salesforce and/or NetSuite using Informatica.
Knowledge of cloud platforms like AWS, Azure, or GCP.
Informatica certification(s) or Snowflake certifications.
Posted 3 months ago
4.0 - 9.0 years
15 - 25 Lacs
Hyderabad, Chennai
Work from Office
Interested candidates can also apply with sanjeevan.natarajan@careernet.in
Role & responsibilities:
Technical Leadership: Lead a team of data engineers and developers; define technical strategy, best practices, and architecture for data platforms.
End-to-End Solution Ownership: Architect, develop, and manage scalable, secure, and high-performing data solutions on AWS and Databricks.
Data Pipeline Strategy: Oversee the design and development of robust data pipelines for ingestion, transformation, and storage of large-scale datasets.
Data Governance & Quality: Enforce data validation, lineage, and quality checks across the data lifecycle. Define standards for metadata, cataloging, and governance.
Orchestration & Automation: Design automated workflows using Airflow, Databricks Jobs/APIs, and other orchestration tools for end-to-end data operations.
Cloud Cost & Performance Optimization: Implement performance tuning strategies, cost optimization best practices, and efficient cluster configurations on AWS/Databricks.
Security & Compliance: Define and enforce data security standards, IAM policies, and compliance with industry-specific regulatory frameworks.
Collaboration & Stakeholder Engagement: Work closely with business users, analysts, and data scientists to translate requirements into scalable technical solutions.
Migration Leadership: Drive strategic data migrations from on-prem/legacy systems to cloud-native platforms with minimal risk and downtime.
Mentorship & Growth: Mentor junior engineers, contribute to talent development, and ensure continuous learning within the team.
Preferred candidate profile:
Python, SQL, PySpark, Databricks, AWS (mandatory)
Leadership experience in Data Engineering/Architecture
Added advantage: experience in Life Sciences / Pharma
Posted 3 months ago
3.0 - 6.0 years
5 - 15 Lacs
Bengaluru
Hybrid
Databuzz is hiring for a Python Developer (NSO) - 3+ yrs - (Bangalore) - Hybrid
Please mail your profile to haritha.jaddu@databuzzltd.com with the below details if you are interested.
About DatabuzzLTD:
Databuzz is a one-stop shop for data analytics, specialized in Data Science, Big Data, Data Engineering, AI & ML, Cloud Infrastructure, and DevOps. We are an MNC based in both the UK and India. We are an ISO 27001 & GDPR compliant company.
CTC -
ECTC -
Notice Period/LWD - (Candidates serving their notice period will be preferred)
Position: Python Developer (NSO)
Location: Bangalore
Exp: 3+ yrs
Mandatory skills:
Candidate should have 3-5 years of experience in core Python and Django
Should have experience with NSO
Should have experience with AWS RDS, AWS S3, and AWS Step Functions
Should have experience with Docker, DynamoDB, and microservices
Regards,
Haritha
Talent Acquisition Specialist
haritha.jaddu@databuzzltd.com
Posted 3 months ago
8.0 - 13.0 years
10 - 15 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Role: Node-AWS Developer
Experience: 8-15 years
Location: Pan India
Notice Period: 15 days - immediate joiners preferred
Job Description
We are seeking a highly skilled and experienced Node-AWS Developer to join our team. The ideal candidate will have a strong background in web development, particularly with Node.js, and extensive experience with AWS cloud services.
Key Responsibilities:
Designing, developing, and deploying enterprise-level, multi-tiered, and service-oriented applications using Node.js and TypeScript.
Working extensively with AWS technologies including API Gateway, Lambda, RDS, S3, Step Functions, SNS, SQS, DynamoDB, CloudWatch, and CloudWatch Insights.
Implementing serverless architectures and Infrastructure as Code (IaC) using AWS CDK or similar technologies.
Applying strong knowledge of database design and data modeling principles for both relational and non-relational databases.
Participating in code reviews, adhering to coding standards, and promoting a Shift Left mindset to ensure high code quality.
Developing and enhancing unit tests using relevant frameworks.
Collaborating effectively in a distributed and agile environment, utilizing tools like Jira and Bitbucket.
Articulating architecture and design decisions clearly and comprehensively.
Mandatory Skills:
Node.js (minimum 4 years of dedicated experience)
TypeScript, JavaScript
AWS API Gateway
AWS Lambda
AWS SQS, AWS SNS
AWS S3
DynamoDB
AWS CloudWatch, CloudWatch Insights
Serverless architectures
Microservices
Docker
Experience with unit test frameworks
Database interactions (relational and non-relational)
Experience with code reviews and coding standards
Posted 3 months ago
5.0 - 7.0 years
20 - 25 Lacs
Mumbai, New Delhi, Bengaluru
Work from Office
Responsibilities:
Design and Development: Develop robust, scalable, and maintainable backend services using Python frameworks like Django, Flask, and FastAPI (a minimal sketch follows this listing).
Cloud Infrastructure: Work with AWS services (e.g., CloudWatch, S3, RDS, Neptune, Lambda, ECS) to deploy, manage, and optimize our cloud infrastructure.
Software Architecture: Participate in defining and implementing software architecture best practices, including design patterns, coding standards, and testing methodologies.
Database Management: Proficiently work with relational databases (e.g., PostgreSQL) and NoSQL databases (e.g., DynamoDB, Neptune) to design and optimize data models and queries. Experience with ORM tools.
Automation: Design, develop, and maintain automation scripts (primarily in Python) for various tasks, including data updates and processing, scheduling cron jobs, integrating with communication platforms like Slack and Microsoft Teams for notifications and updates, and implementing business logic through automated scripts.
Monitoring and Logging: Implement and manage monitoring and logging solutions using tools like the ELK stack (Elasticsearch, Logstash, Kibana) and AWS CloudWatch.
Production Support: Participate in on-call rotations and provide support for production systems, troubleshooting issues and implementing fixes. Proactively identify and address potential production issues.
Team Leadership and Mentorship: Lead and mentor junior backend developers, providing technical guidance and code reviews, and support their professional growth.
Required Skills and Experience:
5+ years of experience in backend software development.
Strong proficiency in Python and at least two of the following frameworks: Django, Flask, FastAPI.
Hands-on experience with AWS cloud services, including ECS.
Experience with relational databases (e.g., PostgreSQL) and NoSQL databases (e.g., DynamoDB, Neptune).
Strong experience with monitoring and logging tools, specifically the ELK stack and AWS CloudWatch.
Locations: Mumbai, Delhi NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
Work Timings: 2:30 PM - 11:30 PM (Monday-Friday)
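For illustration, a minimal FastAPI sketch that publishes a custom CloudWatch metric via boto3, combining the backend and monitoring responsibilities listed above. The namespace, metric name, and routes are hypothetical placeholders:

import boto3
from fastapi import FastAPI

app = FastAPI()
cloudwatch = boto3.client("cloudwatch")

# Namespace, metric name, and routes are placeholders for illustration.
@app.post("/orders")
def create_order():
    # ... business logic would go here ...
    cloudwatch.put_metric_data(
        Namespace="ExampleApp",
        MetricData=[{
            "MetricName": "OrdersCreated",
            "Value": 1,
            "Unit": "Count",
        }],
    )
    return {"status": "created"}

@app.get("/health")
def health():
    return {"status": "ok"}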
Posted 3 months ago