5.0 - 10.0 years
20 - 35 Lacs
Pune, Gurugram
Work from Office
In one sentence
We are seeking a skilled Database Migration Specialist with deep expertise in mainframe modernization and data migration to cloud platforms such as AWS, Azure, or GCP. The ideal candidate will have hands-on experience migrating legacy systems (COBOL, DB2, IMS, VSAM, etc.) to modern cloud databases such as PostgreSQL, Oracle, or NoSQL stores.

What will your job look like?
- Lead and execute end-to-end mainframe-to-cloud database migration projects.
- Analyze legacy systems (z/OS, Unisys) and design modern data architectures.
- Extract, transform, and load (ETL) complex datasets, ensuring data integrity and taxonomy alignment (a minimal sketch of such a record-level transform follows this listing).
- Collaborate with cloud architects and application teams to ensure seamless integration.
- Optimize performance and scalability of migrated databases.
- Document migration processes, tools, and best practices.

Required Skills & Experience
- 5+ years in mainframe systems (COBOL, CICS, DB2, IMS, JCL, VSAM, Datacom).
- Proven experience in cloud migration (AWS DMS, Azure Data Factory, GCP Dataflow, etc.).
- Strong knowledge of ETL tools, data modeling, and schema conversion.
- Experience with PostgreSQL, Oracle, or other cloud-native databases.
- Familiarity with data governance, security, and compliance in cloud environments.
- Excellent problem-solving and communication skills.
Posted 2 days ago
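For context only, a minimal sketch of the kind of record-level transform such a migration involves. It assumes a hypothetical fixed-width, EBCDIC-encoded customer extract and a hypothetical customers table in PostgreSQL; production migrations typically run through tooling such as AWS DMS rather than hand-rolled scripts.

```python
import psycopg2  # assumed driver; any PostgreSQL client library would do

# Hypothetical copybook layout: CUST-ID PIC X(8), NAME PIC X(20), BALANCE PIC 9(7)V99
RECORD_LEN = 37  # 8 + 20 + 9 bytes per fixed-width record

def parse_record(raw: bytes):
    """Decode one EBCDIC record into (cust_id, name, balance)."""
    text = raw.decode("cp037")        # cp037 covers common z/OS EBCDIC data
    cust_id = text[0:8].strip()
    name = text[8:28].strip()
    balance = int(text[28:37]) / 100  # PIC 9(7)V99: two implied decimal places
    return cust_id, name, balance     # signed or COMP-3 fields need extra handling

with open("customers.dat", "rb") as src, psycopg2.connect("dbname=target") as conn:
    with conn.cursor() as cur:
        while chunk := src.read(RECORD_LEN):
            cur.execute(
                "INSERT INTO customers (cust_id, name, balance) VALUES (%s, %s, %s)",
                parse_record(chunk),
            )
```

In practice the record layout, code page, and sign handling come from the COBOL copybook, and integrity checks (record counts, checksums) would run after the load.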
5.0 - 10.0 years
10 - 20 Lacs
Bengaluru
Remote
Job Description
Job Title: Offshore Data Engineer
Base Location: Bangalore
Work Mode: Remote
Experience: 5+ Years

We are looking for a skilled Offshore Data Engineer with strong experience in Python, SQL, and Apache Beam. Familiarity with Java is a plus. The ideal candidate should be self-driven, collaborative, and able to work in a fast-paced environment.

Key Responsibilities:
- Design and implement reusable, scalable ETL frameworks using Apache Beam and GCP Dataflow (a minimal pipeline sketch follows this listing).
- Develop robust data ingestion and transformation pipelines using Python and SQL.
- Integrate Kafka for real-time data streams alongside batch workloads.
- Optimize pipeline performance and manage costs within GCP services.
- Work closely with data analysts, data architects, and product teams to gather and understand data requirements.
- Manage and monitor BigQuery datasets, tables, and partitioning strategies.
- Implement error handling, resiliency, and observability mechanisms across pipeline components.
- Collaborate with DevOps teams to enable automated delivery (CI/CD) for data pipeline components.

Required Skills:
- 5+ years of hands-on experience in Data Engineering or Software Engineering.
- Proficiency in Python and SQL.
- Good understanding of Java (for reading or modifying codebases).
- Experience building ETL pipelines with Apache Beam and Google Cloud Dataflow.
- Hands-on experience with Apache Kafka for stream processing.
- Solid understanding of BigQuery and data modeling on GCP.
- Experience with GCP services (Cloud Storage, Pub/Sub, Cloud Composer, etc.).

Good to Have:
- Experience building reusable ETL libraries or framework components.
- Knowledge of data governance, data quality checks, and pipeline observability.
- Familiarity with Apache Airflow or Cloud Composer for orchestration.
- Exposure to CI/CD practices in a cloud-native environment (Docker, Terraform, etc.).

Tech stack: Python, SQL, Java, GCP (BigQuery, Pub/Sub, Cloud Storage, Cloud Composer, Dataflow), Apache Beam, Apache Kafka, Apache Airflow, CI/CD (Docker, Terraform)
Posted 5 days ago
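Again for context, a minimal Apache Beam pipeline of the kind this listing describes, in Python: a streaming read from Pub/Sub, a JSON parse step, and a write to BigQuery. The project, subscription, table, and schema names are placeholders; on GCP the same code runs on Dataflow by passing the DataflowRunner pipeline option.

```python
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.io.gcp.pubsub import ReadFromPubSub
from apache_beam.io.gcp.bigquery import WriteToBigQuery, BigQueryDisposition

def run():
    # streaming=True because Pub/Sub is an unbounded source
    options = PipelineOptions(streaming=True)
    with beam.Pipeline(options=options) as p:
        (
            p
            | "ReadEvents" >> ReadFromPubSub(
                subscription="projects/my-project/subscriptions/events-sub")  # placeholder
            | "ParseJson" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
            | "WriteToBigQuery" >> WriteToBigQuery(
                table="my-project:analytics.events",                   # placeholder table
                schema="event_id:STRING,user_id:STRING,ts:TIMESTAMP",  # placeholder schema
                create_disposition=BigQueryDisposition.CREATE_IF_NEEDED,
                write_disposition=BigQueryDisposition.WRITE_APPEND,
            )
        )

if __name__ == "__main__":
    run()
```

A Kafka source plugs into the same shape via Beam's cross-language ReadFromKafka transform, and the "reusable framework" part of the role usually means factoring steps like the parse stage into shared PTransforms.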