Mumbai, Maharashtra, India
Not disclosed
On-site
Contractual
Dear Professionals, find below the JD for Informatica IDMC to PySpark migration. NOTE: Do not apply if you do not have experience with PySpark.

Job Summary:
We are looking for a skilled Data Engineer with strong experience in Informatica IDMC and PySpark to lead the migration of ETL pipelines from Informatica IDMC to scalable, Spark-based solutions. The ideal candidate will have hands-on experience in cloud environments such as AWS or Azure and a strong understanding of data warehousing, transformation, and orchestration.

Key Responsibilities:
• Analyze existing ETL workflows built in Informatica IDMC and design equivalent pipelines in PySpark.
• Migrate and re-engineer data integration pipelines from Informatica Cloud to PySpark on distributed platforms.
• Optimize PySpark code for performance and cost efficiency in cloud data platforms (Databricks/EMR/HDInsight).
• Collaborate with data architects and business stakeholders to validate transformed data and maintain data quality.
• Ensure ETL processes are reliable, maintainable, and well-documented.
• Use CI/CD pipelines for deployment and version control of PySpark workflows.
• Work with large-scale structured and unstructured datasets from multiple sources.
• Automate data quality and monitoring mechanisms.
• Follow data governance and compliance policies.

Required Skills:
• 5+ years of hands-on experience in Informatica Cloud (IDMC) and ETL development.
• Strong hands-on expertise in PySpark and Apache Spark for building scalable data processing pipelines.
• Good understanding of data lakes, data warehouses, and big data ecosystems.
• Experience working in cloud environments (AWS/Azure/GCP); experience with Databricks is a plus.
• Proficiency in SQL and performance tuning of queries.
• Knowledge of version control (Git) and CI/CD practices.
• Experience with job orchestration tools like Airflow, Control-M, or Azure Data Factory.
Mumbai, Maharashtra, India
Not disclosed
On-site
Contractual
Job Summary:
We are seeking a skilled Data Engineer with strong experience in Informatica IDMC and PySpark to lead the migration of ETL pipelines from Informatica IDMC to scalable, Spark-based solutions. The ideal candidate will possess hands-on expertise in cloud environments like AWS or Azure, a solid understanding of data warehousing, transformation, and orchestration, and familiarity with modern ETL migration tools such as BladeBridge Converter, EZConvertETL by Wavicle, Qlik Replicate for Azure Databricks, and others.

Key Responsibilities:
• Analyze existing ETL workflows built in Informatica IDMC and design equivalent, scalable pipelines in PySpark.
• Migrate and re-engineer data integration pipelines from Informatica Cloud to PySpark on distributed data platforms.
• Utilize advanced ETL migration and modernization tools such as BladeBridge Converter, EZConvertETL (Wavicle Data Solutions), Qlik Replicate for Azure Databricks, Travinto Technologies ETL solutions, LeapLogic by Impetus, Next Pathway’s Shift, Wavicle ETL Converter, and Propel ETL by Xoriant to accelerate and streamline migration efforts.
• Optimize PySpark code for performance and cost efficiency in cloud data platforms such as Databricks, EMR, or HDInsight.
• Collaborate with data architects and business stakeholders to validate transformed data and maintain high data quality.
• Ensure all ETL processes are reliable, maintainable, and well-documented.
• Leverage CI/CD pipelines for deployment and version control of PySpark workflows.
• Work with large-scale structured and unstructured datasets from multiple sources.
• Automate data quality checks and monitoring mechanisms.
• Follow and enforce data governance and compliance policies.

Required Skills:
• 5+ years of hands-on experience in Informatica Cloud (IDMC) and enterprise-grade ETL development.
• Strong hands-on expertise in PySpark and Apache Spark for building high-performance data pipelines.
• Deep understanding of data lakes, data warehouses, and modern big data ecosystems.
• Experience in cloud environments such as AWS, Azure, or GCP; proficiency with Databricks is a strong advantage.
• Excellent skills in SQL and query performance tuning.
• Familiarity with Git, CI/CD tools, and modern software development practices.
• Experience with job orchestration tools like Apache Airflow, Control-M, or Azure Data Factory.
• Exposure to industry-leading ETL conversion platforms such as BladeBridge Converter, EZConvertETL, Qlik Replicate, LeapLogic, Next Pathway’s Shift, and Propel ETL is highly desirable.
• Insurance domain experience is preferable.
India
Not disclosed
On-site
Part Time
Dear Professional, do not apply if you do not have experience with Azure Fabric.

Key Responsibilities:
• Design and Implement Data Pipelines: Develop robust and scalable data pipelines leveraging Azure Fabric, Azure Data Factory, Azure Databricks, and Synapse Analytics.
• Medallion Architecture: Implement data solutions using the Medallion Architecture (Bronze, Silver, Gold data layers) to ensure data quality, consistency, and performance.
• Data Ingestion and Transformation: Design and implement data ingestion processes for structured and unstructured data from various sources into Azure Data Lake.

Required Skills:
• Proficiency in Azure services: OneLake, Azure Data Lake, Azure Data Factory, Azure Databricks, Synapse Analytics.
• Strong experience with Medallion Architecture and its implementation.
• Expertise in SQL, Python, and PySpark for data processing and analysis is a must.
• Knowledge of data modeling, ETL/ELT processes, and data warehousing concepts.
Mumbai, Maharashtra, India
Not disclosed
On-site
Full Time
Job Title: Project Manager Lead
Experience Level: 15+ Years (Senior / Lead)
Location: Ghatkopar (Onsite)
Job Type: Full-Time
Department: IT / Cloud Infrastructure

Job Summary: We are seeking an experienced and proactive Project Manager Lead to oversee and drive the successful execution of cloud-related projects, including large-scale migrations to AWS and Azure. The ideal candidate will demonstrate strong leadership capabilities, experience in team management, and deep knowledge of cloud infrastructure and migration strategies.

Skills and Qualifications:
• Project management and leadership skills for managing projects and the teams involved with them.
• Knowledge of various project management methodologies (e.g., Agile/Scrum).
• Understanding of basic cloud computing concepts.
• Knowledge of Microsoft Azure and experience working with Azure DevOps.
• Proven leadership skills.
• Cost and risk management skills.
• Excellent communication, interpersonal, and negotiation skills.
• Ability to make important decisions under pressure.

Key Responsibilities:
• Lead end-to-end project delivery for cloud initiatives, including planning, execution, monitoring, and closure.
• Manage cross-functional teams and ensure alignment with project goals, timelines, and budget.
• Collaborate with technical architects and engineers on designing cloud-based solutions and migration strategies.
• Oversee migration activities from on-premise or hybrid environments to AWS and/or Azure.
• Ensure compliance with cloud governance, security, and best practices.
• Track project progress using agile or hybrid methodologies and provide regular updates to stakeholders.
• Identify and mitigate project risks, issues, and dependencies.
• Promote a culture of continuous improvement and accountability within the team.

Email: SauravS@cloud9infosystems.co.in
Contact: 9892325101