
2394 Informatica Jobs - Page 27

JobPe aggregates these listings for easy access; applications are submitted directly on the original job portal.

3.0 - 4.0 years

10 - 15 Lacs

Bengaluru

Work from Office

Source: Naukri

The Opportunity
Nutanix is a global leader in cloud software and a pioneer in hyper-converged infrastructure solutions, making clouds invisible and freeing customers to focus on their business outcomes. Organizations worldwide use Nutanix software as a single platform to manage any app at any location across their hybrid multi-cloud environments.

About the Team
At Nutanix, the Data Science team is a dynamic and diverse group of 50 talented professionals spread across our offices in India (Bangalore and Pune) and San Jose. We pride ourselves on fostering a collaborative and supportive environment where innovation thrives. Our team is deeply committed to leveraging data in a results-oriented manner, ensuring our solutions remain customer-centric. We believe in transparency and trust, which allows for open communication and a fluid exchange of ideas. Being agile and adaptable, we embrace diverse perspectives to drive creativity and efficient problem-solving. You will report to the Sr. Manager, Data Engineering, who champions a culture of growth and mentorship within the team. Our work setup is hybrid, with a requirement to be in the office 2-3 days a week, giving you the opportunity to connect with colleagues while also enjoying the comforts of remote work. The role has no travel requirements, allowing you to focus on your projects and collaboration.

Your Role
At least 3-4 years of experience in MDM development and implementation. Should have completed at least 2 full life-cycle MDM/DG projects using tools such as Informatica. Strong SQL skills and experience in Informatica MDM/DG. Must have hands-on experience across the phases of a typical DQM solution: data profiling, data integration, validation, cleansing, standardization, matching, consolidation, etc. (a small illustrative sketch follows this listing). Should have an understanding of and experience with software development best practices. Excellent business communication, consulting, and quality-process skills. Understand and assess source-to-target mapping documents and provide recommendations where needed. Manage enhancements using Informatica MDM, Oracle, and SQL Server coding skills. Experience working independently, efficiently, and effectively under tight timelines and delivering results by critical deadlines. Strong analytical and problem-solving skills.

What You Will Bring
At least 7+ years of experience in configuring and designing Informatica MDM versions 10+. 7+ years of relevant data management consulting or industry experience (multiple master data domains, data modeling, data quality, and governance). 7+ years of in-depth experience with Informatica MDM multi-domain edition and/or C360. Bachelor's Degree or 10+ years of equivalent professional experience. Ability to configure complex UI for Informatica MDM using the Provisioning tool or C360, including hierarchies. Able to develop complex MDM services and user exits using Java. Deep understanding of MDM upstream and downstream integration. Experience with the Pub/Sub model and/or other integration patterns. Knowledgeable in Informatica PowerCenter/Data Integration and Informatica Data Quality. Experience in ActiveVOS workflow management/Application Integration is a must. Strong knowledge of SQL with Postgres, Oracle, or SQL Server, with the ability to write complex queries and develop functions and stored procedures. Knowledge of data sources in the Account and Contact domains. Excellent troubleshooting skills.

How we work
This role operates in a hybrid capacity, blending the benefits of remote work with the advantages of in-person collaboration. For most roles, that will mean coming into an office a minimum of 3 days per week; however, certain roles and/or teams may require more frequent in-office presence. Additional team-specific guidance and norms will be provided by your manager.

Nutanix is an Equal Employment Opportunity and (in the U.S.) an Affirmative Action employer. Qualified applicants are considered for employment opportunities without regard to race, color, religion, sex, sexual orientation, gender identity or expression, national origin, age, marital status, protected veteran status, disability status or any other category protected by applicable law. We hire and promote individuals solely on the basis of qualifications for the job to be filled. We strive to foster an inclusive working environment that enables all our Nutants to be themselves and to do great work in a safe and welcoming environment, free of unlawful discrimination, intimidation or harassment. As part of this commitment, we will ensure that persons with disabilities are provided reasonable accommodations. If you need a reasonable accommodation, please let us know by contacting [email protected].
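The DQM phases named in this listing (profiling, standardization, matching, consolidation) can be illustrated with a minimal, hedged sketch. This is not Nutanix's or Informatica's implementation; it is a generic Python example using only the standard library, with hypothetical field names and an illustrative match threshold, showing the kind of standardize-then-fuzzy-match step an MDM match rule automates.

```python
from difflib import SequenceMatcher

def standardize(rec):
    """Tiny cleansing/standardization step: trim, lowercase, collapse spaces."""
    return {k: " ".join(str(v).strip().lower().split()) for k, v in rec.items()}

def match_score(a, b):
    """Fuzzy match on name plus exact match on city, a stand-in for an MDM match rule."""
    name = SequenceMatcher(None, a["name"], b["name"]).ratio()
    city = 1.0 if a["city"] == b["city"] else 0.0
    return 0.8 * name + 0.2 * city

# Hypothetical source records coming from two systems.
crm = standardize({"name": "Acme Corp.", "city": "Bengaluru "})
erp = standardize({"name": "ACME Corporation", "city": "bengaluru"})

if match_score(crm, erp) >= 0.7:   # threshold is illustrative only
    golden = {**erp, **crm}        # naive consolidation: prefer CRM values on conflict
    print("Consolidated golden record:", golden)
```

In a real MDM tool the match rules, trust scores, and survivorship logic are configured rather than hand-coded; the sketch only shows the underlying idea.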

Posted 1 week ago

Apply

3.0 - 5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: Linkedin

The HiLabs Story
HiLabs is a leading provider of AI-powered solutions to clean dirty data, unlocking its hidden potential for healthcare transformation. HiLabs is committed to transforming the healthcare industry through innovation, collaboration, and a relentless focus on improving patient outcomes.

HiLabs Team
Multidisciplinary industry leaders, healthcare domain experts, and AI/ML and data science experts, hailing from the world's best universities, business schools, and engineering institutes, including Harvard, Yale, Carnegie Mellon, Duke, Georgia Tech, Indian Institute of Management (IIM), and Indian Institute of Technology (IIT). Be a part of a team that harnesses advanced AI, ML, and big data technologies to develop a cutting-edge healthcare technology platform delivering innovative business solutions.

Job Title: Data Engineer I/II
Job Location: Pune, Maharashtra, India

Job summary: We are a leading Software as a Service (SaaS) company that specializes in the transformation of data in the US healthcare industry through cutting-edge Artificial Intelligence (AI) solutions. We are looking for Software Developers who continually strive to advance engineering excellence and technology innovation. The mission is to power the next generation of digital products and services through innovation, collaboration, and transparency. You will be a technology leader and doer who enjoys working in a dynamic, fast-paced environment.

Responsibilities
Design, develop, and maintain robust and scalable ETL/ELT pipelines to ingest and transform large datasets from various sources (see the pipeline sketch after this listing). Optimize and manage databases (SQL/NoSQL) to ensure efficient data storage, retrieval, and manipulation for both structured and unstructured data. Collaborate with data scientists, analysts, and engineers to integrate data from disparate sources and ensure smooth data flow between systems. Implement and maintain data validation and monitoring processes to ensure data accuracy, consistency, and availability. Automate repetitive data engineering tasks and optimize data workflows for performance and scalability. Work closely with cross-functional teams to understand their data needs and provide solutions that help scale operations. Ensure proper documentation of data engineering processes, workflows, and infrastructure for easy maintenance and scalability.

Desired Profile
Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. 3-5 years of hands-on experience as a Data Engineer or in a related data-driven role. Strong experience with ETL tools like Apache Airflow, Talend, or Informatica. Expertise in SQL and NoSQL databases (e.g., MySQL, PostgreSQL, MongoDB, Cassandra). Strong proficiency in Python, Scala, or Java for data manipulation and pipeline development. Experience with cloud-based platforms (AWS, Google Cloud, Azure) and their data services (e.g., S3, Redshift, BigQuery). Familiarity with big data processing frameworks such as Hadoop, Spark, or Flink. Experience in data warehousing concepts and building data models (e.g., Snowflake, Redshift). Understanding of data governance, data security best practices, and data privacy regulations (e.g., GDPR, HIPAA). Familiarity with version control systems like Git.

HiLabs is an equal opportunity employer (EOE). No job applicant or employee shall receive less favorable treatment or be disadvantaged because of their gender, marital or family status, color, race, ethnic origin, religion, disability, or age; nor be subject to less favorable treatment or be disadvantaged on any other basis prohibited by applicable law. HiLabs is proud to be an equal opportunity workplace dedicated to pursuing and hiring a diverse and inclusive workforce to support individual growth and superior business results. Thank you for reviewing this opportunity with HiLabs! If this position appears to be a good fit for your skillset, we welcome your application.

HiLabs Total Rewards
Competitive salary, accelerated incentive policies, H1B sponsorship, and a comprehensive benefits package that includes ESOPs, financial contribution for your ongoing professional and personal development, medical coverage for you and your loved ones, 401k, PTOs, a collaborative working environment, smart mentorship, and highly qualified, multidisciplinary, incredibly talented professionals from highly renowned and accredited medical schools, business schools, and engineering institutes.

CCPA disclosure notice - https://www.hilabs.com/privacy
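Since the profile above calls for ETL/ELT pipelines and tools like Apache Airflow, here is a minimal, hedged sketch of a daily extract-transform-load DAG. It is a generic illustration, not this employer's pipeline: it assumes Airflow 2.4+ with the TaskFlow API, and the DAG, task, and field names are hypothetical.

```python
from datetime import datetime
from airflow.decorators import dag, task

@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False, tags=["example"])
def claims_etl():
    @task
    def extract() -> list[dict]:
        # A real pipeline would pull from an API, file drop, or source database.
        return [{"claim_id": 1, "amount": "125.50"}, {"claim_id": 2, "amount": "80.00"}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        # Cast amounts to numbers; more realistic steps would cleanse and validate.
        return [{**r, "amount": float(r["amount"])} for r in rows]

    @task
    def load(rows: list[dict]) -> None:
        # Placeholder for a warehouse write (e.g., COPY into Redshift or BigQuery).
        print(f"Loaded {len(rows)} rows")

    load(transform(extract()))

claims_etl()
```

The same extract/transform/load shape carries over to Talend or Informatica; only the orchestration and tooling differ.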

Posted 1 week ago

Apply

3.0 - 5.0 years

3 - 7 Lacs

Bengaluru

Work from Office

Source: Naukri

Role Purpose
The purpose of this role is to design, test, and maintain software programs for operating systems or applications that need to be deployed at a client end, and to ensure they meet 100% quality assurance parameters.

Do
1. Be instrumental in understanding the requirements and design of the product/software. Develop software solutions by studying information needs, systems flow, data usage, and work processes. Investigate problem areas across the software development life cycle. Facilitate root cause analysis of system issues and problem statements. Identify ideas to improve system performance and availability. Analyze client requirements and convert them into feasible designs. Collaborate with functional teams or systems analysts who carry out the detailed investigation into software requirements. Confer with project managers to obtain information on software capabilities.

2. Perform coding and ensure optimal software/module development. Determine operational feasibility by evaluating analysis, problem definition, requirements, software development, and proposed software. Develop and automate processes for software validation by setting up and designing test cases/scenarios/usage cases and executing them. Modify software to fix errors, adapt it to new hardware, improve its performance, or upgrade interfaces. Analyze information to recommend and plan the installation of new systems or modifications of existing systems. Ensure that code is error-free, with no bugs or test failures. Prepare reports on programming project specifications, activities, and status. Ensure all the codes are raised as per the norm defined for the project/program/account, with clear descriptions and replication patterns. Compile timely, comprehensive, and accurate documentation and reports as requested. Coordinate with the team on daily project status and progress, and document it. Provide feedback on usability and serviceability, trace the results to quality risk, and report them to the concerned stakeholders.

3. Status reporting and customer focus on an ongoing basis with respect to the project and its execution. Capture all requirements and clarifications from the client for better-quality work. Take feedback on a regular basis to ensure smooth and on-time delivery. Participate in continuing education and training to remain current on best practices, learn new programming languages, and better assist other team members. Consult with engineering staff to evaluate software-hardware interfaces and develop specifications and performance requirements. Document and demonstrate solutions by developing documentation, flowcharts, layouts, diagrams, charts, code comments, and clear code. Document all necessary details and reports formally, for proper understanding of the software from client proposal to implementation. Ensure good quality of interaction with the customer with respect to e-mail content, fault report tracking, voice calls, business etiquette, etc. Respond to customer requests in a timely manner, with no instances of complaints either internally or externally.

Deliver
No. | Performance Parameter | Measure
1. | Continuous Integration, Deployment & Monitoring of Software | 100% error-free onboarding & implementation, throughput %, adherence to the schedule/release plan
2. | Quality & CSAT | On-time delivery, manage software, troubleshoot queries, customer experience, completion of assigned certifications for skill upgradation
3. | MIS & Reporting | 100% on-time MIS & report generation

Mandatory Skills: Snowflake.
Experience: 3-5 Years.
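The mandatory skill called out for this role is Snowflake. As a hedged illustration only (not part of the posting), the sketch below shows a basic query through the snowflake-connector-python package; the account, credentials, warehouse, and table names are placeholders.

```python
import snowflake.connector  # pip install snowflake-connector-python

# All connection values below are placeholders, not real credentials.
conn = snowflake.connector.connect(
    account="my_org-my_account",
    user="ETL_USER",
    password="********",
    warehouse="COMPUTE_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)

try:
    cur = conn.cursor()
    # Simple aggregation over a hypothetical table.
    cur.execute("SELECT region, COUNT(*) FROM orders GROUP BY region")
    for region, cnt in cur.fetchall():
        print(region, cnt)
finally:
    conn.close()
```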

Posted 1 week ago

Apply

6.0 years

0 Lacs

Kolkata, West Bengal, India

Remote

Source: Linkedin

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

The opportunity
We are seeking a highly skilled and motivated Senior DataOps Engineer with strong expertise in the Azure data ecosystem. You will play a crucial role in managing and optimizing data workflows across Azure platforms such as Azure Data Factory, Data Lake, Databricks, and Synapse. Your primary focus will be on building, maintaining, and monitoring data pipelines, ensuring high data quality, and supporting critical data operations. You'll also support visualization, automation, and CI/CD processes to streamline data delivery and reporting.

Your Key Responsibilities
Data Pipeline Management: Build, monitor, and optimize data pipelines using Azure Data Factory (ADF), Databricks, and Azure Synapse for efficient data ingestion, transformation, and storage.
ETL Operations: Design and maintain robust ETL processes for batch and real-time data processing across cloud and on-premise sources.
Data Lake Management: Organize and manage structured and unstructured data in Azure Data Lake, ensuring performance and security best practices.
Data Quality & Validation: Perform data profiling, validation, and transformation using SQL, PySpark, and Python to ensure data integrity (illustrated in the sketch after this listing).
Monitoring & Troubleshooting: Use logging and monitoring tools to troubleshoot failures in pipelines and address data latency or quality issues.
Reporting & Visualization: Work with Power BI or Tableau teams to support dashboard development, ensuring the availability of clean and reliable data.
DevOps & CI/CD: Support data deployment pipelines using Azure DevOps, Git, and CI/CD practices for version control and automation.
Tool Integration: Collaborate with cross-functional teams to integrate Informatica CDI or similar ETL tools with Azure components for seamless data flow.
Collaboration & Documentation: Partner with data analysts, engineers, and business stakeholders, while maintaining SOPs and technical documentation for operational efficiency.

Skills and attributes for success
Strong hands-on experience in Azure Data Factory, Azure Data Lake, Azure Synapse, and Databricks. Solid understanding of ETL/ELT design and implementation principles. Strong SQL and PySpark skills for data transformation and validation. Exposure to Python for automation and scripting. Familiarity with DevOps concepts, CI/CD workflows, and source control systems (Azure DevOps preferred). Experience in working with Power BI or Tableau for data visualization and reporting support. Strong problem-solving skills, attention to detail, and commitment to data quality. Excellent communication and documentation skills to interface with technical and business teams. Strong knowledge of asset management business operations, especially in data domains like securities, holdings, benchmarks, and pricing.

To qualify for the role, you must have
4–6 years of experience in DataOps or Data Engineering roles. Proven expertise in managing and troubleshooting data workflows within the Azure ecosystem. Experience working with Informatica CDI or similar data integration tools. Scripting and automation experience in Python/PySpark. Ability to support data pipelines in a rotational on-call or production support environment. Comfortable working in a remote/hybrid and cross-functional team setup.

Technologies and Tools
Must haves:
Azure Databricks: experience in data transformation and processing using notebooks and Spark.
Azure Data Lake: experience working with hierarchical data storage in Data Lake.
Azure Synapse: familiarity with distributed data querying and data warehousing.
Azure Data Factory: hands-on experience in orchestrating and monitoring data pipelines.
ETL process understanding: knowledge of data extraction, transformation, and loading workflows, including data cleansing, mapping, and integration techniques.
Good to have:
Power BI or Tableau for reporting support. Monitoring/logging using Azure Monitor or Log Analytics. Azure DevOps and Git for CI/CD and version control. Python and/or PySpark for scripting and data handling. Informatica Cloud Data Integration (CDI) or similar ETL tools. Shell scripting or command-line data handling. SQL (across distributed and relational databases).

What We Look For
Enthusiastic learners with a passion for data ops and practices. Problem solvers with a proactive approach to troubleshooting and optimization. Team players who can collaborate effectively in a remote or hybrid work environment. Detail-oriented professionals with strong documentation skills.

What we offer
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.
Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next.
Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way.
Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs.
Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
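The data quality and validation responsibility above (profiling and validation with PySpark) can be illustrated with a small, hedged sketch. It is a generic example rather than EY's tooling: the file path and column names are hypothetical, and it assumes a local PySpark installation, whereas in ADF/Databricks the input would typically be an ADLS path.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-profile").getOrCreate()

# Hypothetical input file; column names are placeholders.
df = spark.read.option("header", True).csv("/tmp/holdings.csv")

# Basic profiling: row count, null counts per column, duplicate-key count.
total = df.count()
null_counts = df.select(
    [F.sum(F.col(c).isNull().cast("int")).alias(c) for c in df.columns]
)
dupes = total - df.dropDuplicates(["holding_id"]).count()

print(f"rows={total}, duplicate holding_id rows={dupes}")
null_counts.show()

# A simple validation rule: fail this step if any quantity is negative.
bad = df.filter(F.col("quantity").cast("double") < 0).count()
assert bad == 0, f"{bad} rows have negative quantity"
```

In practice these checks would be wired into the pipeline's monitoring so a failed rule stops downstream loads rather than just printing.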

Posted 1 week ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: Linkedin

We are looking for an experienced and versatile technical business analyst who will act as the primary liaison between business stakeholders and technology teams, developing user stories, identifying system impacts, and recommending solutions that meet customer needs and deliver ROI on business strategies. The Technical BA demonstrates an informed knowledge of complex system architectures and information flows, preferably in Order Management, Financial Systems, and User Entitlements. The Technical BA will be part of the D&B Technology team, with primary responsibility for user story elaboration, developing basic screen prototypes, and developing acceptance criteria for the business. Strong documentation skills are mandatory, as one of the primary responsibilities is maintaining the Confluence wiki with key business decisions, illustrations of the business and technical solutions, and API documentation. Provides technical business analysis support on specified projects to ensure user stories accurately reflect requirements, and communicates business impacts of requested functionality to both business and technical teams. Creates techno-functional user stories that will be used to gain business approval, ensuring that these deliver the business requirements while complying with architectural, data governance, and security standards. Plans and manages the requirements work for multiple stakeholders, who may include both technical and business representatives. Hosts meetings with stakeholders and meetings between business and technology related to requirements. Works collaboratively, enhances best practices in the world of BA, and guides fresh budding aspirants. Has a good understanding of stakeholder management and business requirements management, and gives guidance on high-level time and effort estimations. Very good verbal and non-verbal communication skills, with good presentation skills. Leads or participates in multiple projects by completing and supporting project documentation, managing project scope, adjusting schedules when necessary, determining daily priorities, ensuring efficient and on-time delivery of project tasks and milestones, following proper escalation paths, and managing customer and supplier relationships.

Key Requirements
8+ years of experience as a BA. Must have worked on complex financial systems, order management, or pricing systems in a regulated industry. Experience in Oracle, SQL, HTML, Python, JIRA. Exposure to AWS and Spring Boot would be an added advantage. Exposure to Power BI, Jira, Tableau, Informatica, XML Spy will be an added advantage. Must have dealt with system complexity and be keen on simplifying systems and requirements. Must have experience working in Scrum/Agile; Agile Practitioner preferred. Must have experience with JIRA-based requirements lifecycle management. Proven experience in creating and managing requirements in the form of user stories. Ability to make and influence decisions and implement changes, and understands decision-making principles and their direct effect on project ROI. Able to absorb and evaluate information from a variety of sources and apply this in identifying all system impacts in order to help find the most cost-effective solution.

Posted 1 week ago

Apply

1.0 - 3.0 years

3 - 6 Lacs

Hyderabad

Work from Office

Source: Naukri

What you will do
In this vital role you will be responsible for designing, developing, and maintaining software applications and solutions that meet business needs, and for ensuring the availability and performance of critical systems and applications. This role is for a technical functional lead/developer supporting the Veeva ClinOps vault (suite of applications). The role involves working closely with product managers, designers, and other engineers to create and maintain high-quality, scalable software solutions, automating operations, monitoring system health, and responding to incidents to minimize downtime.

Roles & Responsibilities:
Participate in technical discussions related to the Veeva vault within the Clinical Trial Management, Monitoring, and Engagement (CTMME) product team. Drive development/maintenance activities per the release calendar by working with various members of the product team and business partners. Conduct user acceptance testing with the customer, including coordination of all feedback, resolution of issues, and acceptance of the study. Support the requirements gathering and specification creation process for development/maintenance work. Communicate potential risks and contingency plans with project management to ensure process compliance with all regulatory and Amgen procedural requirements. Participate in and contribute to process, product, or standard methodology initiatives, and support developers and testers during the project lifecycle. Define, author, and present various architecture footprints, i.e. Business, Logical, Integration, Security, Infrastructure, etc. Create and maintain documentation on software architecture, design, deployment, disaster recovery, and operations. Develop and implement unit tests, integration tests, and other testing strategies to ensure the quality of the software, following the IS change control and GxP Validation process. Identify and resolve technical challenges, bugs, and maintenance requests effectively. Work closely with multi-functional teams, including product management, design, and QA, to deliver high-quality software on time.

Basic Qualifications:
Master's degree and 1 to 3 years of experience in Computer Science, (or) Bachelor's degree and 3 to 5 years of experience in Computer Science, (or) Diploma and 7 to 9 years of experience in Computer Science.

Must-Have Skills:
Proficiency in Veeva vault configuration/customization. Proficiency in programming languages such as Python and JavaScript (preferred) or other programming languages. Strong understanding of software development methodologies, including Agile and Scrum. Experience with version control systems like Git. Solid understanding of, and proficiency in, writing SQL queries. Working knowledge of clinical trial processes, specifically software validation. Good problem-solving skills - identifying and fixing bugs, adapting to changes. Excellent communication skills - explaining design decisions, collaborating with teams.

Good-to-Have Skills:
Familiarity with relational databases (such as MySQL, SQL Server, PostgreSQL, etc.). Outstanding written and verbal communication skills, and ability to explain technical concepts to non-technical clients. Sharp learning agility, problem solving, and analytical thinking. Experience implementing GxP projects. Extensive expertise in SDLC, including requirements, design, testing, data analysis, and change control. Experience with API integrations such as MuleSoft. Experience with ETL tools (Informatica, Databricks).

Posted 1 week ago

Apply

3.0 - 8.0 years

11 - 21 Lacs

Pune

Work from Office

Source: Naukri

Hiring for Denodo Admin with 3+ years of experience and the below skills:
Must Have:
- Denodo administration: logical data models, views & caching
- ETL pipelines (Informatica/Talend) for EDW/data lakes, performance issues
- SQL, Informatica, Talend, Big Data, Hive
Required Candidate profile
- Design, develop & maintain ETL pipelines using Informatica PowerCenter or Talend to extract, transform, and load data into EDW/data lakes and Hive
- Optimize & troubleshoot complex SQL queries
- Immediate joiner is a plus
- Work from office is a must

Posted 1 week ago

Apply

4.0 - 9.0 years

9 - 18 Lacs

Pune, Gurugram

Work from Office

Source: Naukri

This opening covers two Data Engineer profiles: the first specializes in traditional ETL with SAS DI and Big Data (Hadoop, Hive); the second is more versatile, skilled in modern data engineering with Python, MongoDB, and real-time processing.

Posted 1 week ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: Linkedin

Senior Data Engineer

Position Summary
The Senior Data Engineer leads complex data engineering projects, designing data architectures that align with business requirements. This role focuses on optimizing data workflows, managing data pipelines, and ensuring the smooth operation of data systems.

Minimum Qualifications
8 years of overall IT experience, with a minimum of 5 years of work experience in the tech skills below.

Tech Skill
Strong experience in Python scripting and PySpark for data processing. Proficiency in SQL, dealing with big data over Informatica ETL. Proven experience in data quality and data optimization of a data lake in Iceberg format, with a strong understanding of the architecture. Experience in AWS Glue jobs (see the sketch after this listing). Experience in the AWS cloud platform and its data services (S3, Redshift, Lambda, EMR, Airflow, Postgres, SNS, EventBridge). Expertise in BASH shell scripting. Strong understanding of healthcare data systems and experience leading data engineering teams. Experience in Agile environments. Excellent problem-solving skills and attention to detail. Effective communication and collaboration skills.

Responsibilities
Leads development of data pipelines and architectures that handle large-scale data sets. Designs, constructs, and tests data architecture aligned with business requirements. Provides technical leadership for data projects, ensuring best practices and high-quality data solutions. Collaborates with product, finance, and other business units to ensure data pipelines meet business requirements. Works with DBT (Data Build Tool) for transforming raw data into actionable insights. Oversees development of data solutions that enable predictive and prescriptive analytics. Ensures the technical quality of solutions, managing data as it moves across environments. Aligns data architecture to the Healthfirst solution architecture.
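As a hedged illustration of the AWS Glue and PySpark skills listed above (not this employer's actual pipeline), the sketch below shows the standard Glue job skeleton reading a cataloged table and writing Parquet to S3. The database, table, column, and bucket names are placeholders.

```python
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a hypothetical cataloged source table.
claims = glue_context.create_dynamic_frame.from_catalog(
    database="healthcare_raw", table_name="claims"
).toDF()

# Light transformation: keep valid rows and stamp a load date.
cleaned = claims.filter(F.col("claim_amount") > 0).withColumn(
    "load_date", F.current_date()
)

# Write Parquet to a placeholder S3 location, partitioned by load date.
cleaned.write.mode("overwrite").partitionBy("load_date").parquet(
    "s3://example-bucket/curated/claims/"
)

job.commit()
```

Writing to an Iceberg table instead of plain Parquet would follow the same flow, with the Glue catalog configured as the Iceberg catalog.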

Posted 1 week ago

Apply

8.0 - 10.0 years

2 - 5 Lacs

Hyderābād

On-site

Source: GlassDoor

At DuPont, our purpose is to empower the world with essential innovations to thrive. We work on things that matter, whether it's providing clean water to more than a billion people on the planet, producing materials that are essential in everyday technology devices from smartphones to electric vehicles, or protecting workers around the world. Discover the many reasons the world's most talented people are choosing to work at DuPont: Why Join Us | DuPont Careers

Skill Set/Experience
Total 8-10 years of experience in the areas of BW/HANA, with at least 2 years as a Design Lead/Solution Architect. Strong BW/HANA technical expertise and experience in the implementation of at least 2 large-scale BI implementations. Proven track record in the design of BW/HANA back-end architecture per defined business requirements, and in-depth knowledge of SAP BW and HANA modelling, from extraction and transformation to modelling and data integration; hands-on experience required. Understanding of new capabilities with BW on HANA and BW/4HANA would be preferred. Experience with design/development of solutions using Business Objects, SAP Analytics Cloud, Tableau, or Power BI. Understanding of middleware data services tools like Business Objects Data Services, Informatica, Azure Data Factory, SAP PO, etc. Deep understanding of ABAP backed by at least 1-2 years of prior experience in design/development. Understanding of at least one of the SAP functional modules (such as PP, FI, CO, MM, SD, WM, PM, PS) is required. Exposure to BW integration with BPC, SAP APO/SCM, Informatica/BODS would be preferred. Knowledge of SCRUM or other agile project methodologies. Strong communication and interpersonal skills; team player.

Key Responsibilities
Responsible for design, development, unit testing, documentation, migration, data loads, and deployment/transition of solutions in the areas of BW/HANA as per defined business requirements and project timelines/milestones. Translate business requirements into functional/technical design specifications consistent with best practices and the delivery of sustainable solutions. Work closely with the BI Architecture team and understand architecture/design principles and guidelines for design and build. Review design specifications and development against architecture/design guidelines and standards. Work closely with the BI COE team to understand requirements, data flow diagrams (high-level design), and functional designs to build BW solutions. Actively participate in different development and test cycles by providing support for defect resolution and cutover/go-live support activities. Practice and advance DuPont core values and local processes/guidelines.

Join our Talent Community to stay connected with us!

On May 22, 2024, we announced a plan to separate our Electronics and Water businesses in a tax-free manner to our shareholders. On January 15, 2025, we announced that we are targeting November 1, 2025, for the completion of the intended separation of the Electronics business (the "Intended Electronics Separation")*. We also announced that we would retain the Water business. We are committed to ensuring a smooth and successful separation process for the future Electronics business. We look forward to welcoming new talent interested in contributing to the continued success and growth of our evolving organization.

(1) The separation transactions are subject to satisfaction of customary conditions, including final approval by DuPont's Board of Directors, receipt of tax opinion from counsel, the filing and effectiveness of Form 10 registration statements with the U.S. Securities and Exchange Commission, applicable regulatory approvals, and satisfactory completion of financing. For further discussion of risks, uncertainties and assumptions that could impact the achievement, expected timing and intended benefits of the separation transactions, see DuPont's announcement.

DuPont is an equal opportunity employer. Qualified applicants will be considered without regard to race, color, religion, creed, sex, sexual orientation, gender identity, marital status, national origin, age, veteran status, disability or any other protected class. If you need a reasonable accommodation to search or apply for a position, please visit our Accessibility Page for contact information. DuPont offers a comprehensive pay and benefits package. To learn more, visit the Compensation and Benefits page.

Posted 1 week ago

Apply

10.0 years

4 - 8 Lacs

Hyderābād

On-site

Source: GlassDoor

Title - Data & Analytics Consultant - OAC/OAS + ODI
Experience - 10 to 13 years
Role - Permanent

Responsibilities:
An individual contributor who has worked with ERP/EBS systems as sources, with sound knowledge of dimensional modeling, data warehousing, and implementation and extensions of Oracle Business Intelligence Applications. Experience in designing and developing data pipelines from a variety of source systems into a data warehouse or lakehouse using ODI. Experience in Informatica PowerCenter or any other ETL/ELT technologies would be a plus. Hands-on experience with semantic modeling / metadata (RPD) modeling, and with developing, customizing, maintaining, and supporting complex analyses, data visualizations, and BI Publisher reports in Oracle Analytics Cloud or Oracle Analytics Server as required by business users. Should have good experience in SQL. Should have good communication skills.

Posted 1 week ago

Apply

15.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: Linkedin

Overview
Lead the design and development of scalable ETL processes using Informatica, ensuring data integrity and quality throughout the data lifecycle. Collaborate with cross-functional teams to gather requirements and translate them into technical specifications for ETL solutions. Optimize existing ETL workflows to improve performance and reduce processing times, leveraging best practices in data engineering. Implement data governance and data quality measures to ensure compliance with organizational standards. Mentor junior developers, providing guidance on ETL best practices and career development. Monitor and troubleshoot ETL processes, conducting root cause analysis on issues and implementing effective solutions.

Responsibilities
Be responsible for the design, configuration, build, and testing of new interface requirements/change requests initiated by sector projects, leveraging the Informatica PowerCenter, DIH, or IICS platform. Support production Informatica jobs and processes to ensure a high level of stability through operational excellence. Troubleshoot production issues and communicate incident status via established templates to various levels of the organization. Demonstrate a continuous improvement mindset and proactively identify opportunities for improvement. Ensure all quality and compliance standards are met. Stay up to date with technology and industry trends by proactively participating in training, self-study, conferences, webcasts, user groups, reading, or other relevant means.

Qualifications
Bachelor's degree in Computer Science or a relevant discipline with an IT emphasis is required. 15+ years of experience focused on integration project deliveries, upgrades, rollouts, and support. 10+ years of integration and ETL development experience with Informatica PowerCenter. 10+ years of SQL experience. Informatica PowerCenter and Informatica Cloud expertise. Strong SQL skills and experience with database management systems. Familiarity with data visualization tools, such as Tableau or Power BI, is a plus. Experience with Python or Java for data processing is advantageous. Experience with REST APIs, JSON, and XML is preferred. Experience with Informatica Intelligent Cloud Services (IICS) is preferred. Experience with Informatica Data Integration Hub is preferred. Knowledge of and experience with the following tools and technologies are nice to have: NoSQL DB, Azure DevOps, Git. Good written and oral communication skills. Ability to multi-task and prioritize effectively in a fast-paced environment. Excellent interpersonal skills, with the ability to establish working relationships with individuals at varying levels within the organization. Demonstrates attention to detail, organization, and timeliness in order to meet customer service expectations. Demonstrates effective problem-solving skills. Creative and able to operate with a start-up mindset. Willingness to learn. Partners across teams to solve complex problems.

Posted 1 week ago

Apply

0 years

0 Lacs

Bengaluru East, Karnataka, India

On-site

Source: Linkedin

Primary skills: Technology -> Data Management - Data Integration Administration -> Informatica Administration

A day in the life of an Infoscion: As part of the Infosys consulting team, your primary role would be to get to the heart of customer issues, diagnose problem areas, design innovative solutions, and facilitate deployment resulting in client delight. You will develop a proposal by owning parts of the proposal document and by giving inputs on solution design based on your areas of expertise. You will plan the activities of configuration, configure the product as per the design, conduct conference room pilots, and assist in resolving any queries related to requirements and solution design. You will conduct solution/product demonstrations and POC/Proof of Technology workshops, and prepare effort estimates which suit the customer's budgetary requirements and are in line with the organization's financial guidelines. You will actively lead small projects and contribute to unit-level and organizational initiatives with the objective of providing high-quality, value-adding solutions to customers. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

Posted 1 week ago

Apply

0 years

4 - 7 Lacs

Gurgaon

On-site

Source: GlassDoor

A Software Engineer is curious and self-driven to build and maintain multi-terabyte operational marketing databases and integrate them with cloud technologies. Our databases typically house millions of individuals and billions of transactions and interact with various web services and cloud-based platforms. Once hired, the qualified candidate will be immersed in the development and maintenance of multiple database solutions to meet global client business objectives.

Job Description:
Key responsibilities: 2-4 years of experience. Will work under close supervision of Tech Leads/Lead Devs. Should be able to understand detailed designs with minimal explanation. Individual contributor, able to perform mid- to complex-level tasks with minimal supervision; senior team members will peer-review assigned tasks. Build and configure our Marketing Database/Data environment platform by integrating feeds as per the detailed design/transformation logic. Good knowledge of Unix scripting and/or Python. Must have strong knowledge of SQL. Good understanding of ETL tools (Talend, Informatica, Datastage, Ab Initio, etc.) as well as database skills (Oracle, SQL Server, Teradata, Vertica, Redshift, Snowflake, BigQuery, Azure DW, etc.). Fair understanding of relational databases, stored procs, etc. Experience in cloud computing (one or more of AWS, Azure, GCP) will be a plus. Less supervision and guidance from senior resources will be required over time.

Location: DGS India - Gurugram - Golf View Corporate Towers
Brand: Merkle
Time Type: Full time
Contract Type: Permanent

Posted 1 week ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: Linkedin

Requirements
Description and Requirements
This position is responsible for the design, implementation, and support of MetLife's enterprise data management and integration systems, the underlying infrastructure, and integrations with other enterprise systems and applications using AIX, Linux, or Microsoft technologies.

Job Responsibilities
Provide technical expertise in the planning, engineering, design, implementation, and support of data management and integration system infrastructures and technologies, including the systems' operational procedures and processes. Partner with the Capacity Management, Production Management, and Application Development teams and the business to ensure customer expectations are maintained and exceeded. Participate in the evaluation and recommendation of new products and technologies, and maintain knowledge of emerging technologies for application to the enterprise. Identify and resolve complex data management and integration system issues (Tier 3 support) utilizing product knowledge and structured troubleshooting tools and techniques. Support Disaster Recovery implementation and testing as required. Experience in designing and developing automation/scripting (shell, Perl, PowerShell, Python, Java, etc.). Good decision-making skills. Take ownership of the deliverables from the entire team. Strong collaboration with leadership groups. Learn new technologies based on demand. Coach other team members and bring them up to speed. Track project status working with team members and report to leadership. Participate in cross-departmental efforts. Lead initiatives within the community of practice. Willing to work in rotational shifts. Good communication skills, with the ability to communicate clearly and effectively.

Knowledge, Skills and Abilities
Education: Bachelor's degree in Computer Science, Information Systems, or a related field.
Experience: 3+ years of total experience, with at least 2+ years of experience in Informatica application implementation and support of data management and integration system infrastructures and technologies, including the systems' operational procedures and processes. Participation in the evaluation and recommendation of new products and technologies, and maintaining knowledge of emerging technologies for application to the enterprise. Good understanding of Disaster Recovery implementation and testing. Designing and developing automation/scripting (shell, Perl, PowerShell, Python, Java, etc.).
Tools and technologies: Informatica PowerCenter, Informatica PWX, Informatica DQ, Informatica DEI, Informatica B2B/DX, Informatica MFT, Informatica MDM, Informatica ILM, Informatica Cloud (IDMC/IICS), Ansible (automation), operating system knowledge (Linux/Windows/AIX), Azure DevOps pipeline knowledge, Python and/or PowerShell, Agile SAFe for Teams, enterprise scheduling knowledge (Maestro), troubleshooting, communications, CP4D, Datastage, mainframe z/OS knowledge, OpenShift, Elastic, and experience creating and working on ServiceNow tasks/tickets.

About MetLife
Recognized on Fortune magazine's list of the 2025 "World's Most Admired Companies" and Fortune World's 25 Best Workplaces™ for 2024, MetLife, through its subsidiaries and affiliates, is one of the world's leading financial services companies, providing insurance, annuities, employee benefits and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East.

Our purpose is simple - to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we're inspired to transform the next century in financial services. At MetLife, it's #AllTogetherPossible. Join us!

Posted 1 week ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: Linkedin

The Company
Gentrack provides leading utilities across the world with innovative cleantech solutions. The global pace of change is accelerating, and utilities need to rebuild for a more sustainable future. Working with some of the world's biggest energy and water companies, as well as innovative challenger brands, we are helping companies reshape what it means to be a utilities business. We are driven by our passion to create positive impact. That is why utilities rely on us to drive innovation, deliver great customer experiences, and secure profits. Together, we are renewing utilities.

Our Values and Culture
Colleagues at Gentrack are one big team, working together to drive efficiency in two of the planet's most precious resources, energy and water. We are passionate people who want to drive change through technology and believe in making a difference. Our values drive decisions and how we interact and communicate with customers, partners, shareholders, and each other. Our core values are:
- Respect for the planet
- Respect for our customers
- Respect for each other
Gentrackers are a group of smart thinkers and dedicated doers. We are a diverse team who love our work and the people we work with, and who collaborate and inspire each other to deliver creative solutions that make our customers successful. We are a team that shares knowledge, asks questions, raises the bar, and are expert advisers. At Gentrack we care about doing honest business that is good for not just customers but families, communities, and ultimately the planet. Gentrackers continuously look for a better way and drive quality into everything they do. This is a truly exciting time to join Gentrack, with a clear growth strategy and a world-class leadership team working to fulfil Gentrack's global aspirations by having the most talented people, an inspiring culture, and a technology-first, people-centric business.

The Opportunity
We are seeking an experienced Data Migration Manager to lead our global data migration practice and drive successful delivery of complex data migrations in our customers' transformation projects. The Data Migration Manager will be responsible for overseeing the strategic planning, execution, and management of data migration initiatives across our global software implementation projects. This critical role ensures seamless data transition, quality, and integrity for our clients. In line with our value of 'Respect for the Planet', we encourage all our people to provide leadership through participating in our sustainability initiatives, including activities run by the regional GSTF, encouraging our people to engage in and drive sustainable behaviours, and supporting organisational change and global sustainability programs.

The Specifics
Lead and manage a global team of data migration experts, providing strategic direction and professional development. Develop and maintain comprehensive data migration methodologies and best practices applicable to utility-sector software implementations. Design and implement robust data migration strategies that address the unique challenges of utility industry data ecosystems. Collaborate with solution architects, project managers, and client teams to define detailed data migration requirements and approaches. Provide guidance and advice across the entire data migration lifecycle, including:
- Source data assessment and profiling
- Data cleansing and transformation strategies
- Migration planning and risk mitigation
- Execution of migration scripts and processes
- Validation, reconciliation and quality assurance of migrated data (see the sketch after this listing)
Ensure compliance with data protection regulations and industry-specific standards across different global regions. Develop and maintain migration toolsets and accelerators to improve efficiency and repeatability of migration processes. Create comprehensive documentation, migration playbooks, and standard operating procedures. Conduct regular performance reviews of migration projects and implement continuous improvement initiatives. Manage and mitigate risks associated with complex data migration projects. Provide technical leadership and mentorship to the data migration team.

What we're looking for (you don't need to be a guru at all, we're looking forward to coaching and collaborating with you):
Proficiency in data migration tools (e.g., Informatica, Talend, Microsoft SSIS). Experience with customer information system (CIS) and/or billing system migrations. Knowledge of data governance frameworks. Understanding of utility industry data models and integration challenges. Familiarity with cloud migration strategies, including Salesforce. Strategic thinking and innovative problem-solving. Strong leadership and team management capabilities. Excellent written and verbal communication skills across technical and non-technical audiences. Ability to oversee a number of complex, globally dispersed projects. Cultural sensitivity and adaptability.

What we offer in return:
Personal growth – in leadership, commercial acumen and technical excellence. To be part of a global, winning, high-growth organization – with a career path to match. A vibrant culture full of people passionate about transformation and making a difference – with a one-team, collaborative ethos. A competitive reward package that truly awards our top talent. A chance to make a true impact on society and the planet.

Gentrack wants to work with the best people, no matter their background. So, if you are passionate about learning new things and keen to join the mission, you will fit right in.
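The validation and reconciliation step in the migration lifecycle above can be illustrated with a small, hedged sketch. It is a generic Python example, not Gentrack's tooling: it compares row counts and order-independent row fingerprints between hypothetical source and target extracts to flag discrepancies after a migration run.

```python
import csv
import hashlib
from collections import Counter

def fingerprints(path: str) -> tuple[int, Counter]:
    """Row count plus a multiset of per-row digests (order-independent)."""
    digests = Counter()
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    for row in rows:
        payload = "|".join(f"{k}={row[k]}" for k in sorted(row))
        digests[hashlib.sha256(payload.encode()).hexdigest()] += 1
    return len(rows), digests

# Hypothetical extracts from the legacy system and the new CIS/billing platform.
src_count, src_digests = fingerprints("source_customers.csv")
tgt_count, tgt_digests = fingerprints("target_customers.csv")

print(f"source rows={src_count}, target rows={tgt_count}")
missing = src_digests - tgt_digests   # rows in source that never arrived in target
extra = tgt_digests - src_digests     # rows in target with no source counterpart
print(f"unmatched source rows: {sum(missing.values())}")
print(f"unexpected target rows: {sum(extra.values())}")
```

In a real migration this kind of check would run per entity and per wave, feeding a reconciliation report rather than console output.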

Posted 1 week ago

Apply

7.0 - 12.0 years

22 - 37 Lacs

Mumbai, Navi Mumbai, Mumbai (All Areas)

Hybrid

Source: Naukri

Hiring: Data Engineering Senior Software Engineer / Tech Lead / Senior Tech Lead
Hybrid (3 days from office) | Shift: 2 PM to 11 PM IST | Experience: 5 to 12+ years (based on role & grade)

Open Grades/Roles:
Senior Software Engineer: 5–8 years
Tech Lead: 7–10 years
Senior Tech Lead: 10–12+ years

Job Description – Data Engineering Team

Core Responsibilities (Common to All Levels):
Design, build and optimize ETL/ELT pipelines using tools like Pentaho, Talend, or similar. Work on traditional databases (PostgreSQL, MSSQL, Oracle) and MPP/modern systems (Vertica, Redshift, BigQuery, MongoDB). Collaborate cross-functionally with BI, Finance, Sales, and Marketing teams to define data needs. Participate in data modeling (ER/DW/star schema), data quality checks, and data integration. Implement solutions involving messaging systems (Kafka), REST APIs, and scheduler tools (Airflow, Autosys, Control-M). Ensure code versioning and documentation standards are followed (Git/Bitbucket).

Additional Responsibilities by Grade
Senior Software Engineer (5–8 yrs): Focus on hands-on development of ETL pipelines, data models, and data inventory. Assist in architecture discussions and POCs. Good to have: Tableau/Cognos, Python/Perl scripting, GCP exposure.
Tech Lead (7–10 yrs): Lead mid-sized data projects and small teams. Decide on ETL strategy (push down/push up) and performance tuning. Strong working knowledge of orchestration tools, resource management, and agile delivery.
Senior Tech Lead (10–12+ yrs): Drive data architecture, infrastructure decisions, and internal framework enhancements. Oversee large-scale data ingestion, profiling, and reconciliation across systems. Mentor junior leads and own stakeholder delivery end-to-end. Advantageous: experience with AdTech/Marketing data and the Hadoop ecosystem (Hive, Spark, Sqoop).

Must-Have Skills (All Levels):
ETL Tools: Pentaho / Talend / SSIS / Informatica
Databases: PostgreSQL, Oracle, MSSQL, Vertica / Redshift / BigQuery
Orchestration: Airflow / Autosys / Control-M / JAMS
Modeling: Dimensional Modeling, ER Diagrams
Scripting: Python or Perl (preferred)
Agile environment, Git-based version control, strong communication and documentation

Posted 1 week ago

Apply

6.0 years

0 Lacs

Kanayannur, Kerala, India

Remote

Source: Linkedin

Senior DataOps Engineer (EY GDS). The role description, responsibilities, qualifications, and technology stack for this listing are identical to the EY Senior DataOps Engineer posting above; this listing differs only in location.

Posted 1 week ago

Apply

6.0 years

0 Lacs

Trivandrum, Kerala, India

Remote

Linkedin logo

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

The opportunity
We are seeking a highly skilled and motivated Senior DataOps Engineer with strong expertise in the Azure data ecosystem. You will play a crucial role in managing and optimizing data workflows across Azure platforms such as Azure Data Factory, Data Lake, Databricks, and Synapse. Your primary focus will be on building, maintaining, and monitoring data pipelines, ensuring high data quality, and supporting critical data operations. You'll also support visualization, automation, and CI/CD processes to streamline data delivery and reporting.

Your Key Responsibilities
Data Pipeline Management: Build, monitor, and optimize data pipelines using Azure Data Factory (ADF), Databricks, and Azure Synapse for efficient data ingestion, transformation, and storage.
ETL Operations: Design and maintain robust ETL processes for batch and real-time data processing across cloud and on-premise sources.
Data Lake Management: Organize and manage structured and unstructured data in Azure Data Lake, ensuring performance and security best practices.
Data Quality & Validation: Perform data profiling, validation, and transformation using SQL, PySpark, and Python to ensure data integrity.
Monitoring & Troubleshooting: Use logging and monitoring tools to troubleshoot failures in pipelines and address data latency or quality issues.
Reporting & Visualization: Work with Power BI or Tableau teams to support dashboard development, ensuring the availability of clean and reliable data.
DevOps & CI/CD: Support data deployment pipelines using Azure DevOps, Git, and CI/CD practices for version control and automation.
Tool Integration: Collaborate with cross-functional teams to integrate Informatica CDI or similar ETL tools with Azure components for seamless data flow.
Collaboration & Documentation: Partner with data analysts, engineers, and business stakeholders, while maintaining SOPs and technical documentation for operational efficiency.

Skills and attributes for success
Strong hands-on experience in Azure Data Factory, Azure Data Lake, Azure Synapse, and Databricks
Solid understanding of ETL/ELT design and implementation principles
Strong SQL and PySpark skills for data transformation and validation
Exposure to Python for automation and scripting
Familiarity with DevOps concepts, CI/CD workflows, and source control systems (Azure DevOps preferred)
Experience working with Power BI or Tableau for data visualization and reporting support
Strong problem-solving skills, attention to detail, and commitment to data quality
Excellent communication and documentation skills to interface with technical and business teams
Strong knowledge of asset management business operations, especially in data domains like securities, holdings, benchmarks, and pricing.
To qualify for the role, you must have
4–6 years of experience in DataOps or Data Engineering roles
Proven expertise in managing and troubleshooting data workflows within the Azure ecosystem
Experience working with Informatica CDI or similar data integration tools
Scripting and automation experience in Python/PySpark
Ability to support data pipelines in a rotational on-call or production support environment
Comfortable working in a remote/hybrid and cross-functional team setup

Technologies and Tools
Must haves
Azure Databricks: Experience in data transformation and processing using notebooks and Spark.
Azure Data Lake: Experience working with hierarchical data storage in Data Lake.
Azure Synapse: Familiarity with distributed data querying and data warehousing.
Azure Data Factory: Hands-on experience in orchestrating and monitoring data pipelines.
ETL Process Understanding: Knowledge of data extraction, transformation, and loading workflows, including data cleansing, mapping, and integration techniques.
Good to have
Power BI or Tableau for reporting support
Monitoring/logging using Azure Monitor or Log Analytics
Azure DevOps and Git for CI/CD and version control
Python and/or PySpark for scripting and data handling
Informatica Cloud Data Integration (CDI) or similar ETL tools
Shell scripting or command-line data handling
SQL (across distributed and relational databases)

What We Look For
Enthusiastic learners with a passion for data ops and practices.
Problem solvers with a proactive approach to troubleshooting and optimization.
Team players who can collaborate effectively in a remote or hybrid work environment.
Detail-oriented professionals with strong documentation skills.

What we offer
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.
Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next.
Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way.
Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs.
Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
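To make the batch ETL responsibility above concrete, here is a minimal, hedged PySpark sketch of a watermark-based incremental load, one common pattern for this kind of pipeline. The in-memory frames, column names and the commented ADLS paths are assumptions for illustration only.

  from datetime import datetime
  from pyspark.sql import SparkSession, functions as F

  spark = SparkSession.builder.appName("incremental-load").getOrCreate()

  # In practice both frames would be read from ADLS/Synapse tables, e.g.
  # spark.read.parquet("abfss://raw@<lake>.dfs.core.windows.net/orders");
  # small synthetic frames are used so the sketch runs standalone.
  target = spark.createDataFrame(
      [(1, "in", datetime(2024, 1, 1))], ["order_id", "country", "updated_at"])
  source = spark.createDataFrame(
      [(1, "in", datetime(2024, 1, 1)), (2, " us ", datetime(2024, 2, 1))],
      ["order_id", "country", "updated_at"])

  # High-water mark: only pull source rows newer than what the target already holds.
  watermark = target.agg(F.max("updated_at")).first()[0]
  incoming = source.filter(F.col("updated_at") > F.lit(watermark)) if watermark else source

  # Light standardisation before appending to the curated zone.
  cleaned = (incoming
             .withColumn("country", F.upper(F.trim(F.col("country"))))
             .dropDuplicates(["order_id"]))

  cleaned.show()
  # In the real pipeline the final step would be something like:
  # cleaned.write.mode("append").parquet("abfss://curated@<lake>.dfs.core.windows.net/orders")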

Posted 1 week ago

Apply

5.0 years

0 Lacs

Andhra Pradesh, India

On-site

Linkedin logo

Should have 5 years of experience in Ab Initio, Informatica ETL, and AWS. Develop, test, and maintain ETL workflows using Informatica or Ab Initio under senior guidance. Monitor and manage batch jobs using Autosys or Control-M. Write SQL queries for data extraction and transformation. Collaborate with QA, BA, and senior team members for issue resolution. Document code, job schedules, and workflows. Assist in basic performance monitoring using Dynatrace.
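As an illustration of the "write SQL queries for data extraction and transformation" part of this role, here is a small hedged Python sketch. The standard-library sqlite3 module stands in for the actual Oracle/AWS-hosted source so the example runs standalone; the table, columns and sample rows are invented.

  import sqlite3

  # sqlite3 is only a stand-in; the table and data are invented for illustration.
  conn = sqlite3.connect(":memory:")
  conn.execute("CREATE TABLE transactions (customer_id INTEGER, country TEXT, amount REAL, txn_date TEXT)")
  conn.executemany(
      "INSERT INTO transactions VALUES (?, ?, ?, ?)",
      [(1, " in ", 100.0, "2024-02-01"), (1, "IN", 50.0, "2024-03-05"), (2, "us", 75.5, "2023-12-31")],
  )

  # Extraction plus light transformation pushed into SQL.
  extract_sql = """
      SELECT customer_id,
             TRIM(UPPER(country)) AS country,
             SUM(amount)          AS total_amount
      FROM   transactions
      WHERE  txn_date >= ?
      GROUP  BY customer_id, TRIM(UPPER(country))
  """
  for customer_id, country, total_amount in conn.execute(extract_sql, ("2024-01-01",)):
      print(customer_id, country, total_amount)

  conn.close()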

Posted 1 week ago

Apply

7.0 years

0 Lacs

Jaipur, Rajasthan, India

On-site

Linkedin logo

About Hakkoda
Hakkoda, an IBM Company, is a modern data consultancy that empowers data-driven organizations to realize the full value of the Snowflake Data Cloud. We provide consulting and managed services in data architecture, data engineering, analytics and data science. We are renowned for bringing our clients deep expertise, being easy to work with, and being an amazing place to work! We are looking for curious and creative individuals who want to be part of a fast-paced, dynamic environment, where everyone’s input and efforts are valued. We hire outstanding individuals and give them the opportunity to thrive in a collaborative atmosphere that values learning, growth, and hard work. Our team is distributed across North America, Latin America, India and Europe. If you have the desire to be a part of an exciting, challenging, and rapidly-growing Snowflake consulting services company, and if you are passionate about making a difference in this world, we would love to talk to you!

We are looking for people experienced with data architecture and with the design and development of database mapping and migration processes. This person will have direct experience optimizing new and existing databases and data pipelines and implementing advanced capabilities while ensuring data integrity and security. Ideal candidates will have strong communication skills and the ability to guide clients and project team members, acting as a key point of contact for direction and expertise.

Key Responsibilities
Design, develop, and optimize database architectures and data pipelines.
Ensure data integrity and security across all databases and data pipelines.
Lead and guide clients and project team members, acting as a key point of contact for direction and expertise.
Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
Manage and support large-scale technology programs, ensuring they meet business objectives and compliance requirements.
Develop and implement migration, DevOps, and ETL/ELT ingestion pipelines using tools such as DataStage, Informatica, and Matillion.
Utilize project management skills to work effectively within Scrum and Agile development methods.
Create and leverage metrics to develop actionable and measurable insights, influencing business decisions.

Qualifications
7+ years of proven work experience in data warehousing, business intelligence (BI), and analytics.
3+ years of experience as a Data Architect.
3+ years of experience working on cloud platforms (AWS, Azure, GCP).
Bachelor's Degree (BA/BS) in Computer Science, Information Systems, Mathematics, MIS, or a related field.
Strong understanding of migration processes, DevOps, and ETL/ELT ingestion pipelines.
Proficient in tools such as DataStage, Informatica, and Matillion.
Excellent project management skills and experience with Scrum and Agile development methods.
Ability to develop actionable and measurable insights and create metrics to influence business decisions.
Previous consulting experience managing and supporting large-scale technology programs.

Nice to Have
6-12 months of experience working with Snowflake.
Understanding of Snowflake design patterns and migration architectures.
Knowledge of Snowflake roles, user security, and capabilities like Snowpipe.
Proficiency in SQL scripting.
Cloud experience on AWS (Azure and GCP are also beneficial).
Python scripting skills.
Benefits
Health Insurance
Paid leave
Technical training and certifications
Robust learning and development opportunities
Incentives
Toastmasters
Food Program
Fitness Program
Referral Bonus Program

Hakkoda is committed to fostering diversity, equity, and inclusion within our teams. A diverse workforce enhances our ability to serve clients and enriches our culture. We encourage candidates of all races, genders, sexual orientations, abilities, and experiences to apply, creating a workplace where everyone can succeed and thrive.

Ready to take your career to the next level? Apply today and join a team that’s shaping the future!

Hakkoda is an IBM subsidiary which has been acquired by IBM and will be integrated into the IBM organization. Hakkoda will be the hiring entity. By proceeding with this application, you understand that Hakkoda will share your personal information with other IBM subsidiaries involved in your recruitment process, wherever these are located. More information on how IBM protects your personal information, including the safeguards in case of cross-border data transfer, is available here.
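Since the posting above highlights Snowflake design patterns, Snowpipe and SQL scripting, here is a minimal hedged sketch of an ELT-style MERGE pushed down to Snowflake via the snowflake-connector-python package. Every connection value and object name is a placeholder, not a real account, schema or table.

  import snowflake.connector  # pip install snowflake-connector-python

  # Placeholder connection settings; replace with real account details before use.
  conn = snowflake.connector.connect(
      account="xy12345.eu-west-1",
      user="ETL_USER",
      password="***",
      warehouse="TRANSFORM_WH",
      database="ANALYTICS",
      schema="STAGING",
  )

  # ELT pattern: the transformation runs inside Snowflake, not on the client.
  merge_sql = """
      MERGE INTO analytics.core.customers AS tgt
      USING analytics.staging.customers_delta AS src
          ON tgt.customer_id = src.customer_id
      WHEN MATCHED THEN UPDATE SET tgt.email = src.email, tgt.updated_at = src.updated_at
      WHEN NOT MATCHED THEN INSERT (customer_id, email, updated_at)
          VALUES (src.customer_id, src.email, src.updated_at)
  """

  cur = conn.cursor()
  try:
      cur.execute(merge_sql)
      print("Rows affected:", cur.rowcount)
  finally:
      cur.close()
      conn.close()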

Posted 1 week ago

Apply

6.0 years

0 Lacs

Noida, Uttar Pradesh, India

Remote

Linkedin logo

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

The opportunity
We are seeking a highly skilled and motivated Senior DataOps Engineer with strong expertise in the Azure data ecosystem. You will play a crucial role in managing and optimizing data workflows across Azure platforms such as Azure Data Factory, Data Lake, Databricks, and Synapse. Your primary focus will be on building, maintaining, and monitoring data pipelines, ensuring high data quality, and supporting critical data operations. You'll also support visualization, automation, and CI/CD processes to streamline data delivery and reporting.

Your Key Responsibilities
Data Pipeline Management: Build, monitor, and optimize data pipelines using Azure Data Factory (ADF), Databricks, and Azure Synapse for efficient data ingestion, transformation, and storage.
ETL Operations: Design and maintain robust ETL processes for batch and real-time data processing across cloud and on-premise sources.
Data Lake Management: Organize and manage structured and unstructured data in Azure Data Lake, ensuring performance and security best practices.
Data Quality & Validation: Perform data profiling, validation, and transformation using SQL, PySpark, and Python to ensure data integrity.
Monitoring & Troubleshooting: Use logging and monitoring tools to troubleshoot failures in pipelines and address data latency or quality issues.
Reporting & Visualization: Work with Power BI or Tableau teams to support dashboard development, ensuring the availability of clean and reliable data.
DevOps & CI/CD: Support data deployment pipelines using Azure DevOps, Git, and CI/CD practices for version control and automation.
Tool Integration: Collaborate with cross-functional teams to integrate Informatica CDI or similar ETL tools with Azure components for seamless data flow.
Collaboration & Documentation: Partner with data analysts, engineers, and business stakeholders, while maintaining SOPs and technical documentation for operational efficiency.

Skills and attributes for success
Strong hands-on experience in Azure Data Factory, Azure Data Lake, Azure Synapse, and Databricks
Solid understanding of ETL/ELT design and implementation principles
Strong SQL and PySpark skills for data transformation and validation
Exposure to Python for automation and scripting
Familiarity with DevOps concepts, CI/CD workflows, and source control systems (Azure DevOps preferred)
Experience working with Power BI or Tableau for data visualization and reporting support
Strong problem-solving skills, attention to detail, and commitment to data quality
Excellent communication and documentation skills to interface with technical and business teams
Strong knowledge of asset management business operations, especially in data domains like securities, holdings, benchmarks, and pricing.
To qualify for the role, you must have
4–6 years of experience in DataOps or Data Engineering roles
Proven expertise in managing and troubleshooting data workflows within the Azure ecosystem
Experience working with Informatica CDI or similar data integration tools
Scripting and automation experience in Python/PySpark
Ability to support data pipelines in a rotational on-call or production support environment
Comfortable working in a remote/hybrid and cross-functional team setup

Technologies and Tools
Must haves
Azure Databricks: Experience in data transformation and processing using notebooks and Spark.
Azure Data Lake: Experience working with hierarchical data storage in Data Lake.
Azure Synapse: Familiarity with distributed data querying and data warehousing.
Azure Data Factory: Hands-on experience in orchestrating and monitoring data pipelines.
ETL Process Understanding: Knowledge of data extraction, transformation, and loading workflows, including data cleansing, mapping, and integration techniques.
Good to have
Power BI or Tableau for reporting support
Monitoring/logging using Azure Monitor or Log Analytics
Azure DevOps and Git for CI/CD and version control
Python and/or PySpark for scripting and data handling
Informatica Cloud Data Integration (CDI) or similar ETL tools
Shell scripting or command-line data handling
SQL (across distributed and relational databases)

What We Look For
Enthusiastic learners with a passion for data ops and practices.
Problem solvers with a proactive approach to troubleshooting and optimization.
Team players who can collaborate effectively in a remote or hybrid work environment.
Detail-oriented professionals with strong documentation skills.

What we offer
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.
Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next.
Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way.
Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs.
Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
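The monitoring and troubleshooting duties above often come down to making each pipeline step observable and retryable. Here is a small, hedged Python sketch using only the standard library; the step name, retry settings and placeholder body are invented for the example, not taken from any real EY pipeline.

  import logging
  import time
  from functools import wraps

  logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
  log = logging.getLogger("dataops")

  def retry(max_attempts=3, delay_seconds=30):
      """Retry a flaky pipeline step and log every failure for later triage."""
      def decorator(func):
          @wraps(func)
          def wrapper(*args, **kwargs):
              for attempt in range(1, max_attempts + 1):
                  try:
                      return func(*args, **kwargs)
                  except Exception:
                      # logging.exception records the full traceback for troubleshooting.
                      log.exception("Step %s failed (attempt %d/%d)", func.__name__, attempt, max_attempts)
                      if attempt == max_attempts:
                          raise
                      time.sleep(delay_seconds)
          return wrapper
      return decorator

  @retry(max_attempts=3, delay_seconds=5)
  def load_daily_extract():
      # Placeholder for the real ingestion call (ADF-triggered notebook, copy activity, etc.).
      ...

  if __name__ == "__main__":
      load_daily_extract()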

Posted 1 week ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Linkedin logo

Req ID: 328454
NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Business Intelligence Senior Specialist to join our team in Noida, Uttar Pradesh (IN-UP), India (IN).

Mandatory Skills: Oracle SQL, PL/SQL, Informatica, Linux Scripting, Tidal
Desired Skills: Python scripting, Autosys

Responsible for workload prioritization and management of resources, both on service requests and small projects; maintain and provide status to management and onshore leads.
Expert in architecting, designing, developing and delivering ETL applications using Informatica, Teradata, Oracle, Tidal and Linux scripting.
Hands-on code development, source code control, specification writing and production implementation.
Participate in requirement gathering sessions; guide the design and development by providing insights into data sources and peer reviews.
Participate in and guide data integration solutions that are needed to fill data gaps.
Debug data quality issues by analyzing the upstream sources and guide the data integration team to resolutions.
Work closely with DBAs to fix performance bottlenecks.
Participate in technology governance groups that define policies and best practices and make design decisions in ongoing projects.
Mentor junior developers on best practices and the technology stacks used to build the application.
Work closely with Operations and Teradata administration teams for code migrations and production support.
Provide resource and effort estimates for EDW ETL and extract projects.
Experience working as part of a global development team. Should be able to bring innovation and provide added value to the customer.

About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com

NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us . This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here. If you'd like more information on your EEO rights under the law, please click here. For Pay Transparency information, please click here.
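One routine way to "debug data quality issues by analyzing the upstream sources", as this role describes, is a source-to-target reconciliation. The hedged Python sketch below uses an in-memory sqlite3 database as a stand-in for the real Oracle/Teradata systems; the table names and rows are invented so the example runs standalone.

  import sqlite3

  conn = sqlite3.connect(":memory:")
  conn.executescript("""
      CREATE TABLE src_orders (order_id INTEGER, amount REAL);
      CREATE TABLE tgt_orders (order_id INTEGER, amount REAL);
      INSERT INTO src_orders VALUES (1, 100.0), (2, 250.0), (3, 75.0);
      INSERT INTO tgt_orders VALUES (1, 100.0), (2, 250.0);
  """)

  def summary(table):
      # Table names are fixed constants in this sketch, not user input.
      return conn.execute(f"SELECT COUNT(*), ROUND(SUM(amount), 2) FROM {table}").fetchone()

  src, tgt = summary("src_orders"), summary("tgt_orders")
  if src != tgt:
      print(f"MISMATCH: source(count, total)={src}, target(count, total)={tgt}")
  else:
      print("Source and target reconcile.")

  conn.close()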

Posted 1 week ago

Apply

4.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Linkedin logo

Position: Database
Location: Noida, India
www.SEW.ai

Who We Are:
SEW, with its innovative and industry-leading cloud platforms, delivers the best Digital Customer Experiences (CX) and Workforce Experiences (WX), powered by AI, ML, and IoT Analytics, to global energy, water, and gas providers. At SEW, the vision is to Engage, Empower, and Educate billions of people to save energy and water. We partner with businesses to deliver platforms that are easy to use, integrate seamlessly, and help build a strong technology foundation that allows them to become future-ready.

Searching for your dream job? We are a true global company that values building meaningful relationships and maintaining a passionate work environment while fostering innovation and creativity. At SEW, we firmly believe that each individual contributes to our success and, in return, we provide opportunities for them to learn new skills and build a rewarding professional career.

A Couple of Pointers:
• We are the fastest growing company with over 420+ clients and 1550+ employees.
• Our clientele is based in the USA, Europe, Canada, Australia, Asia Pacific, and the Middle East.
• Our platforms engage millions of global users, and we keep adding millions every month.
• We have been awarded 150+ accolades to date. Our clients are continually awarded by industry analysts for implementing our award-winning product.
• We have been featured by Forbes, the Wall Street Journal, and the LA Times for our continuous innovation and excellence in the industry.

Who are we looking for?
An ideal candidate who can demonstrate in-depth knowledge and understanding of RDBMS concepts and is experienced in writing complex queries and data integration processes in SQL/T-SQL and NoSQL. This individual will be responsible for helping with the design, development and implementation of new and existing applications.

Roles and Responsibilities:
• Review the existing database design and data management procedures and provide recommendations for improvement.
• Provide subject matter expertise in the design of database schemas and perform data modeling (logical and physical models) for product feature enhancements as well as extending analytical capabilities.
• Develop technical documentation as needed.
• Architect, develop, validate and communicate Business Intelligence (BI) solutions such as dashboards, reports, KPIs, instrumentation, and alert tools.
• Define data architecture requirements for cross-product integration within and across cloud-based platforms.
• Analyze, architect, develop, validate and support integrating data into the SEW platform from external data sources: files (XML, CSV, XLS, etc.), APIs (REST, SOAP), and RDBMS.
• Perform thorough analysis of complex data and recommend actionable strategies.
• Effectively translate data modeling and BI requirements into the design process.
• Big Data platform design, i.e. tool selection, data integration, and data preparation for predictive modeling.

Required Skills:
• Minimum of 4-6 years of experience in data modeling (including conceptual, logical and physical data models).
• 2-3 years of experience in Extraction, Transformation and Loading (ETL) work using data migration tools like Talend, Informatica, Datastage, etc.
• 4-6 years of experience as a database developer in Oracle, MS SQL or another enterprise database, with a focus on building data integration processes.
• Exposure to a NoSQL technology, preferably MongoDB.
• Experience in processing large data volumes, indicated by experience with Big Data platforms (Teradata, Netezza, Vertica or Cloudera, Hortonworks, SAP HANA, Cassandra, etc.).
• Understanding of data warehousing concepts and decision support systems.
• Ability to deal with sensitive and confidential material and adhere to worldwide data security requirements.
• Experience writing documentation for design and feature requirements.
• Experience developing data-intensive applications on cloud-based architectures and infrastructures such as AWS, Azure, etc.
• Excellent communication and collaboration skills.
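Because the posting above asks for NoSQL exposure (preferably MongoDB) alongside file-based integration, here is a small hedged sketch that loads a few CSV rows into a MongoDB collection with pymongo. The connection string, database, collection and fields are invented, and a local MongoDB instance is assumed to be running.

  import csv
  import io
  from pymongo import MongoClient, ASCENDING

  # Invented sample data; in practice this would come from customer data files.
  sample_csv = io.StringIO(
      "meter_id,reading_date,kwh\n"
      "M-100,2024-06-01,12.4\n"
      "M-100,2024-06-02,11.9\n"
      "M-200,2024-06-01,7.3\n"
  )
  rows = [
      {"meter_id": r["meter_id"], "reading_date": r["reading_date"], "kwh": float(r["kwh"])}
      for r in csv.DictReader(sample_csv)
  ]

  # Assumes MongoDB on localhost; database and collection names are placeholders.
  client = MongoClient("mongodb://localhost:27017")
  readings = client["sew_demo"]["meter_readings"]
  readings.insert_many(rows)
  readings.create_index([("meter_id", ASCENDING), ("reading_date", ASCENDING)])

  for doc in readings.find({"meter_id": "M-100"}).sort("reading_date", ASCENDING):
      print(doc["reading_date"], doc["kwh"])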

Posted 1 week ago

Apply

3.0 - 8.0 years

20 - 30 Lacs

Hyderabad, Pune

Hybrid

Naukri logo

Job Summary:
We are seeking a highly skilled Informatica MDM Developer to join our data integration and management team. The ideal candidate will have extensive experience in Informatica Master Data Management (MDM) solutions and a deep understanding of data quality, data governance, and master data modeling.

Key Responsibilities:
Design, develop, and deploy Informatica MDM solutions (including Hub, IDD, SIF, and MDM Hub configurations).
Work closely with data architects, business analysts, and stakeholders to understand master data requirements.
Configure and manage Trust, Merge, Survivorship rules, and Match/Merge logic.
Implement data quality (DQ) checks and profiling using Informatica DQ tools.
Develop batch and real-time integration using Informatica MDM SIF APIs and ETL tools (e.g., Informatica PowerCenter).
Monitor and optimize MDM performance and data processing.
Document MDM architecture, data flows, and integration touchpoints.
Troubleshoot and resolve MDM issues across environments (Dev, Test, UAT, Prod).
Support data governance and metadata management initiatives.

Required Skills:
Strong hands-on experience with Informatica MDM (10.x or later).
Proficient in match/merge rules, data stewardship, hierarchy management, and SIF APIs.
Experience with Informatica Data Quality (IDQ) is a plus.
Solid understanding of data modeling, relational databases, and SQL.
Familiarity with REST/SOAP APIs, web services, and real-time data integration.
Experience in Agile/Scrum environments.
Excellent problem-solving and communication skills.
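Informatica MDM configures match rules, trust and survivorship declaratively in the Hub rather than in hand-written code. As a purely conceptual stand-in, the short Python sketch below mimics what those Trust/Merge settings decide: group candidate records on a standardized key and let the most trusted, most recently updated source win. The sources, trust ranking and sample records are all invented.

  from collections import defaultdict
  from datetime import date

  # Assumed trust ranking; in Informatica MDM this would be trust/validation configuration.
  SOURCE_TRUST = {"CRM": 3, "ERP": 2, "WEB": 1}

  records = [
      {"source": "CRM", "email": "a@x.com", "name": "Anita Rao", "updated": date(2024, 5, 1)},
      {"source": "WEB", "email": "A@X.com", "name": "A. Rao",    "updated": date(2024, 6, 1)},
      {"source": "ERP", "email": "b@y.com", "name": "Bala Iyer", "updated": date(2024, 4, 15)},
  ]

  def match_key(rec):
      # Simplistic exact-match rule on a standardized email.
      return rec["email"].strip().lower()

  groups = defaultdict(list)
  for rec in records:
      groups[match_key(rec)].append(rec)

  golden = []
  for key, members in groups.items():
      # Survivorship: highest source trust wins; most recent update breaks ties.
      winner = max(members, key=lambda r: (SOURCE_TRUST[r["source"]], r["updated"]))
      golden.append({"email": key, "name": winner["name"],
                     "contributors": [m["source"] for m in members]})

  for rec in golden:
      print(rec)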

Posted 1 week ago

Apply

Exploring Informatica Jobs in India

The Informatica job market in India is thriving, with numerous opportunities for skilled professionals in this field. Companies across various industries are actively hiring Informatica experts to manage and optimize their data integration and data quality processes.

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Chennai
  5. Mumbai

Average Salary Range

The average salary range for Informatica professionals in India varies based on experience and expertise:

  • Entry-level: INR 3-5 lakhs per annum
  • Mid-level: INR 6-10 lakhs per annum
  • Experienced: INR 12-20 lakhs per annum

Career Path

A typical career progression in the Informatica field may include roles such as:

  • Junior Developer
  • Informatica Developer
  • Senior Developer
  • Informatica Tech Lead
  • Informatica Architect

Related Skills

In addition to Informatica expertise, professionals in this field are often expected to have skills in:

  • SQL
  • Data warehousing
  • ETL tools
  • Data modeling
  • Data analysis

Interview Questions

  • What is Informatica and why is it used? (basic)
  • Explain the difference between a connected and unconnected lookup transformation. (medium)
  • How can you improve the performance of a session in Informatica? (medium)
  • What are the various types of cache in Informatica? (medium)
  • How do you handle rejected rows in Informatica? (basic)
  • What is a reusable transformation in Informatica? (basic)
  • Explain the difference between a filter and router transformation in Informatica. (medium)
  • What is a workflow in Informatica? (basic)
  • How do you handle slowly changing dimensions in Informatica? (advanced)
  • What is a mapplet in Informatica? (medium)
  • Explain the difference between an aggregator and joiner transformation in Informatica. (medium)
  • How do you create a mapping parameter in Informatica? (basic)
  • What is a session and a workflow in Informatica? (basic)
  • What is a rank transformation in Informatica and how is it used? (medium)
  • How do you debug a mapping in Informatica? (medium)
  • Explain the difference between static and dynamic cache in Informatica. (advanced)
  • What is a sequence generator transformation in Informatica? (basic)
  • How do you handle null values in Informatica? (basic; see the illustrative sketch after this list)
  • Explain the difference between a mapping and mapplet in Informatica. (basic)
  • What are the various types of transformations in Informatica? (basic)
  • How do you implement partitioning in Informatica? (medium)
  • Explain the concept of pushdown optimization in Informatica. (advanced)
  • How do you create a session in Informatica? (basic)
  • What is a source qualifier transformation in Informatica? (basic)
  • How do you handle exceptions in Informatica? (medium)
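For the null-handling question flagged above, Informatica itself typically addresses nulls with expression-level functions (such as ISNULL and IIF in an Expression transformation) or default values on ports. As a rough, hedged analogue outside the tool, the PySpark sketch below shows three common tactics on an invented employee dataset.

  from pyspark.sql import SparkSession, functions as F

  spark = SparkSession.builder.appName("null-handling").getOrCreate()

  df = spark.createDataFrame(
      [(1, "Asha", None), (2, None, 4500.0), (3, "Ravi", 5200.0)],
      ["emp_id", "name", "salary"],
  )

  cleaned = (df
             .withColumn("name", F.coalesce(F.col("name"), F.lit("UNKNOWN")))  # default a missing string
             .fillna({"salary": 0.0})                                          # default a missing number
             .dropna(subset=["emp_id"]))                                       # drop rows missing the key

  cleaned.show()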

Closing Remark

As you prepare for Informatica job opportunities in India, make sure to enhance your skills, stay updated with the latest trends in data integration, and approach interviews with confidence. With the right knowledge and expertise, you can excel in the Informatica field and secure rewarding career opportunities. Good luck!
