Home
Jobs

2257 Informatica Jobs - Page 43

Filter Interviews
Min: 0 years
Max: 25 years
Min: ₹0
Max: ₹10000000
Set up a Job Alert
Filter
JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

3.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Source: LinkedIn

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Director

Job Description & Summary
At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities (Job Description & Summary: ETL Data Engineer)
- Minimum 3 years of professional experience in a Data and Analytics role with a global organization.
- Mandatory hands-on development experience in an ETL tool: Informatica PowerCenter or IICS.
- Should have implemented the end-to-end ETL life cycle, including preparing ETL design frameworks and execution.
- Must have rich experience building operational data stores, data marts and enterprise data warehouses.
- Must have very good SQL skills (specifically in Oracle and MySQL).
- Should be able to create and execute ETL designs and test cases, and to write complex SQL queries for testing and analysis depending on the functional/technical requirement.
- Should have worked on performance optimization, error handling, writing stored procedures, etc.
- Demonstrated ability to communicate effectively with both technical and business stakeholders.
- Should have an understanding of data modelling and data warehousing concepts.

Mandatory Skill Sets: ETL Developer
Preferred Skill Sets: ETL Developer
Years of Experience Required: 2+
Education Qualification: BE/BTech/MBA/MCA
Education (if blank, degree and/or field of study not specified)
Degrees/Fields of Study Required: Master of Business Administration, Bachelor of Engineering
Degrees/Fields of Study Preferred:
Certifications (if blank, certifications not specified)
Required Skills: ETL Development
Optional Skills: Accepting Feedback, Active Listening, Analytical Thinking, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Coaching and Feedback, Communication, Competitive Advantage, Continuous Process Improvement, Creativity, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Embracing Change, Emotional Regulation, Empathy, Inclusion {+ 24 more}
Desired Languages (if blank, desired languages not specified)
Travel Requirements:
Available for Work Visa Sponsorship?
Government Clearance Required?
Job Posting End Date:
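The listing above asks for complex SQL used for ETL testing and analysis. As a minimal, illustrative sketch (not part of the posting), the snippet below reconciles row counts and an amount sum between a staging table and a warehouse table; the table and column names are invented, and SQLite stands in for the Oracle/MySQL databases the role actually mentions.

```python
# Hypothetical ETL reconciliation check: compare row counts and a column
# aggregate between a source (staging) table and a target (warehouse) table.
import sqlite3

def reconcile(conn, source_table: str, target_table: str, amount_col: str) -> bool:
    cur = conn.cursor()
    checks = {
        "row_count": "SELECT COUNT(*) FROM {t}",
        "amount_sum": f"SELECT ROUND(SUM({amount_col}), 2) FROM {{t}}",
    }
    ok = True
    for name, template in checks.items():
        src = cur.execute(template.format(t=source_table)).fetchone()[0]
        tgt = cur.execute(template.format(t=target_table)).fetchone()[0]
        if src != tgt:
            ok = False
            print(f"MISMATCH {name}: source={src} target={tgt}")
    return ok

if __name__ == "__main__":
    # In-memory SQLite with invented sample data, purely for illustration.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE stg_orders (order_id INTEGER, amount REAL);
        CREATE TABLE dw_orders  (order_id INTEGER, amount REAL);
        INSERT INTO stg_orders VALUES (1, 10.5), (2, 20.0);
        INSERT INTO dw_orders  VALUES (1, 10.5), (2, 20.0);
    """)
    print("reconciled" if reconcile(conn, "stg_orders", "dw_orders", "amount") else "differences found")
```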

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Key Responsibilities
ETL & BI Testing:
- Manage the testing of ETL processes, data pipelines, and BI reports to ensure accuracy and reliability.
- Develop and execute test strategies, test plans, and test cases for data validation.
- Perform data reconciliation, transformation validation, and SQL-based testing to ensure data correctness.
- Validate reports and dashboards built using BI tools (Power BI, Tableau).
- Automate ETL testing where applicable using Python, Selenium, or other automation tools.
- Identify and log defects, track issues, and ensure timely resolution.
- Collaborate with business stakeholders to understand data requirements and reporting needs.
- Assist in documenting functional and non-functional requirements for data transformation and reporting.
- Support data mapping, data profiling, and understanding of business rules applied to datasets.
- Participate in requirement-gathering sessions and provide inputs on data validation needs.

Required Skills & Experience
- 6–8 years of experience in ETL, Data Warehouse, and BI testing.
- Strong experience with SQL, data validation techniques, and database testing.
- Hands-on experience with ETL tools (Informatica, Talend, SSIS, or similar).
- Proficiency in BI tools like Power BI and Tableau for report validation.
- Good knowledge of data modeling, star schema, and OLAP concepts.
- Mentor a team of ETL/BI testers and provide guidance on testing best practices.
- Coordinate with developers, BAs, and business users to ensure end-to-end data validation.
- Define QA processes, best practices, and automation strategies to improve testing efficiency.
- Experience in data reconciliation, transformation logic validation, and data pipeline testing.
- Experience in the Insurance domain is an added advantage.
- Automation skills for data and report testing (Python, Selenium, or ETL testing frameworks) are a plus.
- Experience in understanding and documenting business and data requirements.
- Ability to work with business users to gather and analyze reporting needs.
- Strong analytical and problem-solving skills.
- Excellent communication and stakeholder management abilities.
- Experience with Agile/Scrum methodologies and working in a cross-functional team.

Preferred Qualifications
- Experience with cloud-based data platforms (AWS, Azure, GCP) is a plus.
- ISTQB or equivalent certification in software testing is preferred.
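The role above mentions automating ETL testing with Python. Below is a loose sketch, not from the posting, of how such a check might look as a pytest test that validates a simple transformation rule against expected target values; the transform function and sample rows are invented for the example.

```python
# Illustrative pytest-based ETL transformation test (hypothetical rule and data).
import pytest

def transform_name(raw: str) -> str:
    """Example transformation rule under test: trim whitespace and upper-case."""
    return raw.strip().upper()

SOURCE_TO_TARGET = [
    ("  alice ", "ALICE"),
    ("Bob", "BOB"),
    ("  carol\t", "CAROL"),
]

@pytest.mark.parametrize("source_value,expected_target", SOURCE_TO_TARGET)
def test_name_transformation(source_value, expected_target):
    # Each source value should map to the expected warehouse value.
    assert transform_name(source_value) == expected_target
```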

Posted 2 weeks ago

Apply

0 years

0 Lacs

Andhra Pradesh

On-site

Source: Glassdoor

Data Tester Job Description Highlights:
- 5+ years of experience in data testing.
- ETL Testing: Validating the extraction, transformation, and loading (ETL) of data from various sources.
- Data Validation: Ensuring the accuracy, completeness, and integrity of data in databases and data warehouses.
- SQL Proficiency: Writing and executing SQL queries to fetch and analyze data.
- Data Modeling: Understanding data models, data mappings, and architectural documentation.
- Test Case Design: Creating test cases and test data, and executing test plans.
- Troubleshooting: Identifying and resolving data-related issues.
- Dashboard Testing: Validating dashboards for accuracy, functionality, and user experience.
- Collaboration: Working with developers and other stakeholders to ensure data quality and functionality.

Primary Responsibilities (Dashboard Testing Components):
- Functional Testing: Simulating user interactions and clicks to ensure dashboards are functioning correctly.
- Performance Testing: Evaluating dashboard responsiveness and load times.
- Data Quality Testing: Verifying that the data displayed on dashboards is accurate, complete, and consistent.
- Usability Testing: Assessing the ease of use and navigation of dashboards.
- Data Visualization Testing: Ensuring charts, graphs, and other visualizations are accurate and present data effectively.
- Security Testing: Verifying that dashboards are secure and protect sensitive data.

Tools and Technologies:
- SQL: Used for querying and validating data.
- Snowflake: Hands-on Snowflake experience.
- ETL Tools: Tools like Talend, Informatica, or Azure Data Factory used for data extraction, transformation, and loading.
- Data Visualization Tools: Tableau, Power BI, or other BI tools used for creating and testing dashboards.
- Testing Frameworks: Frameworks like Selenium or JUnit used for automating testing tasks.
- Cloud Platforms: AWS platforms used for data storage and processing.
- Healthcare domain knowledge is a plus.

Secondary Skills: Automation frameworks, life science domain experience, UI testing, API testing, any other ETL tools.

About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state of the art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
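The listing above lists Selenium among its testing frameworks for dashboard functional testing. As a rough, hypothetical sketch (the URL and CSS selector are placeholders, not anything from the posting), a dashboard smoke test in Selenium might look like this:

```python
# Illustrative Selenium dashboard smoke test: open a dashboard, wait for a KPI
# tile to render, and check that it is not empty. Selectors/URL are invented.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

DASHBOARD_URL = "https://example.internal/dashboards/claims-overview"  # placeholder

def check_dashboard_loads() -> None:
    driver = webdriver.Chrome()
    try:
        driver.get(DASHBOARD_URL)
        wait = WebDriverWait(driver, timeout=30)
        # Wait until a (hypothetical) KPI tile is visible, then verify it has text.
        kpi = wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, ".kpi-total-claims")))
        assert kpi.text.strip() != "", "KPI tile rendered but empty"
        print("Dashboard smoke test passed")
    finally:
        driver.quit()

if __name__ == "__main__":
    check_dashboard_loads()
```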

Posted 2 weeks ago

Apply

0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Source: LinkedIn

Role Description
Role Proficiency: This role requires proficiency in developing data pipelines, including coding and testing for ingesting, wrangling, transforming and joining data from various sources. The ideal candidate should be adept in ETL tools like Informatica, Glue, Databricks and DataProc, with strong coding skills in Python, PySpark and SQL. This position demands independence and proficiency across various data domains. Expertise in data warehousing solutions such as Snowflake, BigQuery, Lakehouse and Delta Lake is essential, including the ability to calculate processing costs and address performance issues. A solid understanding of DevOps and infrastructure needs is also required.

Outcomes
- Act creatively to develop pipelines/applications by selecting appropriate technical options, optimizing application development, maintenance and performance through design patterns and reusing proven solutions.
- Support the Project Manager in day-to-day project execution and account for the developmental activities of others.
- Interpret requirements and create optimal architecture and design solutions in accordance with specifications.
- Document and communicate milestones/stages for end-to-end delivery.
- Code using best standards, debug and test solutions to ensure best-in-class quality.
- Tune performance of code and align it with the appropriate infrastructure, understanding cost implications of licenses and infrastructure.
- Create data schemas and models effectively.
- Develop and manage data storage solutions including relational databases, NoSQL databases, Delta Lakes and data lakes.
- Validate results with user representatives, integrating the overall solution.
- Influence and enhance customer satisfaction and employee engagement within project teams.

Measures of Outcomes
- Adherence to engineering processes and standards
- Adherence to schedule / timelines
- Adherence to SLAs where applicable
- # of defects post delivery
- # of non-compliance issues
- Reduction of recurrence of known defects
- Quick turnaround of production bugs
- Completion of applicable technical/domain certifications
- Completion of all mandatory training requirements
- Efficiency improvements in data pipelines (e.g. reduced resource consumption, faster run times)
- Average time to detect, respond to and resolve pipeline failures or data issues
- Number of data security incidents or compliance breaches

Outputs Expected
Code: Develop data processing code with guidance, ensuring performance and scalability requirements are met. Define coding standards, templates and checklists. Review code for team and peers.
Documentation: Create/review templates, checklists, guidelines and standards for design/process/development. Create/review deliverable documents including design documents, architecture documents, infra costing, business requirements, source-target mappings, test cases and results.
Configure: Define and govern the configuration management plan. Ensure compliance from the team.
Test: Review/create unit test cases, scenarios and execution. Review test plans and strategies created by the testing team. Provide clarifications to the testing team.
Domain Relevance: Advise data engineers on the design and development of features and components, leveraging a deeper understanding of business needs. Learn more about the customer domain and identify opportunities to add value. Complete relevant domain certifications.
Manage Project: Support the Project Manager with project inputs. Provide inputs on project plans or sprints as needed. Manage the delivery of modules.
Manage Defects: Perform defect root cause analysis (RCA) and mitigation. Identify defect trends and implement proactive measures to improve quality.
Estimate: Create and provide input for effort and size estimation and plan resources for projects.
Manage Knowledge: Consume and contribute to project-related documents, SharePoint libraries and client universities. Review reusable documents created by the team.
Release: Execute and monitor the release process.
Design: Contribute to the creation of design (HLD, LLD, SAD)/architecture for applications, business components and data models.
Interface with Customer: Clarify requirements and provide guidance to the Development Team. Present design options to customers. Conduct product demos. Collaborate closely with customer architects to finalize designs.
Manage Team: Set FAST goals and provide feedback. Understand team members' aspirations and provide guidance and opportunities. Ensure team members are upskilled. Engage the team in projects. Proactively identify attrition risks and collaborate with BSE on retention measures.
Certifications: Obtain relevant domain and technology certifications.

Skill Examples
- Proficiency in SQL, Python or other programming languages used for data manipulation.
- Experience with ETL tools such as Apache Airflow, Talend, Informatica, AWS Glue, Dataproc and Azure ADF.
- Hands-on experience with cloud platforms like AWS, Azure or Google Cloud, particularly with data-related services (e.g. AWS Glue, BigQuery).
- Conduct tests on data pipelines and evaluate results against data quality and performance specifications.
- Experience in performance tuning.
- Experience in data warehouse design and cost improvements.
- Apply and optimize data models for efficient storage, retrieval and processing of large datasets.
- Communicate and explain design/development aspects to customers.
- Estimate time and resource requirements for developing/debugging features/components.
- Participate in RFP responses and solutioning.
- Mentor team members and guide them in relevant upskilling and certification.

Knowledge Examples
- Knowledge of various ETL services used by cloud providers, including Apache PySpark, AWS Glue, GCP DataProc/Dataflow, Azure ADF and ADLF.
- Proficient in SQL for analytics and windowing functions.
- Understanding of data schemas and models.
- Familiarity with domain-related data.
- Knowledge of data warehouse optimization techniques.
- Understanding of data security concepts.
- Awareness of patterns, frameworks and automation practices.

Skills: Scala, Python, PySpark
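The role above centres on ingesting, wrangling, transforming and joining data with PySpark. As a minimal sketch of that pattern (not from the posting; the S3 paths and column names are hypothetical), a batch pipeline might read two sources, clean them, join, and write a partitioned output:

```python
# Illustrative PySpark ingest-wrangle-join pipeline with invented paths/columns.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_enrichment").getOrCreate()

orders = (spark.read.option("header", True).csv("s3://example-bucket/raw/orders/")
          .withColumn("order_ts", F.to_timestamp("order_ts"))
          .withColumn("amount", F.col("amount").cast("double"))
          .dropna(subset=["order_id", "customer_id"]))

customers = (spark.read.parquet("s3://example-bucket/curated/customers/")
             .select("customer_id", "segment", "country"))

enriched = (orders.join(customers, on="customer_id", how="left")
            .withColumn("order_date", F.to_date("order_ts")))

# Partition the output by date for downstream warehouse loads.
(enriched.repartition("order_date")
 .write.mode("overwrite")
 .partitionBy("order_date")
 .parquet("s3://example-bucket/curated/orders_enriched/"))
```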

Posted 2 weeks ago

Apply

10.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Source: LinkedIn

Requirements / Description
This position is responsible for the design, implementation, and support of MetLife's enterprise data management and integration systems, the underlying infrastructure, and integrations with other enterprise systems and applications using AIX, Linux, or Microsoft technologies.

Job Responsibilities
- Provide technical expertise in the planning, engineering, design, implementation and support of data management and integration system infrastructures and technologies, including the systems' operational procedures and processes.
- Partner with the Capacity Management, Production Management and Application Development teams and the business to ensure customer expectations are maintained and exceeded.
- Participate in the evaluation and recommendation of new products and technologies; maintain knowledge of emerging technologies for application to the enterprise.
- Identify and resolve complex data management and integration system issues (Tier 3 support) utilizing product knowledge and structured troubleshooting tools and techniques.
- Support Disaster Recovery implementation and testing as required.
- Experience in designing and developing automation/scripting (shell, Perl, PowerShell, Python, Java, etc.).
- Good decision-making skills.
- Take ownership of the deliverables from the entire team.
- Strong collaboration with leadership groups.
- Learn new technologies based on demand.
- Coach other team members and bring them up to speed.
- Track project status working with team members and report to leadership.
- Participate in cross-departmental efforts.
- Lead initiatives within the community of practice.
- Willing to work in rotational shifts.
- Good communication skills, with the ability to communicate clearly and effectively.

Knowledge, Skills and Abilities
Education: Bachelor's degree in Computer Science, Information Systems, or a related field.
Experience:
- 10+ years of total experience and at least 7+ years of experience in Informatica application implementation and support of data management and integration system infrastructures and technologies, including the systems' operational procedures and processes.
- Participation in the evaluation and recommendation of new products and technologies; maintaining knowledge of emerging technologies for application to the enterprise.
- Good understanding of Disaster Recovery implementation and testing.
- Designing and developing automation/scripting (shell, Perl, PowerShell, Python, Java, etc.).

Technologies and skills: Informatica PowerCenter, Informatica PWX, Informatica DQ, Informatica DEI, Informatica B2B/DX, Informatica MFT, Informatica MDM, Informatica ILM, Informatica Cloud (IDMC/IICS), Ansible (automation), operating system knowledge (Linux/Windows/AIX), Azure DevOps pipeline knowledge, Python and/or PowerShell, Agile SAFe for Teams, enterprise scheduling knowledge (Maestro), troubleshooting, communications, CP4D, DataStage, mainframe z/OS knowledge, OpenShift, Elastic, and experience in creating and working on ServiceNow tasks/tickets.

About MetLife
Recognized on Fortune magazine's list of the 2025 "World's Most Admired Companies" and Fortune World's 25 Best Workplaces™ for 2024, MetLife, through its subsidiaries and affiliates, is one of the world's leading financial services companies, providing insurance, annuities, employee benefits and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East.
Our purpose is simple: to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we're inspired to transform the next century in financial services. At MetLife, it's #AllTogetherPossible. Join us!
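The posting above asks for automation and scripting experience (shell, Perl, PowerShell, Python) in support of an Informatica platform. As a loose, standard-library-only illustration (the log directory and error markers are placeholders, not MetLife's or Informatica's actual conventions), a small support script might scan session logs and summarise error counts for a ticket:

```python
# Hypothetical support-automation sketch: scan a directory of session log files
# for error markers and report a per-file count.
from pathlib import Path
from collections import Counter

LOG_DIR = Path("/opt/informatica/logs/sessions")   # placeholder path
ERROR_MARKERS = ("FATAL", "Severity: ERROR")       # illustrative markers

def summarise_errors(log_dir: Path) -> Counter:
    counts: Counter = Counter()
    for log_file in sorted(log_dir.glob("*.log")):
        for line in log_file.read_text(errors="replace").splitlines():
            if any(marker in line for marker in ERROR_MARKERS):
                counts[log_file.name] += 1
    return counts

if __name__ == "__main__":
    for name, count in summarise_errors(LOG_DIR).most_common():
        print(f"{name}: {count} error line(s)")
```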

Posted 2 weeks ago

Apply

0 years

0 Lacs

Andhra Pradesh, India

On-site

Source: LinkedIn

Data Tester Job Description Highlights:
- 5+ years of experience in data testing.
- ETL Testing: Validating the extraction, transformation, and loading (ETL) of data from various sources.
- Data Validation: Ensuring the accuracy, completeness, and integrity of data in databases and data warehouses.
- SQL Proficiency: Writing and executing SQL queries to fetch and analyze data.
- Data Modeling: Understanding data models, data mappings, and architectural documentation.
- Test Case Design: Creating test cases and test data, and executing test plans.
- Troubleshooting: Identifying and resolving data-related issues.
- Dashboard Testing: Validating dashboards for accuracy, functionality, and user experience.
- Collaboration: Working with developers and other stakeholders to ensure data quality and functionality.

Primary Responsibilities (Dashboard Testing Components):
- Functional Testing: Simulating user interactions and clicks to ensure dashboards are functioning correctly.
- Performance Testing: Evaluating dashboard responsiveness and load times.
- Data Quality Testing: Verifying that the data displayed on dashboards is accurate, complete, and consistent.
- Usability Testing: Assessing the ease of use and navigation of dashboards.
- Data Visualization Testing: Ensuring charts, graphs, and other visualizations are accurate and present data effectively.
- Security Testing: Verifying that dashboards are secure and protect sensitive data.

Tools and Technologies:
- SQL: Used for querying and validating data.
- Snowflake: Hands-on Snowflake experience.
- ETL Tools: Tools like Talend, Informatica, or Azure Data Factory used for data extraction, transformation, and loading.
- Data Visualization Tools: Tableau, Power BI, or other BI tools used for creating and testing dashboards.
- Testing Frameworks: Frameworks like Selenium or JUnit used for automating testing tasks.
- Cloud Platforms: AWS platforms used for data storage and processing.
- Healthcare domain knowledge is a plus.

Secondary Skills: Automation frameworks, life science domain experience, UI testing, API testing, any other ETL tools.

Posted 2 weeks ago

Apply

4.0 - 9.0 years

6 - 10 Lacs

Chennai

Work from Office

Source: Naukri

Job Purpose:
We are seeking an experienced Senior Data Migration Engineer for the payment system project. You will be responsible for designing, developing, and executing data migration strategies for transferring data from the temporary PostgreSQL source database to the target system, focusing on extracting, transforming, and loading (ETL) data efficiently and accurately while ensuring data integrity and security during the migration. Strong communication skills are required.

Key Responsibilities:
As a Senior Data Migration Engineer for the payment system project, you will be responsible for designing, developing, and executing data migration strategies for transferring data from the temporary PostgreSQL source database to the target system. You will focus on extracting, transforming, and loading (ETL) data efficiently and accurately while ensuring data integrity and security during the migration. You will work closely with the business and technical teams to create ETL mapping documents, design ETL jobs, and ensure that data transformation aligns with the target data model. Post-migration, you will perform detailed data validation to confirm the successful migration of records, data types, and mandatory fields. You will also lead post-migration activities such as data validation, consistency checks, and troubleshooting any discrepancies in the data migration process. Collaboration with developers, DBAs, and business stakeholders will be essential to ensure that data migration is carried out efficiently, securely, and in line with business requirements.

Qualifications:
- Bachelor's degree and 4+ years of related experience working with Informatica or Talend, Python, and PostgreSQL.
- Excellent verbal and written communication skills.
- Strong quantitative and analytical skills with accuracy and attention to detail.
- Ability to work well independently with minimal supervision and to manage multiple priorities.
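The listing above describes post-migration validation against a PostgreSQL source. As a hedged sketch (the DSNs, database names and table list are invented), a per-table record-count comparison between source and target might look like this:

```python
# Illustrative post-migration validation: compare row counts per table between
# the temporary PostgreSQL source and the target system.
import psycopg2

SOURCE_DSN = "dbname=payments_tmp host=source-db user=etl"   # placeholder
TARGET_DSN = "dbname=payments host=target-db user=etl"       # placeholder
TABLES = ["transactions", "accounts", "merchants"]           # illustrative list

def table_count(dsn: str, table: str) -> int:
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(f"SELECT COUNT(*) FROM {table}")
        return cur.fetchone()[0]

if __name__ == "__main__":
    for table in TABLES:
        src, tgt = table_count(SOURCE_DSN, table), table_count(TARGET_DSN, table)
        status = "OK" if src == tgt else "MISMATCH"
        print(f"{table}: source={src} target={tgt} [{status}]")
```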

Posted 2 weeks ago

Apply

12.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Source: LinkedIn

Work Type: Full-time | Hybrid (2–3 days WFO)
Working Hours: Flexible; collaboration with India, US, and Colombia teams
Compensation (Yearly): ₹2,200,000 – ₹2,400,000
Team Leadership: Required (minimum 2 reportees)

About The Client
We're hiring for a global digital solutions company known for its expertise in cloud, data engineering, and quality assurance, with a strong focus on innovation and agile delivery across the BFSI and Healthcare sectors.

About The Role
We are hiring for a leadership QA role responsible for managing both manual and automated testing across ETL pipelines, UI, and databases, while leading a team and collaborating cross-functionally across global teams. The role involves implementing best practices in testing, owning quality strategy documentation, and integrating with CI/CD tools.

Key Responsibilities
- Lead and manage a QA team (4+ members) handling ETL, UI, DB, and end-to-end testing.
- Analyze requirements, create detailed test cases and test data (manual & automated).
- Validate ETL workflows and transformation logic using advanced SQL (Snowflake/Postgres/MySQL).
- Create and maintain automation scripts using BDD (Gherkin/Behave, Pytest) in Python.
- Integrate automation frameworks with CI/CD tools like Jenkins, GitHub, Rundeck.
- Develop and maintain documentation: Test Strategy, BRDs, Defect Reports, etc.
- Drive collaboration across DevOps, SRE, developers, and stakeholders.
- Provide testing metrics and improvement plans to senior leadership.

Must-Have Qualifications
- 9–12 years' total QA experience with 6+ years in ETL/UI/DB testing.
- 3+ years' experience in automation testing (Selenium v3+, Playwright) with Python.
- Strong hands-on experience with SQL queries (preferably Snowflake/Postgres).
- Experience with AWS or other cloud platforms.
- Proven leadership managing at least 2 QA team members.
- Working experience with Agile methodologies (Scrum/Kanban).
- Excellent communication and stakeholder management skills.

Nice To Have
- Exposure to DBT, Informatica, or MS Power BI.
- Healthcare/Life Sciences domain knowledge.
- Front-end vs back-end validation exposure.
- Willingness to work flexible hours with international teams.

Required Education
BCA / B.Sc. (Computer Science) / B.E. / B.Tech / MCA / M.E. / M.Tech
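The role above calls for BDD-style automation with Gherkin/Behave in Python. Below is a hedged sketch of what such a scenario and its step definitions might look like; the feature text, table names and the local SQLite stand-in for Snowflake/Postgres are all hypothetical, not taken from the posting.

```python
# Illustrative Behave step definitions for an ETL row-count check.
# A matching feature file (shown as a comment) might read:
#
#   Feature: Orders ETL load
#     Scenario: Row counts match after the nightly load
#       Given the staging table "stg_orders" has been loaded
#       When I compare it with the warehouse table "dw_orders"
#       Then the row counts should match
import sqlite3
from behave import given, when, then

def get_connection():
    # Placeholder: in practice this would be a Snowflake/Postgres connection;
    # a local SQLite file stands in so the sketch stays self-contained.
    return sqlite3.connect("etl_test.db")

def count_rows(table: str) -> int:
    with get_connection() as conn:
        cur = conn.cursor()
        cur.execute(f"SELECT COUNT(*) FROM {table}")
        return cur.fetchone()[0]

@given('the staging table "{table}" has been loaded')
def step_staging_loaded(context, table):
    context.source_count = count_rows(table)

@when('I compare it with the warehouse table "{table}"')
def step_compare(context, table):
    context.target_count = count_rows(table)

@then("the row counts should match")
def step_counts_match(context):
    assert context.source_count == context.target_count, (
        f"source={context.source_count} target={context.target_count}")
```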

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Source: LinkedIn

Role Description
Role Proficiency: This role requires proficiency in data pipeline development, including coding and testing data pipelines for ingesting, wrangling, transforming and joining data from various sources. Must be adept at using ETL tools such as Informatica, Glue, Databricks and DataProc, with coding skills in Python, PySpark and SQL. Works independently and demonstrates proficiency in at least one data-related domain, with a solid understanding of SCD concepts and data warehousing principles.

Outcomes
- Collaborate closely with data analysts, data scientists and other stakeholders to ensure data accessibility, quality and security across various data sources.
- Design, develop and maintain data pipelines that collect, process and transform large volumes of data from various sources.
- Implement ETL (Extract, Transform, Load) processes to facilitate efficient data movement and transformation.
- Integrate data from multiple sources including databases, APIs, cloud services and third-party data providers.
- Establish data quality checks and validation procedures to ensure data accuracy, completeness and consistency.
- Develop and manage data storage solutions including relational databases, NoSQL databases and data lakes.
- Stay updated on the latest trends and best practices in data engineering, cloud technologies and big data tools.

Measures of Outcomes
- Adherence to engineering processes and standards
- Adherence to schedule / timelines
- Adherence to SLAs where applicable
- # of defects post delivery
- # of non-compliance issues
- Reduction of recurrence of known defects
- Quick turnaround of production bugs
- Completion of applicable technical/domain certifications
- Completion of all mandatory training requirements
- Efficiency improvements in data pipelines (e.g. reduced resource consumption, faster run times)
- Average time to detect, respond to and resolve pipeline failures or data issues

Outputs Expected
Code Development: Develop data processing code independently, ensuring it meets performance and scalability requirements.
Documentation: Create documentation for personal work and review deliverable documents, including source-target mappings, test cases and results.
Configuration: Follow configuration processes diligently.
Testing: Create and conduct unit tests for data pipelines and transformations to ensure data quality and correctness. Validate the accuracy and performance of data processes.
Domain Relevance: Develop features and components with a solid understanding of the business problems being addressed for the client. Understand data schemas in relation to domain-specific contexts, such as EDI formats.
Defect Management: Raise, fix and retest defects in accordance with project standards.
Estimation: Estimate time, effort and resource dependencies for personal work.
Knowledge Management: Consume and contribute to project-related documents, SharePoint libraries and client universities.
Design Understanding: Understand design and low-level design (LLD) and link it to requirements and user stories.
Certifications: Obtain relevant technology certifications to enhance skills and knowledge.

Skill Examples
- Proficiency in SQL, Python or other programming languages used for data manipulation.
- Experience with ETL tools such as Apache Airflow, Talend, Informatica, AWS Glue, Dataproc and Azure ADF.
- Hands-on experience with cloud platforms like AWS, Azure or Google Cloud, particularly with data-related services (e.g. AWS Glue, BigQuery).
- Conduct tests on data pipelines and evaluate results against data quality and performance specifications.
- Experience in performance tuning of data processes.
- Proficiency in querying data warehouses.

Knowledge Examples
- Knowledge of various ETL services provided by cloud providers, including Apache PySpark, AWS Glue, GCP DataProc/DataFlow and Azure ADF/ADLF.
- Understanding of data warehousing principles and practices.
- Proficiency in SQL for analytics, including windowing functions.
- Familiarity with data schemas and models.
- Understanding of domain-related data and its implications.

Additional Comments
Design, develop, and maintain data pipelines and architectures using Azure services. Collaborate with data scientists and analysts to meet data needs. Optimize data systems for performance and reliability. Monitor and troubleshoot data storage and processing issues.

Responsibilities
- Design, develop, and maintain data pipelines and architectures using Azure services.
- Collaborate with data scientists and analysts to meet data needs.
- Optimize data systems for performance and reliability.
- Monitor and troubleshoot data storage and processing issues.
- Ensure data security and compliance with company policies.
- Document data solutions and architecture for future reference.
- Stay updated with Azure data engineering best practices and tools.

Qualifications
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 3+ years of experience in data engineering.
- Proficiency in Azure Data Factory, Azure SQL Database, and Azure Databricks.
- Experience with data modeling and ETL processes.
- Strong understanding of database management and data warehousing concepts.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills.

Skills: Azure Data Factory, Azure SQL Database, Azure Databricks, ETL, Data Modeling, SQL, Python, Big Data Technologies, Data Warehousing, Azure DevOps
Skills: Azure, AWS, AWS Cloud, Azure Cloud
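The role above expects a solid understanding of SCD concepts. As a simplified, hypothetical sketch of a Slowly Changing Dimension Type 2 update in PySpark (paths and columns are invented; a production Databricks job would more likely use a Delta Lake MERGE), the pattern is to expire changed rows and append new current versions:

```python
# Simplified SCD Type 2 sketch. Assumed dimension columns: customer_id
# (business key), address (tracked attribute), effective_date, end_date,
# is_current. Assumed incoming snapshot columns: customer_id, address.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("scd2_sketch").getOrCreate()

dim = spark.read.parquet("/mnt/curated/dim_customer/")           # existing dimension
incoming = spark.read.parquet("/mnt/staging/customer_updates/")  # latest snapshot

history = dim.filter(~F.col("is_current"))
current = dim.filter(F.col("is_current"))

# Keys whose tracked attribute changed in the latest snapshot
changed_keys = (current.alias("d").join(incoming.alias("i"), "customer_id")
                .filter(F.col("d.address") != F.col("i.address"))
                .select("customer_id"))

# 1) Expire the current versions of changed customers
expired = (current.join(changed_keys, "customer_id", "left_semi")
           .withColumn("end_date", F.current_date())
           .withColumn("is_current", F.lit(False)))

# 2) Current rows that did not change stay as they are
still_current = current.join(changed_keys, "customer_id", "left_anti")

# 3) New versions: changed customers plus customers not seen before
new_rows = (incoming.join(changed_keys, "customer_id", "left_semi")
            .unionByName(incoming.join(current, "customer_id", "left_anti"))
            .withColumn("effective_date", F.current_date())
            .withColumn("end_date", F.lit(None).cast("date"))
            .withColumn("is_current", F.lit(True)))

result = history.unionByName(still_current).unionByName(expired).unionByName(new_rows)
result.write.mode("overwrite").parquet("/mnt/curated/dim_customer_v2/")
```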

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Job Description: Specialist - Data Visualization

Our Human Health Digital, Data and Analytics (HHDDA) team is innovating how we understand our patients and their needs. Working cross-functionally, we are inventing new ways of communicating, measuring, and interacting with our customers and patients, leveraging digital, data and analytics.

Are you passionate about helping people see and understand data? You will take part in an exciting journey to help transform our organization to be the premier data-driven company. As a Specialist in Data Visualization, you will be focused on designing and developing compelling data visualization solutions to enable actionable insights and facilitate intuitive information consumption for internal business stakeholders. The ideal candidate will demonstrate competency in building user-centric visuals and dashboards that empower stakeholders with data-driven insights and decision-making capability.

Responsibilities
- Develop user-centric, scalable visualization and analytical solutions, leveraging complex data sources to create intuitive and insightful dashboards. Apply best practices in visualization design to enhance user experience and business impact.
- Drive business engagement, collaborating with stakeholders to define key metrics, KPIs, and reporting needs. Facilitate workshops to develop user stories, wireframes, and interactive visualizations.
- Partner with data engineering, data science, and IT teams to develop scalable, business-friendly reporting solutions.
- Ensure adherence to data governance, privacy, and security best practices.
- Identify opportunities for automation, streamlining manual reporting processes through modern visualization technologies and self-service analytics enablement.
- Provide thought leadership, driving knowledge-sharing within the Data & Analytics organization while staying ahead of industry trends to enhance visualization capabilities.
- Continuously innovate on visualization best practices and technologies by reviewing external resources and the marketplace.
- Ensure timely delivery of high-quality outputs, while creating and maintaining SOPs, KPI libraries, and other essential governance documents.

Required Experience And Skills
- 5+ years of experience in business intelligence, insight generation, business analytics, data visualization, infographics, and interactive visual storytelling.
- Hands-on expertise in BI and visualization tools such as Power BI, MicroStrategy, and ThoughtSpot.
- Solid understanding of data engineering and modeling, including ETL workflows, Dataiku, Databricks, Informatica, and database technologies like Redshift and Snowflake, with programming skills in SQL and Python.
- Deep knowledge of pharma commercial data sources, including IQVIA, APLD, Claims, Payer, Salesforce, Financials, Veeva, Komodo, IPSOS, and other industry datasets to drive strategic insights.
- Experience in pharmaceutical commercial analytics, including Field Force Effectiveness, customer engagement, market performance assessment, as well as web, campaign, and digital engagement analytics.
- Strong problem-solving, communication, and project management skills, with the ability to translate complex data into actionable insights and navigate complex matrix environments efficiently.
- Strong product management mindset, ensuring analytical solutions are scalable, user-centric, and aligned with business needs, with experience in defining product roadmaps and managing solution lifecycles.
- Expertise in agile ways of working, including Agile/Scrum methodologies, iterative development, and continuous improvement in data visualization and analytics solutions.

Our Human Health Division maintains a “patient first, profits later” ideology. The organization is comprised of sales, marketing, market access, digital analytics and commercial professionals who are passionate about their role in bringing our medicines to our customers worldwide.

We are proud to be a company that embraces the value of bringing diverse, talented, and committed people together. The fastest way to breakthrough innovation is when diverse ideas come together in an inclusive environment. We encourage our colleagues to respectfully challenge one another's thinking and approach problems collectively. We are an equal opportunity employer, committed to fostering an inclusive and diverse workplace.

Current Employees apply HERE
Current Contingent Workers apply HERE

Search Firm Representatives Please Read Carefully
Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs / resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails.

Employee Status: Regular
Relocation:
VISA Sponsorship:
Travel Requirements:
Flexible Work Arrangements: Hybrid
Shift:
Valid Driving License:
Hazardous Material(s):
Required Skills: Business Intelligence (BI), Data Visualization, Requirements Management, User Experience (UX) Design
Preferred Skills:
Job Posting End Date: 04/30/2025
A job posting is effective until 11:59:59 PM on the day BEFORE the listed job posting end date. Please ensure you apply to a job posting no later than the day BEFORE the job posting end date.
Requisition ID: R334900

Posted 2 weeks ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Greetings from Tata Consultancy Services!

TCS is hiring for Technical Data Analyst.

Role: Technical Data Analyst
Experience: 10+ Years
Location: Mumbai/Bangalore/Hyderabad

Responsibilities:
1. Good knowledge of SAP technical concepts is needed.
2. Knowledge of ETL tools such as LSMW, Informatica, etc. is required.
3. Minimum 5+ data migration projects' experience from legacy systems to SAP ERP.
4. Analyze data from various sources, identifying patterns, ensuring data accuracy, and creating reports and dashboards.
5. Should be good at building queries for data extraction.
6. Knowledge of data cleansing / cleansing burndown is a must.
7. Must understand the fundamentals of data migration to build a data plan for the project.
8. Good knowledge of cutover plans and transactional knowledge.
9. Expertise in the attributes of different SAP master objects, including the ones with long lead times for cleansing.
10. Hands-on experience in de-dupe activity and its importance.
11. LSMW experience to load data.
12. Good interpersonal skills.
13. Prior knowledge of defect management tools such as JIRA, ALM, etc.

Requirements:
- Work with/lead the team in end-to-end implementations and rollouts.
- Expectation management of the steering committee.
- Creation of the project data plan and timely execution.
- Drive super users for scorecard approval for data loads in quality and production.
- Work on defect closure/defect management as part of data defects.

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

Job Summary
We are seeking a highly experienced and strategic Lead Data Architect with 8+ years of hands-on experience in designing and leading data architecture initiatives. This individual will play a critical role in building scalable, secure, and high-performance data solutions that support enterprise-wide analytics, reporting, and operational systems. The ideal candidate will be both technically proficient and business-savvy, capable of translating complex data needs into innovative architecture designs.

Key Responsibilities
- Design and implement enterprise-wide data architecture to support business intelligence, advanced analytics, and operational data needs.
- Define and enforce standards for data modeling, integration, quality, and governance.
- Lead the adoption and integration of modern data platforms (data lakes, data warehouses, streaming, etc.).
- Develop architecture blueprints, frameworks, and roadmaps aligned with business objectives.
- Ensure data security, privacy, and regulatory compliance (e.g., GDPR, HIPAA).
- Collaborate with business, engineering, and analytics teams to deliver high-impact data solutions.
- Provide mentorship and technical leadership to data engineers and junior architects.
- Evaluate emerging technologies and provide recommendations for future-state architectures.

Required Qualifications
- 8+ years of experience in data architecture, data engineering, or a similar senior technical role.
- Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
- Expertise in designing and managing large-scale data systems using cloud platforms (AWS, Azure, or GCP).
- Strong proficiency in data modeling (relational, dimensional, NoSQL) and modern database systems (e.g., Snowflake, BigQuery, Redshift).
- Hands-on experience with data integration tools (e.g., Apache NiFi, Talend, Informatica) and orchestration tools (e.g., Airflow).
- In-depth knowledge of data governance, metadata management, and data cataloging solutions.
- Experience with real-time and batch data processing frameworks, including streaming technologies like Kafka.
- Excellent leadership, communication, and cross-functional collaboration skills.
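The listing above names Airflow as its orchestration tool. Purely as an illustration (the DAG id, schedule and task bodies are placeholders, not anything specified by the posting), a minimal Airflow DAG chaining an extract step and a load step could be sketched as:

```python
# Minimal illustrative Airflow DAG: extract task followed by load task.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**context):
    print("extract data from source systems (placeholder)")

def load(**context):
    print("load curated data into the warehouse (placeholder)")

with DAG(
    dag_id="example_daily_etl",       # hypothetical DAG id
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task
```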

Posted 2 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Join us as a Data Engineering Lead

- This is an exciting opportunity to use your technical expertise to collaborate with colleagues and build effortless, digital-first customer experiences
- You'll be simplifying the bank through developing innovative data-driven solutions, inspiring to be commercially successful through insight, and keeping our customers' and the bank's data safe and secure
- Participating actively in the data engineering community, you'll deliver opportunities to support our strategic direction while building your network across the bank
- We're recruiting for multiple roles across a range of levels, up to and including experienced managers

What you'll do
We'll look to you to demonstrate technical and people leadership to drive value for the customer through modelling, sourcing and data transformation. You'll be working closely with core technology and architecture teams to deliver strategic data solutions, while driving Agile and DevOps adoption in the delivery of data engineering, leading a team of data engineers.

We'll also expect you to be:
- Working with Data Scientists and Analytics Labs to translate analytical model code to well-tested, production-ready code
- Helping to define common coding standards and model monitoring performance best practices
- Owning and delivering the automation of data engineering pipelines through the removal of manual stages
- Developing comprehensive knowledge of the bank's data structures and metrics, advocating change where needed for product development
- Educating and embedding new data techniques into the business through role modelling, training and experiment design oversight
- Leading and delivering data engineering strategies to build a scalable data architecture and a customer feature rich dataset for data scientists
- Leading and developing solutions for streaming data ingestion and transformations in line with our streaming strategy

The skills you'll need
To be successful in this role, you'll need to be an expert-level programmer and data engineer with a qualification in Computer Science or Software Engineering. You'll also need a strong understanding of data usage and dependencies with wider teams and the end customer, as well as extensive experience in extracting value and features from large-scale data. We'll also expect you to have knowledge of big data platforms like Snowflake, AWS Redshift, Postgres, MongoDB, Neo4j and Hadoop, along with good knowledge of cloud technologies such as Amazon Web Services, Google Cloud Platform and Microsoft Azure.

You'll also demonstrate:
- Knowledge of core computer science concepts such as common data structures and algorithms, profiling or optimisation
- An understanding of machine learning, information retrieval or recommendation systems
- Good working knowledge of CI/CD tools
- Knowledge of programming languages in data engineering such as Python or PySpark, SQL, Java, and Scala
- An understanding of Apache Spark and ETL tools like Informatica PowerCenter, Informatica BDM or DEI, StreamSets and Apache Airflow
- Knowledge of messaging, event or streaming technology such as Apache Kafka
- Experience of ETL technical design, automated data quality testing, QA and documentation, data warehousing, data modelling and data wrangling
- Extensive experience using RDBMS, ETL pipelines, Python, Hadoop and SQL
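The role above leads solutions for streaming data ingestion with technologies such as Kafka and Spark. As a hedged sketch of that capability (the broker address, topic and storage paths are invented, not from the posting), a Spark Structured Streaming job reading a Kafka topic and landing it in a data lake might look like this:

```python
# Illustrative Spark Structured Streaming ingestion from Kafka to parquet files.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka_ingest_sketch").getOrCreate()

events = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker-1:9092")   # placeholder broker
          .option("subscribe", "customer-events")               # placeholder topic
          .option("startingOffsets", "latest")
          .load())

# Kafka delivers key/value as binary; cast to strings for downstream parsing.
parsed = events.select(
    F.col("key").cast("string").alias("event_key"),
    F.col("value").cast("string").alias("payload"),
    "timestamp",
)

query = (parsed.writeStream
         .format("parquet")
         .option("path", "s3://example-bucket/raw/customer_events/")
         .option("checkpointLocation", "s3://example-bucket/checkpoints/customer_events/")
         .outputMode("append")
         .start())
query.awaitTermination()
```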

Posted 2 weeks ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Position: ETL Developer
Location: Pune
Duration: Contract to Hire

JD: An ETL developer who has worked on an ETL tool such as Informatica, understands data warehouse concepts, and has GCP experience and PL/SQL skills, with hands-on experience in BigQuery, Composer and Dataform; someone who can develop PL/SQL scripts and BigQuery queries.
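The listing above asks for hands-on BigQuery work alongside PL/SQL. As a loose illustration (the project, dataset and table names are placeholders), running a BigQuery aggregation from Python with the google-cloud-bigquery client might look like this:

```python
# Illustrative BigQuery query from Python; names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")   # placeholder project

SQL = """
SELECT order_date, COUNT(*) AS orders, SUM(amount) AS revenue
FROM `example-project.sales_dw.orders`
GROUP BY order_date
ORDER BY order_date DESC
LIMIT 7
"""

# Run the query and print the resulting rows.
for row in client.query(SQL).result():
    print(row["order_date"], row["orders"], row["revenue"])
```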

Posted 2 weeks ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Data Integration: Experience with ETL (Extract, Transform, Load) tools such as Azure Data Factory or Informatica PowerCenter. Ability to integrate data from various siloed systems into CDF using interfaces, extractors, and SDKs.
Programming and Scripting: Proficiency in programming languages like Python, especially for using the Cognite Python SDK. Familiarity with REST APIs for data extraction and integration.
Database Management: Knowledge of databases such as PostgreSQL and experience with database gateways. Understanding of cloud storage solutions and how to extract data from them.
Data Transformation and Contextualization: Skills in transforming and contextualizing data within CDF. Ability to use tools like Apache Spark and Databricks for data analysis and visualization.
Industrial Data Knowledge: Understanding of industrial data and processes, which is crucial for contextualizing data in CDF.
Project Management: Ability to manage projects and coordinate with different teams to ensure successful data integration.
Communication and Collaboration: Strong communication skills to work effectively with cross-functional teams and stakeholders.
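The listing above mentions extracting data from source systems over REST APIs before loading it into CDF. The sketch below is purely illustrative: the endpoint, token variable and field names are hypothetical, and a real integration would typically use the vendor's SDK or extractors with proper authentication rather than a hand-rolled client.

```python
# Hypothetical REST extraction sketch: pull records from a source system and
# shape them for a downstream CDF load step.
import os
import requests

API_URL = "https://example-historian.local/api/v1/timeseries"   # placeholder
TOKEN = os.environ.get("SOURCE_API_TOKEN", "")                   # placeholder secret

def fetch_timeseries(page_size: int = 100) -> list[dict]:
    response = requests.get(
        API_URL,
        headers={"Authorization": f"Bearer {TOKEN}"},
        params={"limit": page_size},
        timeout=30,
    )
    response.raise_for_status()
    return response.json().get("items", [])

if __name__ == "__main__":
    rows = fetch_timeseries()
    # Illustrative mapping of source fields to a load-ready record shape.
    prepared = [{"external_id": r.get("id"), "name": r.get("tag")} for r in rows]
    print(f"fetched {len(rows)} series, prepared {len(prepared)} records")
```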

Posted 2 weeks ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site


Join us as a Data Engineering Lead

This is an exciting opportunity to use your technical expertise to collaborate with colleagues and build effortless, digital-first customer experiences. You'll simplify the bank by developing innovative, data-driven solutions, helping it to be commercially successful through insight, and keeping our customers' and the bank's data safe and secure. Participating actively in the data engineering community, you'll deliver opportunities to support our strategic direction while building your network across the bank. We're recruiting for multiple roles across a range of levels, up to and including experienced managers.

What you'll do

We'll look to you to demonstrate technical and people leadership to drive value for the customer through modelling, sourcing and data transformation. You'll work closely with core technology and architecture teams to deliver strategic data solutions, while driving Agile and DevOps adoption in the delivery of data engineering and leading a team of data engineers.

We'll also expect you to be:
  • Working with Data Scientists and Analytics Labs to translate analytical model code into well-tested, production-ready code
  • Helping to define common coding standards and model monitoring performance best practices
  • Owning and delivering the automation of data engineering pipelines through the removal of manual stages
  • Developing comprehensive knowledge of the bank's data structures and metrics, advocating change where needed for product development
  • Educating and embedding new data techniques into the business through role modelling, training and experiment design oversight
  • Leading and delivering data engineering strategies to build a scalable data architecture and a feature-rich customer dataset for data scientists
  • Leading and developing solutions for streaming data ingestion and transformation in line with the streaming strategy

The skills you'll need

To be successful in this role, you'll need to be an expert-level programmer and data engineer with a qualification in Computer Science or Software Engineering. You'll also need a strong understanding of data usage and dependencies with wider teams and the end customer, as well as extensive experience in extracting value and features from large-scale data. We'll also expect you to have knowledge of big data platforms like Snowflake, AWS Redshift, Postgres, MongoDB, Neo4j and Hadoop, along with good knowledge of cloud technologies such as Amazon Web Services, Google Cloud Platform and Microsoft Azure.

You'll also demonstrate:
  • Knowledge of core computer science concepts such as common data structures and algorithms, profiling and optimisation
  • An understanding of machine learning, information retrieval or recommendation systems
  • Good working knowledge of CI/CD tools
  • Knowledge of programming languages used in data engineering, such as Python or PySpark, SQL, Java and Scala
  • An understanding of Apache Spark and ETL tools like Informatica PowerCenter, Informatica BDM or DEI, StreamSets and Apache Airflow
  • Knowledge of messaging, event or streaming technology such as Apache Kafka
  • Experience of ETL technical design, automated data quality testing, QA and documentation, data warehousing, data modelling and data wrangling
  • Extensive experience using RDBMS, ETL pipelines, Python, Hadoop and SQL

Posted 2 weeks ago

Apply

0.0 - 3.0 years

0 Lacs

Delhi, Delhi

On-site


Full time | Work From Office
This position is currently open.
Department / Category: DEVELOPER
Listed on Jun 03, 2025
Work Location: NEW DELHI

Job Description: Databricks Developer

7+ years of relevant experience, with more than 3 years in data integration, pipeline development, and data warehousing, and a strong focus on AWS Databricks.

Job Responsibilities:
  • Administer, manage, and optimize the Databricks environment to ensure efficient data processing and pipeline development
  • Perform advanced troubleshooting, query optimization, and performance tuning in a Databricks environment
  • Collaborate with development teams to guide, optimize, and refine data solutions within the Databricks ecosystem
  • Ensure high performance in data handling and processing, including the optimization of Databricks jobs and clusters
  • Engage with and support business teams to deliver data and analytics projects effectively
  • Manage source control systems and utilize Jenkins for continuous integration
  • Actively participate in the entire software development lifecycle, focusing on data integrity and efficiency within Databricks

Technical Skills:
  • Proficiency in the Databricks platform, its management, and optimization
  • Strong experience in AWS Cloud, particularly in data engineering and administration, with expertise in Apache Spark, S3, Athena, Glue, Kafka, Lambda, Redshift, and RDS
  • Proven experience in data engineering performance tuning and analytical understanding in business and program contexts
  • Solid experience in Python development, specifically in PySpark within the AWS Cloud environment, including experience with Terraform (a minimal PySpark sketch follows this listing)
  • Knowledge of databases (Oracle, SQL Server, PostgreSQL, Redshift, MySQL, or similar) and advanced database querying
  • Experience with source control systems (Git, Bitbucket) and Jenkins for build and continuous integration
  • Understanding of continuous deployment (CI/CD) processes
  • Experience with Airflow and additional Apache Spark knowledge is advantageous
  • Exposure to ETL tools, including Informatica

Required Skills for the Databricks Developer Job: AWS Databricks, databases, CI/CD, control systems

Our Hiring Process:
  1. Screening (HR Round)
  2. Technical Round 1
  3. Technical Round 2
  4. Final HR Round
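As a rough illustration of the PySpark work this listing describes, here is a minimal aggregation-pipeline sketch; the S3 paths and column names are hypothetical and not part of the job description.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical S3 paths -- illustrative only.
spark = SparkSession.builder.appName("orders-daily-aggregate").getOrCreate()

orders = spark.read.parquet("s3://example-bucket/raw/orders/")

# Simple performance habits: project early, filter early, repartition before a wide write.
daily = (orders
         .select("order_date", "region", "amount")
         .where(F.col("amount") > 0)
         .groupBy("order_date", "region")
         .agg(F.sum("amount").alias("total_amount"),
              F.count("*").alias("order_count")))

(daily.repartition("order_date")
      .write.mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3://example-bucket/curated/orders_daily/"))
```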

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Greater Kolkata Area

On-site


Contract Duration: 12 Months
Location: PAN India
Experience: 6 to 10 years

Key Responsibilities
  • Assist developers in implementing enterprise data governance and management tools (Data Catalog, Metadata Management, Data Quality, Master Data Management).
  • Conduct testing and validation of data governance tools/processes to ensure compliance with standards.
  • Monitor deployment practices and enforce governance guardrails.
  • Coordinate with stakeholders to ensure requirements meet acceptance criteria related to data governance frameworks, data dictionary, metadata, and access controls.
  • Ensure remediation plans are implemented for data failing governance standards.
  • Support business owners and data stewards in resolving enterprise data issues using industry standards.
  • Communicate changes, issues, and business impacts clearly within the team.
  • (Good to have) Understanding of Agile methodologies (Scrum, Kanban, Lean).

Minimum Requirements
  • Bachelor's/Master's degree in Computer Science, Software Engineering, or equivalent.
  • 5+ years in data governance, data quality, data migration, or data preparation.
  • Minimum 3 years of hands-on experience with Informatica CDGC, Axon EDC, IDQ.
  • Experience working with Agile methodologies.
  • Strong critical thinking and problem-solving skills.
  • Excellent verbal and written communication skills.
  • Ability to work effectively with senior management and cross-functional teams.

Good To Have
  • Understanding of cloud technologies (AWS Data Lake, S3, Glue, Snowflake).
  • Knowledge of BI tools like ThoughtSpot, Power BI.
  • Experience in the Real Estate industry (preferred).
  • Intellectual curiosity and willingness to learn new technical skills.

Posted 2 weeks ago

Apply

8.0 years

1 Lacs

Greater Kolkata Area

On-site


JR Number: 1364744 / 704324
Location: Hyderabad
Contract Duration: 6 Months
Experience Required: 8+ years (overall), 5+ years (relevant)
Bill Rate: INR 1,60,000/Month
Client Interview: Yes
Openings: 1

🔧 Primary Skills
  • Python
  • Spark (PySpark)
  • SQL
  • Delta Lake

📌 Key Responsibilities & Skills
  • Strong understanding of Spark core: RDDs, DataFrames, Datasets, Spark SQL, Spark Streaming
  • Proficient in Delta Lake features: time travel, schema evolution, data partitioning (a minimal sketch follows this listing)
  • Experience designing and building data pipelines using Spark and Delta Lake
  • Solid experience in Python/Scala/Java for Spark development
  • Knowledge of data ingestion from files, APIs, and databases
  • Familiarity with data validation and quality best practices
  • Working knowledge of data warehouse concepts and data modeling
  • Hands-on with Git for code versioning
  • Exposure to CI/CD pipelines and containerization tools
  • Nice to have: experience with ETL tools like DataStage, Prophecy, Informatica, or Ab Initio
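A minimal sketch of the Delta Lake features named above (partitioned writes and time travel), assuming a Spark session with the delta-spark package configured, for example on Databricks; the paths and column names are hypothetical.

```python
from pyspark.sql import SparkSession

# Assumes delta-spark is configured on the cluster (e.g. Databricks runtime).
spark = SparkSession.builder.appName("delta-sketch").getOrCreate()

# Hypothetical input path.
events = spark.read.json("/data/landing/events/")

# Partitioned Delta write; Delta records every commit as a new table version.
(events.write
       .format("delta")
       .mode("append")
       .partitionBy("event_date")
       .save("/data/delta/events"))

# Time travel: read the table as it looked at an earlier version.
snapshot = (spark.read
            .format("delta")
            .option("versionAsOf", 0)
            .load("/data/delta/events"))
snapshot.show(5)
```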

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


hackajob is collaborating with American Express to connect them with exceptional tech professionals for this role.

You Lead the Way. We've Got Your Back.

With the right backing, people and businesses have the power to progress in incredible ways. When you join Team Amex, you become part of a global and diverse community of colleagues with an unwavering commitment to back our customers, communities and each other. Here, you'll learn and grow as we help you create a career journey that's unique and meaningful to you with benefits, programs, and flexibility that support you personally and professionally. At American Express, you'll be recognized for your contributions, leadership, and impact, and every colleague has the opportunity to share in the company's success. Together, we'll win as a team, striving to uphold our company values and powerful backing promise to provide the world's best customer experience every day. And we'll do it with the utmost integrity, and in an environment where everyone is seen, heard and feels like they belong.

As part of our diverse tech team, you can architect, code and ship software that makes us an essential part of our customers' digital lives. Here, you can work alongside talented engineers in an open, supportive, inclusive environment where your voice is valued, and you make your own decisions on what tech to use to solve challenging problems. Amex offers a range of opportunities to work with the latest technologies and encourages you to back the broader engineering community through open source. And because we understand the importance of keeping your skills fresh and relevant, we give you dedicated time to invest in your professional development. Find your place in technology on #TeamAmex.

How will you make an impact in this role?
  • Build the NextGen data strategy, data virtualization, data lakes and warehousing
  • Transform and improve the performance of existing reporting and analytics use cases with more efficient, state-of-the-art data engineering solutions
  • Drive analytics development to realize the advanced analytics vision and strategy in a scalable, iterative manner
  • Deliver software that provides superior user experiences, linking customer needs and business drivers together through innovative product engineering
  • Cultivate an environment of engineering excellence and continuous improvement, leading changes that drive efficiencies into existing engineering and delivery processes
  • Own accountability for all quality aspects and metrics of the product portfolio, including system performance, platform availability, operational efficiency, risk management, information security, data management and cost effectiveness
  • Work with key stakeholders to drive software solutions that align to strategic roadmaps, prioritized initiatives and strategic technology directions
  • Work with peers, staff engineers and staff architects to assimilate new technology and delivery methods into scalable software solutions

Minimum Qualifications
  • Bachelor's degree in Computer Science, Computer Science Engineering, or a related field required; advanced degree preferred.
  • 5+ years of hands-on experience implementing large data-warehousing projects, with strong knowledge of the latest NextGen BI and data strategy and BI tools.
  • Proven experience in business intelligence, reporting on large datasets, data virtualization tools, big data, GCP, Java and microservices.
  • Strong systems integration architecture skills and a high degree of technical expertise across a number of technologies, with a proven track record of turning new technologies into business solutions.
  • Proficiency in at least one programming language (Python or Java).
  • Good understanding of data structures.
  • GCP/cloud knowledge is an added advantage.
  • Good knowledge and understanding of Power BI, Tableau and Looker.
  • Outstanding influencing and collaboration skills; ability to drive consensus and tangible outcomes, demonstrated by breaking down silos and fostering cross-communication.
  • Experience managing in a fast-paced, complex, and dynamic global environment.

Preferred Qualifications
  • Bachelor's degree in Computer Science, Computer Science Engineering, or a related field required; advanced degree preferred.
  • 5+ years of hands-on experience implementing large data-warehousing projects, with strong knowledge of the latest NextGen BI and data strategy and BI tools.
  • Proven experience in business intelligence, reporting on large datasets, Oracle Business Intelligence (OBIEE), Tableau, MicroStrategy, data virtualization tools, Oracle PL/SQL, Informatica, other ETL tools like Talend, and Java.
  • Proficiency in at least one programming language (Python or Java).
  • Good grasp of data structures and reasoning.
  • GCP or other cloud knowledge is an added advantage.
  • Good knowledge and understanding of Power BI, Tableau and Looker.
  • Strong systems integration architecture skills and a high degree of technical expertise across several technologies, with a proven track record of turning new technologies into business solutions.
  • Outstanding influencing and collaboration skills; ability to drive consensus and tangible outcomes, demonstrated by breaking down silos and fostering cross-communication.

Benefits

We back you with benefits that support your holistic well-being so you can be and deliver your best. This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally:
  • Competitive base salaries
  • Bonus incentives
  • Support for financial well-being and retirement
  • Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location)
  • Flexible working model with hybrid, onsite or virtual arrangements depending on role and business need
  • Generous paid parental leave policies (depending on your location)
  • Free access to global on-site wellness centers staffed with nurses and doctors (depending on location)
  • Free and confidential counseling support through our Healthy Minds program
  • Career development and training opportunities

American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law.
Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.

Posted 2 weeks ago

Apply

6.0 - 11.0 years

22 - 32 Lacs

Hyderabad, Pune, Bengaluru

Hybrid


Hiring for Kaygen Global Services. Looking for an ODI Developer with 6+ years of experience. Location: PAN India, hybrid mode (Gurgaon, Pune, Bangalore, Chennai, Hyderabad, Kolkata, Noida). Skills required: ODI, OBIEE, PL/SQL, and an ETL tool (Informatica).

Posted 2 weeks ago

Apply

7.0 - 12.0 years

15 - 27 Lacs

Hyderabad, Pune, Bengaluru

Hybrid


  • Minimum 7+ years of working experience with Informatica Cloud (IICS) components: application integration, data integration, Informatica Data Quality, and Informatica PowerCenter.
  • Strong functional understanding of RDBMS and DWH-BI concepts.
  • In-depth understanding of Data Warehousing (DWH) and ETL concepts, ETL loading strategies, ETL error handling and error logging mechanisms, standards, and best practices (a tool-agnostic error-handling sketch follows this listing).
  • Ability to gather requirements and to design, solution, and implement Informatica pipelines, including performance tuning.
  • Proficiency in SQL and experience with various sources and databases such as Oracle, SQL Server, MySQL, SAP HANA, and flat files.
  • Excellent problem-solving and analytical skills, strong SQL skills, and performance tuning capabilities.
  • Effective communication and teamwork skills.
  • Knowledge of SAP implementations; SAP HANA experience is a plus.
  • Ability to lead a team and manage deliverables.

Job Location: Bengaluru, Hyderabad, Pune, Kolkata, Chennai, Gurgaon
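The error-handling and error-logging expectations above are tool-agnostic. As a rough, hedged illustration of the reject-and-log pattern outside Informatica, here is a small Python sketch; the field names, rules and sample batch are hypothetical.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("etl.orders_load")

def load_orders(rows: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split incoming rows into loadable rows and rejects, logging why each batch item failed."""
    good, rejects = [], []
    for row in rows:
        if row.get("order_id") is None:
            rejects.append({**row, "reject_reason": "missing order_id"})
        elif not isinstance(row.get("amount"), (int, float)) or row["amount"] < 0:
            rejects.append({**row, "reject_reason": "invalid amount"})
        else:
            good.append(row)
    log.info("loaded=%d rejected=%d", len(good), len(rejects))
    return good, rejects

# Hypothetical sample batch.
batch = [
    {"order_id": 1, "amount": 120.0},
    {"order_id": None, "amount": 10.0},
    {"order_id": 3, "amount": -5},
]
loaded, rejected = load_orders(batch)
```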

Posted 2 weeks ago

Apply

1.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


JOB_POSTING-3-70891

Job Description

Role Title: Analyst, Analytics - Data Quality Developer (L08)

Company Overview: Synchrony (NYSE: SYF) is a premier consumer financial services company delivering one of the industry's most complete digitally enabled product suites. Our experience, expertise and scale encompass a broad spectrum of industries including digital, health and wellness, retail, telecommunications, home, auto, outdoors, pet and more. We have recently been ranked #2 among India's Best Companies to Work for by Great Place to Work. We were among the Top 50 India's Best Workplaces in Building a Culture of Innovation by All by GPTW and Top 25 among Best Workplaces in BFSI by GPTW. We have also been recognized by AmbitionBox Employee Choice Awards among the Top 20 Mid-Sized Companies, ranked #3 among Top Rated Companies for Women, and Top-Rated Financial Services Companies. Synchrony celebrates ~51% women diversity, 105+ people with disabilities, and ~50 veterans and veteran family members. We offer Flexibility and Choice for all employees and provide best-in-class employee benefits and programs that cater to work-life integration and overall well-being. We provide career advancement and upskilling opportunities, focusing on Advancing Diverse Talent to take up leadership roles.

Organizational Overview: Our Analytics organization comprises data analysts who focus on enabling strategies to enhance customer and partner experience and optimize business performance through data management and the development of full-stack descriptive-to-prescriptive analytics solutions using cutting-edge technologies, thereby enabling business growth.

Role Summary/Purpose: The Analyst, Analytics - Data Quality Developer (Individual Contributor) role is located in the India Analytics Hub (IAH) as part of Synchrony's enterprise Data Office. This role will be responsible for the proactive design, implementation, execution, and monitoring of Data Quality process capabilities within Synchrony's public and private cloud and on-prem environments within the Chief Data Office. The Data Quality Developer - Analyst will work within the IT organization to support and participate in build and run activities and environments (e.g. DevOps) for Data Quality.

Key Responsibilities
  • Monitor and maintain Data Quality and Data Issue Management operating level agreements in support of data quality rule execution and reporting
  • Assist in performing root cause analysis for data quality issues and data usage challenges, particularly for the workload migration to the public cloud
  • Recommend, design, implement and refine/remediate data quality specifications within Synchrony's approved Data Quality platforms
  • Participate in the solution design of data quality and data issue management technical and procedural solutions, including metric reporting
  • Work closely with Technology teams and key stakeholders to ensure data quality issues are prioritized, analyzed and addressed
  • Regularly communicate the status of data quality issues and progress to key stakeholders
  • Participate in the planning and execution of agile release cycles and iterations

Qualifications/Requirements
  • Minimum of 1 year of experience in data quality management, including implementing data quality rules, data profiling and root cause analysis for data issues, with exposure to cloud environments (AWS, Azure, or Google Cloud) and on-premise infrastructure (a minimal data quality check sketch follows this listing).
  • Minimum of 1 year of experience with data quality or data integration tools such as Ab Initio, Informatica, Collibra, Stonebranch or Tableau, gained through hands-on experience or projects.
  • Good communication and collaboration skills, strong analytical thinking and problem-solving abilities, the ability to work independently and manage multiple tasks, and attention to detail.

Desired Characteristics
  • Broad understanding of banking, credit card, payment solutions, collections, marketing, risk and regulatory & compliance.
  • Experience using data governance and data quality tools such as Collibra, Ab Initio Express>IT and Ab Initio MetaHub.
  • Proficient in writing/understanding SQL.
  • Experience querying/analyzing data in cloud-based environments (e.g., AWS, Redshift).
  • AWS certifications such as AWS Cloud Practitioner or AWS Certified Data Analytics - Specialty.
  • Intermediate to advanced MS Office Suite skills, including PowerPoint, Excel, Access and Visio.
  • Strong relationship management and influencing skills to build enduring and productive alliances across matrix organizations.
  • Demonstrated success in managing multiple deliverables concurrently, often within aggressive timeframes; ability to cope under time pressure.
  • Experience in partnering with a diverse team composed of staff and consultants located in multiple locations and time zones.

Eligibility Criteria
  • Bachelor's degree, preferably in Engineering or Computer Science, with more than 1 year of hands-on data management experience; or, in lieu of a degree, more than 3 years of such experience.

Work Timings: This role qualifies for Enhanced Flexibility and Choice offered in Synchrony India and will require the incumbent to be available between 06:00 AM Eastern Time and 11:30 AM Eastern Time (timings are anchored to US Eastern hours and will adjust twice a year locally). This window is for meetings with India and US teams. The remaining hours are flexible for the employee to choose. Exceptions may apply periodically due to business needs. Please discuss this with the hiring manager for more details.

For Internal Applicants
  • Understand the criteria and mandatory skills required for the role before applying.
  • Inform your manager and HRM before applying for any role on Workday.
  • Ensure that your professional profile is updated (fields such as education, prior experience, other skills); it is mandatory to upload your updated resume (Word or PDF format).
  • Must not be on any corrective action plan (Formal/Final Formal) or PIP.
  • L4 to L7 employees who have completed 12 months in the organization and 12 months in their current role and level are eligible.
  • L8+ employees who have completed 18 months in the organization and 12 months in their current role and level are eligible.

Grade/Level: 08
Job Family Group: Information Technology
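Data quality rules of the kind this role implements often reduce to completeness and uniqueness checks expressed in SQL. A self-contained, hedged sketch using Python's built-in sqlite3 follows; the table, columns, thresholds and sample data are hypothetical and unrelated to Synchrony's platforms.

```python
import sqlite3

# In-memory database with a tiny hypothetical sample.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE accounts (account_id INTEGER, email TEXT);
    INSERT INTO accounts VALUES (1, 'a@example.com'), (2, NULL), (2, 'b@example.com');
""")

# Completeness rule: email must be populated.
null_emails = conn.execute(
    "SELECT COUNT(*) FROM accounts WHERE email IS NULL").fetchone()[0]

# Uniqueness rule: account_id must not repeat.
dupes = conn.execute("""
    SELECT COUNT(*) FROM (
        SELECT account_id FROM accounts GROUP BY account_id HAVING COUNT(*) > 1)
""").fetchone()[0]

print(f"null emails: {null_emails}, duplicate account_ids: {dupes}")
```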

Posted 2 weeks ago

Apply

Exploring Informatica Jobs in India

The Informatica job market in India is thriving, with numerous opportunities for skilled professionals in this field. Companies across various industries are actively hiring Informatica experts to manage and optimize their data integration and data quality processes.

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Chennai
  5. Mumbai

Average Salary Range

The average salary range for Informatica professionals in India varies based on experience and expertise:

  • Entry-level: INR 3-5 lakhs per annum
  • Mid-level: INR 6-10 lakhs per annum
  • Experienced: INR 12-20 lakhs per annum

Career Path

A typical career progression in the Informatica field may include roles such as:

  1. Junior Developer
  2. Informatica Developer
  3. Senior Developer
  4. Informatica Tech Lead
  5. Informatica Architect

Related Skills

In addition to Informatica expertise, professionals in this field are often expected to have skills in:

  • SQL
  • Data warehousing
  • ETL tools
  • Data modeling
  • Data analysis

Interview Questions

  • What is Informatica and why is it used? (basic)
  • Explain the difference between a connected and unconnected lookup transformation. (medium)
  • How can you improve the performance of a session in Informatica? (medium)
  • What are the various types of cache in Informatica? (medium)
  • How do you handle rejected rows in Informatica? (basic)
  • What is a reusable transformation in Informatica? (basic)
  • Explain the difference between a filter and router transformation in Informatica. (medium)
  • What is a workflow in Informatica? (basic)
  • How do you handle slowly changing dimensions in Informatica? (advanced)
  • What is a mapplet in Informatica? (medium)
  • Explain the difference between an aggregator and joiner transformation in Informatica. (medium)
  • How do you create a mapping parameter in Informatica? (basic)
  • What is a session and a workflow in Informatica? (basic)
  • What is a rank transformation in Informatica and how is it used? (medium)
  • How do you debug a mapping in Informatica? (medium)
  • Explain the difference between static and dynamic cache in Informatica. (advanced)
  • What is a sequence generator transformation in Informatica? (basic)
  • How do you handle null values in Informatica? (basic)
  • Explain the difference between a mapping and mapplet in Informatica. (basic)
  • What are the various types of transformations in Informatica? (basic)
  • How do you implement partitioning in Informatica? (medium)
  • Explain the concept of pushdown optimization in Informatica. (advanced)
  • How do you create a session in Informatica? (basic)
  • What is a source qualifier transformation in Informatica? (basic)
  • How do you handle exceptions in Informatica? (medium)

Closing Remark

As you prepare for Informatica job opportunities in India, make sure to enhance your skills, stay updated with the latest trends in data integration, and approach interviews with confidence. With the right knowledge and expertise, you can excel in the Informatica field and secure rewarding career opportunities. Good luck!
