Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
2.0 years
18 Lacs
India
On-site
About the Role: We are seeking talented and detail-oriented Data Engineers with expertise in Informatica MDM to join our fast-growing data engineering team. Depending on your experience, you’ll join as a Software Engineer or Senior Software Engineer, contributing to the design, development, and maintenance of enterprise data management solutions that support our business objectives. As a key player, you will be responsible for building reliable data pipelines, working with master data management, and ensuring data quality, governance, and integration across systems. Responsibilities: Design, develop, and implement data pipelines using ETL tools like Informatica PowerCenter, IICS, etc., and MDM solutions using Informatica MDM . Develop and maintain batch and real-time data integration workflows. Collaborate with data architects, business analysts, and stakeholders to understand data requirements. Perform data profiling, data quality assessments, and master data matching/merging. Implement governance, stewardship, and metadata management practices. Optimize the performance of Informatica MDM Hub, IDD, and associated components. Write complex SQL queries and stored procedures as needed. Senior Software Engineer – Additional Responsibilities: Lead design discussions and code reviews; mentor junior engineers. Architect scalable data integration solutions using Informatica and complementary tools. Drive adoption of best practices in data modeling, governance, and engineering. Work closely with cross-functional teams to shape the data strategy. Required Qualifications: Software Engineer: Bachelor’s degree in Computer Science, Information Systems, or related field. 2–4 years of experience with Informatica MDM (Customer 360, Business Entity Services, Match/Merge rules). Strong SQL and data modeling skills. Familiarity with ETL concepts, REST APIs , and data integration tools. Understanding of data governance and quality frameworks. Senior Software Engineer: Bachelor’s or Master’s in Computer Science, Data Engineering, or related field. 4+ years of experience in Informatica MDM, with at least 2 years in a lead role. Proven track record of designing scalable MDM solutions in large-scale environments. Strong leadership, communication, and stakeholder management skills. Hands-on experience with data lakes, cloud platforms (AWS, Azure, or GCP) , and big data tools is a plus. Preferred Skills (Nice to Have): Experience with other Informatica products (IDQ, PowerCenter). Exposure to cloud MDM platforms or cloud data integration tools. Agile/Scrum development experience. Knowledge of industry-standard data security and compliance practices. Job Type: Full-time Pay: Up to ₹1,853,040.32 per year Benefits: Flexible schedule Health insurance Life insurance Provident Fund Schedule: Day shift Supplemental Pay: Performance bonus Yearly bonus Application Question(s): What is your notice period? Education: Bachelor's (Preferred) Experience: Informatica: 4 years (Preferred) Location: Noida H.O, Noida, Uttar Pradesh (Preferred) Work Location: In person
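For readers unfamiliar with what the match/merge responsibility above involves, here is a minimal, illustrative Python sketch of the idea. It is not Informatica MDM's API — match rules and survivorship are configured declaratively in the MDM Hub console — and the record fields, similarity threshold, and survivorship rule below are invented for the example.

```python
# Illustrative only: mimics the concept of fuzzy-matching candidate master records
# and surviving a "golden" record. Field names and thresholds are hypothetical.
from difflib import SequenceMatcher

candidates = [
    {"id": 1, "name": "Acme Industries Ltd", "city": "Pune", "updated": "2024-05-01"},
    {"id": 2, "name": "ACME Industries Limited", "city": "Pune", "updated": "2024-06-15"},
    {"id": 3, "name": "Zenith Traders", "city": "Mumbai", "updated": "2024-01-10"},
]

def similarity(a: str, b: str) -> float:
    """Simple case-insensitive string similarity between 0.0 and 1.0."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_and_merge(records, threshold=0.85):
    """Group records whose names exceed the threshold; keep the most recent as golden."""
    merged, used = [], set()
    for i, rec in enumerate(records):
        if i in used:
            continue
        group = [rec]
        for j in range(i + 1, len(records)):
            if j not in used and similarity(rec["name"], records[j]["name"]) >= threshold:
                group.append(records[j])
                used.add(j)
        # Toy survivorship rule: latest update wins.
        golden = max(group, key=lambda r: r["updated"])
        merged.append({**golden, "source_ids": [r["id"] for r in group]})
    return merged

print(match_and_merge(candidates))
```

In Informatica MDM itself, trust and validation rules typically decide survivorship per column rather than a single "latest record wins" rule; the sketch only makes the underlying matching-and-consolidation idea concrete.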
Posted 2 weeks ago
3.0 years
0 Lacs
Andhra Pradesh
On-site
Technical Skills

Microsoft Purview Expertise (Required)
- Unified Data Catalog: experience setting up and configuring the catalog, managing collections, classifications, glossary terms, and metadata curation.
- Data Quality (DQ): implementing DQ rules, defining metrics (accuracy, completeness, consistency), and using quality scorecards and reports (a conceptual metrics sketch follows this listing).
- Data Map and Scans: ability to configure sources, schedule scans, manage ingestion, and troubleshoot scan issues.
- Data Insights and Lineage: experience visualizing data lineage and interpreting catalog insights.

Azure Platform Knowledge (Desirable)
- Azure Data Factory
- Azure Synapse Analytics
- Microsoft Fabric, including OneLake

Experience
- 3 to 5+ years in data governance or data platform projects, ideally with enterprise clients.
- 2+ years implementing Microsoft Purview or similar tools (Collibra, Informatica, Alation).
- Hands-on experience configuring and implementing Microsoft Purview Unified Catalog and Data Quality.
- Experience onboarding multiple data sources (on-premises and cloud).
- Background in data management, data architecture, or business intelligence is highly beneficial.

Certifications (Desirable)
- Microsoft Certified: Azure Data Engineer Associate
- Microsoft Certified: Azure Solutions Architect Expert

About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities, and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status, or any other basis covered by applicable law. All employment is decided on the basis of qualifications, merit, and business need.
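As a point of reference for the data quality metrics named above (accuracy, completeness, consistency), here is a small, conceptual pandas sketch of how such measures are computed. In Microsoft Purview the equivalent rules and scorecards are configured in the governance portal rather than hand-coded; the column names and rules below are assumptions made purely for illustration.

```python
import pandas as pd

# Toy dataset standing in for a scanned asset; columns are invented for the example.
df = pd.DataFrame({
    "customer_id": [101, 102, 103, 104],
    "email":       ["a@x.com", None, "c@x.com", "not-an-email"],
    "country":     ["IN", "IN", "in", "US"],
})

completeness = df["email"].notna().mean()                            # share of non-null emails
validity     = df["email"].str.contains("@", na=False).mean()        # crude accuracy proxy
consistency  = (df["country"] == df["country"].str.upper()).mean()   # casing convention check

scorecard = {
    "completeness": round(float(completeness), 2),
    "validity": round(float(validity), 2),
    "consistency": round(float(consistency), 2),
}
print(scorecard)  # -> {'completeness': 0.75, 'validity': 0.5, 'consistency': 0.75}
```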
Posted 2 weeks ago
3.0 years
0 Lacs
Andhra Pradesh, India
On-site
Technical Skills

Microsoft Purview Expertise (Required)
- Unified Data Catalog: experience setting up and configuring the catalog, managing collections, classifications, glossary terms, and metadata curation.
- Data Quality (DQ): implementing DQ rules, defining metrics (accuracy, completeness, consistency), and using quality scorecards and reports.
- Data Map and Scans: ability to configure sources, schedule scans, manage ingestion, and troubleshoot scan issues.
- Data Insights and Lineage: experience visualizing data lineage and interpreting catalog insights.

Azure Platform Knowledge (Desirable)
- Azure Data Factory
- Azure Synapse Analytics
- Microsoft Fabric, including OneLake

Experience
- 3 to 5+ years in data governance or data platform projects, ideally with enterprise clients.
- 2+ years implementing Microsoft Purview or similar tools (Collibra, Informatica, Alation).
- Hands-on experience configuring and implementing Microsoft Purview Unified Catalog and Data Quality.
- Experience onboarding multiple data sources (on-premises and cloud).
- Background in data management, data architecture, or business intelligence is highly beneficial.

Certifications (Desirable)
- Microsoft Certified: Azure Data Engineer Associate
- Microsoft Certified: Azure Solutions Architect Expert
Posted 2 weeks ago
8.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Primary Skill – Informatica Master Data Management
Total Experience – 5–8 years
Notice Period – 1 month
Job Location – PAN India

JD:
• Informatica MDM Lead with 5–8 years of experience in data management, data mastering, data governance, data quality, and data integration activities.
• Experience working with Informatica tools such as MDM and ActiveVOS.
• Extensive experience using the MDM Hub console, PT, Java/J2EE, RDBMS, flat files, XML, SQL, and Unix.
• Expertise in MDM Hub configurations, ActiveVOS workflow implementation, SIF/BES API calls, User Exit implementation, and PT configurations.
Posted 2 weeks ago
0 years
0 Lacs
Bangalore Urban, Karnataka, India
On-site
P1,C3,STS

Responsibilities:
- Design and build data cleansing and imputation, map to a standard data model, transform to satisfy business rules and statistical computations, and validate data content.
- Develop, modify, and maintain Python and Unix scripts and complex SQL.
- Tune the performance of existing code, avoid bottlenecks, and improve throughput.
- Build end-to-end data flows from sources to fully curated and enhanced data sets.
- Develop automated Python jobs for ingesting data from various source systems (a hedged sketch of this pattern follows this listing).
- Provide technical expertise in areas of architecture, design, and implementation.
- Work with team members to create useful reports and dashboards that provide insight, improve/automate processes, or otherwise add value to the team.
- Write SQL queries for data validation.
- Design, develop, and maintain ETL processes to extract, transform, and load data from various sources into the data warehouse.
- Collaborate with data architects, analysts, and other stakeholders to understand data requirements and ensure quality.
- Optimize and tune ETL processes for performance and scalability.
- Develop and maintain documentation for ETL processes, data flows, and data mappings.
- Monitor and troubleshoot ETL processes to ensure data accuracy and availability.
- Implement data validation and error-handling mechanisms.
- Work with large data sets and ensure data integrity and consistency.

Skills:
- Python; ETL tools such as Informatica, Talend, SSIS, or similar
- SQL and MySQL; expertise in Oracle, SQL Server, and Teradata
- DevOps, GitLab
- Experience in AWS Glue or Azure Data Factory
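A minimal sketch of the cleansing, imputation, and SQL-validation pattern described above, assuming pandas and an in-memory SQLite database as stand-ins for the real ETL framework and warehouse; the table and column names are invented for the example.

```python
import sqlite3
import pandas as pd

# Raw extract with missing values and inconsistent text (illustrative data).
raw = pd.DataFrame({
    "order_id": [1, 2, 3, 4],
    "amount":   [120.0, None, 95.5, None],
    "region":   [" east", "WEST", None, "East "],
})

# Cleansing + imputation: standardise text, fill missing amounts with the median.
clean = raw.assign(
    region=raw["region"].str.strip().str.title().fillna("Unknown"),
    amount=raw["amount"].fillna(raw["amount"].median()),
)

# Load into a target table and validate with SQL (row count and null checks).
con = sqlite3.connect(":memory:")
clean.to_sql("orders_curated", con, index=False)
checks = {
    "row_count":    "SELECT COUNT(*) FROM orders_curated",
    "null_amounts": "SELECT COUNT(*) FROM orders_curated WHERE amount IS NULL",
}
for name, sql in checks.items():
    print(name, "=", con.execute(sql).fetchone()[0])
```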
Posted 2 weeks ago
5.0 - 8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description

Role Purpose
The purpose of this role is to provide significant technical expertise in architecture planning and design of the concerned tower (platform, database, middleware, backup, etc.) as well as managing its day-to-day operations.

Do
- Provide adequate support in architecture planning, migration, and installation for new projects in own tower (platform/database/middleware/backup).
- Lead the structural/architectural design of a platform/middleware/database/backup, etc., according to various system requirements to ensure a highly scalable and extensible solution.
- Conduct technology capacity planning by reviewing current and future requirements.
- Utilize and leverage the new features of all underlying technologies to ensure smooth functioning of the installed databases and applications/platforms, as applicable.
- Strategize and implement disaster recovery plans; create and implement backup and recovery plans.
- Manage the day-to-day operations of the tower.
- Manage day-to-day operations by troubleshooting any issues, conducting root cause analysis (RCA), and developing fixes to avoid similar issues.
- Plan for and manage upgrades, migration, maintenance, backup, installation, and configuration functions for own tower.
- Review the technical performance of own tower and deploy ways to improve efficiency, fine-tune performance, and reduce performance challenges.
- Develop a shift roster for the team to ensure no disruption in the tower.
- Create and update SOPs, Data Responsibility Matrices, operations manuals, daily test plans, data architecture guidance, etc.
- Provide weekly status reports to the client leadership team and internal stakeholders on database activities with respect to progress, updates, status, and next steps.
- Leverage technology to develop a Service Improvement Plan (SIP) through automation and other initiatives for higher efficiency and effectiveness.

Team Management
- Resourcing: Forecast talent requirements as per current and future business needs. Hire adequate and right resources for the team. Train direct reportees to make right recruitment and selection decisions.
- Talent Management: Ensure 100% compliance with Wipro's standards of adequate onboarding and training for team members to enhance capability and effectiveness. Build an internal talent pool of HiPos and ensure their career progression within the organization. Promote diversity in leadership positions.
- Performance Management: Set goals for direct reportees, conduct timely performance reviews and appraisals, and give constructive feedback to direct reports. Ensure that organizational programs like Performance Nxt are well understood and that the team is taking the opportunities presented by such programs for themselves and the levels below them.
- Employee Satisfaction and Engagement: Lead and drive engagement initiatives for the team. Track team satisfaction scores and identify initiatives to build engagement within the team. Proactively challenge the team with larger and enriching projects/initiatives for the organization or team. Exercise employee recognition and appreciation.

Deliver
1. Operations of the tower – Measures: SLA adherence, knowledge management, CSAT/customer experience, identification of risk issues and mitigation plans.
2. New projects – Measures: timely delivery, no unauthorised changes, no formal escalations.

Mandatory Skills: Informatica Admin.
Experience: 5–8 Years.

Reinvent your world. We are building a modern Wipro. 
We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention. Of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.
Posted 2 weeks ago
2.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Greeting from Infosys BPM Ltd, We are hiring for UX with JavaScript, ETL Testing + Python Programming, Automation Testing with Java, Selenium, BDD, Cucumber, ETL DB Testing, ETL Testing Automation skills. Please walk-in for interview on 10th and 11th June 2025 at Pune location Note: Please carry copy of this email to the venue and make sure you register your application before attending the walk-in. Please use below link to apply and register your application. Please mention Candidate ID on top of the Resume *** https://career.infosys.com/jobdesc?jobReferenceCode=PROGEN-HRODIRECT-215162 Interview details Interview Date: 10th and 11th June 2025 Interview Time: 10 AM till 1 PM Interview Venue: Pune:: Hinjewadi Phase 1 Infosys BPM Limited, Plot No. 1, Building B1, Ground floor, Hinjewadi Rajiv Gandhi Infotech Park, Hinjewadi Phase 1, Pune, Maharashtra-411057 Please find below Job Description for your reference: Work from Office*** Min 2 years of experience on project is mandate*** Job Description: UX with JavaScript Technical Design tools (e.g., Photoshop, XD, Figma), strong knowledge of HTML, CSS, JavaScript, and experience with SharePoint customization. Experience with Wireframe, Prototype, intuitive, responsive design, Documentation, Able to Lead the Team Nice to have Understanding of SharePoint Framework (SPFx) and modern SharePoint development. Job Description: ETL Testing + Python Programming Experience in Data Migration Testing (ETL Testing), Manual & Automation with Python Programming. Strong on writing complex SQLs for data migration validations. Work experience with Agile Scrum Methodology Functional Testing- UI Test Automation using Selenium, Java Financial domain experience Good to have AWS knowledge Job Description: Automation Testing with Java, Selenium, BDD, Cucumber Hands on exp in Automation. Java, Selenium, BDD , Cucumber expertise is mandatory. Banking Domian Experience is good. Financial domain experience Automation Talent with TOSCA skills, Payment domain skills is preferable. Job Description: ETL DB Testing Strong experience in ETL testing, data warehousing, and business intelligence. Strong proficiency in SQL. Experience with ETL tools (e.g., Informatica, Talend, AWS Glue, Azure Data Factory). Solid understanding of Data Warehousing concepts, Database Systems and Quality Assurance. Experience with test planning, test case development, and test execution. Experience writing complex SQL Queries and using SQL tools is a must, exposure to various data analytical functions. Familiarity with defect tracking tools (e.g., Jira). Experience with cloud platforms like AWS, Azure, or GCP is a plus. Experience with Python or other scripting languages for test automation is a plus. Experience with data quality tools is a plus. Experience in testing of large datasets. Experience in agile development is must Understanding of Oracle Database and UNIX/VMC systems is a must Job Description: ETL Testing Automation Strong experience in ETL testing and automation. Strong proficiency in SQL and experience with relational databases (e.g., Oracle, MySQL, PostgreSQL, SQL Server). Experience with ETL tools and technologies (e.g., Informatica, Talend, DataStage, Apache Spark). Hands-on experience in developing and maintaining test automation frameworks. Proficiency in at least one programming language (e.g., Python, Java). Experience with test automation tools (e.g., Selenium, PyTest, JUnit). Strong understanding of data warehousing concepts and methodologies. 
Experience with CI/CD pipelines and version control systems (e.g., Git). Experience with cloud-based data warehouses like Snowflake, Redshift, BigQuery is a plus. Experience with data quality tools is a plus. REGISTRATION PROCESS: The Candidate ID & SHL Test(AMCAT ID) is mandatory to attend the interview. Please follow the below instructions to successfully complete the registration. (Talents without registration & assessment will not be allowed for the Interview). Candidate ID Registration process: STEP 1: Visit: https://career.infosys.com/joblist STEP 2: Click on "Register" and provide the required details and submit. STEP 3: Once submitted, Your Candidate ID(100XXXXXXXX) will be generated. STEP 4: The candidate ID will be shared to the registered Email ID. SHL Test(AMCAT ID) Registration process: This assessment is proctored, and talent gets evaluated on Basic analytics, English Comprehension and writex (email writing). STEP 1: Visit: https://apc01.safelinks.protection.outlook.com/?url=https%3A%2F%2Fautologin-talentcentral.shl.com%2F%3Flink%3Dhttps%3A%2F%2Famcatglobal.aspiringminds.com%2F%3Fdata%3DJTdCJTIybG9naW4lMjIlM0ElN0IlMjJsYW5ndWFnZSUyMiUzQSUyMmVuLVVTJTIyJTJDJTIyaXNBdXRvbG9naW4lMjIlM0ExJTJDJTIycGFydG5lcklkJTIyJTNBJTIyNDE4MjQlMjIlMkMlMjJhdXRoa2V5JTIyJTNBJTIyWm1abFpUazFPV1JsTnpJeU1HVTFObU5qWWpRNU5HWTFOVEU1Wm1JeE16TSUzRCUyMiUyQyUyMnVzZXJuYW1lJTIyJTNBJTIydXNlcm5hbWVfc3E5QmgxSWI5NEVmQkkzN2UlMjIlMkMlMjJwYXNzd29yZCUyMiUzQSUyMnBhc3N3b3JkJTIyJTJDJTIycmV0dXJuVXJsJTIyJTNBJTIyJTIyJTdEJTJDJTIycmVnaW9uJTIyJTNBJTIyVVMlMjIlN0Q%3D%26apn%3Dcom.shl.talentcentral%26ibi%3Dcom.shl.talentcentral%26isi%3D1551117793%26efr%3D1&data=05%7C02%7Comar.muqtar%40infosys.com%7Ca7ffe71a4fe4404f3dac08dca01c0bb3%7C63ce7d592f3e42cda8ccbe764cff5eb6%7C0%7C0%7C638561289526257677%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C0%7C%7C%7C&sdata=s28G3ArC9nR5S7J4j%2FV1ZujEnmYCbysbYke41r5svPw%3D&reserved=0 STEP 2: Click on "Start new test" and follow the instructions to complete the assessment. STEP 3: Once completed, please make a note of the AMCAT ID( Access you Amcat id by clicking 3 dots on top right corner of screen). NOTE: During registration, you'll be asked to provide the following information: Personal Details: Name, Email Address, Mobile Number, PAN number. Availability: Acknowledgement of work schedule preferences (Shifts, Work from Office, Rotational Weekends, 24/7 availability, Transport Boundary) and reason for career change. Employment Details: Current notice period and total annual compensation (CTC) in the format 390000 - 4 LPA (example). Candidate Information: 10-digit candidate ID starting with 100XXXXXXX, Gender, Source (e.g., Vendor name, Naukri/LinkedIn/Found it, or Direct), and Location Interview Mode: Walk-in Attempt all questions in the SHL Assessment app. The assessment is proctored, so choose a quiet environment. Use a headset or Bluetooth headphones for clear communication. A passing score is required for further interview rounds. 5 or above toggles, multi face detected, face not detected, or any malpractice will be considered rejected Once you've finished, submit the assessment and make a note of the AMCAT ID (15 Digit) used for the assessment. Documents to Carry: Please have a note of Candidate ID & AMCAT ID along with registered Email ID. Please do not carry laptops/cameras to the venue as these will not be allowed due to security restrictions. Please carry 2 set of updated Resume/CV (Hard Copy). Please carry original ID proof for security clearance. 
Please carry an individual headphone or Bluetooth headset for the interview. Pointers to note: Please do not carry laptops/cameras to the venue as these will not be allowed due to security restrictions. An original Government ID card is a must for security clearance. Regards, Infosys BPM Recruitment team.
Posted 2 weeks ago
10.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
As the hands-on Staff Database Engineer for Clearwater Analytics, you will play a crucial role in designing, developing, and maintaining data systems and architectures to collect, store, process, and analyse large volumes of data. You will build data pipelines, optimize data models, and ensure data quality and security. You will collaborate with cross-functional teams to meet business objectives and stay updated with emerging technologies and industry best practices.

Responsibilities and Duties:
- Extensive experience with Snowflake, including proficiency in the SnowSQL CLI, Snowpipe, creating custom functions, developing Snowflake stored procedures, schema modeling, and performance tuning.
- In-depth expertise in Snowflake data modeling and ELT processes using Snowflake SQL, as well as implementing complex stored procedures and leveraging Snowflake Task orchestration for advanced data workflows (a hedged ELT sketch follows this listing).
- Strong background in DBT CLI, DBT Cloud, and GitHub version control, with the ability to design and develop complex SQL processes and ELT pipelines.
- Take a hands-on approach in designing, developing, and supporting low-latency data pipelines, prioritizing data quality, accuracy, reliability, and efficiency.
- Advanced SQL knowledge and hands-on experience in complex query writing using analytical functions; troubleshooting, problem-solving, and performance tuning of SQL queries accessing the data warehouse; strong knowledge of stored procedures.
- Collaborate closely with cross-functional teams, including enterprise architects, business analysts, product owners, and solution architects, actively engaging in gathering comprehensive business requirements and translating them into scalable data cloud and Enterprise Data Warehouse (EDW) solutions that precisely align with organizational needs.
- Play a hands-on role in conducting data modeling, ETL (Extract, Transform, Load) development, and data integration processes across all Snowflake environments.
- Develop and implement comprehensive data governance policies and procedures to fortify the accuracy, security, and compliance of Snowflake data assets across all environments.
- Independently conceptualize and develop innovative ETL and reporting solutions, driving them through to successful completion.
- Create comprehensive documentation for database objects and structures to ensure clarity and consistency.
- Troubleshoot and resolve production support issues post-deployment, providing effective solutions as needed.
- Devise and sustain comprehensive data dictionaries, metadata repositories, and documentation to bolster governance and facilitate usage across all Snowflake environments.
- Remain abreast of the latest industry trends and best practices, actively sharing knowledge and encouraging the team to continuously enhance their skills.
- Continuously monitor the performance and usage metrics of the Snowflake database and Enterprise Data Warehouse (EDW), conducting frequent performance reviews and implementing targeted optimization efforts.

Skills Required:
- Familiarity with big data and data warehouse architecture and design principles.
- Strong understanding of database management systems, data modeling techniques, data profiling, and data cleansing techniques.
- Expertise in Snowflake architecture, administration, and performance tuning.
- Experience with Snowflake security configurations and access controls.
- Knowledge of Snowflake's data sharing and replication features.
- Proficiency in SQL for data querying, manipulation, and analysis.
- Experience with ETL (Extract, Transform, Load) tools and processes.
- Ability to translate business requirements into scalable EDW solutions.
- Streaming technologies such as AWS Kinesis.

Qualifications:
- Bachelor's degree in Computer Science, Information Systems, or a related field.
- 10 years of hands-on experience in data warehousing, ETL development, and data modeling, with a strong track record of designing and implementing scalable Enterprise Data Warehouse (EDW) solutions.
- 3+ years of extensive hands-on experience with Snowflake, demonstrating expertise in leveraging its capabilities.
- Proficiency in SQL and deep knowledge of various database management systems (e.g., Snowflake, Azure, Redshift, Teradata, Oracle, SQL Server).
- Experience utilizing ETL tools and technologies such as DBT, Informatica, SSIS, Talend.
- Expertise in data modeling techniques, with a focus on dimensional modeling and star schema design.
- Familiarity with data governance principles and adeptness in implementing security best practices.
- Excellent problem-solving and troubleshooting abilities, coupled with a proven track record of diagnosing and resolving complex database issues.
- Demonstrated leadership and team management skills, with the ability to lead by example and inspire others to strive for excellence.
- Experience in the Finance industry is a significant advantage.
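As a rough illustration of the Snowflake ELT work described in this listing, the sketch below runs an incremental MERGE through the snowflake-connector-python package. The account details, schemas, and table names are placeholders, and in practice this step would more likely live in a Snowflake task, stored procedure, or DBT model than in ad-hoc Python.

```python
import snowflake.connector

# Placeholder connection details -- real pipelines pull these from a secrets manager
# and use key-pair authentication rather than a hard-coded password.
conn = snowflake.connector.connect(
    account="my_account",
    user="etl_user",
    password="***",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="STAGING",
)

# Incremental upsert from a staging table into a dimension (illustrative names).
merge_sql = """
MERGE INTO ANALYTICS.CORE.DIM_CUSTOMER AS tgt
USING ANALYTICS.STAGING.CUSTOMER_DELTA AS src
  ON tgt.customer_id = src.customer_id
WHEN MATCHED THEN UPDATE SET tgt.email = src.email, tgt.updated_at = src.updated_at
WHEN NOT MATCHED THEN INSERT (customer_id, email, updated_at)
  VALUES (src.customer_id, src.email, src.updated_at)
"""

try:
    cur = conn.cursor()
    cur.execute(merge_sql)
    print("rows affected:", cur.rowcount)
finally:
    conn.close()
```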
Posted 2 weeks ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Data Engineer – 6 months to 2 years of experience; 2023–2024 graduates ONLY

Company Description
Visa is a world leader in payments and technology, with over 259 billion payments transactions flowing safely between consumers, merchants, financial institutions, and government entities in more than 200 countries and territories each year. Our mission is to connect the world through the most innovative, convenient, reliable, and secure payments network, enabling individuals, businesses, and economies to thrive while driven by a common purpose – to uplift everyone, everywhere by being the best way to pay and be paid. Make an impact with a purpose-driven industry leader. Join us today and experience Life at Visa.

Job Description
We are seeking a Data Engineer with a strong background in data engineering. This role involves managing system requirements, design, development, integration, quality assurance, implementation, and maintenance of corporate applications.
- Work with product owners, business stakeholders, and internal teams to understand business requirements and desired business outcomes.
- Assist in scoping and designing analytic data assets, implementing modelled attributes, and contributing to brainstorming sessions.
- Build and maintain a robust data engineering process to develop and implement self-serve data and tools for Visa's product management teams and data scientists.
- Find opportunities to create, automate, and scale repeatable analyses or build self-service tools for business users.
- Execute data engineering projects ranging from small to large, either individually or as part of a project team.
- Set the benchmark in the team for good data engineering practices and assist leads and architects in solution design.
- Exhibit a passion for optimizing existing solutions and making incremental improvements.

This is a hybrid position. Expectation of days in office will be confirmed by your hiring manager.

Qualifications
Basic Qualifications
- Bachelor's degree, OR 3+ years of relevant work experience.
Preferred Qualifications
- Minimum of 1 year's experience in building data engineering pipelines.
- Design and coding skills with big data technologies like Hadoop, Spark, Hive, and MapReduce.
- Mastery of PySpark or Scala.
- Expertise in a programming language such as Java or Python; knowledge of OOP concepts like inheritance and polymorphism and of implementing design patterns is needed.
- Experience with cloud platforms like AWS, GCP, or Azure is good to have.
- Excellent problem-solving skills and ability to think critically.
- Experience with any one ETL tool, such as Informatica, SSIS, Pentaho, or Azure Data Factory.
- Knowledge of successful design and development of data-driven real-time and batch systems.
- Experience in data warehousing and expertise in any one RDBMS, such as SQL Server, Oracle, etc.
- Nice to have: reporting skills in Power BI/Tableau/QlikView.
- Strong understanding of cloud architecture and service offerings, including compute, storage, databases, networking, AI, and ML.
- Passionate about delivering zero-defect code that meets or exceeds the proposed defect SLA, with a high sense of accountability for quality and timelines on deliverables.
- Experience developing as part of an Agile/Scrum team is preferred; hands-on with Jira.
- Understanding of basic CI/CD functionality and Git concepts is a must.

Additional Information
Visa is an EEO Employer. 
Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability or protected veteran status. Visa will also consider for employment qualified applicants with criminal histories in a manner consistent with EEOC guidelines and applicable local law.
Posted 2 weeks ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Role: PySpark Developer
Locations: Hyderabad & Bangalore
Work Mode: Hybrid
Interview Mode: Virtual (2 Rounds)
Type: Contract-to-Hire (C2H)

Job Summary
We are looking for a skilled PySpark Developer with hands-on experience in building scalable data pipelines and processing large datasets. The ideal candidate will have deep expertise in Apache Spark, Python, and working with modern data engineering tools in cloud environments such as AWS.

Key Skills & Responsibilities
- Strong expertise in PySpark and Apache Spark for batch and real-time data processing (a minimal batch-pipeline sketch follows this listing).
- Experience in designing and implementing ETL pipelines, including data ingestion, transformation, and validation.
- Proficiency in Python for scripting, automation, and building reusable components.
- Hands-on experience with scheduling tools like Airflow or Control-M to orchestrate workflows.
- Familiarity with the AWS ecosystem, especially S3 and related file system operations.
- Strong understanding of Unix/Linux environments and shell scripting.
- Experience with Hadoop, Hive, and platforms like Cloudera or Hortonworks.
- Ability to handle CDC (Change Data Capture) operations on large datasets.
- Experience in performance tuning, optimizing Spark jobs, and troubleshooting.
- Strong knowledge of data modeling, data validation, and writing unit test cases.
- Exposure to real-time and batch integration with downstream/upstream systems.
- Working knowledge of Jupyter Notebook, Zeppelin, or PyCharm for development and debugging.
- Understanding of Agile methodologies, with experience in CI/CD tools (e.g., Jenkins, Git).

Preferred Skills
- Experience in building or integrating APIs for data provisioning.
- Exposure to ETL or reporting tools such as Informatica, Tableau, Jasper, or QlikView.
- Familiarity with AI/ML model development using PySpark in cloud environments.

Skills: PySpark, Apache Spark, Python, SQL, AWS (S3), Airflow/Control-M, Unix/Linux, shell scripting, Hive, Hadoop, Cloudera, Hortonworks, CDC, ETL pipelines, data modeling, data validation, performance tuning, unit test cases, batch and real-time integration, API integration, CI/CD (Jenkins, Git), Agile methodologies, Jupyter Notebook, Zeppelin, PyCharm, Informatica, Tableau, Jasper, QlikView, AI/ML model development
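A minimal PySpark sketch of the batch ingest–transform–validate flow this listing describes, using an inline dataset and a simple "latest record per key" CDC-style dedup; the schema, rules, and output path are assumptions for illustration only.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("orders_batch_demo").getOrCreate()

# Tiny inline dataset standing in for files read from S3/HDFS in a real pipeline.
orders = spark.createDataFrame(
    [(1, "2024-06-01 10:00:00", 120.0),
     (1, "2024-06-02 09:30:00", 125.0),   # later version of the same order (CDC-style update)
     (2, "2024-06-01 11:15:00", 80.0)],
    ["order_id", "updated_at", "amount"],
)

# Keep only the latest record per key -- a simple CDC/dedup rule.
latest = Window.partitionBy("order_id").orderBy(F.col("updated_at").desc())
curated = (orders
           .withColumn("rn", F.row_number().over(latest))
           .filter(F.col("rn") == 1)
           .drop("rn"))

# Lightweight validation before the curated data would be written downstream.
assert curated.filter(F.col("amount").isNull()).count() == 0, "null amounts found"
curated.show()
# In a real job: curated.write.mode("overwrite").parquet("s3a://<bucket>/curated/orders/")
```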
Posted 2 weeks ago
3.0 years
0 Lacs
India
On-site
Title: Data Support Engineer
Duration: 6-month contract
Skills: Data engineering and support activities, SQL, Python

Mandatory Skills:
- 3+ years of experience in data engineering and support activities.
- Strong expertise in SQL (advanced queries, optimization, indexing).
- Proficiency in Python and shell scripting for automation (a small monitoring sketch follows this listing).
- Knowledge of data warehousing concepts (star schema, partitioning, etc.).
- Good understanding of monitoring tools.
- Good to have: technical knowledge of ETL tools (Informatica) and cloud data platforms (Snowflake, Databricks).

Desired Skills:
- Strong analytical and problem-solving abilities.
- Excellent communication and collaboration skills.
- Ability to work in a fast-paced environment to meet project timelines.
- Bachelor's degree in Computer Science, IT, or a related field.
- Relevant certifications (Azure Data Engineer, etc.) will be a plus.
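For illustration of the support and monitoring automation mentioned above, here is a small Python sketch of a table-freshness check, with an in-memory SQLite database standing in for the real warehouse connection; the table name, column, and alerting hook are assumptions.

```python
import sqlite3
from datetime import date

def check_freshness(con, table: str, date_column: str) -> bool:
    """Return True if the table already contains rows for today's load date."""
    sql = f"SELECT COUNT(*) FROM {table} WHERE {date_column} = ?"
    count = con.execute(sql, (date.today().isoformat(),)).fetchone()[0]
    return count > 0

# Demo setup: an in-memory database standing in for the warehouse.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales_daily (load_date TEXT, amount REAL)")
con.execute("INSERT INTO sales_daily VALUES (?, ?)", (date.today().isoformat(), 42.0))

if not check_freshness(con, "sales_daily", "load_date"):
    print("ALERT: sales_daily has no rows for today")   # hook for email/pager goes here
else:
    print("sales_daily load looks healthy")
```

In practice a check like this would run from an existing scheduler and feed the team's monitoring or alerting tool rather than printing to the console.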
Posted 2 weeks ago
5.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Greetings, We have immediate opportunity for ETL Developer – 5-7 years Synechron– Mumbai Job Role: ETL Developer Job Location: Mumbai About Synechron We began life in 2001 as a small, self-funded team of technology specialists. Since then, we’ve grown our organization to 14,500+ people, across 58 offices, in 21 countries, in key global markets. Innovative tech solutions for business We're now a leading global digital consulting firm, providing innovative technology solutions for business. As a trusted partner, we're always at the forefront of change as we lead digital optimization and modernization journeys for our clients. Customized end-to-end solutions Our expertise in AI, Consulting, Data, Digital, Cloud & DevOps and Software Engineering, delivers customized, end-to-end solutions that drive business value and growth. Job Summary: Responsibilities: • Perform business/data analysis to investigate into various business problems and propose the solution working closely with clients and team • Analyze Report Requirements, perform gap analysis, understand existing systems and data flows, model the required changes, source, enrich and publish to Axiom for final reporting • Detailed requirements analysis with End-Users to understand business (reporting) needs • Performance and scalability optimization to support large scale deployments • Define new features in conjunction with product management, and provide specifications • Ensure quality and completeness of the final product through unit testing, documentation, and maintenance as appropriate. Technical Skills • 3 years of Experience in Database development (SQL/Data Modelling/Stored Procedures) • 3 years of ETL Programming/Data Engineering (Tools like Informatica not necessary) • Job scheduler tools: Autosys • UNIX Preferred Skills • Axiom CV10 experience • Understanding of U.S. risk and regulatory requirements • DevOps Tooling • Experience with Java/Scala/Python and Spark Framework • Exposure to BFSI and finance industry Non-Technical Skills: • High interest in Finance business & data analysis required. Interest in Accounting preferred. • Eager to learn business in Finance such as financial products, regulatory requirement, and accounting, working closely with internal clients • Ownership/Accountability and desire to take on an expanded role over time. • Ability to multi-task and work against deadlines/priorities • Eager to work with new technologies and apply them towards enterprise-level data solutions • Capable of working with a global team and thriving in a team development environment Advanced problem-solving skills and the ability to deal with real world business issues. For more information on the company, please visit our website or LinkedIn community. If you find this this opportunity interesting kindly share your updated profile on Mandar.Jadhav@synechron.com Current CTC- Expected CTC- Notice period- Current Location- Ready to relocate to Mumbai- If you had gone through any interviews in Synechron before? If Yes when Regards, Mandar Jadhav Mandar.Jadhav@synechron.com Show more Show less
Posted 2 weeks ago
0 years
0 Lacs
Greater Delhi Area
On-site
About the Company – R Systems International Limited (https://www.rsystems.com/about-us/factsheet/)
R Systems is a Blackstone portfolio company founded in 1993, headquartered at El Dorado Hills, California, United States (USA), with offshore delivery centers located at Noida, Pune, and Chennai. R Systems International Limited is listed publicly on the NSE and BSE, with a current share price of around Rs 500+. It is a leading digital product engineering company that designs and builds next-gen products, platforms, and digital experiences, empowering clients across various industries to overcome digital barriers, put their customers first, and achieve higher revenues as well as operational efficiency. We constantly innovate and bring fresh perspectives to harness the power of the latest technologies like cloud, automation, AI, ML, analytics, and mixed reality.

Role – Data Engineer

Responsibilities
- Strong expertise in ETL processes and tools like Informatica, with experience in designing and maintaining data pipelines.
- Proficiency in PySpark for big data processing and distributed computing.
- Advanced SQL skills and experience with PL/SQL for managing and querying large datasets in relational databases.
- Exposure to cloud platforms like AWS and GCP for data storage, data processing, and deployment of ETL solutions.
- Experience in data integration, transformation, and loading between different systems and databases.
- Familiarity with data modeling, data warehousing, and database performance tuning.
- Understanding of Agile methodologies and participation in sprint-based development cycles.

Qualifications – BE/B.Tech
Posted 2 weeks ago
8.0 years
0 Lacs
Hyderabad, Telangana
On-site
Senior ETL and Backend Developer (Salesforce) Hyderabad, India; Ahmedabad, India Information Technology 316835 Job Description About The Role: Grade Level (for internal use): 10 Title : Senior ETL and Backend Developer (Salesforce) Job Location : Hyderabad, Ahmedabad, Gurgaon, Virtual-India The Team: We are seeking a skilled Senior ETL and Backend Developer with extensive experience in Informatica and Salesforce. The ideal candidate will be responsible for designing, developing, and maintaining ETL processes and backend systems to ensure seamless data integration and management. The team works in a challenging environment that gives ample opportunities to use innovative ideas to solve complex problems. You will have the opportunity every day to work with people from a wide variety of backgrounds and will be able to develop a close team dynamic with coworkers from around the globe. The Impact: You will be making significant contribution in building solutions for the Web applications using new front-end technologies & Micro services. The work you do will deliver products to build solutions for S&P Global Commodity Insights customers. Responsibilities: ETL Development: Design, develop, and maintain ETL processes using Informatica PowerCenter and other ETL tools. Data Integration: Integrate data from various sources, including databases, APIs, flat files, and cloud storage, into data warehouses or data lakes. Backend Development: Develop and maintain backend systems using relevant programming languages and frameworks. Salesforce Integration: Implement and manage data integration between Salesforce and other systems. Performance Tuning: Optimize ETL processes and backend systems for speed and efficiency. Data Quality: Ensure data quality and integrity through rigorous testing and validation. Monitoring and Maintenance: Continuously monitor ETL processes and backend systems for errors or performance issues and make necessary adjustments. Collaboration: Work closely with data architects, data analysts, and business stakeholders to understand data requirements and deliver solutions. Qualifications: Basic Qualifications: Bachelor's /Master’s Degree in Computer Science, Information Systems or equivalent. A minimum of 8+ years of experience in software engineering & Architecture. A minimum 5+ years of experience in ETL development, backend development, and data integration. A minimum of 3+ years of Salesforce development, administration/Integration. Proficiency in Informatica PowerCenter and other ETL tools. Strong knowledge of SQL and database management systems (e.g., Oracle, SQL Server). Experience with Salesforce integration and administration. Proficiency in backend development languages (e.g., Java, Python, C#). Familiarity with cloud platforms (e.g., AWS, Azure) is a plus. Excellent problem-solving skills and attention to detail. Ability to work independently and as part of a team. Nice to have – GenAI, Java, Spring boot, Knockout JS, requireJS, Node.js, Lodash, Typescript, VSTest/ MSTest/ nUnit. Preferred Qualifications: Proficient with software development lifecycle (SDLC) methodologies like SAFe, Agile, Test- driven development. Experience with other ETL tools and data integration platforms. Informatica Certified Professional Salesforce Certified Administrator or Developer Knowledge of back-end technologies such as C#/.NET, Java or Python. Excellent problem solving, analytical and technical troubleshooting skills. Able to work well individually and with a team. 
Good work ethic, self-starter, and results oriented. Excellent communication skills are essential, with strong verbal and writing proficiencies. About S&P Global Commodity Insights At S&P Global Commodity Insights, our complete view of global energy and commodities markets enables our customers to make decisions with conviction and create long-term, sustainable value. We’re a trusted connector that brings together thought leaders, market participants, governments, and regulators to co-create solutions that lead to progress. Vital to navigating Energy Transition, S&P Global Commodity Insights’ coverage includes oil and gas, power, chemicals, metals, agriculture and shipping. S&P Global Commodity Insights is a division of S&P Global (NYSE: SPGI). S&P Global is the world’s foremost provider of credit ratings, benchmarks, analytics and workflow solutions in the global capital, commodity and automotive markets. With every one of our offerings, we help many of the world’s leading organizations navigate the economic landscape so they can plan for tomorrow, today. For more information, visit http://www.spglobal.com/commodity-insights. What’s In It For You? Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values: Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include: Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. 
Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. - Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf - 20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning) Job ID: 316835 Posted On: 2025-06-03 Location: Hyderabad, Telangana, India
Posted 2 weeks ago
0.0 - 4.0 years
0 Lacs
Noida H.O , Noida, Uttar Pradesh
On-site
About the Role: We are seeking talented and detail-oriented Data Engineers with expertise in Informatica MDM to join our fast-growing data engineering team. Depending on your experience, you’ll join as a Software Engineer or Senior Software Engineer, contributing to the design, development, and maintenance of enterprise data management solutions that support our business objectives. As a key player, you will be responsible for building reliable data pipelines, working with master data management, and ensuring data quality, governance, and integration across systems. Responsibilities: Design, develop, and implement data pipelines using ETL tools like Informatica PowerCenter, IICS, etc., and MDM solutions using Informatica MDM . Develop and maintain batch and real-time data integration workflows. Collaborate with data architects, business analysts, and stakeholders to understand data requirements. Perform data profiling, data quality assessments, and master data matching/merging. Implement governance, stewardship, and metadata management practices. Optimize the performance of Informatica MDM Hub, IDD, and associated components. Write complex SQL queries and stored procedures as needed. Senior Software Engineer – Additional Responsibilities: Lead design discussions and code reviews; mentor junior engineers. Architect scalable data integration solutions using Informatica and complementary tools. Drive adoption of best practices in data modeling, governance, and engineering. Work closely with cross-functional teams to shape the data strategy. Required Qualifications: Software Engineer: Bachelor’s degree in Computer Science, Information Systems, or related field. 2–4 years of experience with Informatica MDM (Customer 360, Business Entity Services, Match/Merge rules). Strong SQL and data modeling skills. Familiarity with ETL concepts, REST APIs , and data integration tools. Understanding of data governance and quality frameworks. Senior Software Engineer: Bachelor’s or Master’s in Computer Science, Data Engineering, or related field. 4+ years of experience in Informatica MDM, with at least 2 years in a lead role. Proven track record of designing scalable MDM solutions in large-scale environments. Strong leadership, communication, and stakeholder management skills. Hands-on experience with data lakes, cloud platforms (AWS, Azure, or GCP) , and big data tools is a plus. Preferred Skills (Nice to Have): Experience with other Informatica products (IDQ, PowerCenter). Exposure to cloud MDM platforms or cloud data integration tools. Agile/Scrum development experience. Knowledge of industry-standard data security and compliance practices. Job Type: Full-time Pay: Up to ₹1,853,040.32 per year Benefits: Flexible schedule Health insurance Life insurance Provident Fund Schedule: Day shift Supplemental Pay: Performance bonus Yearly bonus Application Question(s): What is your notice period? Education: Bachelor's (Preferred) Experience: Informatica: 4 years (Preferred) Location: Noida H.O, Noida, Uttar Pradesh (Preferred) Work Location: In person
Posted 2 weeks ago
7.0 years
0 Lacs
Greater Kolkata Area
On-site
About The Company
Veersa is a healthtech company that leverages emerging technology and data science to solve business problems in the US healthcare industry. Veersa has established a niche in serving small and medium entities in the US healthcare space through its tech frameworks, platforms, and tech accelerators. Veersa is known for providing innovative solutions using technology and data science to its client base and is the preferred innovation partner to its clients. Veersa's rich technology expertise manifests in the various tech accelerators and frameworks developed in-house to assist in rapid solutions delivery and implementations. Its end-to-end data ingestion, curation, transformation, and augmentation framework has helped several clients quickly derive business insights and monetize data assets. Veersa teams work across all emerging technology areas such as AI/ML, IoT, and Blockchain, using tech stacks such as MEAN, MERN, Python, GoLang, and ROR, backends such as Java Spring Boot and NodeJS, and databases such as PostgreSQL, MS SQL, MySQL, and Oracle, on AWS and Azure cloud using serverless architecture. Veersa has two major business lines: Veersalabs, an in-house R&D and product development platform, and Veersa tech consulting, which delivers technical solutions for clients. Veersa's customer base includes large US healthcare software vendors, pharmacy chains, payers, providers, and hospital chains. Though Veersa's focus geography is North America, Veersa also provides product engineering expertise to a few clients in Australia and Singapore.

About The Role
We are seeking a highly skilled and experienced Senior/Lead Data Engineer to join our growing Data Engineering Team. In this critical role, you will design, architect, and develop cutting-edge multi-tenant SaaS data solutions hosted on Azure Cloud. Your work will focus on delivering robust, scalable, and high-performance data pipelines and integrations that support our enterprise provider and payer data ecosystem. This role is ideal for someone with deep experience in ETL/ELT processes, data warehousing principles, and real-time and batch data integrations. As a senior member of the team, you will also be expected to mentor and guide junior engineers, help define best practices, and contribute to the overall data strategy.

Key Responsibilities
- Architect and implement scalable data integration and data pipeline solutions using Azure cloud services.
- Design, develop, and maintain ETL/ELT processes, including data extraction, transformation, loading, and quality checks (a sketch of the common watermark-based incremental load pattern follows this listing).
- Collaborate with business and technical stakeholders to understand data requirements and translate them into technical solutions.
- Develop and manage data flows, data mappings, and data quality & validation rules across multiple tenants and systems.
- Implement best practices for data modeling, metadata management, and data governance.
- Configure, maintain, and monitor integration jobs to ensure high availability and performance.
- Lead code reviews, mentor data engineers, and help shape engineering culture and standards.
- Stay current with emerging technologies and recommend tools or processes to improve the team's efficiency.

Qualifications
- Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
- 7+ years of experience in data engineering, with a strong focus on Azure-based solutions.
- Proven experience in designing and implementing real-time and batch data integrations.
- Hands-on experience with Azure Data Factory, Azure Data Lake, Azure Synapse, Databricks, or similar technologies.
- Strong understanding of data warehousing principles, ETL/ELT methodologies, and data pipeline architecture.
- Proficiency in SQL, Python, or Scala for data processing and pipeline development.
- Familiarity with data quality, metadata management, and data validation frameworks.
- Strong problem-solving skills and the ability to communicate complex technical concepts clearly.

Preferred Qualifications
- Experience with multi-tenant SaaS data solutions.
- Background in healthcare data, especially provider and payer ecosystems.
- Familiarity with DevOps practices, CI/CD pipelines, and version control systems (e.g., Git).
- Experience mentoring and coaching other engineers in technical and architectural decision-making.

(ref:hirist.tech)
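A conceptual sketch of the watermark-based incremental load pattern that batch integrations like the ones above commonly rely on. In Azure Data Factory this is usually expressed as a Lookup plus Copy activity pair; the plain-Python version below only illustrates the idea, and the table names and watermark store are assumptions.

```python
import sqlite3

def load_increment(source, target, watermark_store):
    """Copy only rows newer than the stored watermark, then advance the watermark."""
    last = watermark_store.get("orders", "1970-01-01T00:00:00")
    rows = source.execute(
        "SELECT order_id, amount, last_modified FROM orders WHERE last_modified > ?",
        (last,),
    ).fetchall()
    target.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
    if rows:
        watermark_store["orders"] = max(r[2] for r in rows)
    return len(rows)

# Two in-memory databases stand in for the real source system and target warehouse.
source = sqlite3.connect(":memory:")
target = sqlite3.connect(":memory:")
for con in (source, target):
    con.execute("CREATE TABLE orders (order_id INTEGER, amount REAL, last_modified TEXT)")
source.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, 10.0, "2024-06-01T10:00:00"), (2, 20.0, "2024-06-02T09:30:00")],
)

watermarks = {}  # a real pipeline persists this, e.g. in a control table
print("loaded", load_increment(source, target, watermarks), "rows; watermark:", watermarks)
```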
Posted 2 weeks ago
10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description Ford Credit's Tech Team in India is actively seeking a highly skilled and experienced Senior Test Data Management Engineer to join our team. In this crucial role, you will be responsible for defining, implementing, and managing test data strategies and solutions specifically for our complex financial applications. Leveraging your deep expertise with IBM Optim, you will ensure that our testing environments are populated with realistic, compliant, and high-quality data, enabling robust and efficient testing across various phases. You'll play a key role in safeguarding sensitive financial data through effective masking and subsetting techniques while supporting our development and QA teams This is a senior-level position requiring deep technical expertise, strategic thinking, and the ability to mentor others and drive TDM excellence throughout the organization. Responsibilities Test Data Management Engineer - Role & Responsibilities : TDM Strategy & Design: Define, develop, and implement comprehensive Test Data Management (TDM) strategies, frameworks, and processes tailored for financial applications. Analyze complex application data models and relationships to design effective data subsetting, masking, and generation solutions using IBM Optim. Collaborate with stakeholders (QA, Development, Business Analysts, Compliance) to understand test data requirements and translate them into technical TDM solutions. TDM Solution Implementation & Management: Design, configure, and execute TDM processes using relevant TDM tools (subsetting, masking, generation, synthetic data creation, data refresh). Implement and manage data masking/obfuscation techniques to comply with data privacy regulations (e.g., GDPR, CCPA, etc.) and internal policies for sensitive financial data. Manage the lifecycle of test data environments, including planning refresh cycles, executing data provisioning requests, and managing data retention. Troubleshoot and resolve complex test data-related issues across different environments. Data Analysis & Provisioning: Perform in-depth data analysis to identify critical data elements, sensitive data, and complex data relationships required for various testing cycles (functional, performance, UAT). Provision timely and relevant test data sets to different testing environments based on project needs. Troubleshoot and resolve test data-related issues, ensuring data integrity and quality. Compliance & Security: Ensure all TDM activities adhere strictly to internal data governance policies and external financial regulations regarding data privacy and security. Work closely with Compliance, Security, and Audit teams to validate TDM processes and controls. Performance & Automation: Optimize IBM Optim processes and underlying database interactions for performance and efficiency. Identify opportunities for automation in test data provisioning and management workflows. Collaboration & Business Alignment: Establish and promote TDM best practices, standards, and guidelines across the organization. Create and maintain detailed documentation for TDM processes, tools, and environments. Work closely with Database Administrators (DBAs) to manage test data storage, performance, and access. Collaborate with DevOps engineers to integrate TDM processes into CI/CD pipelines where applicable. Collaborate closely with Product Owners, Business Analysts, Software Engineers, to understand complex financial requirements, define precise testing criteria, and prioritize automation efforts. 
Qualifications
Must Have: 10+ years of overall experience in IT, with a strong focus on Quality Assurance, Data Management, or Software Engineering. 6+ years of dedicated experience in Test Data Management (TDM). Proven experience implementing and managing TDM solutions for complex enterprise applications, preferably in the financial services industry. Strong hands-on experience with industry-standard TDM tools like IBM Optim and open-source tools. Experience working in highly regulated environments with a strong understanding of data privacy and compliance challenges in finance. Strong SQL skills and experience working with various relational databases (e.g., Oracle, SQL Server, DB2, PostgreSQL, BigQuery, etc.). Solid understanding of data modeling concepts and database structures. Proficiency in data masking, subsetting, and synthetic data generation techniques. Experience with scripting languages (e.g., Python, Shell, Perl) for automation and data manipulation. Experience with RBAC, and experience working with infrastructure teams to achieve CI/CD automation to produce masked test data from production on demand. Familiarity with the Linux/Unix command line. Solid understanding of data refresh processes. Solid understanding of financial industry data structures, workflows, and testing challenges (e.g., trading, payments, banking, accounting, regulatory reporting). In-depth knowledge of relevant data privacy regulations (e.g., GDPR, CCPA, etc.) and their impact on test data handling. Excellent analytical and problem-solving skills with the ability to tackle complex data challenges. Strong communication and interpersonal skills, with the ability to explain technical concepts to non-technical stakeholders. Must have experience working with US public companies, with a strong understanding of security processes and how to apply them to Test Data Management (TDM) to ensure compliance with regulations. Ability to work effectively both independently and as a leader or contributor within a team. Proven ability to mentor junior team members and drive adoption of best practices.
Nice to Have: Experience with specific TDM tools such as Informatica TDM, Alteryx, PyETL, Deequ, Google DVT, etc. Experience with data virtualization tools. Experience with AI for synthetic data generation. Experience with cloud platforms (AWS, Azure, GCP) and cloud database services. Experience integrating TDM processes into CI/CD pipelines. Familiarity with performance testing concepts and data needs. Relevant certifications in TDM, databases, or cloud technologies.
Preferred Qualification: Bachelor's Degree in Computer Science, Engineering or equivalent work experience. Minimum of 6+ years as a Test Data Management Engineer.
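As a rough illustration of the masking and subsetting work this posting describes, the sketch below shows a minimal, standard-library Python approach to deterministic masking with referential subsetting. It is not IBM Optim or a Ford Credit process; the table, column, and segment names are invented for the example.

```python
# Illustrative only: a stdlib-based sketch of deterministic masking plus
# referential subsetting. This is NOT IBM Optim; the "customers"/"accounts"
# structures, the "ssn" column, and the segment filter are hypothetical.
import hashlib


def mask_value(value: str, salt: str = "tdm-demo") -> str:
    """Deterministically mask a sensitive value so joins still line up across tables."""
    digest = hashlib.sha256((salt + value).encode()).hexdigest()
    return f"MASKED-{digest[:10]}"


def subset_and_mask(customers, accounts, keep_segment="RETAIL"):
    """Keep one customer segment and only the accounts that reference those customers."""
    kept = [dict(c, ssn=mask_value(c["ssn"])) for c in customers if c["segment"] == keep_segment]
    kept_ids = {c["customer_id"] for c in kept}
    kept_accounts = [a for a in accounts if a["customer_id"] in kept_ids]
    return kept, kept_accounts


if __name__ == "__main__":
    customers = [
        {"customer_id": 1, "segment": "RETAIL", "ssn": "111-22-3333"},
        {"customer_id": 2, "segment": "CORPORATE", "ssn": "444-55-6666"},
    ]
    accounts = [
        {"account_id": 10, "customer_id": 1, "balance": 2500.0},
        {"account_id": 11, "customer_id": 2, "balance": 90000.0},
    ]
    masked_customers, masked_accounts = subset_and_mask(customers, accounts)
    print(masked_customers)
    print(masked_accounts)
```

Because the mask is derived from a salted hash, the same source value always maps to the same masked value, so foreign-key relationships continue to join correctly in the test environment.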
Posted 2 weeks ago
15.0 - 22.0 years
40 - 60 Lacs
Gurugram, Bengaluru
Hybrid
Role & responsibilities
The AML platform owner is responsible for the overall strategy, implementation, and maintenance of Sun Life's AML system and its integration with source/target systems to enable end-to-end data flow. Establishing & developing a COE (team, processes, governance, etc.) to deliver AML capabilities to our Corp IT business to start with, expanding to other business units in the future. Responsible for end-to-end management of the platform (Symphony.ai AML & other data integration stack such as Informatica ETL). Responsible for understanding product needs, developing capabilities using the AML tool stack, establishing and maintaining a robust operating model, and continuing to build efficiency and agility across the teams in delivering business value. Overseeing the maintenance and ongoing development of the AML system, including monitoring its performance and adjusting as necessary. Managing the data governance processes for the AML system, including ensuring that data is accurate, complete, and consistent within and across the AML system ecosystem. Managing the data quality and data security for the AML system, including implementing policies and procedures to protect sensitive data. Working with other departments and stakeholders to understand their needs and how the AML system can support them. Communicating with senior management and stakeholders to report on the status and progress of the AML system. Managing the budget and resources for the AML system, including ensuring that the system is cost-effective and efficient. Managing vendors and other external partners involved in the AML system. Keeping abreast of industry trends and best practices in AML, and incorporating them into the organization's AML strategy and system. Be accountable for delivery of platform OKRs and communicate progress to stakeholders. Participate in external industry events to build your brand. Play an important role in keeping the team motivated and engaged, and find new opportunities that result in better retention. Mentor engineers/technical developers/designers/administrators to be future ready.
Preferred candidate profile
15+ years of platform/product delivery management experience, with 5+ years in senior leadership role(s). 10+ years of experience in leading Data Management, Data & Analytics, BI, Data Virtualization, Data Quality & Governance, and AML implementation. Experience of working with industry-standard insurance data models (IIW, ACORD)/data delivery leader background, along with a minimum of 5+ years of experience in the insurance domain (Domain Model & Semantic Model). Strong background in AML concepts including Customer Risk Management, Name Screening, Transaction Monitoring, Case Management, and Regulatory Reporting. Strong experience in using an AML tool stack; Symphony.ai experience is a plus. Strong experience in ETL and BI solution development and tool stack; Informatica ETL experience is a plus. Good understanding of AI concepts and experience in developing AI solutions. Good understanding of Data Quality and Data Governance practices; experience with tool stacks such as Informatica Data Quality and Collibra is a plus. Good understanding of insurance data architecture & data flows in all stages (Prospecting, FNA, SI, Application submission, UW and Policy issue), with data modelling standards and their operational reporting/analytics. Strong in data analysis, SQL, T-SQL, PL/SQL and able to explore NoSQL data modelling techniques. Good experience with major cloud platforms and data tools in the cloud, including but not limited to AWS, Microsoft Azure, Informatica tools, Kafka, CDC, Tableau, Collibra, and data virtualization tools. Familiarity with agile methodologies and data factory operations processes, including usage of tools like Confluence, Jira and Miro. Prior experience of working on SI partner selection processes, RFPs, estimation, planning, implementation approach, and working with vendors for review & sign-off of all deliverables in each milestone. Strong leadership and communication skills: should be able to lead and manage a team, as well as communicate effectively with stakeholders and senior management. Strong knowledge of industry standards and regulations: a data platform owner should have knowledge of industry standards and regulations related to data management, such as HIPAA, PCI-DSS, and GDPR. Experience articulating and presenting visions, thinking beyond the current state and driving collaboration to formulate the future of our applications. Proven knowledge of working in financial services, preferably in the insurance space.
Posted 2 weeks ago
4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
The Data Engineer is accountable for developing high-quality data products to support the Bank's regulatory requirements and data-driven decision making. A Data Engineer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence, they will contribute to business outcomes on an agile team.
Responsibilities
Developing and supporting scalable, extensible, and highly available data solutions
Deliver on critical business priorities while ensuring alignment with the wider architectural vision
Identify and help address potential risks in the data supply chain
Follow and contribute to technical standards
Design and develop analytical data models
Required Qualifications & Work Experience
First Class Degree in Engineering/Technology (4-year graduate course)
2-4 years' experience implementing data-intensive solutions using agile methodologies
Experience of relational databases and using SQL for data querying, transformation and manipulation
Experience of modelling data for analytical consumers
Ability to automate and streamline the build, test and deployment of data pipelines
Experience in cloud native technologies and patterns
A passion for learning new technologies, and a desire for personal growth, through self-study, formal classes, or on-the-job training
Excellent communication and problem-solving skills
Technical Skills (Must Have)
ETL: Hands-on experience of building data pipelines. Proficiency in at least one of the data integration platforms such as Ab Initio, Apache Spark, Talend and Informatica
Big Data: Exposure to 'big data' platforms such as Hadoop, Hive or Snowflake for data storage and processing
Data Warehousing & Database Management: Understanding of data warehousing concepts, relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design
Data Modeling & Design: Good exposure to data modeling techniques; design, optimization and maintenance of data models and data structures
Languages: Proficient in one or more programming languages commonly used in data engineering such as Python, Java or Scala
DevOps: Exposure to concepts and enablers - CI/CD platforms, version control, automated quality control management
Technical Skills (Valuable)
Ab Initio: Experience developing Co>Op graphs; ability to tune for performance. Demonstrable knowledge across the full suite of Ab Initio toolsets, e.g., GDE, Express>IT, Data Profiler and Conduct>IT, Control>Center, Continuous>Flows
Cloud: Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc. Demonstrable understanding of underlying architectures and trade-offs
Data Quality & Controls: Exposure to data validation, cleansing, enrichment and data controls
Containerization: Fair understanding of containerization platforms like Docker, Kubernetes
File Formats: Exposure to working with event/file/table formats such as Avro, Parquet, Protobuf, Iceberg, Delta
Others: Basics of job schedulers like AutoSys. Basics of entitlement management
Certification on any of the above topics would be an advantage.
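For readers new to the kind of pipeline work described above, the following self-contained sketch uses Python's built-in sqlite3 module to show a toy extract-transform-load step that populates an analytical aggregate from a source table. The table and column names are invented for illustration and do not reflect any Citi system; production pipelines would use the platforms listed in the posting (Ab Initio, Spark, Talend, Informatica).

```python
# A toy extract-transform-load step using only the standard library (sqlite3).
# Table and column names are invented for illustration, not a real schema.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# "Extract": a small source table of trades
cur.execute("CREATE TABLE trades (trade_id INTEGER, desk TEXT, notional REAL, trade_date TEXT)")
cur.executemany(
    "INSERT INTO trades VALUES (?, ?, ?, ?)",
    [
        (1, "RATES", 1000000.0, "2024-05-01"),
        (2, "RATES", 250000.0, "2024-05-01"),
        (3, "FX", 500000.0, "2024-05-02"),
    ],
)

# "Transform & load": build an analytical aggregate (daily notional per desk)
cur.execute(
    """
    CREATE TABLE desk_daily_notional AS
    SELECT desk, trade_date, SUM(notional) AS total_notional, COUNT(*) AS trade_count
    FROM trades
    GROUP BY desk, trade_date
    """
)

for row in cur.execute("SELECT * FROM desk_daily_notional ORDER BY desk, trade_date"):
    print(row)
conn.close()
```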
Job Family Group: Technology
Job Family: Digital Software Engineering
Time Type: Full time
Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.
Posted 2 weeks ago
4.0 years
0 Lacs
Gurgaon, Haryana, India
Remote
About This Role
Business Unit Overview: BlackRock's US Wealth Advisory business ("USWA") manages the firm's relationships with US retail financial services firms and their advisors, who ultimately serve end investors. Representing a full suite of strategies – from iShares ETFs and mutual funds to SMAs, model portfolios, alternatives, portfolio solutions, and sub-advisory programs – USWA's mandate is to deliver "One BlackRock" to our clients. The USWA COO Team is responsible for leading a range of strategic initiatives and for managing the internal planning for BlackRock's US retail business. Our group, reporting to the Head of US Retail, forms the "connective tissue" of BlackRock's US retail business, coordinating functional areas within the business and interactions with the broader BlackRock organization, including business analytics and product.
Job Purpose/Background
The USWA Client Data Center of Excellence (COE) is responsible for maintaining and enhancing our robust data ecosystem, proactively and reactively addressing data process and quality issues, and serving as a key connection point between USWA and the Client Platform / Global Client Business (GCB) for everything related to client data, data quality, and ETL processes.
Key Responsibilities
Design, develop, and maintain ETL processes and workflows across 3rd-party data feeds, CRM data exhaust, and industry data / insights. Optimize ETL processes for performance and scalability. Maintain and update client data repositories to ensure accuracy and completeness, focusing on timely onboarding, QA, and automation of priority 3rd-party data feeds. Collaborate with USWA Client Data Management, data scientists, and business stakeholders to understand data requirements, improve data quality, and reduce time to market when new data feeds are received. Partner with the central Aladdin engineering team on platform-level engagements (e.g., Global Client Business Client Data Platform). Ensure the seamless flow of data between internal and external systems. Fix and resolve ETL job failures and data discrepancies. Document ETL processes and maintain technical specifications. Work with data architects to model data and ingest it into data warehouses (e.g., Client Data Platform). Engage in code reviews and follow development standard methodologies. Stay updated with emerging ETL technologies and methodologies.
Qualifications
Bachelor's or Master's Degree in Computer Science, Engineering or a related field.
Eligibility Criteria
4+ years of experience in ETL development and data warehousing. Proficiency in ETL tools such as Informatica, Talend, or SSIS. Experience with building ETL processes for cloud-based data warehousing solutions (e.g., Snowflake). In-depth working knowledge of the Python programming language, including libraries for data structures, reporting templates, etc. Extensive knowledge of writing Python data processing scripts and executing multiple scripts via batch processing. Proficiency in programming languages like Python, Java, or Scala. Experience with database management systems like SQL Server, Oracle, or MySQL. Knowledge of data modeling and data architecture principles. Excellent problem-solving and analytical skills. Ability to work in a fast-paced and dynamic environment. Strong communication skills and ability to work as part of a team. Experience in the asset management or financial services sector is a plus.
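The posting mentions executing multiple Python data-processing scripts via batch processing; the hypothetical sketch below shows one common pattern for that, running a list of scripts in sequence and stopping on the first failure. The script names are placeholders, not actual BlackRock jobs.

```python
# Hypothetical batch runner: executes several data-processing scripts in order
# and stops on the first failure. The script names are placeholders.
import logging
import subprocess
import sys

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")

SCRIPTS = ["load_crm_exhaust.py", "load_third_party_feed.py", "build_client_marts.py"]


def run_batch(scripts):
    for script in scripts:
        logging.info("starting %s", script)
        result = subprocess.run([sys.executable, script], capture_output=True, text=True)
        if result.returncode != 0:
            # Downstream scripts depend on this one, so fail fast and surface the error
            logging.error("%s failed:\n%s", script, result.stderr)
            raise SystemExit(result.returncode)
        logging.info("finished %s", script)


if __name__ == "__main__":
    run_batch(SCRIPTS)
```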
Our Benefits
To help you stay energized, engaged and inspired, we offer a wide range of benefits including a strong retirement plan, tuition reimbursement, comprehensive healthcare, support for working parents and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.
Our hybrid work model
BlackRock's hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.
About BlackRock
At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children's educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress. This mission would not be possible without our smartest investment – the one we make in our employees. It's why we're dedicated to creating an environment where our colleagues feel welcomed, valued and supported with networks, benefits and development opportunities to help them thrive. For additional information on BlackRock, please visit @blackrock | Twitter: @blackrock | LinkedIn: www.linkedin.com/company/blackrock
BlackRock is proud to be an Equal Opportunity Employer. We evaluate qualified applicants without regard to age, disability, family status, gender identity, race, religion, sex, sexual orientation and other protected attributes at law.
Posted 2 weeks ago
4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: GCP Data Engineer – Data Migration & Transformation
Location: Chennai
Experience Level: 4+ Years
Key Responsibilities: Design and build robust, scalable data pipelines and architectures on GCP, especially BigQuery. Migrate and transform large-scale data systems and datasets to GCP with a focus on performance, scalability, and reliability. Automate data lineage extraction and ensure data integrity across multiple systems and platforms. Collaborate with architects and stakeholders to implement GCP-native and 3rd-party tools for data ingestion, integration, and transformation. Develop and optimize complex SQL queries in BigQuery for data analysis and transformation. Operationalize data pipelines using tools like Apache Airflow (Cloud Composer), Dataflow, and Pub/Sub. Enable machine learning capabilities through well-structured, ML-friendly data pipelines. Participate in Agile processes and contribute to technical design discussions, code reviews, and documentation.
Required Skills & Experience: 5+ years of experience in Data Warehousing, Data Engineering, or similar roles. Minimum 2 years of hands-on experience working with GCP BigQuery. Proficiency in Python, SQL, Apache Airflow, and GCP services including BigQuery, Dataflow, Cloud Composer, Pub/Sub, and Cloud Functions. Experience with data pipeline automation, data modeling, and building reusable data products. Solid understanding of data lineage, metadata integration, and data cataloging (preferably GCP Data Catalog and Informatica EDC). Proven ability to analyze complex datasets and derive actionable insights. Demonstrated experience building and deploying analytics platforms on cloud environments (GCP preferred).
Preferred Skills: Strong analytical and problem-solving capabilities. Exposure to machine learning pipeline architecture and model deployment workflows. Excellent communication skills and ability to work collaboratively with cross-functional teams. Familiarity with Agile methodologies and DevOps best practices. Self-driven, innovative mindset with a commitment to delivering high-quality solutions. Experience with documenting complex data engineering systems and developing test plans.
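As a hedged example of operationalizing a BigQuery transformation with Apache Airflow (Cloud Composer), the sketch below defines a single-task DAG. The dataset, table, and SQL are placeholders, and parameter names (for example schedule vs. schedule_interval) can differ slightly between Airflow releases, so treat it as a sketch rather than a drop-in DAG.

```python
# Hedged sketch of a Cloud Composer / Airflow DAG running one BigQuery query.
# Dataset, table, and SQL are placeholders; the "schedule" argument is spelled
# "schedule_interval" on older Airflow 2.x releases.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

TRANSFORM_SQL = """
CREATE OR REPLACE TABLE analytics.daily_orders AS
SELECT order_date, COUNT(*) AS order_count, SUM(amount) AS total_amount
FROM raw_layer.orders
GROUP BY order_date
"""

with DAG(
    dag_id="daily_orders_transform",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    build_daily_orders = BigQueryInsertJobOperator(
        task_id="build_daily_orders",
        configuration={"query": {"query": TRANSFORM_SQL, "useLegacySql": False}},
    )
```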
Posted 2 weeks ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.
Primary Responsibilities
Support the full data engineering lifecycle including research, proof of concepts, design, development, testing, deployment, and maintenance of data management solutions
Utilize knowledge of various data management technologies to drive data engineering projects
Working with Operations and Product Development staff to support applications/processes to facilitate the effective and efficient implementation/migration of new clients' healthcare data through the Optum Impact Product Suite
Lead data acquisition efforts to gather data from various structured or semi-structured source systems of record to hydrate client data warehouse and power analytics across numerous health care domains
Leverage a combination of ETL/ELT methodologies to pull complex relational and dimensional data to support loading data marts and reporting aggregates
Eliminate unwarranted complexity and unneeded interdependencies
Detect data quality issues, identify root causes, implement fixes, and manage data audits to mitigate data challenges
Implement, modify, and maintain data integration efforts that improve data efficiency, reliability, and value
Leverage and facilitate the evolution of best practices for data acquisition, transformation, storage, and aggregation that solve current challenges and reduce the risk of future challenges
Effectively create data transformations that address business requirements and other constraints
Partner with the broader analytics organization to make recommendations for changes to data systems and the architecture of data platforms
Prepare high level design documents and detailed technical design documents with best practices to enable efficient data ingestion, transformation and data movement
Leverage DevOps tools to enable code versioning and code deployment
Leverage data pipeline monitoring tools to detect data integrity issues before they result in user visible outages or data quality issues
Leverage processes and diagnostics tools to troubleshoot, maintain and optimize solutions and respond to customer and production issues
Continuously support technical debt reduction, process transformation, and overall optimization
Leverage and contribute to the evolution of standards for high quality documentation of data definitions, transformations, and processes to ensure data transparency, governance, and security
Ensure that all solutions meet the business needs and requirements for security, scalability, and reliability
Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies regarding flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment).
The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.
Required Qualifications
Bachelor's Degree (preferably in information technology, engineering, math, computer science, analytics or other related field)
5+ years of combined experience in data engineering, ingestion, normalization, transformation, aggregation, structuring, and storage
5+ years of combined experience working with industry standard relational, dimensional or non-relational data storage systems
5+ years of experience in designing ETL/ELT solutions using tools like Informatica, DataStage, SSIS, PL/SQL, T-SQL, etc.
5+ years of experience in managing data assets using SQL, Python, Scala, VB.NET or other similar querying/coding languages
3+ years of experience working with healthcare data or data to support healthcare organizations
Preferred Qualifications
5+ years of experience in creating Source to Target Mappings and ETL design for integration of new/modified data streams into the data warehouse/data marts
Experience in Unix or PowerShell or other batch scripting languages
Experience supporting data pipelines that power analytical content within common reporting and business intelligence platforms (e.g. Power BI, Qlik, Tableau, MicroStrategy, etc.)
Experience supporting analytical capabilities inclusive of reporting, dashboards, extracts, BI tools, analytical web applications and other similar products
Experience contributing to cross-functional efforts with proven success in creating healthcare insights
Experience and credibility interacting with analytics and technology leadership teams
Depth of experience and proven track record creating and maintaining sophisticated data frameworks for healthcare organizations
Exposure to Azure, AWS, or Google Cloud ecosystems
Exposure to Amazon Redshift, Amazon S3, Hadoop HDFS, Azure Blob, or similar big data storage and management components
Demonstrated desire to continuously learn and seek new options and approaches to business challenges
Willingness to leverage best practices, share knowledge, and improve the collective work of the team
Demonstrated ability to effectively communicate concepts verbally and in writing
Demonstrated awareness of when to appropriately escalate issues/risks
Demonstrated excellent communication skills, both written and verbal
At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
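To make the data-quality responsibilities above concrete, here is a minimal, hypothetical Python audit that counts missing values, duplicate keys, and out-of-range values in a batch of rows. The column names and thresholds are invented and are not tied to any Optum data model.

```python
# Hypothetical data-quality audit: counts missing values, duplicate keys, and
# implausible ages in a batch of member rows. Column names and thresholds are
# invented, not an Optum data model.
from collections import Counter


def audit_members(records):
    """Return a dict of data-quality findings for a list of member rows."""
    findings = {"missing_dob": 0, "duplicate_ids": 0, "implausible_age": 0}

    id_counts = Counter(r["member_id"] for r in records)
    findings["duplicate_ids"] = sum(count - 1 for count in id_counts.values() if count > 1)

    for r in records:
        if not r.get("dob"):
            findings["missing_dob"] += 1
        age = r.get("age")
        if age is not None and not (0 <= age <= 120):
            findings["implausible_age"] += 1
    return findings


if __name__ == "__main__":
    sample = [
        {"member_id": "M1", "dob": "1980-02-01", "age": 45},
        {"member_id": "M1", "dob": None, "age": 45},          # duplicate id + missing dob
        {"member_id": "M2", "dob": "1999-07-12", "age": 150},  # out-of-range age
    ]
    print(audit_members(sample))
```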
Posted 2 weeks ago
3.0 - 6.0 years
5 - 8 Lacs
Bengaluru
Work from Office
Wissen Technology is Hiring for Power BI Developer
About Wissen Technology: Wissen Technology is a globally recognized organization known for building solid technology teams, working with major financial institutions, and delivering high-quality solutions in IT services. With a strong presence in the financial industry, we provide cutting-edge solutions to address complex business challenges.
Role Overview: We are seeking a skilled Power BI Developer to design and develop business intelligence solutions that turn data into actionable insights. You will collaborate with cross-functional teams to understand data requirements and build interactive dashboards, reports, and data models that support strategic decision-making.
Experience: 3-6 Years
Location: Bengaluru
Key Responsibilities:
Design, develop, and deploy Power BI reports and dashboards
Connect Power BI to various data sources including SQL databases, Excel, APIs, and cloud platforms
Create data models, DAX formulas, and measures for performance-optimized reports
Understand business requirements and translate them into technical specs
Automate report refreshes, implement row-level security, and maintain data accuracy
Collaborate with stakeholders for UAT, feedback, and enhancements
Troubleshoot and resolve reporting/data issues in a timely manner
Required Skills:
3-6 years of hands-on experience in Power BI development
Strong knowledge of DAX, Power Query (M Language), and data modeling
Proficiency in writing complex SQL queries and working with RDBMS (MS SQL Server, Oracle, etc.)
Experience working with Excel, CSV, and cloud-based data sources (Azure, AWS, etc.)
Familiarity with data visualization best practices
Strong communication and stakeholder management skills
Preferred Skills:
Knowledge of Power Platform (PowerApps, Power Automate)
Exposure to ETL tools (SSIS, Informatica, Talend)
Experience with Agile/Scrum methodology
Basic understanding of Python/R for data analysis is a plus
Working knowledge of Azure Data Lake, Synapse Analytics, or Data Factory
Posted 2 weeks ago
3.0 - 6.0 years
5 - 8 Lacs
Bengaluru
Work from Office
Job Summary
NetApp is in search of a qualified professional to become a member of its IT Data Analytics Support team. The selected candidate will undertake IT support responsibilities, including incident management, ensuring data load completions in accordance with service level agreements (SLAs), and leading data quality initiatives. This individual is expected to proactively assess our environments for potential risks related to stability, performance, capacity, and security, while also providing strategic recommendations to accommodate both current and future growth. The ideal candidate must thrive in a fast-paced, high-demand environment and possess a strong team-oriented mindset.
Job Requirements
Proven experience in ETL processing of enterprise data, transforming data from source systems to target environments such as an Enterprise Data Warehouse. Expertise in SQL scripting tailored for data warehouses and ETL processes is essential. Practical knowledge of data reconciliation techniques is highly desirable. A robust functional understanding of key areas such as Sales, Finance, Master Data Management, and Data Analytics systems is required.
Mandatory Skills: Informatica IICS, Snowflake.
Preferred Skills: Azure ADLS.
Additional Skills: HVR for data replication.
Education
B-Tech in computer science, engineering or relevant field
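The data reconciliation techniques mentioned above often reduce to comparing row counts, checksums, and key sets between a source extract and the warehouse load. The sketch below is a generic, assumption-laden illustration in plain Python; real implementations would read both sides from the source system and Snowflake rather than from in-memory lists.

```python
# Generic reconciliation sketch: compares row counts, a summed "amount" checksum,
# and key coverage between a source extract and a target load. The in-memory
# lists stand in for queries against the source system and Snowflake.
def reconcile(source_rows, target_rows, key="id", amount="amount"):
    """Return a list of reconciliation findings (or a pass message)."""
    issues = []
    if len(source_rows) != len(target_rows):
        issues.append(f"row count mismatch: source={len(source_rows)} target={len(target_rows)}")

    src_sum = round(sum(r[amount] for r in source_rows), 2)
    tgt_sum = round(sum(r[amount] for r in target_rows), 2)
    if src_sum != tgt_sum:
        issues.append(f"{amount} checksum mismatch: source={src_sum} target={tgt_sum}")

    missing_keys = {r[key] for r in source_rows} - {r[key] for r in target_rows}
    if missing_keys:
        issues.append(f"keys missing in target: {sorted(missing_keys)}")
    return issues or ["reconciliation passed"]


if __name__ == "__main__":
    source = [{"id": 1, "amount": 10.5}, {"id": 2, "amount": 20.0}]
    target = [{"id": 1, "amount": 10.5}]
    for finding in reconcile(source, target):
        print(finding)
```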
Posted 2 weeks ago
3.0 - 8.0 years
5 - 10 Lacs
Hyderabad
Work from Office
Are you ready to make an impact at DTCC? Do you want to work on innovative projects, collaborate with a dynamic and supportive team, and receive investment in your professional development? At DTCC, we are at the forefront of innovation in the financial markets. We are committed to helping our employees grow and succeed. We believe that you have the skills and drive to make a real impact. We foster a thriving internal community and are committed to creating a workplace that looks like the world that we serve.
The Information Technology group delivers secure, reliable technology solutions that enable DTCC to be the trusted infrastructure of the global capital markets. The team delivers high-quality information through activities that include development of essential, building infrastructure capabilities to meet client needs and implementing data standards and governance.
Pay and Benefits: Competitive compensation, including base pay and annual incentive. Comprehensive health and life insurance and well-being benefits, based on location. Pension / Retirement benefits. Paid Time Off and Personal/Family Care, and other leaves of absence when needed to support your physical, financial, and emotional well-being. DTCC offers a flexible/hybrid model of 3 days onsite and 2 days remote (onsite Tuesdays, Wednesdays and a third day unique to each team or employee).
The Impact you will have in this role: The Enterprise Application Support role specializes in maintaining and providing technical support for all applications that are beyond the development stage and are running in the daily operations of the firm. This role works closely with development teams, infrastructure partners, and internal clients to advance and resolve technical support incidents. 3 days onsite is mandatory with 2 optional days remote work (onsite Tuesdays, Wednesdays and a third day of your choosing). May be required to work Tuesday through Saturday or Sunday through Thursday on a rotational or permanent basis.
Your Primary Responsibilities:
Experience with using ITIL Change, Incident and Problem management processes.
Assist with Major Incident calls, engaging the proper parties needed and helping to determine root cause.
Troubleshoot and debug system component(s) to resolve technical issues in complex and highly regulated environments comprised of ground and cloud applications and services.
Analyze proposed application design(s) and provide feedback on potential gaps or provide recommendations for optimization.
Hands-on experience with Monitoring and Alerting processes in Distributed, Cloud and Mainframe environments.
Knowledge and understanding of cyber security best practices and general security concepts like password rotation, access restriction and malware detection.
Take part in Monthly Service Reviews (MSR) with Development partners to go over KPI metrics.
Participate in Disaster Recovery / Loss of Region events (planned and unplanned), executing tasks and collecting evidence.
Collaborate both within the team and across teams to resolve application issues and escalate as needed.
Support audit requests in a timely fashion, providing needed documentation and evidence.
Plan and execute certificate creation/renewals as needed.
Monitor Dashboards to better catch potential issues and aid in observability.
Help gather and analyze project requirements and translate them into technical specification(s).
Basic understanding of all lifecycle components (code, test, deploy).
Good verbal and written communication and interpersonal skills, communicating openly with team members and others. Contribute to a culture where honesty and transparency are expected. On-call support with flexible work arrangement.
**NOTE: The Primary Responsibilities of this role are not limited to the details above.**
Qualifications: Minimum of 3 years of relevant production support experience. Bachelor's degree preferred or equivalent experience.
Talents Needed for Success:
Technical Qualifications (Distributed/Cloud):
Hands-on experience in Unix, Linux, Windows, SQL/PLSQL
Familiarity working with relational databases (DB2, Oracle, Snowflake)
Monitoring and Data Tools experience (Splunk, Dynatrace, ThousandEyes, Grafana, Selenium, IBM Zolda)
Cloud Technologies (AWS services (S3, EC2, Lambda, SQS, IAM roles), Azure, OpenShift, RDS Aurora, Postgres)
Scheduling Tool experience (CA AutoSys, Control-M)
Middleware experience (Solace, Tomcat, Liberty Server, WebSphere, WebLogic, JBoss)
Messaging Queue Systems (IBM MQ, Oracle AQ, ActiveMQ, RabbitMQ, Kafka)
Scripting languages (Bash, Python, Ruby, Shell, Perl, JavaScript)
Hands-on experience with ETL tools (Informatica Datahub/IDQ, Talend)
Technical Qualifications (Mainframe):
Mainframe troubleshooting and support skills (COBOL, JCL, DB2, DB2 Stored Procedures, CICS, SPUFI, File-AID)
Mainframe scheduling (Job abends, Predecessor/Successor)
Actual salary is determined based on the role, location, individual experience, skills, and other considerations. Please contact us to request accommodation.
Posted 2 weeks ago
The Informatica job market in India is thriving, with numerous opportunities for skilled professionals in this field. Companies across various industries are actively hiring Informatica experts to manage and optimize their data integration and data quality processes.
The average salary range for Informatica professionals in India varies based on experience and expertise:
- Entry-level: INR 3-5 lakhs per annum
- Mid-level: INR 6-10 lakhs per annum
- Experienced: INR 12-20 lakhs per annum
A typical career progression in the Informatica field may include roles such as:
- Junior Developer
- Informatica Developer
- Senior Developer
- Informatica Tech Lead
- Informatica Architect
In addition to Informatica expertise, professionals in this field are often expected to have skills in:
- SQL
- Data warehousing
- ETL tools
- Data modeling
- Data analysis
As you prepare for Informatica job opportunities in India, make sure to enhance your skills, stay updated with the latest trends in data integration, and approach interviews with confidence. With the right knowledge and expertise, you can excel in the Informatica field and secure rewarding career opportunities. Good luck!