
2362 Informatica Jobs - Page 23

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

8.0 - 12.0 years

6 - 8 Lacs

Noida

On-site


You deserve to do what you love, and love what you do – a career that works as hard for you as you do. At Fiserv, we are more than 40,000 #FiservProud innovators delivering superior value for our clients through leading technology, targeted innovation and excellence in everything we do. You have choices – if you strive to be a part of a team driven to create with purpose, now is your chance to Find your Forward with Fiserv.

Requisition ID: R-10358179 | Date posted: 06/11/2025 | End date: 07/15/2025 | City: Noida | State/Region: Uttar Pradesh | Country: India | Location type: Onsite

Calling all innovators – find your future at Fiserv. We’re Fiserv, a global leader in Fintech and payments, and we move money and information in a way that moves the world. We connect financial institutions, corporations, merchants, and consumers to one another millions of times a day – quickly, reliably, and securely. Any time you swipe your credit card, pay through a mobile app, or withdraw money from the bank, we’re involved. If you want to make an impact on a global scale, come make a difference at Fiserv.

Job Title: Specialist, Data Architecture

What does a successful Lead, Data Conversions do?
A Conversion Lead is responsible for the timely and accurate conversion of new and existing bank/client data to Fiserv systems, from both internal and external sources. The role provides data analysis for client projects and accommodates ad hoc data updates to meet client requests. As part of the overall Service Delivery organization, a Conversion Lead plays a critical role in mapping data to support project initiatives for new and existing banks/clients. Leads provide a specialized service to the Project Manager teams—developing custom reporting, providing technical assistance, and ensuring project timelines are met. Working with financial services data means a high priority on accuracy and adherence to procedures and guidelines.

What you will do
In addition to the conversion responsibilities above, you will provide backup coverage for in-flight work: the person stepping in as backup needs to review the specification history, then review and understand the code being developed to resolve the issue and/or change, and the same hand-off occurs when work switches back to the original developer. Today, the associate handling the project logs back in to support the effort and address the issue and/or change.

What you will need to have
Bachelor’s degree in programming or a related field. Working hours: 12:00 p.m. – 9:00 p.m. IST, Monday through Friday. Highest attention to detail and accuracy. Team player with the ability to work independently. Ability to manage and prioritize a work queue across multiple workstreams. Strong communication skills and the ability to provide technical information to non-technical colleagues.

What would be great to have
Experience with data modelling, Informatica, Power BI, MS Visual Basic, Microsoft Access and Microsoft Excel required. Experience with card management systems and debit card processing is a plus. Understanding of applications and related database features that can be leveraged to improve performance. Experience creating testing artifacts (test cases, test plans) and knowledge of various testing types. 8–12 years’ experience and strong knowledge of MS SQL/PSQL, MS SSIS and data warehousing concepts. Strong database fundamentals and expert knowledge in writing SQL commands, queries and stored procedures. Experience in performance tuning of complex SQL queries. Strong communication skills and the ability to provide technical information to non-technical colleagues. Ability to mentor junior team members. Ability to manage and prioritize a work queue across multiple workstreams. Team player with the ability to work independently. Experience in the full software development life cycle using agile methodologies, with a good understanding of Agile and the ability to handle agile ceremonies. Efficient in reviewing, analyzing, coding, testing, and debugging application programs. Able to work under pressure while resolving critical issues in the production environment. Good communication skills and experience working with clients. Good understanding of the banking domain. Minimum 8 years’ relevant experience in data processing (ETL) conversions or the financial services industry.

Thank you for considering employment with Fiserv. Please: Apply using your legal name. Complete the step-by-step profile and attach your resume (either is acceptable, both are preferable).

Our commitment to Diversity and Inclusion: Fiserv is proud to be an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, gender, gender identity, sexual orientation, age, disability, protected veteran status, or any other category protected by law.

Note to agencies: Fiserv does not accept resume submissions from agencies outside of existing agreements. Please do not send resumes to Fiserv associates. Fiserv is not responsible for any fees associated with unsolicited resume submissions.

Warning about fake job posts: Please be aware of fraudulent job postings that are not affiliated with Fiserv. Fraudulent job postings may be used by cyber criminals to target your personally identifiable information and/or to steal money or financial information. Any communications from a Fiserv representative will come from a legitimate Fiserv email address.
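The conversion work described in this posting leans heavily on SQL-based validation of migrated data. As a minimal illustrative sketch only (not Fiserv's actual tooling), the Python snippet below compares simple reconciliation metrics between a source and a converted target database via pyodbc; the DSNs, table, and columns are hypothetical placeholders.

```python
# Illustrative reconciliation check for a data conversion (hypothetical tables/DSNs).
import pyodbc

SOURCE_DSN = "DSN=source_bank;UID=readonly;PWD=***"    # placeholder connection
TARGET_DSN = "DSN=fiserv_target;UID=readonly;PWD=***"  # placeholder connection

CHECKS = [
    # (label, SQL returning a single numeric value)
    ("row_count", "SELECT COUNT(*) FROM dbo.accounts"),
    ("balance_sum", "SELECT SUM(current_balance) FROM dbo.accounts"),
]

def run_checks(dsn):
    """Run each check query against one database and collect the scalar results."""
    results = {}
    with pyodbc.connect(dsn) as conn:
        cur = conn.cursor()
        for label, sql in CHECKS:
            cur.execute(sql)
            results[label] = cur.fetchone()[0]
    return results

if __name__ == "__main__":
    source, target = run_checks(SOURCE_DSN), run_checks(TARGET_DSN)
    for label in source:
        status = "OK" if source[label] == target[label] else "MISMATCH"
        print(f"{label}: source={source[label]} target={target[label]} -> {status}")
```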

Posted 1 week ago

Apply

7.0 years

0 Lacs

Jaipur

On-site


ABOUT HAKKODA
Hakkoda, an IBM Company, is a modern data consultancy that empowers data-driven organizations to realize the full value of the Snowflake Data Cloud. We provide consulting and managed services in data architecture, data engineering, analytics and data science. We are renowned for bringing our clients deep expertise, being easy to work with, and being an amazing place to work! We are looking for curious and creative individuals who want to be part of a fast-paced, dynamic environment, where everyone’s input and efforts are valued. We hire outstanding individuals and give them the opportunity to thrive in a collaborative atmosphere that values learning, growth, and hard work. Our team is distributed across North America, Latin America, India and Europe. If you have the desire to be a part of an exciting, challenging, and rapidly growing Snowflake consulting services company, and if you are passionate about making a difference in this world, we would love to talk to you!

We are looking for people experienced with data architecture and the design and development of database mapping and migration processes. This person will have direct experience optimizing new and current databases and data pipelines and implementing advanced capabilities while ensuring data integrity and security. Ideal candidates will have strong communication skills and the ability to guide clients and project team members, acting as a key point of contact for direction and expertise.

Key Responsibilities
Design, develop, and optimize database architectures and data pipelines. Ensure data integrity and security across all databases and data pipelines. Lead and guide clients and project team members, acting as a key point of contact for direction and expertise. Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions. Manage and support large-scale technology programs, ensuring they meet business objectives and compliance requirements. Develop and implement migration, dev/ops, and ETL/ELT ingestion pipelines using tools such as DataStage, Informatica, and Matillion. Utilize project management skills to work effectively within Scrum and Agile Development methods. Create and leverage metrics to develop actionable and measurable insights, influencing business decisions.

Qualifications
7+ years of proven work experience in data warehousing, business intelligence (BI), and analytics. 3+ years of experience as a Data Architect. 3+ years of experience working on cloud platforms (AWS, Azure, GCP). Bachelor's Degree (BA/BS) in Computer Science, Information Systems, Mathematics, MIS, or a related field. Strong understanding of migration processes, dev/ops, and ETL/ELT ingestion pipelines. Proficient in tools such as DataStage, Informatica, and Matillion. Excellent project management skills and experience with Scrum and Agile Development methods. Ability to develop actionable and measurable insights and create metrics to influence business decisions. Previous consulting experience managing and supporting large-scale technology programs.

Nice to Have
6-12 months of experience working with Snowflake. Understanding of Snowflake design patterns and migration architectures. Knowledge of Snowflake roles, user security, and capabilities like Snowpipe. Proficiency in SQL scripting. Cloud experience on AWS (Azure and GCP are also beneficial). Python scripting skills.
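Since this role centers on Snowflake migration and SQL scripting, here is a minimal, hypothetical sketch of the kind of load-and-verify step such work involves, using the snowflake-connector-python package; the account, credentials, stage, and table names are invented placeholders, not Hakkoda's environment.

```python
# Minimal Snowflake load-and-verify sketch (hypothetical account, stage, and table).
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345",          # placeholder account locator
    user="etl_user",
    password="***",
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="STAGING",
)

try:
    cur = conn.cursor()
    # Load staged CSV files into a staging table, then check how many rows arrived.
    cur.execute(
        "COPY INTO staging.orders FROM @orders_stage "
        "FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)"
    )
    cur.execute("SELECT COUNT(*) FROM staging.orders")
    print("rows loaded:", cur.fetchone()[0])
finally:
    conn.close()
```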
Benefits: health insurance, paid leave, technical training and certifications, robust learning and development opportunities, incentives, Toastmasters, food program, fitness program, referral bonus program.

Hakkoda is committed to fostering diversity, equity, and inclusion within our teams. A diverse workforce enhances our ability to serve clients and enriches our culture. We encourage candidates of all races, genders, sexual orientations, abilities, and experiences to apply, creating a workplace where everyone can succeed and thrive. Ready to take your career to the next level? Apply today and join a team that’s shaping the future!

Hakkoda has been acquired by IBM and will be integrated into the IBM organization; Hakkoda will be the hiring entity. By proceeding with this application, you understand that Hakkoda will share your personal information with other IBM subsidiaries involved in your recruitment process, wherever these are located. More information on how IBM protects your personal information, including the safeguards in case of cross-border data transfer, is available here.

Posted 1 week ago

Apply

0 years

0 Lacs

Andhra Pradesh, India

On-site


P2 C1 STS
Primary skill: Informatica IDQ 9 or higher
Secondary skill: PL/SQL
Job description: IDQ/Informatica Developer experience working on transformations, mapplets, mappings and scorecards. Informatica Axon tool experience. Experience translating business rules into IDQ rules, and with IDQ application deployment and schedulers. Knowledge of IDQ repository objects, IDQ performance management and access management. Understands data governance concepts. Experience with IDQ and Axon integration. Informatica PowerCenter experience (9 or higher). Experience with Unix scripting. IBM Information Analyzer: good to have.
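IDQ expresses data-quality rules (completeness, conformance, scorecards) declaratively inside the tool; as a rough, tool-agnostic illustration of what such rules compute (not IDQ itself), the pandas snippet below scores two hypothetical columns.

```python
# Tool-agnostic sketch of two simple data-quality rules (hypothetical data and columns).
import pandas as pd

df = pd.DataFrame({
    "customer_id": ["C001", "C002", None, "C0X4"],
    "email": ["a@example.com", "bad-email", "c@example.com", None],
})

def completeness(series):
    """Share of values that are non-null."""
    return series.notna().mean()

def conformance(series, pattern):
    """Share of non-null values matching a regex pattern."""
    non_null = series.dropna()
    return non_null.str.match(pattern).mean() if len(non_null) else 0.0

print("customer_id completeness:", completeness(df["customer_id"]))
print("customer_id conformance :", conformance(df["customer_id"], r"^C\d{3}$"))
print("email conformance       :", conformance(df["email"], r"^[^@\s]+@[^@\s]+\.[^@\s]+$"))
```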

Posted 1 week ago

Apply

5.0 years

0 Lacs

Andhra Pradesh, India

On-site


Role: IICS Developer | Work mode: Hybrid | Work timings: 2 pm to 11 pm | Location: Chennai & Hyderabad | Primary skill: IICS

Job Summary
We are looking for a highly experienced Senior Lead Data Engineer with strong expertise in Informatica IICS, Snowflake, Unix/Linux shell scripting, CI/CD tools, Agile, and cloud platforms. The ideal candidate will lead complex data engineering initiatives, optimize data architecture, and drive automation while ensuring high standards of data quality and governance within an agile environment.

Required Qualifications
Minimum 5+ years of experience in data warehousing and data warehouse concepts. Extensive experience in Informatica IICS and Snowflake. Experience in designing, developing, and maintaining data integration solutions using IICS. Experience in designing, implementing, and optimizing data storage and processing solutions using Snowflake. Ability to design and execute complex SQL queries for data extraction, transformation, and analysis. Strong proficiency in Unix/Linux shell scripting and SQL. Extensive expertise in CI/CD tools and ESP scheduling. Experience working in agile environments, with a focus on iterative improvements and collaboration. Knowledge of SAP Data Services is an added advantage. Expertise in cloud platforms (AWS, Azure). Proven track record in data warehousing, data integration, and data governance. Excellent data analysis and data profiling skills. Ability to collaborate with stakeholders to define data requirements and develop effective data strategies. Strong leadership and communication skills, with the ability to drive strategic data initiatives.
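This role emphasizes incremental data integration into Snowflake; the watermark pattern sketched below with Python's built-in sqlite3 (used purely so the example is self-contained) shows the idea that an IICS mapping or a Snowflake MERGE would implement at scale. All table and column names are invented.

```python
# Watermark-based incremental load (illustrative only; sqlite3 stands in for real systems).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE src_orders (id INTEGER, amount REAL, updated_at TEXT);
    CREATE TABLE tgt_orders (id INTEGER PRIMARY KEY, amount REAL, updated_at TEXT);
    CREATE TABLE etl_watermark (table_name TEXT PRIMARY KEY, last_loaded TEXT);
    INSERT INTO src_orders VALUES (1, 10.0, '2025-06-01'), (2, 25.5, '2025-06-10');
    INSERT INTO etl_watermark VALUES ('tgt_orders', '2025-06-05');
""")

# 1. Read the last watermark, 2. pull only newer rows, 3. upsert, 4. advance the watermark.
(last,) = conn.execute(
    "SELECT last_loaded FROM etl_watermark WHERE table_name = 'tgt_orders'"
).fetchone()
rows = conn.execute(
    "SELECT id, amount, updated_at FROM src_orders WHERE updated_at > ?", (last,)
).fetchall()
conn.executemany(
    "INSERT INTO tgt_orders VALUES (?, ?, ?) "
    "ON CONFLICT(id) DO UPDATE SET amount = excluded.amount, updated_at = excluded.updated_at",
    rows,
)
if rows:
    conn.execute(
        "UPDATE etl_watermark SET last_loaded = ? WHERE table_name = 'tgt_orders'",
        (max(r[2] for r in rows),),
    )
conn.commit()
print("incrementally loaded rows:", len(rows))
```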

Posted 1 week ago

Apply

5.0 years

0 Lacs

Andhra Pradesh, India

On-site


Requirement to work in night shifts. Should have 5 years of experience in Ab Initio, Informatica ETL, and AWS. Develop, test, and maintain ETL workflows using Informatica or Ab Initio under senior guidance. Monitor and manage batch jobs using Autosys or Control-M. Write SQL queries for data extraction and transformation. Collaborate with QA, BA, and senior team members for issue resolution. Document code, job schedules, and workflows. Assist in basic performance monitoring using Dynatrace.
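Schedulers like Autosys or Control-M usually drive ETL workflows through wrapper scripts whose exit code marks success or failure; the generic Python wrapper below is a hypothetical sketch of that pattern (the command and log path are placeholders), not an actual Autosys or Control-M job definition.

```python
#!/usr/bin/env python3
# Generic batch-job wrapper: the scheduler reads the exit code to mark success/failure.
import logging
import subprocess
import sys

logging.basicConfig(
    filename="/tmp/etl_orders_load.log",   # placeholder log location
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)

ETL_COMMAND = ["echo", "running workflow wf_orders_load"]  # placeholder for the real ETL call

def main() -> int:
    logging.info("job start")
    result = subprocess.run(ETL_COMMAND, capture_output=True, text=True)
    logging.info("stdout: %s", result.stdout.strip())
    if result.returncode != 0:
        logging.error("job failed with code %s: %s", result.returncode, result.stderr.strip())
        return result.returncode
    logging.info("job finished successfully")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```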

Posted 1 week ago

Apply

4.0 - 7.0 years

10 - 19 Lacs

Bengaluru

Hybrid


Job Description Experience 4 to 7 years. Experience in any ETL tools [e.g. DataStage] with implementation experience in large Data Warehouse Proficiency in programming languages such as Python etc. Experience with data warehousing solutions (e.g., Snowflake, Redshift) and big data technologies (e.g., Hadoop, Spark). Strong knowledge of SQL and database management systems. Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and data pipeline orchestration tools (e.g. Airflow). Proven ability to lead and develop high-performing teams, with excellent communication and interpersonal skills. Strong analytical and problem-solving abilities, with a focus on delivering actionable insights. Responsibilities Design, develop, and maintain advanced data pipelines and ETL processes using niche technologies. Collaborate with cross-functional teams to understand complex data requirements and deliver tailored solutions. Ensure data quality and integrity by implementing robust data validation and monitoring processes. Optimize data systems for performance, scalability, and reliability. Develop comprehensive documentation for data engineering processes and systems.
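Where pipeline orchestration with Airflow is called for, a minimal DAG can look like the sketch below; the DAG id, schedule, and task bodies are invented for illustration and assume Airflow 2.4 or newer.

```python
# Minimal Airflow DAG sketch (hypothetical pipeline; task bodies are stubs).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**_):
    print("pull data from the source system")

def transform(**_):
    print("apply business rules and data quality checks")

def load(**_):
    print("write curated data to the warehouse")

with DAG(
    dag_id="example_etl_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",   # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```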

Posted 1 week ago

Apply

5.0 - 10.0 years

5 - 15 Lacs

Bengaluru

Remote


Title: Data Quality Analyst/Developer | Duration: 6 months to 1 year contract | Location: Remote | Notice period: Immediate to 7 days | UAN/EPFO report required

Work Experience: 5+ years, including experience doing data emendation. Design/develop rules, monitoring mechanisms, and notifications. Design/develop UI, workflows, and security. Design/develop analytics (overall DQ reporting, usage statistics, etc.). Design/develop migration activities to move existing DQ assets between the existing DQ platform and the new DQ platform. Design integration with MDM and Catalog (as needed). Monitor system performance and suggest optimization strategies (as needed). Work with DT to maintain the system (patches, backups, etc.). Work with LYB's Data Stewards to support their governance activities. Testing.

The DQ Analyst/Developer should have experience with IMDC (for the sake of our example) cloud DQ and observability, JSON (depending on tool), deep SQL skills, integration tools/methodologies (API as well as ETL), data analysis, Snowflake or Databricks knowledge (for lineage), Power BI (nice to have), SAP ECC knowledge (nice to have), and experience with cloud platforms (Azure, AWS, Google).

If you are interested, please share the following details along with your resume: full name; current or previous organization; current location; total experience; relevant experience as a Python developer; years of experience in Azure, AWS, Google; years of experience in UI, workflows, security; working full time or contract; reason for job change; any other offers in hand; current CTC; expected CTC; notice period; email id; contact number; domain name; are you OK to work a contractual role?; share your Aadhaar or PAN card for verification.

Posted 1 week ago

Apply

10.0 - 15.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Experience: 10 - 15 years.

Job Description
Lead execution of the assigned projects and take responsibility for end-to-end execution. Lead, guide and support the design and implementation of targeted strategies, including identification of change impacts to people, process, policy, and structure; stakeholder identification and alignment; appropriate communication and feedback loops; success measures; training; organizational readiness; and long-term sustainability. Manage the day-to-day activities, including scope, financials (e.g. business case, budget), resourcing (e.g. full-time employees, roles and responsibilities, utilization), timelines, toll gates and risks. Implement project review and quality assurance to ensure successful execution of goals and stakeholder satisfaction. Consistently report and review progress with the Program Lead, steering group and relevant stakeholders. Will be involved in more than one project or will work across a portfolio of projects. Identify improvement and efficiency opportunities across the projects. Analyze data, evaluate results, and develop recommendations and road maps across multiple workstreams. Build and maintain effective partnerships with key cross-functional leaders and project team members across functions such as Finance & Technology.

Experience
Experience of working as a Project Manager/Scrum Master as a service provider (not in internal projects). Knowledge of functional supply chain and planning processes, including ERP/MRP, capacity planning, and managing planning activities with contract manufacturers - good to have. Experience in implementing ERP systems such as SAP and Oracle - good to have, not mandatory. Experience in systems integration and ETL tools such as Informatica and Talend a plus. Experience with data mapping and systems integration a plus. Functional knowledge of supply chain or after-sales service operations a plus. Outstanding drive, excellent interpersonal skills and the ability to communicate effectively, both verbally and in writing, and to immediately contribute in a team environment. An ability to prioritize and perform well in a fast-paced environment, while maintaining a high level of client focus. Demonstrable track record of delivery and impact in managing/delivering transformation, with minimum 6-9 years' experience in project management and business transformation. Experience in managing technology projects (data analysis, visualization, app development, etc.) along with at least one function such as the procurement domain, process improvement, continuous improvement, change management, or operating model design. Has performed the role of a scrum master or managed a project having scrum teams. Has managed projects with stakeholders in a multi-location landscape. Past experience in managing analytics projects will be a huge plus.

Education
Understanding and application of Agile and waterfall methodology. Exposure to tools and applications such as Microsoft Project, Jira, Confluence, PowerBI, Alteryx. Understanding of Lean Six Sigma. Preferably a postgraduate - MBA, though not mandatory.

Expectation
Excellent interpersonal (communication and presentation) and organizational skills. Problem-solving abilities and a can-do attitude. Confident, proactive self-starters, comfortable in managing and engaging others. Effective in engaging, partnering with and influencing stakeholders across the matrix up to VP level. Ability to move fluidly between big picture and detail, always keeping the end goal in mind. Inclination toward collaborative partnership, and able to help establish/be part of high-performing teams for impact. Highly diligent with a close eye for detail; delivers quality outputs.

Posted 1 week ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


To get the best candidate experience, please consider applying for a maximum of 3 roles within 12 months to ensure you are not duplicating efforts. Job Category: Customer Success.

About Salesforce
We’re Salesforce, the Customer Company, inspiring the future of business with AI + Data + CRM. Leading with our core values, we help companies across every industry blaze new trails and connect with customers in a whole new way. And, we empower you to be a Trailblazer, too — driving your performance and career growth, charting new paths, and improving the state of the world. If you believe in business as the greatest platform for change and in companies doing well and doing good – you’ve come to the right place.

The Data Technical Consultant is a demonstrated expert in technical and/or functional aspects of customer and partner engagements that lead to the successful delivery of data management projects. The Data Architect plays a critical role in setting customers up for success by prescriptively helping to shape and then execute in the Salesforce data space. This role also provides subject matter expertise related to data management solutions and ensures successful project delivery.

Requirements
6+ years’ experience working on complex data projects including migrations, integrations, data architecture and data governance. Ability to convert high-level requirements into a solution – i.e. you do not need a detailed specification to be successful. Solid experience with any of the mentioned ETL tools (e.g. SSIS, Boomi, Informatica PowerCenter, MuleSoft). Experienced technology leader with extensive Data Quality, Data Management, Data Security and Data Governance skills. Proficient with PL/SQL query writing and a strong relational database background/understanding. Basic understanding of DBAmp. Experience with SF Data Loader and writing SOQL is an asset. Should be able to act as an SME in migration tools. Experience with BI tools like Tableau, Qlik and Salesforce Analytics is an asset. Experience with master data management projects and practices is an asset. Candidates should have at least 2 years of experience in Salesforce.com and a thorough understanding of the Salesforce.com project lifecycle. The candidate should have good organizational and customer service skills so that clients are satisfied with the engagement. A strong knowledge of enterprise analytics, big data platforms, data exchange models, CRM, MDM, and cloud integration is preferred. Data modeling experience, specifically designing logical models/data dictionaries from business requirements. Applicants should be able to give architectural/design insight on the project if required. Implementation experience and domain knowledge in Salesforce Sales and Service functionality and any of multiple CRM subject areas. Excellent mentorship and team-handling skills. Excellent troubleshooting skills. Good understanding of Internet technologies: firewalls, web servers, web proxy servers, etc.
Responsibilities of a Data Consultant
Eliciting data requirements during business solution design sessions and translating these into a solution. Data modelling for custom salesforce.com projects/functionality. Provide data governance thought leadership to clients so that they can manage and use their data to its fullest. Conducting data audits, cleansing and de-duplication projects to clean data. Conducting analysis of customer data to identify key predictors of activity, from lead management through to forecasting. Developing ETL processes for data migration and integration projects. Own and aggressively drive forward specific areas of technology architecture. Provide architectural solutions/designs to project execution teams for implementation. Lead projects within architecture. Work with Product Owners/Business Analysts to understand functional requirements and interact with other cross-functional teams to architect, design, develop, test, and release features.

Team Level
Supports the day-to-day progress of their team members' career goals as set in their Career Development Plan. Mentors and coaches their team members through regular one-on-ones with a primary focus on the People and secondly the Project perspective. Guides new team members through their onboarding plan. Motivates and inspires team members by showing appreciation and giving recognition for, not just hard work, but great work, and highlighting alignment with Traction’s values. Support multiple teams with planning, scoping and creation of technical solutions for new product capabilities, through continuous delivery to production. Liaise with the team and clients to resolve technical dependencies, issues, and risks. Drive common vision, practices and capabilities across teams. Work among a team of consultants to deliver Salesforce.com to our customers for all project stages (requirements definition, planning, functional design, prototyping, development, test, issue resolution, deployment, and support). Actively participate in guilds and motivate the team on initiatives. Motivate the team to apply ReuseIT Traction products to existing assets and create reusable apps to add more value. Analyse and identify gaps in functional/business requirements and communicate effectively with both business and functional analysts on the same. Will manage more than 2 projects at a time and 2+ junior resources, making sure their goals are aligned with company objectives. Deliver regular positive and constructive feedback to the team. Should be able to assess the impacts on technical design because of changes in functional requirements. Serve as a mentor to the team and actively engage as part of the project on assigned client engagements to ensure timely delivery as per best practices.

Others
Ensure reportees are aligned with the company V2MOM. Ability to work under challenging environments and timelines. Willingness to learn new technologies. Ability to maintain cordial client relationships. Good communication and presentation skills. Should be willing to travel.

Certification Requirements
Desired to have Salesforce Certified Administrator (201). Desired to have Salesforce Certified Sales Cloud Consultant or Salesforce Certified Service Cloud Consultant.

Accommodations
If you require assistance due to a disability applying for open positions please submit a request via this Accommodations Request Form.
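SOQL extraction of the kind referenced above is often scripted; the snippet below is a hedged illustration (not the project's actual tooling) that queries Accounts through the simple-salesforce package, with placeholder credentials and fields.

```python
# Illustrative SOQL query via simple-salesforce (hypothetical credentials and fields).
from simple_salesforce import Salesforce

sf = Salesforce(
    username="user@example.com",   # placeholder org credentials
    password="***",
    security_token="***",
)

# Pull a small set of records for a data-quality spot check.
result = sf.query("SELECT Id, Name, BillingCountry FROM Account LIMIT 10")
for record in result["records"]:
    print(record["Id"], record["Name"], record.get("BillingCountry"))
```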
Posting Statement Salesforce is an equal opportunity employer and maintains a policy of non-discrimination with all employees and applicants for employment. What does that mean exactly? It means that at Salesforce, we believe in equality for all. And we believe we can lead the path to equality in part by creating a workplace that’s inclusive, and free from discrimination. Know your rights: workplace discrimination is illegal. Any employee or potential employee will be assessed on the basis of merit, competence and qualifications – without regard to race, religion, color, national origin, sex, sexual orientation, gender expression or identity, transgender status, age, disability, veteran or marital status, political viewpoint, or other classifications protected by law. This policy applies to current and prospective employees, no matter where they are in their Salesforce employment journey. It also applies to recruiting, hiring, job assignment, compensation, promotion, benefits, training, assessment of job performance, discipline, termination, and everything in between. Recruiting, hiring, and promotion decisions at Salesforce are fair and based on merit. The same goes for compensation, benefits, promotions, transfers, reduction in workforce, recall, training, and education. Show more Show less

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Summary
We are looking for an experienced SAP HANA / SQL Developer to design, develop, and implement high-performance data solutions for analytics and reporting. The ideal candidate will have strong expertise in SQL programming, SAP HANA development, and ETL tools such as Azure Data Factory, SAP BODS, Informatica, or SSIS. The role involves working on data warehousing, reporting applications, and end-to-end data integration projects.

Key Responsibilities
Analyze, plan, design, and implement SAP HANA solutions based on business requirements. Develop complex SQL queries and stored procedures, and manage data loads effectively. Translate functional requirements into technical design documents. Troubleshoot and resolve issues related to SQL models, views, and indexes. Perform effort estimation for development and enhancement activities. Collaborate with cross-functional teams including QA, SMEs, and infrastructure teams to ensure successful delivery. Ensure high standards of usability, performance, reliability, and data security. Create internal documentation and end-user training materials as needed. Follow best practices and contribute to coding standards and continuous improvement.

Must-Have Skills
Strong experience in SAP HANA development (views, stored procedures, indexes, performance tuning). Proficiency in SQL programming with complex data models and stored procedures. Hands-on experience with at least one ETL tool: Azure Data Factory, SAP BODS, Informatica, or SSIS. Solid understanding of data warehousing, analytics, and reporting applications.

Good To Have
Experience with Azure SQL or other cloud technologies. Background in data security and access control within enterprise applications.

Soft Skills
Strong written and verbal communication skills. Analytical thinker with excellent problem-solving abilities. Self-motivated, proactive, and able to take ownership of deliverables. Team-oriented and collaborative mindset.

Skills: BODS, performance tuning, data warehousing, SQL, Azure Data Factory, Informatica, data models, SQL programming, SAP, SAP BODS, SAP HANA, analytical skills, SSIS, technical documentation, problem-solving, SAP HANA development, ETL development, HANA, analytics, data security, ETL tools, Azure, access control, communication skills, reporting applications, ETL
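For the SQL and stored-procedure work described above, one typical access path from Python is SAP's hdbcli driver; the sketch below is illustrative only, and the host, credentials, calculation view, and procedure names are hypothetical.

```python
# Sketch: run a query and call a stored procedure on SAP HANA via the hdbcli driver.
# Host, credentials, and object names are placeholders.
from hdbcli import dbapi

conn = dbapi.connect(
    address="hana.example.com",   # placeholder host
    port=39015,
    user="REPORTING_USER",
    password="***",
)

try:
    cur = conn.cursor()
    # Ad hoc analytical query against a (hypothetical) view.
    cur.execute(
        'SELECT "REGION", SUM("NET_SALES") FROM "SALES"."CV_SALES_SUMMARY" GROUP BY "REGION"'
    )
    for region, net_sales in cur.fetchall():
        print(region, net_sales)

    # Invoke a (hypothetical) stored procedure that refreshes a reporting table.
    cur.execute('CALL "SALES"."REFRESH_DAILY_SNAPSHOT"()')
    conn.commit()
finally:
    conn.close()
```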

Posted 1 week ago

Apply

4.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site


This role is for one of the Weekday's clients Min Experience: 4 years Location: Ahmedabad JobType: full-time We are seeking a highly skilled Senior Database Administrator with 5-8 years of experience in data engineering and database management. The ideal candidate will have a strong foundation in data architecture, modeling, and pipeline orchestration. Hands-on experience with modern database technologies and exposure to generative AI tools in production environments will be a significant advantage. This role involves leading efforts to streamline data workflows, improve automation, and deliver high-impact insights across the organization. Requirements Key Responsibilities: Design, develop, and manage scalable and efficient data pipelines (ETL/ELT) across multiple database systems. Architect and maintain high-availability, secure, and scalable data storage solutions. Utilize generative AI tools to automate data workflows and enhance system capabilities. Collaborate with engineering, analytics, and data science teams to fulfill data requirements and optimize data delivery. Implement and monitor data quality standards, governance practices, and compliance protocols. Document data architectures, systems, and processes for transparency and maintainability. Apply data modeling best practices to support optimal storage and querying performance. Continuously research and integrate emerging technologies to advance the data infrastructure. Qualifications: Bachelor's or Master's degree in Computer Science, Information Technology, or related field. 5-8 years of experience in database administration and data engineering for large-scale systems. Proven experience in designing and managing relational and non-relational databases. Mandatory Skills: SQL - Proficient in advanced queries, performance tuning, and database management. NoSQL - Experience with at least one NoSQL database such as MongoDB, Cassandra, or CosmosDB. Hands-on experience with at least one of the following cloud data warehouses: Snowflake, Redshift, BigQuery, or Microsoft Fabric. Cloud expertise - Strong experience with Azure and its data services. Working knowledge of Python for scripting and data processing (e.g., Pandas, PySpark). Experience with ETL tools such as Apache Airflow, Microsoft Fabric, Informatica, or Talend. Familiarity with generative AI tools and their integration into data pipelines. Preferred Skills & Competencies: Deep understanding of database performance, tuning, backup, recovery, and security. Strong knowledge of data governance, data quality management, and metadata handling. Experience with Git or other version control systems. Familiarity with AI/ML-driven data solutions is a plus. Excellent problem-solving skills and the ability to resolve complex database issues. Strong communication skills to collaborate with cross-functional teams and stakeholders. Demonstrated ability to manage projects and mentor junior team members. Passion for staying updated with the latest trends and best practices in database and data engineering technologies. Show more Show less

Posted 1 week ago

Apply

0 years

0 Lacs

India

On-site


Company Description ThreatXIntel is a startup cyber security company dedicated to providing customized, affordable solutions to protect businesses and organizations from cyber threats. Our services include cloud security, web and mobile security testing, cloud security assessment, and DevSecOps. We take a proactive approach to security, continuously monitoring and testing our clients' digital environments to identify vulnerabilities before they can be exploited. Role Description We are looking for a freelance Data Engineer with strong experience in PySpark and AWS data services, particularly S3 and Redshift . The ideal candidate will also have some familiarity with integrating or handling data from Salesforce . This role focuses on building scalable data pipelines, transforming large datasets, and enabling efficient data analytics and reporting. Key Responsibilities: Develop and optimize ETL/ELT data pipelines using PySpark for large-scale data processing. Manage data ingestion, storage, and transformation across AWS S3 and Redshift . Design data flows and schemas to support reporting, analytics, and business intelligence needs. Perform incremental loads, partitioning, and performance tuning in distributed environments. Extract and integrate relevant datasets from Salesforce for downstream processing. Ensure data quality, consistency, and availability for analytics teams. Collaborate with data analysts, platform engineers, and business stakeholders. Required Skills: Strong hands-on experience with PySpark for large-scale distributed data processing. Proven track record working with AWS S3 (data lake) and Amazon Redshift (data warehouse). Ability to write complex SQL queries for transformation and reporting. Basic understanding or experience integrating data from Salesforce (APIs or exports). Experience with performance optimization, partitioning strategies, and efficient schema design. Knowledge of version control and collaborative development tools (e.g., Git). Nice to Have: Experience with AWS Glue or Lambda for orchestration. Familiarity with Salesforce objects, SOQL, or ETL tools like Talend, Informatica, or Airflow. Understanding of data governance and security best practices in cloud environments. Show more Show less
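As a rough sketch of the S3-to-Redshift flow this role describes (assuming invented bucket names, paths, and columns), a PySpark job might read raw events from S3, aggregate them, and write curated Parquet back to S3 for downstream loading:

```python
# PySpark sketch: read raw order events from S3, aggregate, and write curated Parquet back.
# Bucket names, paths, and columns are placeholders, not the client's actual layout.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_daily_rollup").getOrCreate()

raw = spark.read.json("s3a://example-raw-bucket/orders/2025/06/")   # placeholder path

daily = (
    raw.withColumn("order_date", F.to_date("order_timestamp"))
       .groupBy("order_date", "region")
       .agg(
           F.count("*").alias("order_count"),
           F.sum("order_amount").alias("total_amount"),
       )
)

# Partitioned Parquet output; a downstream Redshift COPY or Spectrum table can consume this.
(daily.write
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3a://example-curated-bucket/orders_daily/"))

spark.stop()
```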

Posted 1 week ago

Apply

0 years

0 Lacs

India

Remote


We are looking for a Salesforce Marketing Cloud Expert to lead the stand-up and enablement of a newly independent marketing automation capability within a leading pharmaceutical organization. This role is ideal for a hands-on professional who can drive implementation, guide internal marketing teams, and support light development and integration work.

Title: Salesforce Marketing Cloud Expert
Location: Remote, India

Requirements: Proven experience implementing and managing Salesforce Marketing Cloud environments. Strong understanding of email and SMS campaign management within Marketing Cloud. Experience integrating Salesforce Marketing Cloud with Salesforce CRM and external web assets. Familiarity with basic development tasks related to automation, scripting, or API-based integrations. Previous experience in marketing enablement or guiding non-technical marketing teams on campaign execution. Exposure to Informatica and its role in marketing data flow is a plus. Strong communication and documentation skills; ability to bridge the gap between technical setup and marketing strategy.

Responsibilities: Lead the setup and configuration of a new Salesforce Marketing Cloud instance. Migrate existing campaign assets, templates, and data workflows from the client's shared service. Integrate Marketing Cloud with the broader Salesforce ecosystem and external website components. Provide light development and technical support on automation and integration tasks. Enable and train the internal marketing team to run their own campaigns effectively. Support strategy and execution for email and SMS campaigns. Reduce agency dependency by building internal confidence and capability with the toolset.
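API-based integration with Marketing Cloud generally means obtaining an OAuth token from the tenant's auth endpoint and then calling REST resources. The sketch below is a hedged illustration of that flow; the subdomain, credentials, and endpoint paths are assumptions to verify against the current Marketing Cloud REST API documentation.

```python
# Sketch: authenticate to Marketing Cloud's REST API and list Content Builder assets.
# Subdomain, credentials, and endpoint paths are placeholders/assumptions; verify against the docs.
import requests

SUBDOMAIN = "your-tenant-subdomain"   # placeholder
AUTH_URL = f"https://{SUBDOMAIN}.auth.marketingcloudapis.com/v2/token"
REST_BASE = f"https://{SUBDOMAIN}.rest.marketingcloudapis.com"

token_resp = requests.post(AUTH_URL, json={
    "grant_type": "client_credentials",
    "client_id": "<client_id>",         # placeholder installed-package credentials
    "client_secret": "<client_secret>",
}, timeout=10)
token_resp.raise_for_status()
access_token = token_resp.json()["access_token"]

assets = requests.get(
    f"{REST_BASE}/asset/v1/content/assets",
    headers={"Authorization": f"Bearer {access_token}"},
    params={"$pagesize": 5},
    timeout=10,
)
assets.raise_for_status()
for item in assets.json().get("items", []):
    print(item.get("id"), item.get("name"))
```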

Posted 1 week ago

Apply

4.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Role Summary Pfizer’s purpose is to deliver breakthroughs that change patients’ lives. Research and Development is at the heart of fulfilling Pfizer’s purpose as we work to translate advanced science and technologies into the therapies and vaccines that matter most. Whether you are in the discovery sciences, ensuring drug safety and efficacy or supporting clinical trials, you will apply cutting edge design and process development capabilities to accelerate and bring the best in class medicines to patients around the world. Pfizer is seeking a highly skilled and motivated AI Engineer to join our advanced technology team. The successful candidate will be responsible for developing, implementing, and optimizing artificial intelligence models and algorithms to drive innovation and efficiency in our Data Analytics and Supply Chain solutions. This role demands a collaborative mindset, a passion for cutting-edge technology, and a commitment to improving patient outcomes. Role Responsibilities Lead data modeling and engineering efforts within advanced data platforms teams to achieve digital outcomes. Provides guidance and may lead/co-lead moderately complex projects. Oversee the development and execution of test plans, creation of test scripts, and thorough data validation processes. Lead the architecture, design, and implementation of Cloud Data Lake, Data Warehouse, Data Marts, and Data APIs. Lead the development of complex data products that benefit PGS and ensure reusability across the enterprise. Collaborate effectively with contractors to deliver technical enhancements. Oversee the development of automated systems for building, testing, monitoring, and deploying ETL data pipelines within a continuous integration environment. Collaborate with backend engineering teams to analyze data, enhancing its quality and consistency. Conduct root cause analysis and address production data issues. Lead the design, develop, and implement AI models and algorithms to solve sophisticated data analytics and supply chain initiatives. Stay abreast of the latest advancements in AI and machine learning technologies and apply them to Pfizer's projects. Provide technical expertise and guidance to team members and stakeholders on AI-related initiatives. Document and present findings, methodologies, and project outcomes to various stakeholders. Integrate and collaborate with different technical teams across Digital to drive overall implementation and delivery. Ability to work with large and complex datasets, including data cleaning, preprocessing, and feature selection. Basic Qualifications A bachelor's or master’s degree in computer science, Artificial Intelligence, Machine Learning, or a related discipline. Over 4 years of experience as a Data Engineer, Data Architect, or in Data Warehousing, Data Modeling, and Data Transformations. Over 2 years of experience in AI, machine learning, and large language models (LLMs) development and deployment. Proven track record of successfully implementing AI solutions in a healthcare or pharmaceutical setting is preferred. Strong understanding of data structures, algorithms, and software design principles Programming Languages: Proficiency in Python, SQL, and familiarity with Java or Scala AI and Automation: Knowledge of AI-driven tools for data pipeline automation, such as Apache Airflow or Prefect. 
Ability to use GenAI or Agents to augment data engineering practices Preferred Qualifications Data Warehousing: Experience with data warehousing solutions such as Amazon Redshift, Google BigQuery, or Snowflake. ETL Tools: Knowledge of ETL tools like Apache NiFi, Talend, or Informatica. Big Data Technologies: Familiarity with Hadoop, Spark, and Kafka for big data processing. Cloud Platforms: Hands-on experience with cloud platforms such as AWS, Azure, or Google Cloud Platform (GCP). Containerization: Understanding of Docker and Kubernetes for containerization and orchestration. Data Integration: Skills in integrating data from various sources, including APIs, databases, and external files. Data Modeling: Understanding of data modeling and database design principles, including graph technologies like Neo4j or Amazon Neptune. Structured Data: Proficiency in handling structured data from relational databases, data warehouses, and spreadsheets. Unstructured Data: Experience with unstructured data sources such as text, images, and log files, and tools like Apache Solr or Elasticsearch. Data Excellence: Familiarity with data excellence concepts, including data governance, data quality management, and data stewardship. Non-standard Work Schedule, Travel Or Environment Requirements Occasionally travel required Work Location Assignment: Hybrid The annual base salary for this position ranges from $96,300.00 to $160,500.00. In addition, this position is eligible for participation in Pfizer’s Global Performance Plan with a bonus target of 12.5% of the base salary and eligibility to participate in our share based long term incentive program. We offer comprehensive and generous benefits and programs to help our colleagues lead healthy lives and to support each of life’s moments. Benefits offered include a 401(k) plan with Pfizer Matching Contributions and an additional Pfizer Retirement Savings Contribution, paid vacation, holiday and personal days, paid caregiver/parental and medical leave, and health benefits to include medical, prescription drug, dental and vision coverage. Learn more at Pfizer Candidate Site – U.S. Benefits | (uscandidates.mypfizerbenefits.com). Pfizer compensation structures and benefit packages are aligned based on the location of hire. The United States salary range provided does not apply to Tampa, FL or any location outside of the United States. Relocation assistance may be available based on business needs and/or eligibility. Sunshine Act Pfizer reports payments and other transfers of value to health care providers as required by federal and state transparency laws and implementing regulations. These laws and regulations require Pfizer to provide government agencies with information such as a health care provider’s name, address and the type of payments or other value received, generally for public disclosure. Subject to further legal review and statutory or regulatory clarification, which Pfizer intends to pursue, reimbursement of recruiting expenses for licensed physicians may constitute a reportable transfer of value under the federal transparency law commonly known as the Sunshine Act. Therefore, if you are a licensed physician who incurs recruiting expenses as a result of interviewing with Pfizer that we pay or reimburse, your name, address and the amount of payments made currently will be reported to the government. If you have questions regarding this matter, please do not hesitate to contact your Talent Acquisition representative. 
EEO & Employment Eligibility Pfizer is committed to equal opportunity in the terms and conditions of employment for all employees and job applicants without regard to race, color, religion, sex, sexual orientation, age, gender identity or gender expression, national origin, disability or veteran status. Pfizer also complies with all applicable national, state and local laws governing nondiscrimination in employment as well as work authorization and employment eligibility verification requirements of the Immigration and Nationality Act and IRCA. Pfizer is an E-Verify employer. This position requires permanent work authorization in the United States. Information & Business Tech

Posted 1 week ago

Apply

5.0 years

0 Lacs

Delhi Cantonment, Delhi, India

Remote


About Smart Working At Smart Working, we believe your job should not only look right on paper but also feel right every day. This isn’t just another remote opportunity - it’s about finding where you truly belong, no matter where you are. From day one, you’re welcomed into a genuine community that values your growth and well-being. Our mission is simple: to break down geographic barriers and connect skilled professionals with outstanding global teams and products for full-time, long-term roles. We help you discover meaningful work with teams that invest in your success, where you’re empowered to grow personally and professionally. Join one of the highest-rated workplaces on Glassdoor and experience what it means to thrive in a truly remote-first world. About the Role We’re seeking a customer-facing, technically expert Senior Integrations Engineer to design and lead complex integration solutions across cloud and enterprise environments. This is a leadership-level engineering role, not a heads-down coding position; you’ll be architecting solutions, collaborating directly with stakeholders, and driving project outcomes. The ideal candidate is adaptable and curious, thriving in ambiguous environments, quick to learn new tools, and eager to solve integration challenges. You’ll work closely with clients across technical and non-technical audiences, so strong communication and the ability to lead without formal authority are essential. The technical workload is approximately 80% back-end (APIs, middleware, data flows, architecture) and 20% front-end/UI configuration to support user interaction with automation workflows. What You’ll Be Doing Architect Scalable Solutions: Design secure, high-performance integrations using Workato (ideal) or similar platforms like MuleSoft, Boomi, or Informatica Drive Delivery: Lead technical workstreams, manage timelines and dependencies, and ensure successful end-to-end project execution across diverse teams Establish Best Practices: Set integration standards, promote reusable design patterns, and conduct knowledge transfer sessions to build long-term capability Be a Trusted Advisor: Translate complex requirements into actionable solutions, present designs to stakeholders, and guide customers through integration decisions Must-Have Skills Integration Platform Expertise: 5+ years of hands-on experience with iPaaS/middleware tools (Workato preferred, or MuleSoft, Boomi, Informatica, etc.) API & Architecture Knowledge: Deep understanding of REST, SOAP, GraphQL , and common integration patterns (event-driven, batch, pub/sub, etc.) 
Enterprise-Grade Security: Familiarity with authentication/authorization methods (OAuth, SAML, JWT), secure data handling, and compliance frameworks Project Leadership: Proven experience leading integration projects, owning architecture decisions, and coordinating technical delivery Business-Focused Engineering: Able to gather business requirements and turn them into scalable, reusable integration frameworks Nice to Have Skills Cloud & Automation Stack: Familiarity with SaaS platforms, iPaaS tools, and cloud providers (AWS, GCP, Azure) Programming & Web Skills: Exposure to OOP, scripting, JSON/XML, and front-end basics (HTML, JS, iframes) Emerging Tech Awareness: Bonus points for experience with AI/ML concepts, DevOps tools, or enterprise systems like Salesforce and SAP Consulting or Solution Architecture Experience: Previous customer-facing technical roles are highly valued What Makes You a Great Fit Adaptable & Curious: Comfortable navigating changing requirements and unfamiliar technologies with a solution-first mindset Excellent Communicator: Able to clearly explain technical ideas to both engineers and executive stakeholders Influential Without Authority: Skilled in leading cross-functional initiatives and establishing alignment without formal management Results-Oriented & Proactive: Takes initiative, anticipates roadblocks, and ensures delivery without waiting for detailed instructions Why Smart Workers Love It Here Fixed Shifts: 12:00 PM - 9:30 PM IST (Summer) | 1:00 PM - 10:30 PM IST (Winter) No Weekend Work: Enjoy a real work-life balance Day 1 Benefits: Laptop and full medical insurance provided from your first day Support That Matters: Mentorship, community, and peer forums that support your growth True Belonging: A long-term career path where your contributions are seen and celebrated At Smart Working, you’ll never be just another remote hire. Be a Smart Worker - valued, empowered, and part of a culture that celebrates integrity, excellence, and ambition. If that sounds like your kind of place, we’d love to hear your story. Show more Show less
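Integration patterns like the ones listed above often come down to resilient REST calls between systems; as a generic, hypothetical sketch (the endpoint, token, and payload are placeholders), the helper below retries transient server errors with exponential backoff and an idempotency key.

```python
# Generic REST integration call with idempotency key and exponential backoff (hypothetical API).
import time
import uuid

import requests

ENDPOINT = "https://api.example.com/v1/orders"   # placeholder endpoint

def post_with_retry(payload: dict, max_attempts: int = 4) -> dict:
    headers = {
        "Authorization": "Bearer <token>",     # placeholder credential
        "Idempotency-Key": str(uuid.uuid4()),  # lets the receiver de-duplicate retries
    }
    for attempt in range(1, max_attempts + 1):
        response = requests.post(ENDPOINT, json=payload, headers=headers, timeout=10)
        if response.status_code < 500:
            response.raise_for_status()   # 4xx errors are not retried
            return response.json()
        time.sleep(2 ** attempt)           # back off on transient server errors
    raise RuntimeError("integration call failed after retries")

# Example usage (hypothetical payload):
# result = post_with_retry({"order_id": 42, "status": "shipped"})
```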

Posted 1 week ago

Apply

5.0 years

0 Lacs

India

Remote


About Smart Working At Smart Working, we believe your job should not only look right on paper but also feel right every day. This isn’t just another remote opportunity - it’s about finding where you truly belong, no matter where you are. From day one, you’re welcomed into a genuine community that values your growth and well-being. Our mission is simple: to break down geographic barriers and connect skilled professionals with outstanding global teams and products for full-time, long-term roles. We help you discover meaningful work with teams that invest in your success, where you’re empowered to grow personally and professionally. Join one of the highest-rated workplaces on Glassdoor and experience what it means to thrive in a truly remote-first world. About the Role We’re seeking a customer-facing, technically expert Senior Integrations Engineer to design and lead complex integration solutions across cloud and enterprise environments. This is a leadership-level engineering role, not a heads-down coding position; you’ll be architecting solutions, collaborating directly with stakeholders, and driving project outcomes. The ideal candidate is adaptable and curious, thriving in ambiguous environments, quick to learn new tools, and eager to solve integration challenges. You’ll work closely with clients across technical and non-technical audiences, so strong communication and the ability to lead without formal authority are essential. The technical workload is approximately 80% back-end (APIs, middleware, data flows, architecture) and 20% front-end/UI configuration to support user interaction with automation workflows. What You’ll Be Doing Architect Scalable Solutions: Design secure, high-performance integrations using Workato (ideal) or similar platforms like MuleSoft, Boomi, or Informatica Drive Delivery: Lead technical workstreams, manage timelines and dependencies, and ensure successful end-to-end project execution across diverse teams Establish Best Practices: Set integration standards, promote reusable design patterns, and conduct knowledge transfer sessions to build long-term capability Be a Trusted Advisor: Translate complex requirements into actionable solutions, present designs to stakeholders, and guide customers through integration decisions Must-Have Skills Integration Platform Expertise: 5+ years of hands-on experience with iPaaS/middleware tools (Workato preferred, or MuleSoft, Boomi, Informatica, etc.) API & Architecture Knowledge: Deep understanding of REST, SOAP, GraphQL , and common integration patterns (event-driven, batch, pub/sub, etc.) 
Enterprise-Grade Security: Familiarity with authentication/authorization methods (OAuth, SAML, JWT), secure data handling, and compliance frameworks Project Leadership: Proven experience leading integration projects, owning architecture decisions, and coordinating technical delivery Business-Focused Engineering: Able to gather business requirements and turn them into scalable, reusable integration frameworks Nice to Have Skills Cloud & Automation Stack: Familiarity with SaaS platforms, iPaaS tools, and cloud providers (AWS, GCP, Azure) Programming & Web Skills: Exposure to OOP, scripting, JSON/XML, and front-end basics (HTML, JS, iframes) Emerging Tech Awareness: Bonus points for experience with AI/ML concepts, DevOps tools, or enterprise systems like Salesforce and SAP Consulting or Solution Architecture Experience: Previous customer-facing technical roles are highly valued What Makes You a Great Fit Adaptable & Curious: Comfortable navigating changing requirements and unfamiliar technologies with a solution-first mindset Excellent Communicator: Able to clearly explain technical ideas to both engineers and executive stakeholders Influential Without Authority: Skilled in leading cross-functional initiatives and establishing alignment without formal management Results-Oriented & Proactive: Takes initiative, anticipates roadblocks, and ensures delivery without waiting for detailed instructions Why Smart Workers Love It Here Fixed Shifts: 12:00 PM - 9:30 PM IST (Summer) | 1:00 PM - 10:30 PM IST (Winter) No Weekend Work: Enjoy a real work-life balance Day 1 Benefits: Laptop and full medical insurance provided from your first day Support That Matters: Mentorship, community, and peer forums that support your growth True Belonging: A long-term career path where your contributions are seen and celebrated At Smart Working, you’ll never be just another remote hire. Be a Smart Worker - valued, empowered, and part of a culture that celebrates integrity, excellence, and ambition. If that sounds like your kind of place, we’d love to hear your story. Show more Show less

Posted 1 week ago

Apply

1.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Description
Business Data Technologies (BDT) makes it easier for teams across Amazon to produce, store, catalog, secure, move, and analyze data at massive scale. Our managed solutions combine standard AWS tooling, open-source products, and custom services to free teams from worrying about the complexities of operating at Amazon scale. This lets BDT customers move beyond the engineering and operational burden associated with managing and scaling platforms, and instead focus on scaling the value they can glean from their data, both for their customers and their teams. We own one of the largest data lakes at Amazon, where thousands of Amazon teams can search, share, and store exabytes (EB) of data in a secure and seamless way; using our solutions, teams around the world can schedule and process millions of workloads on a daily basis. We provide enterprise solutions that focus on compliance, security, integrity, and cost efficiency of operating and managing EBs of Amazon data.

Key job responsibilities
Be hands-on with ETL to build data pipelines to support automated reporting. Interface with other technology teams to extract, transform, and load data from a wide variety of data sources. Implement data structures using best practices in data modeling, ETL/ELT processes, SQL, and Redshift. Model data and metadata for ad hoc and pre-built reporting. Interface with business customers, gathering requirements and delivering complete reporting solutions. Build robust and scalable data integration (ETL) pipelines using SQL, Python and Spark. Build and deliver high-quality data sets to support business analysts, data scientists, and customer reporting needs. Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers. Participate in strategic and tactical planning discussions.

A day in the life
As a Data Engineer, you will be working with cross-functional partners from Science, Product, SDEs, Operations and Leadership to translate raw data into actionable insights for stakeholders, empowering them to make data-driven decisions. Some of the key activities include: crafting the data flow (design and build data pipelines, the backbone of our data ecosystem, and ensure the integrity of the data journey by implementing robust data quality checks and monitoring processes); architecting for insights (translate complex business requirements into efficient data models that optimize data analysis and reporting, and automate data processing tasks to streamline workflows and improve efficiency); and becoming a data detective, ensuring data availability and performance.

Basic Qualifications
1+ years of data engineering experience. Experience with SQL. Experience with data modeling, warehousing and building ETL pipelines. Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala). Experience with one or more scripting languages (e.g., Python, KornShell).

Preferred Qualifications
Experience with big data technologies such as Hadoop, Hive, Spark, EMR. Experience with any ETL tool, such as Informatica, ODI, SSIS, BODI, DataStage, etc. Knowledge of cloud services such as AWS or equivalent.

Our inclusive culture empowers Amazonians to deliver the best results for our customers.
If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - ADCI HYD 13 SEZ Job ID: A3006419
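The responsibilities above revolve around building ETL pipelines with SQL, Python, and Spark and landing curated data for reporting. As a rough, hedged illustration of that pattern (not Amazon's actual pipeline), the PySpark sketch below reads a hypothetical raw bucket, cleanses and aggregates order events, and writes curated Parquet; every path, column, and rule in it is an assumed placeholder.

```python
# Minimal PySpark ETL sketch (illustrative only; paths and columns are hypothetical).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_etl").getOrCreate()

# Extract: read raw JSON events from a (hypothetical) landing zone.
raw = spark.read.json("s3://example-bucket/raw/orders/2025-06-20/")

# Transform: basic cleansing, typing, and an aggregate for reporting.
orders = (
    raw.filter(F.col("order_id").isNotNull())
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
)
daily_revenue = (
    orders.groupBy(F.to_date("order_ts").alias("order_date"))
          .agg(F.sum("amount").alias("revenue"),
               F.countDistinct("order_id").alias("orders"))
)

# Load: write curated Parquet partitions for downstream warehouse loads.
daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/daily_revenue/"
)
spark.stop()
```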

Posted 1 week ago

Apply

0.0 - 10.0 years

0 Lacs

Pune, Maharashtra

On-site

Indeed logo

You deserve to do what you love, and love what you do – a career that works as hard for you as you do. At Fiserv, we are more than 40,000 #FiservProud innovators delivering superior value for our clients through leading technology, targeted innovation and excellence in everything we do. You have choices – if you strive to be a part of a team driven to create with purpose, now is your chance to Find your Forward with Fiserv. Responsibilities Requisition ID R-10356381 Date posted 06/12/2025 End Date 06/27/2025 City Pune State/Region Maharashtra Country India Location Type Onsite Calling all innovators – find your future at Fiserv. We’re Fiserv, a global leader in Fintech and payments, and we move money and information in a way that moves the world. We connect financial institutions, corporations, merchants, and consumers to one another millions of times a day – quickly, reliably, and securely. Any time you swipe your credit card, pay through a mobile app, or withdraw money from the bank, we’re involved. If you want to make an impact on a global scale, come make a difference at Fiserv. Job Title Tech Lead, Data Architecture What does a successful Snowflake Advisor do? We are seeking a highly skilled and experienced Snowflake Advisor to take ownership of our data warehousing strategy, implementation, maintenance and support. In this role, you will design, develop, and lead the adoption of Snowflake-based solutions to ensure scalable, efficient, and secure data systems that empower our business analytics and decision-making processes. As a Snowflake Advisor, you will collaborate with cross-functional teams, lead data initiatives, and act as the subject matter expert for Snowflake across the organization. What you will do: Define and implement best practices for data modelling, schema design, and query optimization in Snowflake Develop and manage ETL/ELT workflows to ingest, transform and load data into Snowflake from various sources Integrate data from diverse systems such as databases, APIs, flat files and cloud storage into Snowflake, using tools like StreamSets, Informatica or dbt to streamline data transformation processes Monitor and tune Snowflake performance, including warehouse sizing, query optimization and storage management. 
Manage Snowflake caching, clustering and partitioning to improve efficiency Analyze and resolve query performance bottlenecks Monitor and resolve data quality issues within the warehouse Collaborate with data analysts, data engineers and business users to understand reporting and analytics needs Work closely with the DevOps team on automation, deployment and monitoring Plan and execute strategies for scaling Snowflake environments as data volume grows Monitor system health and proactively identify and resolve issues Implement automation for routine tasks Enable seamless integration of Snowflake with BI tools like Power BI and create dashboards Support ad hoc query requests while maintaining system performance Create and maintain documentation related to data warehouse architecture, data flow, and processes Provide technical support, troubleshooting, and guidance to users accessing the data warehouse Optimize Snowflake queries and manage performance Keep up to date with emerging trends and technologies in data warehousing and data management Good working knowledge of the Linux operating system Working experience with Git and other repository management solutions Good knowledge of monitoring tools like Dynatrace and Splunk Serve as a technical leader for Snowflake-based projects, ensuring alignment with business goals and timelines Provide mentorship and guidance to team members on Snowflake implementation, performance tuning and data management Collaborate with stakeholders to define and prioritize data warehousing initiatives and roadmaps Act as the point of contact for Snowflake-related queries, issues and initiatives What you will need to have: Must have 8 to 10 years of experience with data management tools such as Snowflake, StreamSets and Informatica Should have experience with monitoring tools like Dynatrace and Splunk Should have experience with Kubernetes cluster management, CloudWatch for monitoring and logging, and the Linux OS Ability to track progress against assigned tasks, report status, and proactively identify issues Demonstrated ability to present information effectively in communications with peers and the project management team Highly organized and works well in a fast-paced, fluid and dynamic environment What would be great to have: Experience with EKS for managing Kubernetes clusters Containerization technologies such as Docker and Podman AWS CLI for command-line interactions CI/CD pipelines using Harness S3 for storage solutions and IAM for access management Banking and Financial Services experience Knowledge of software development life cycle best practices Thank you for considering employment with Fiserv. Please: Apply using your legal name Complete the step-by-step profile and attach your resume (either is acceptable, both are preferable). Our commitment to Diversity and Inclusion: Fiserv is proud to be an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, gender, gender identity, sexual orientation, age, disability, protected veteran status, or any other category protected by law. Note to agencies: Fiserv does not accept resume submissions from agencies outside of existing agreements. Please do not send resumes to Fiserv associates. Fiserv is not responsible for any fees associated with unsolicited resume submissions. Warning about fake job posts: Please be aware of fraudulent job postings that are not affiliated with Fiserv. 
Fraudulent job postings may be used by cyber criminals to target your personally identifiable information and/or to steal money or financial information. Any communications from a Fiserv representative will come from a legitimate Fiserv email address.
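Since the role above stresses Snowflake warehouse sizing, clustering, and query tuning, here is a minimal sketch of how such routine tuning tasks might be scripted with the Snowflake Python connector. It is illustrative only: the account, warehouse, table, and clustering keys are hypothetical placeholders, and the statements are examples of the kinds of commands involved rather than a prescribed procedure.

```python
# Illustrative Snowflake tuning tasks via the Python connector (all names are placeholders).
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345.ap-south-1",   # hypothetical account identifier
    user="ETL_SVC",
    password="***",
    warehouse="ANALYTICS_WH",
    database="EDW",
    schema="SALES",
)
cur = conn.cursor()
try:
    # Resize a virtual warehouse ahead of a heavy batch window.
    cur.execute("ALTER WAREHOUSE ANALYTICS_WH SET WAREHOUSE_SIZE = 'LARGE'")

    # Add a clustering key so large range scans prune micro-partitions.
    cur.execute("ALTER TABLE FACT_ORDERS CLUSTER BY (ORDER_DATE, REGION)")

    # Surface the most expensive recent queries as tuning candidates.
    cur.execute(
        "SELECT QUERY_ID, TOTAL_ELAPSED_TIME "
        "FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY()) "
        "ORDER BY TOTAL_ELAPSED_TIME DESC LIMIT 5"
    )
    for row in cur.fetchall():
        print(row)
finally:
    cur.close()
    conn.close()
```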

Posted 1 week ago

Apply

0.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Indeed logo

- 1+ years of data engineering experience - Experience with SQL - Experience with data modeling, warehousing and building ETL pipelines - Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala) - Experience with one or more scripting languages (e.g., Python, KornShell) The Prime Data Engineering & Analytics (PDEA) team is seeking to hire passionate Data Engineers to build and manage the central petabyte-scale data infrastructure supporting the worldwide Prime business operations. At Amazon Prime, understanding customer data is paramount to our success in providing customers with relevant and enticing benefits such as fast free shipping, instant videos, streaming music and free Kindle books in the US and international markets. At Amazon you will be working in one of the world's largest and most complex data environments. You will be part of a team that works with the marketing, retail, finance, analytics, machine learning and technology teams to provide real-time data processing solutions that give Amazon leadership, marketers and PMs timely, flexible and structured access to customer insights. The team will be responsible for building this platform end to end using the latest AWS technologies and software development principles. As a Data Engineer, you will be responsible for leading the architecture, design and development of the data, metrics and reporting platform for Prime. You will architect and implement new and automated Business Intelligence solutions, including big data and new analytical capabilities that support our Development Engineers, Analysts and Retail business stakeholders with timely, actionable data, metrics and reports while satisfying scalability, reliability, accuracy, performance and budget goals and driving automation and operational efficiencies. You will partner with business leaders to drive strategy and prioritize projects and feature sets. You will also write and review business cases and drive the development process from design to release. In addition, you will provide technical leadership and mentoring for a team of highly capable Data Engineers. Responsibilities 1. Own design and execution of end-to-end projects 2. Own and manage the WW Prime core services data infrastructure 3. Establish key relationships which span Amazon business units and Business Intelligence teams 4. Implement standardized, automated operational and quality control processes to deliver accurate and timely data and reporting to meet or exceed SLAs Experience with big data technologies such as Hadoop, Hive, Spark, EMR Experience with any ETL tool such as Informatica, ODI, SSIS, BODI, DataStage, etc. Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Posted 1 week ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Job Title: Data Testing Engineer Exp: 8+ years Location: Hyderabad and Gurgaon (Hybrid) Notice Period: Immediate to 15 days Job Description: Develop, maintain, and execute test cases to validate the accuracy, completeness, and consistency of data across different layers of the data warehouse. ● Test ETL processes to ensure that data is correctly extracted, transformed, and loaded from source to target systems while adhering to business rules ● Perform source-to-target data validation to ensure data integrity and identify any discrepancies or data quality issues. ● Develop automated data validation scripts using SQL, Python, or testing frameworks to streamline and scale testing efforts. ● Conduct testing in cloud-based data platforms (e.g., AWS Redshift, Google BigQuery, Snowflake), ensuring performance and scalability. ● Familiarity with ETL testing tools and frameworks (e.g., Informatica, Talend, dbt). ● Experience with scripting languages to automate data testing. ● Familiarity with data visualization tools like Tableau, Power BI, or Looker.
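The posting above calls for automated source-to-target validation scripts in SQL or Python. A minimal, self-contained sketch of that idea follows; it uses an in-memory SQLite database purely to stand in for the real source and target systems, and the table and column names are hypothetical.

```python
# Source-to-target validation sketch: row counts, null counts, and a sum reconciliation.
# SQLite stands in for the real source/target systems; names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE src_orders (order_id INT, amount REAL);
    CREATE TABLE tgt_orders (order_id INT, amount REAL);
    INSERT INTO src_orders VALUES (1, 100.0), (2, 250.5), (3, NULL);
    INSERT INTO tgt_orders VALUES (1, 100.0), (2, 250.5), (3, NULL);
""")

def scalar(sql: str) -> float:
    return conn.execute(sql).fetchone()[0]

checks = {
    "row_count": ("SELECT COUNT(*) FROM src_orders", "SELECT COUNT(*) FROM tgt_orders"),
    "null_amounts": ("SELECT COUNT(*) FROM src_orders WHERE amount IS NULL",
                     "SELECT COUNT(*) FROM tgt_orders WHERE amount IS NULL"),
    "total_amount": ("SELECT COALESCE(SUM(amount), 0) FROM src_orders",
                     "SELECT COALESCE(SUM(amount), 0) FROM tgt_orders"),
}

for name, (src_sql, tgt_sql) in checks.items():
    src_val, tgt_val = scalar(src_sql), scalar(tgt_sql)
    status = "PASS" if src_val == tgt_val else "FAIL"
    print(f"{name}: source={src_val} target={tgt_val} -> {status}")
```

The same pattern extends to Redshift, BigQuery, or Snowflake by swapping the connection and pointing the paired queries at the staging and target schemas.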

Posted 1 week ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

Remote

Linkedin logo

IMEA (India, Middle East, Africa) India LIXIL INDIA PVT LTD Employee Assignment Fully remote possible Full Time 1 May 2025 Title Senior Data Engineer Job Description A Data Engineer is responsible for designing, building, and maintaining large-scale data systems and infrastructure. Their primary goal is to ensure that data is properly collected, stored, processed, and retrieved to support business intelligence, analytics, and data-driven decision-making. Key Responsibilities Design and Develop Data Pipelines: Create data pipelines to extract data from various sources, transform it into a standardized format, and load it into a centralized data repository. Build and Maintain Data Infrastructure: Design, implement, and manage data warehouses, data lakes, and other data storage solutions. Ensure Data Quality and Integrity: Develop data validation, cleansing, and normalization processes to ensure data accuracy and consistency. Collaborate with Data Analysts and Business Process Owners: Work with data analysts and business process owners to understand their data requirements and provide data support for their projects. Optimize Data Systems for Performance: Continuously monitor and optimize data systems for performance, scalability, and reliability. Develop and Maintain Data Governance Policies: Create and enforce data governance policies to ensure data security, compliance, and regulatory requirements. Experience & Skills Hands-on experience in implementing, supporting, and administering modern cloud-based data solutions (Google BigQuery, AWS Redshift, Azure Synapse, Snowflake, etc.). Strong programming skills in SQL, Java, and Python. Experience in configuring and managing data pipelines using Apache Airflow, Informatica, Talend, SAP BODS or API-based extraction. Expertise in real-time data processing frameworks. Strong understanding of Git and CI/CD for automated deployment and version control. Experience with Infrastructure-as-Code tools like Terraform for cloud resource management. Good stakeholder management skills to collaborate effectively across teams. Solid understanding of SAP ERP data and processes to integrate enterprise data sources. Exposure to data visualization and front-end tools (Tableau, Looker, etc.). Strong command of English with excellent communication skills.
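Because the role lists Apache Airflow among the pipeline tools, the sketch below shows the shape of a small extract-transform-load DAG, assuming a recent Airflow 2.x release (the schedule argument; older releases use schedule_interval). The DAG id, schedule, and task bodies are hypothetical placeholders rather than an actual LIXIL pipeline.

```python
# Minimal Airflow 2.x DAG sketch (dag_id, schedule, and tasks are hypothetical).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull increments from the source API / database")

def transform():
    print("standardize, deduplicate, and validate the extracted records")

def load():
    print("load curated records into the warehouse (e.g., BigQuery or Snowflake)")

with DAG(
    dag_id="sales_daily_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
    tags=["example"],
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```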

Posted 1 week ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

Remote

Linkedin logo

IMEA (India, Middle East, Africa) India LIXIL INDIA PVT LTD Employee Assignment Fully remote possible Full Time 1 May 2025 Title Data Engineer Job Description A Data Engineer is responsible for designing, building, and maintaining large-scale data systems and infrastructure. Their primary goal is to ensure that data is properly collected, stored, processed, and retrieved to support business intelligence, analytics, and data-driven decision-making. Key Responsibilities Design and Develop Data Pipelines: Create data pipelines to extract data from various sources, transform it into a standardized format, and load it into a centralized data repository. Build and Maintain Data Infrastructure: Design, implement, and manage data warehouses, data lakes, and other data storage solutions. Ensure Data Quality and Integrity: Develop data validation, cleansing, and normalization processes to ensure data accuracy and consistency. Collaborate with Data Analysts and Business Process Owners: Work with data analysts and business process owners to understand their data requirements and provide data support for their projects. Optimize Data Systems for Performance: Continuously monitor and optimize data systems for performance, scalability, and reliability. Develop and Maintain Data Governance Policies: Create and enforce data governance policies to ensure data security, compliance, and regulatory requirements. Experience & Skills Hands-on experience in implementing, supporting, and administering modern cloud-based data solutions (Google BigQuery, AWS Redshift, Azure Synapse, Snowflake, etc.). Strong programming skills in SQL, Java, and Python. Experience in configuring and managing data pipelines using Apache Airflow, Informatica, Talend, SAP BODS or API-based extraction. Expertise in real-time data processing frameworks. Strong understanding of Git and CI/CD for automated deployment and version control. Experience with Infrastructure-as-Code tools like Terraform for cloud resource management. Good stakeholder management skills to collaborate effectively across teams. Solid understanding of SAP ERP data and processes to integrate enterprise data sources. Exposure to data visualization and front-end tools (Tableau, Looker, etc.). Strong command of English with excellent communication skills.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Indore, Madhya Pradesh, India

On-site

Linkedin logo

Role: Data Location: Jaipur/Indore Notice: Immediate joiners only Basic Responsibilities (Must-Haves) 5+ years of experience in dashboard story development, dashboard creation, and data engineering pipelines. Hands-on experience with log analytics, user engagement metrics, and product performance metrics. Ability to identify patterns, trends, and anomalies in log data to generate actionable insights for product enhancements and feature optimization. Collaborate with cross-functional teams to gather business requirements and translate them into functional and technical specifications. Manage and organize large volumes of application log data using Google BigQuery. Design and develop interactive dashboards to visualize key metrics and insights using a tool such as Tableau, Power BI, or ThoughtSpot AI. Create intuitive, impactful visualizations to communicate findings to teams including customer success and leadership. Ensure data integrity, consistency, and accessibility for analytical purposes. Analyse application logs to extract metrics and statistics related to product performance, customer behaviour, and user sentiment. Work closely with product teams to understand log data generated by Python-based applications. Collaborate with stakeholders to define key performance indicators (KPIs) and success metrics. Optimize data pipelines and storage in BigQuery. Strong communication and teamwork skills. Ability to learn quickly and adapt to new technologies. Excellent problem-solving skills. Preferred Responsibilities (Nice-to-Haves) Knowledge of Generative AI (GenAI) and LLM-based solutions. Experience in designing and developing dashboards using ThoughtSpot AI. Good exposure to Google Cloud Platform (GCP). Data engineering experience with modern data warehouse architectures. Additional Responsibilities Participate in the development of proof-of-concepts (POCs) and pilot projects. Ability to articulate ideas and points of view clearly to the team. Take ownership of data analytics and data engineering solutions. Additional Nice-to-Haves Experience working with large datasets and distributed data processing tools such as Apache Spark or Hadoop. Familiarity with Agile development methodologies and version control systems like Git. Familiarity with ETL tools such as Informatica or Azure Data Factory (ref:hirist.tech)
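The role above centres on mining application logs in Google BigQuery for engagement and performance metrics. As a hedged illustration, the snippet below queries a hypothetical log table for daily active users, error counts, and p95 latency using the official BigQuery Python client; the project, dataset, table, and field names are all assumed placeholders.

```python
# Querying (hypothetical) application logs in BigQuery for daily engagement metrics.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")  # assumes default credentials

sql = """
    SELECT
      DATE(event_ts)                                AS event_date,
      COUNT(DISTINCT user_id)                       AS daily_active_users,
      COUNTIF(level = 'ERROR')                      AS error_events,
      APPROX_QUANTILES(latency_ms, 100)[OFFSET(95)] AS p95_latency_ms
    FROM `example-project.app_logs.events`
    WHERE event_ts >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
    GROUP BY event_date
    ORDER BY event_date
"""

for row in client.query(sql).result():
    print(row.event_date, row.daily_active_users, row.error_events, row.p95_latency_ms)
```

A result set like this would typically feed a Tableau, Power BI, or ThoughtSpot dashboard rather than be printed.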

Posted 1 week ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Linkedin logo

Role: Data QA Lead Experience Required: 8+ Years Location: India/Remote Company Overview At Codvo.ai, software and people transformations go hand-in-hand. We are a global empathy-led technology services company. Product innovation and mature software engineering are part of our core DNA. Respect, Fairness, Growth, Agility, and Inclusiveness are the core values that we aspire to live by each day. We continue to expand our digital strategy, design, architecture, and product management capabilities to offer expertise, outside-the-box thinking, and measurable results. The Data Quality Analyst is responsible for ensuring the quality, accuracy, and consistency of data within the Customer and Loan Master Data API solution. This role will work closely with data owners, data modelers, and developers to identify and resolve data quality issues. Key Responsibilities Lead and manage end-to-end ETL/data validation activities. Design test strategy, plans, and scenarios for source-to-target validation. Build automated data validation frameworks (SQL/Python/Great Expectations). Integrate tests with CI/CD pipelines (Jenkins, Azure DevOps). Perform data integrity, transformation logic, and reconciliation checks. Collaborate with Data Engineering, Product, and DevOps teams. Drive test metrics reporting, defect triage, and root cause analysis. Mentor QA team members and ensure process adherence. Must-Have Skills 8+ years in QA with 4+ years in ETL testing. Strong SQL and database testing experience. Proficiency with ETL tools (Airbyte, dbt, Informatica, etc.). Automation using Python or a similar scripting language. Solid understanding of data warehousing, SCD, and deduplication. Experience with large datasets and structured/unstructured formats. Preferred Skills Knowledge of data orchestration tools (Prefect, Airflow). Familiarity with data quality/observability tools. Experience with big data systems (Spark, Hive). Hands-on with test data generation (Faker, Mockaroo).
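Given the emphasis above on automated validation frameworks wired into CI/CD, one common pattern is a suite of parametrized pytest checks that compare source and target aggregates, so a Jenkins or Azure DevOps stage can fail the build on any mismatch. The sketch below illustrates that pattern with an in-memory SQLite stand-in; the connection helper and table names are hypothetical.

```python
# test_reconciliation.py -- pytest-style ETL checks (tables and helper are hypothetical).
import sqlite3

import pytest

def get_connection():
    # Placeholder: a real suite would connect to the warehouse (Snowflake, Redshift, ...).
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE stg_customers (customer_id INT, email TEXT);
        CREATE TABLE dim_customers (customer_id INT, email TEXT);
        INSERT INTO stg_customers VALUES (1, 'a@x.com'), (2, 'b@x.com');
        INSERT INTO dim_customers VALUES (1, 'a@x.com'), (2, 'b@x.com');
    """)
    return conn

@pytest.fixture(scope="module")
def conn():
    c = get_connection()
    yield c
    c.close()

@pytest.mark.parametrize("src_sql,tgt_sql", [
    ("SELECT COUNT(*) FROM stg_customers", "SELECT COUNT(*) FROM dim_customers"),
    ("SELECT COUNT(DISTINCT customer_id) FROM stg_customers",
     "SELECT COUNT(DISTINCT customer_id) FROM dim_customers"),
])
def test_source_matches_target(conn, src_sql, tgt_sql):
    assert conn.execute(src_sql).fetchone()[0] == conn.execute(tgt_sql).fetchone()[0]

def test_no_duplicate_keys(conn):
    dupes = conn.execute(
        "SELECT customer_id FROM dim_customers GROUP BY customer_id HAVING COUNT(*) > 1"
    ).fetchall()
    assert dupes == []
```

In a pipeline this would run as a plain pytest step after the load job, with the warehouse connection supplied through environment variables or a secrets store.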

Posted 1 week ago

Apply

5.0 - 10.0 years

0 Lacs

Chennai, Bengaluru

Hybrid

Naukri logo

Job Description: Role: Oracle, Informatica, PL/SQL, ETL Location: Chennai/Bangalore Experience: 5+ Years Must have: Oracle, Informatica, PL/SQL, ETL Looking for a candidate with expertise in Oracle Database, SnapLogic and Oracle PL/SQL, with knowledge of the AWS cloud. The Value You Deliver: As a Software Engineer, you build quality solutions that align with the technology blueprint and best practices to solve business problems by driving design, development, and ongoing support. You develop Oracle PL/SQL stored procedures for business functionality You develop SnapLogic pipelines for integration and processing across the application's multiple data stores You actively participate in release planning and daily stand-ups, as well as working with and helping the team on tactical activities like code review, performance tuning, bug fixes and design optimization You understand key design aspects of Performance, Scalability, Resiliency, Security, and High Availability, and follow recommended Engineering Practices. The Skills that are Key to this role: You have strong design/development skills in SQL and Oracle PL/SQL, including performance tuning, writing packages and stored procedures, and troubleshooting You have strong SnapLogic design/development skills You have proven design thinking and solutioning ability to provide optimal solutions for complex data transformation use cases You have deep knowledge of Oracle database concepts and implementation experience You have solid experience in writing complex SQL queries on Oracle RDS and performance optimization for large data volumes, with experience performing deep data analysis on multiple database platforms You have hands-on experience building and deploying applications using a variety of Amazon Web Services (primarily database services) You are an excellent communicator with both technical and non-technical players to ensure common understanding of design to streamline and optimize data enablement. You have prior experience working in Agile software development environments, with proven ability to convert user stories into delivery work that provides incremental, iterative delivery of business value. You need to bring passion to your work and operate with a sense of purpose that inspires and motivates those you work with. You are expected to be intellectually curious, take initiative, and love learning new skills and capabilities. You should be able to switch focus based on changing priorities within a fast-paced team. The Expertise We are Looking For: Minimum 5 years of relevant experience bringing a data-driven approach to strategic business decision-making Experience with Oracle Database, SQL, SnapLogic, Oracle PL/SQL, Informatica, Control-M, and scripting (PowerShell, Python, etc.) Certification in AWS or SnapLogic is a great value add Ability to generate meaningful insights through data analytics and research and to present complex data, financial analyses, statistics, and research findings in a simple, clear and actionable way Excellent communication skills and ability to interact with all levels of end users and technical resources Bachelor's or Master's degree in Computer Science or Information Technology The Skills that are Good to Have for this role: Knowledge of Aerospike, Power BI and visualization tools like Qlik/Alteryx is preferred Exposure to the Finance and Markets domain is preferred Ability to perform independent technical analysis on complex projects. 
Location for This Role: Chennai/Bangalore Shift timings: 11:00 am - 8:00 pm (these are the official working hours so as to enable overlap with the US; that said, associates can exercise flexible logins/logouts from the office and work remotely for calls/meetings with the US)
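The role above pairs Oracle PL/SQL stored procedures with pipeline integration work. As a small, hedged illustration of how such a procedure might be invoked and sanity-checked from Python, the snippet below uses the python-oracledb driver; the DSN, credentials, package, procedure, and table names are hypothetical placeholders, not details from the posting.

```python
# Calling a (hypothetical) PL/SQL stored procedure from Python via python-oracledb.
import time

import oracledb

conn = oracledb.connect(user="etl_user", password="***",
                        dsn="dbhost.example.com:1521/ORCLPDB1")  # placeholders
cur = conn.cursor()

start = time.perf_counter()
# Equivalent PL/SQL: BEGIN pkg_orders.load_daily_orders(:run_date); END;
cur.callproc("pkg_orders.load_daily_orders", ["2025-06-20"])
conn.commit()
print(f"procedure completed in {time.perf_counter() - start:.2f}s")

# A quick sanity query after the load (table and column are illustrative).
cur.execute("SELECT COUNT(*) FROM stg_orders WHERE load_date = :d", d="2025-06-20")
print("rows staged:", cur.fetchone()[0])

cur.close()
conn.close()
```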

Posted 1 week ago

Apply

Exploring Informatica Jobs in India

The Informatica job market in India is thriving, with numerous opportunities for skilled professionals in this field. Companies across various industries are actively hiring Informatica experts to manage and optimize their data integration and data quality processes.

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Chennai
  5. Mumbai

Average Salary Range

The average salary range for Informatica professionals in India varies based on experience and expertise:

  • Entry-level: INR 3-5 lakhs per annum
  • Mid-level: INR 6-10 lakhs per annum
  • Experienced: INR 12-20 lakhs per annum

Career Path

A typical career progression in the Informatica field may include roles such as:

  1. Junior Developer
  2. Informatica Developer
  3. Senior Developer
  4. Informatica Tech Lead
  5. Informatica Architect

Related Skills

In addition to Informatica expertise, professionals in this field are often expected to have skills in:

  • SQL
  • Data warehousing
  • ETL tools
  • Data modeling
  • Data analysis

Interview Questions

  • What is Informatica and why is it used? (basic)
  • Explain the difference between a connected and unconnected lookup transformation. (medium)
  • How can you improve the performance of a session in Informatica? (medium)
  • What are the various types of cache in Informatica? (medium)
  • How do you handle rejected rows in Informatica? (basic)
  • What is a reusable transformation in Informatica? (basic)
  • Explain the difference between a filter and router transformation in Informatica. (medium)
  • What is a workflow in Informatica? (basic)
  • How do you handle slowly changing dimensions in Informatica? (advanced; see the Type 2 sketch after this list)
  • What is a mapplet in Informatica? (medium)
  • Explain the difference between an aggregator and joiner transformation in Informatica. (medium)
  • How do you create a mapping parameter in Informatica? (basic)
  • What is a session and a workflow in Informatica? (basic)
  • What is a rank transformation in Informatica and how is it used? (medium)
  • How do you debug a mapping in Informatica? (medium)
  • Explain the difference between static and dynamic cache in Informatica. (advanced)
  • What is a sequence generator transformation in Informatica? (basic)
  • How do you handle null values in Informatica? (basic)
  • Explain the difference between a mapping and mapplet in Informatica. (basic)
  • What are the various types of transformations in Informatica? (basic)
  • How do you implement partitioning in Informatica? (medium)
  • Explain the concept of pushdown optimization in Informatica. (advanced)
  • How do you create a session in Informatica? (basic)
  • What is a source qualifier transformation in Informatica? (basic)
  • How do you handle exceptions in Informatica? (medium)
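One of the questions above asks how slowly changing dimensions are handled in Informatica. In a mapping this is typically built from lookup, expression, and update-strategy transformations, but the underlying Type 2 logic is easier to see in code: compare incoming rows against the current dimension rows, expire the changed ones, and insert fresh current versions. The pandas sketch below is a minimal, hypothetical illustration of that logic, not an Informatica mapping; all column names are invented for the example.

```python
# SCD Type 2 sketch in pandas: expire changed rows, insert new current versions.
from datetime import date

import pandas as pd

TODAY = date(2025, 6, 20)
HIGH_DATE = date(9999, 12, 31)  # open-ended "current" end date

# Existing dimension: one current row per customer_id (columns are illustrative).
dim = pd.DataFrame({
    "customer_id": [1, 2],
    "city": ["Pune", "Noida"],
    "effective_from": [date(2024, 1, 1)] * 2,
    "effective_to": [HIGH_DATE] * 2,
    "is_current": [True, True],
})

# Incoming snapshot from the source system.
src = pd.DataFrame({"customer_id": [1, 2, 3],
                    "city": ["Mumbai", "Noida", "Indore"]})

current = dim[dim["is_current"]][["customer_id", "city"]]
merged = src.merge(current, on="customer_id", how="left",
                   suffixes=("", "_dim"), indicator=True)

changed = merged[(merged["_merge"] == "both") & (merged["city"] != merged["city_dim"])]
brand_new = merged[merged["_merge"] == "left_only"]

# 1. Expire the current versions of changed keys.
expire = dim["is_current"] & dim["customer_id"].isin(changed["customer_id"])
dim.loc[expire, ["effective_to", "is_current"]] = [TODAY, False]

# 2. Insert fresh current versions for changed and brand-new keys.
inserts = pd.concat([changed, brand_new])[["customer_id", "city"]].assign(
    effective_from=TODAY, effective_to=HIGH_DATE, is_current=True)
dim = pd.concat([dim, inserts], ignore_index=True)

print(dim.sort_values(["customer_id", "effective_from"]).to_string(index=False))
```

The compare-expire-insert pattern shown here is the same logic an Informatica mapping expresses with a lookup on the dimension's current rows and an update strategy that routes changed keys to both an update and an insert.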

Closing Remark

As you prepare for Informatica job opportunities in India, make sure to enhance your skills, stay updated with the latest trends in data integration, and approach interviews with confidence. With the right knowledge and expertise, you can excel in the Informatica field and secure rewarding career opportunities. Good luck!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies