
804 Talend Jobs

JobPe aggregates these listings for easy access; applications are submitted directly on the original job portal.

8.0 - 13.0 years

10 - 15 Lacs

Bengaluru

Work from Office


Very good experience with the Continuous Flow Graph tool used for point-based development. Design, develop, and maintain ETL processes using Ab Initio tools. Write, test, and deploy Ab Initio graphs, scripts, and other necessary components. Troubleshoot and resolve data processing issues and improve performance.

Required education: Bachelor's degree. Preferred education: Master's degree.

Required technical and professional expertise: 8 years overall and 5+ years relevant experience. Extract, transform, and load data from various sources into data warehouses, operational data stores, or other target systems. Work with different data formats, including structured, semi-structured, and unstructured data.

Preferred technical and professional experience: Effective communication and presentation skills. Industry expertise / specialization.

Posted 15 hours ago

Apply

8.0 years

28 - 30 Lacs

Hyderābād

On-site

Experience: 8+ years. Budget: 30 LPA (including variable pay). Location: Bangalore, Hyderabad, Chennai (hybrid). Shift timing: 2 PM - 11 PM.

ETL Development Lead (8+ years): Experience leading and mentoring a team of Talend ETL developers. Providing technical direction and guidance on ETL/data-integration development to the team. Designing complex data integration solutions using Talend and AWS. Collaborating with stakeholders to define project scope, timelines, and deliverables. Contributing to project planning, risk assessment, and mitigation strategies. Ensuring adherence to project timelines and quality standards. Strong understanding of ETL/ELT concepts, data warehousing principles, and database technologies.

Responsibilities: Design, develop, and implement ETL (Extract, Transform, Load) processes using Talend Studio and other Talend components. Build and maintain robust and scalable data integration solutions to move and transform data between various source and target systems (e.g., databases, data warehouses, cloud applications, APIs, flat files). Develop and optimize Talend jobs, workflows, and data mappings to ensure high performance and data quality. Troubleshoot and resolve issues related to Talend jobs, data pipelines, and integration processes. Collaborate with data analysts, data engineers, and other stakeholders to understand data requirements and translate them into technical solutions. Perform unit testing and participate in system integration testing of ETL processes. Monitor and maintain Talend environments, including job scheduling and performance tuning. Document technical specifications, data flow diagrams, and ETL processes. Stay up to date with the latest Talend features, best practices, and industry trends. Participate in code reviews and contribute to the establishment of development standards.

Required skills: Proficiency in using Talend Studio, Talend Administration Center/TMC, and other Talend components. Experience working with various data sources and targets, including relational databases (e.g., Oracle, SQL Server, MySQL, PostgreSQL), NoSQL databases, the AWS cloud platform, APIs (REST, SOAP), and flat files (CSV, TXT). Strong SQL skills for data querying and manipulation. Experience with data profiling, data quality checks, and error handling within ETL processes. Familiarity with job scheduling tools and monitoring frameworks. Excellent problem-solving, analytical, and communication skills. Ability to work independently and collaboratively within a team environment. Basic understanding of AWS services such as EC2, S3, EFS, EBS, IAM, AWS roles, CloudWatch Logs, VPC, security groups, Route 53, network ACLs, Amazon Redshift, Amazon RDS, Amazon Aurora, and Amazon DynamoDB. Understanding of AWS data integration services such as Glue, Data Pipeline, Amazon Athena, AWS Lake Formation, AppFlow, and Step Functions.

Preferred Qualifications: Experience leading and mentoring a team of 8+ Talend ETL developers. Experience working with US healthcare customers. Bachelor's degree in Computer Science, Information Technology, or a related field. Talend certifications (e.g., Talend Certified Developer) and AWS Certified Cloud Practitioner / Data Engineer Associate. Experience with AWS data and infrastructure services. Basic working knowledge of Terraform and GitLab is required. Experience with scripting languages such as Python or shell scripting. Experience with agile development methodologies. Understanding of big data technologies (e.g., Hadoop, Spark) and the Talend Big Data platform.
Job Type: Full-time Pay: ₹2,800,000.00 - ₹3,000,000.00 per year Schedule: Day shift Work Location: In person
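As an illustration of the kind of data-quality check and flat-file error handling this role describes, here is a minimal Python sketch using boto3 and the standard csv module. The bucket, key, and expected column names are hypothetical assumptions, not taken from the posting.

```python
# Minimal data-quality check for a CSV file landed in S3, of the kind an
# ETL lead might wire into a Talend or Airflow pipeline. The bucket/key
# and expected schema below are illustrative assumptions.
import csv
import io

import boto3

EXPECTED_COLUMNS = ["customer_id", "claim_date", "amount"]  # hypothetical schema


def check_csv_quality(bucket: str, key: str) -> dict:
    """Download a CSV from S3 and report basic completeness metrics."""
    s3 = boto3.client("s3")
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
    reader = csv.DictReader(io.StringIO(body))

    if reader.fieldnames != EXPECTED_COLUMNS:
        raise ValueError(f"Unexpected header: {reader.fieldnames}")

    total = 0
    null_counts = {col: 0 for col in EXPECTED_COLUMNS}
    for row in reader:
        total += 1
        for col in EXPECTED_COLUMNS:
            if not (row.get(col) or "").strip():
                null_counts[col] += 1

    return {"rows": total, "nulls": null_counts}


if __name__ == "__main__":
    # Hypothetical bucket and object key.
    print(check_csv_quality("example-landing-bucket", "claims/2024-06-01.csv"))
```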

Posted 17 hours ago

Apply

8.0 years

28 - 30 Lacs

Pune

On-site

Experience: 8+ years. Budget: 30 LPA (including variable pay). Location: Bangalore, Hyderabad, Chennai (hybrid). Shift timing: 2 PM - 11 PM.

ETL Development Lead (8+ years): Experience leading and mentoring a team of Talend ETL developers. Providing technical direction and guidance on ETL/data-integration development to the team. Designing complex data integration solutions using Talend and AWS. Collaborating with stakeholders to define project scope, timelines, and deliverables. Contributing to project planning, risk assessment, and mitigation strategies. Ensuring adherence to project timelines and quality standards. Strong understanding of ETL/ELT concepts, data warehousing principles, and database technologies.

Responsibilities: Design, develop, and implement ETL (Extract, Transform, Load) processes using Talend Studio and other Talend components. Build and maintain robust and scalable data integration solutions to move and transform data between various source and target systems (e.g., databases, data warehouses, cloud applications, APIs, flat files). Develop and optimize Talend jobs, workflows, and data mappings to ensure high performance and data quality. Troubleshoot and resolve issues related to Talend jobs, data pipelines, and integration processes. Collaborate with data analysts, data engineers, and other stakeholders to understand data requirements and translate them into technical solutions. Perform unit testing and participate in system integration testing of ETL processes. Monitor and maintain Talend environments, including job scheduling and performance tuning. Document technical specifications, data flow diagrams, and ETL processes. Stay up to date with the latest Talend features, best practices, and industry trends. Participate in code reviews and contribute to the establishment of development standards.

Required skills: Proficiency in using Talend Studio, Talend Administration Center/TMC, and other Talend components. Experience working with various data sources and targets, including relational databases (e.g., Oracle, SQL Server, MySQL, PostgreSQL), NoSQL databases, the AWS cloud platform, APIs (REST, SOAP), and flat files (CSV, TXT). Strong SQL skills for data querying and manipulation. Experience with data profiling, data quality checks, and error handling within ETL processes. Familiarity with job scheduling tools and monitoring frameworks. Excellent problem-solving, analytical, and communication skills. Ability to work independently and collaboratively within a team environment. Basic understanding of AWS services such as EC2, S3, EFS, EBS, IAM, AWS roles, CloudWatch Logs, VPC, security groups, Route 53, network ACLs, Amazon Redshift, Amazon RDS, Amazon Aurora, and Amazon DynamoDB. Understanding of AWS data integration services such as Glue, Data Pipeline, Amazon Athena, AWS Lake Formation, AppFlow, and Step Functions.

Preferred Qualifications: Experience leading and mentoring a team of 8+ Talend ETL developers. Experience working with US healthcare customers. Bachelor's degree in Computer Science, Information Technology, or a related field. Talend certifications (e.g., Talend Certified Developer) and AWS Certified Cloud Practitioner / Data Engineer Associate. Experience with AWS data and infrastructure services. Basic working knowledge of Terraform and GitLab is required. Experience with scripting languages such as Python or shell scripting. Experience with agile development methodologies. Understanding of big data technologies (e.g., Hadoop, Spark) and the Talend Big Data platform.
Job Type: Full-time Pay: ₹2,800,000.00 - ₹3,000,000.00 per year Schedule: Day shift Work Location: In person

Posted 17 hours ago

Apply

8.0 years

28 - 30 Lacs

Chennai

On-site

Experience: 8+ years. Budget: 30 LPA (including variable pay). Location: Bangalore, Hyderabad, Chennai (hybrid). Shift timing: 2 PM - 11 PM.

ETL Development Lead (8+ years): Experience leading and mentoring a team of Talend ETL developers. Providing technical direction and guidance on ETL/data-integration development to the team. Designing complex data integration solutions using Talend and AWS. Collaborating with stakeholders to define project scope, timelines, and deliverables. Contributing to project planning, risk assessment, and mitigation strategies. Ensuring adherence to project timelines and quality standards. Strong understanding of ETL/ELT concepts, data warehousing principles, and database technologies.

Responsibilities: Design, develop, and implement ETL (Extract, Transform, Load) processes using Talend Studio and other Talend components. Build and maintain robust and scalable data integration solutions to move and transform data between various source and target systems (e.g., databases, data warehouses, cloud applications, APIs, flat files). Develop and optimize Talend jobs, workflows, and data mappings to ensure high performance and data quality. Troubleshoot and resolve issues related to Talend jobs, data pipelines, and integration processes. Collaborate with data analysts, data engineers, and other stakeholders to understand data requirements and translate them into technical solutions. Perform unit testing and participate in system integration testing of ETL processes. Monitor and maintain Talend environments, including job scheduling and performance tuning. Document technical specifications, data flow diagrams, and ETL processes. Stay up to date with the latest Talend features, best practices, and industry trends. Participate in code reviews and contribute to the establishment of development standards.

Required skills: Proficiency in using Talend Studio, Talend Administration Center/TMC, and other Talend components. Experience working with various data sources and targets, including relational databases (e.g., Oracle, SQL Server, MySQL, PostgreSQL), NoSQL databases, the AWS cloud platform, APIs (REST, SOAP), and flat files (CSV, TXT). Strong SQL skills for data querying and manipulation. Experience with data profiling, data quality checks, and error handling within ETL processes. Familiarity with job scheduling tools and monitoring frameworks. Excellent problem-solving, analytical, and communication skills. Ability to work independently and collaboratively within a team environment. Basic understanding of AWS services such as EC2, S3, EFS, EBS, IAM, AWS roles, CloudWatch Logs, VPC, security groups, Route 53, network ACLs, Amazon Redshift, Amazon RDS, Amazon Aurora, and Amazon DynamoDB. Understanding of AWS data integration services such as Glue, Data Pipeline, Amazon Athena, AWS Lake Formation, AppFlow, and Step Functions.

Preferred Qualifications: Experience leading and mentoring a team of 8+ Talend ETL developers. Experience working with US healthcare customers. Bachelor's degree in Computer Science, Information Technology, or a related field. Talend certifications (e.g., Talend Certified Developer) and AWS Certified Cloud Practitioner / Data Engineer Associate. Experience with AWS data and infrastructure services. Basic working knowledge of Terraform and GitLab is required. Experience with scripting languages such as Python or shell scripting. Experience with agile development methodologies. Understanding of big data technologies (e.g., Hadoop, Spark) and the Talend Big Data platform.
Job Type: Full-time Pay: ₹2,800,000.00 - ₹3,000,000.00 per year Schedule: Day shift Work Location: In person

Posted 17 hours ago

Apply

10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Role: Tech Lead – ETL. Location: Offshore/India.

Who are we looking for? We are looking for a Tech Lead – ETL with 10+ years of experience, multi-skilled in the technologies mentioned below.

Tech Stack:
Data Integration: Talend, EMR, Airflow, DataStage, Hue, Apache Tomcat, Python
Visualization and Reporting: Cognos, Power BI, Tableau, QlikView, MicroStrategy, Sigma
Monitoring/Scheduler Tools: AppDynamics, DataPower Operational Dashboard (DPOD), Microsoft System Center Orchestrator (IR Batch Scheduler), AutoSys, AWS CloudWatch

Technical Skills: Strong knowledge and working experience in at least one tool from each category mentioned above.

Key Responsibilities: Knowledge of extracting data from heterogeneous sources such as databases, CRM systems, flat files, etc. Create and design ETL processes and pipelines. Ensure data quality, modeling, and tuning. Strong troubleshooting and problem-solving skills to identify issues quickly and provide innovative tooling. Collaboration: work with developers, other administrators, and stakeholders to ensure smooth operations and integration. Continuous Improvement: stay up to date with the latest industry trends and technologies to drive continuous improvement and innovation. DevOps and Agile Practices: align middleware operations with DevOps and Agile principles and contribute to automation of middleware-related tasks.

Qualification: Education: Any graduate.
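Since the role lists Airflow in its data-integration stack, here is a minimal sketch of an Airflow DAG wiring an extract, transform, and load sequence together. The DAG id, task logic, and schedule are hypothetical, and the sketch assumes Airflow 2.4+ where the `schedule` argument replaces `schedule_interval`.

```python
# Minimal Airflow DAG sketching an extract -> transform -> load flow.
# The callables are stubs; names and schedule are illustrative assumptions.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull rows from source systems (databases, CRM, flat files)")


def transform():
    print("apply business rules and data-quality checks")


def load():
    print("write curated rows to the warehouse")


with DAG(
    dag_id="example_etl_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # assumes Airflow 2.4+
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```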

Posted 17 hours ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


You Lead the Way. We’ve Got Your Back. With the right backing, people and businesses have the power to progress in incredible ways. When you join Team Amex, you become part of a global and diverse community of colleagues with an unwavering commitment to back our customers, communities and each other. Here, you’ll learn and grow as we help you create a career journey that’s unique and meaningful to you with benefits, programs, and flexibility that support you personally and professionally. At American Express, you’ll be recognized for your contributions, leadership, and impact—every colleague has the opportunity to share in the company’s success. Together, we’ll win as a team, striving to uphold our company values and powerful backing promise to provide the world’s best customer experience every day. And we’ll do it with the utmost integrity, and in an environment where everyone is seen, heard and feels like they belong. As part of our diverse tech team, you can architect, code and ship software that makes us an essential part of our customers’ digital lives. Here, you can work alongside talented engineers in an open, supportive, inclusive environment where your voice is valued, and you make your own decisions on what tech to use to solve challenging problems. Amex offers a range of opportunities to work with the latest technologies and encourages you to back the broader engineering community through open source. And because we understand the importance of keeping your skills fresh and relevant, we give you dedicated time to invest in your professional development. Find your place in technology on #TeamAmex. How will you make an impact in this role? Build NextGen Data Strategy, Data Virtualization, Data Lakes Warehousing Transform and improve performance of existing reporting & analytics use cases with more efficient and state of the art data engineering solutions. Analytics Development to realize advanced analytics vision and strategy in a scalable, iterative manner. Deliver software that provides superior user experiences, linking customer needs and business drivers together through innovative product engineering. Cultivate an environment of Engineering excellence and continuous improvement, leading changes that drive efficiencies into existing Engineering and delivery processes. Own accountability for all quality aspects and metrics of product portfolio, including system performance, platform availability, operational efficiency, risk management, information security, data management and cost effectiveness. Work with key stakeholders to drive Software solutions that align to strategic roadmaps, prioritized initiatives and strategic Technology directions. Work with peers, staff engineers and staff architects to assimilate new technology and delivery methods into scalable software solutions. Minimum Qualifications: Bachelor’s degree in computer science, Computer Science Engineering, or related field required; Advanced Degree preferred. 5+ years of hands-on experience in implementing large data-warehousing projects, strong knowledge of latest NextGen BI & Data Strategy & BI Tools Proven experience in Business Intelligence, Reporting on large datasets, Data Virtualization Tools, Big Data, GCP, JAVA, Microservices Strong systems integration architecture skills and a high degree of technical expertise, ranging across a number of technologies with a proven track record of turning new technologies into business solutions. Should be good in one programming language python/Java. 
Should have a good understanding of data structures. GCP or other cloud knowledge is an added advantage. Good knowledge and understanding of Power BI, Tableau, and Looker. Outstanding influencing and collaboration skills; ability to drive consensus and tangible outcomes, demonstrated by breaking down silos and fostering cross-communication. Experience managing in a fast-paced, complex, and dynamic global environment.

Preferred Qualifications: Bachelor's degree in Computer Science, Computer Science Engineering, or a related field required; advanced degree preferred. 5+ years of hands-on experience implementing large data-warehousing projects, with strong knowledge of the latest NextGen BI and data strategy and BI tools. Proven experience in business intelligence, reporting on large datasets, Oracle Business Intelligence (OBIEE), Tableau, MicroStrategy, data virtualization tools, Oracle PL/SQL, Informatica, other ETL tools such as Talend, and Java. Should be proficient in at least one programming language (Python or Java). Should have good knowledge of data structures and reasoning. GCP or other cloud knowledge is an added advantage. Good knowledge and understanding of Power BI, Tableau, and Looker. Strong systems integration architecture skills and a high degree of technical expertise across several technologies, with a proven track record of turning new technologies into business solutions. Outstanding influencing and collaboration skills; ability to drive consensus and tangible outcomes, demonstrated by breaking down silos and fostering cross-communication.

We back you with benefits that support your holistic well-being so you can be and deliver your best. This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally: Competitive base salaries. Bonus incentives. Support for financial well-being and retirement. Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location). Flexible working model with hybrid, onsite, or virtual arrangements depending on role and business need. Generous paid parental leave policies (depending on your location). Free access to global on-site wellness centers staffed with nurses and doctors (depending on location). Free and confidential counseling support through our Healthy Minds program. Career development and training opportunities.

American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.

Posted 17 hours ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Overview: The person will be responsible for expanding and optimizing our data and data pipeline architecture. The ideal candidate is an experienced data pipeline builder who enjoys optimizing data systems and building them from the ground up.

You'll be responsible for: Creating and maintaining optimal data pipeline architecture, and assembling large, complex data sets that meet functional and non-functional business requirements. Identifying, designing, and implementing internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc. Building the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and cloud technologies. Building analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics. Creating data tools for analytics and data science team members that assist them in building and optimizing our product into an innovative industry leader. Working with data and analytics experts to strive for greater functionality in our data systems.

You'd have: We are looking for a candidate with 3+ years of experience in a Data Engineer role who holds a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field, and who has experience with the following software/tools: Data pipeline and workflow management tools: Apache Airflow, NiFi, Talend, etc. Relational SQL and NoSQL databases, including ClickHouse, Postgres, and MySQL. Stream-processing systems: Storm, Spark Streaming, Kafka, etc. Object-oriented/functional scripting languages: Python, Scala, etc. Experience building and optimizing data pipelines, architectures, and data sets. Advanced working SQL knowledge and experience with relational databases, query authoring (SQL), and working familiarity with a variety of databases. Strong analytic skills related to working with unstructured datasets. Ability to build processes supporting data transformation, data structures, metadata, dependency, and workload management. Working knowledge of message queuing, stream processing, and highly scalable data stores.

Why join us? Impactful Work: Play a pivotal role in safeguarding Tanla's assets, data, and reputation in the industry. Tremendous Growth Opportunities: Be part of a rapidly growing company in the telecom and CPaaS space, with opportunities for professional development. Innovative Environment: Work alongside a world-class team in a challenging and fun environment, where innovation is celebrated. Tanla is an equal opportunity employer. We champion diversity and are committed to creating an inclusive environment for all employees. www.tanla.com
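Because the role lists stream-processing systems such as Kafka, here is an illustrative consumer loop using the kafka-python client. The topic name, broker address, and downstream processing step are assumptions for the sketch, not details from the posting.

```python
# Illustrative Kafka consumer loop (kafka-python client). Topic, broker, and
# the processing step are hypothetical placeholders.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "example-events",                      # hypothetical topic
    bootstrap_servers=["localhost:9092"],  # hypothetical broker
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
    enable_auto_commit=True,
)

for message in consumer:
    event = message.value
    # In a real pipeline, validation/enrichment would happen here before
    # writing to a store such as ClickHouse or Postgres.
    print(message.topic, message.partition, message.offset, event)
```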

Posted 17 hours ago

Apply

5.0 - 8.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Title: QA Manual Testing. Experience: 5-8 years. Location: Pune & Gurgaon (Hybrid).

Key Responsibilities: Understand business requirements and data flows to create comprehensive test plans and test cases for ETL jobs. Perform data validation and reconciliation between source systems, staging, and target data stores (DWH, data lakes, etc.). Develop and execute automated and manual tests to ensure data accuracy and quality. Work with SQL queries to validate data transformations and detect anomalies. Identify, document, and track defects and inconsistencies in data processing. Collaborate with data engineering and BI teams to improve ETL processes and data pipelines. Maintain QA documentation and contribute to continuous process improvements.

Must Have Skills: Strong SQL skills, with the ability to write complex queries for data validation and transformation testing. Hands-on experience in ETL testing: validating data pipelines, transformations, and data loads. Knowledge of data warehousing concepts: dimensions, facts, slowly changing dimensions (SCD), etc. Experience in test case design, execution, and defect tracking. Experience with QA tools like JIRA, TestRail, or equivalent. Ability to work independently and collaboratively in an Agile/Scrum environment.

Good to Have Skills: Experience with ETL tools like Informatica, Talend, DataStage, or Azure/AWS/GCP native ETL services (e.g., Dataflow, Glue). Knowledge of automation frameworks using Python/Selenium/pytest or similar tools for data testing. Familiarity with cloud data platforms: Snowflake, BigQuery, Redshift, etc. Basic understanding of CI/CD pipelines and QA integration. Exposure to data quality tools such as Great Expectations, Deequ, or DQ frameworks. Understanding of reporting/BI tools such as Power BI, Tableau, or Looker.

Educational Qualifications: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
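As context for the SQL-based source-to-target reconciliation this role describes, here is a small, self-contained Python sketch that compares row counts and a column checksum between a staging table and a target table. It uses sqlite3 purely so it runs standalone; the table and column names are hypothetical, and real tests would point at the actual source and target databases.

```python
# Illustrative source-to-target reconciliation check for ETL testing.
import sqlite3

ROW_COUNT_SQL = "SELECT COUNT(*) FROM {table}"
CHECKSUM_SQL = "SELECT COALESCE(SUM(amount), 0) FROM {table}"


def reconcile(conn: sqlite3.Connection, source: str, target: str) -> None:
    cur = conn.cursor()
    src_rows = cur.execute(ROW_COUNT_SQL.format(table=source)).fetchone()[0]
    tgt_rows = cur.execute(ROW_COUNT_SQL.format(table=target)).fetchone()[0]
    src_sum = cur.execute(CHECKSUM_SQL.format(table=source)).fetchone()[0]
    tgt_sum = cur.execute(CHECKSUM_SQL.format(table=target)).fetchone()[0]

    assert src_rows == tgt_rows, f"row count mismatch: {src_rows} vs {tgt_rows}"
    assert src_sum == tgt_sum, f"amount checksum mismatch: {src_sum} vs {tgt_sum}"


if __name__ == "__main__":
    # In-memory tables stand in for real staging/warehouse tables.
    conn = sqlite3.connect(":memory:")
    conn.executescript(
        """
        CREATE TABLE stg_orders (id INTEGER, amount REAL);
        CREATE TABLE dw_orders  (id INTEGER, amount REAL);
        INSERT INTO stg_orders VALUES (1, 10.5), (2, 20.0);
        INSERT INTO dw_orders  VALUES (1, 10.5), (2, 20.0);
        """
    )
    reconcile(conn, "stg_orders", "dw_orders")
    print("source and target reconcile")
```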

Posted 18 hours ago

Apply

8.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture, and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY GDS – Data and Analytics (D&A) – Java + Talend + Collibra – Manager

As part of our EY-GDS D&A (Data and Analytics) team, we help our clients solve complex business challenges with the help of data and technology. We dive deep into data to extract the greatest value and discover opportunities in key businesses and functions such as Banking, Insurance, Manufacturing, Healthcare, Retail, Auto, Supply Chain, and Finance.

The opportunity: We're looking for a Manager with strong technology and data understanding, proven delivery capability, and client experience. This is a fantastic opportunity to be part of a leading firm as well as a growing Data and Analytics team.

Knowledge and skills requirements: 8-11 years of professional data-related experience. 8+ years of Java/Python/UNIX development experience. Knowledge of SQL and experience working with large data volumes. Experience in data integration, data catalog, data controls, and data analytics, and related tools and frameworks. Ability to learn the project or product being developed quickly. Ability to write documentation and procedures for various audiences. Solid organizational skills, including attention to detail and multi-tasking. Serving as a point of contact for teams when multiple squads are assigned to the same project/deliverable, to ensure team actions remain in synergy. Communicating with business lines, build squads, and management to keep the project aligned with their goals. Providing project updates on a consistent basis to various stakeholders about strategy, adjustments, and progress. Excellent time management skills with a proven ability to meet deadlines. Excellent verbal and written communication skills. Excellent interpersonal and customer service skills. Strong understanding of agile development methodologies.

Additional experience in the following areas would be nice to have: Working experience/knowledge of data integration tools is a plus. Working experience/knowledge of Collibra, a data catalog, or any other metadata tool is a plus. Working experience/knowledge of Snowflake is a plus. Working experience/knowledge of data analytics and reporting tools (Power BI, QuickSight, or any other BI tool) is a plus. Working experience/knowledge of data controls / data quality tools (Ataccama or any other DQ tool) is a plus.

What working at EY offers: At EY, we're dedicated to helping our clients, from start-ups to Fortune 500 companies, and the work we do with them is as varied as they are. You get to work on inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange.

Plus, we offer: Support, coaching, and feedback from some of the most engaging colleagues around. Opportunities to develop new skills and progress your career. The freedom and flexibility to handle your role in a way that's right for you.

EY | Building a better working world. EY exists to build a better working world, helping to create long-term value for clients, people, and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform, and operate. Working across assurance, consulting, law, strategy, tax, and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 18 hours ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Title: Data Testing Engineer. Experience: 8+ years. Location: Hyderabad and Gurgaon (Hybrid). Notice Period: Immediate to 15 days.

Job Description:
Develop, maintain, and execute test cases to validate the accuracy, completeness, and consistency of data across different layers of the data warehouse.
Test ETL processes to ensure that data is correctly extracted, transformed, and loaded from source to target systems while adhering to business rules.
Perform source-to-target data validation to ensure data integrity and identify any discrepancies or data quality issues.
Develop automated data validation scripts using SQL, Python, or testing frameworks to streamline and scale testing efforts.
Conduct testing in cloud-based data platforms (e.g., AWS Redshift, Google BigQuery, Snowflake), ensuring performance and scalability.
Familiarity with ETL testing tools and frameworks (e.g., Informatica, Talend, dbt).
Experience with scripting languages to automate data testing.
Familiarity with data visualization tools like Tableau, Power BI, or Looker.
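To illustrate the "automated data validation scripts using SQL, Python, or testing frameworks" item above, here is a short pytest-style sketch of not-null and uniqueness checks. It uses sqlite3 so it is self-contained; the table, columns, and rules are hypothetical.

```python
# Illustrative pytest data-quality checks: not-null and uniqueness rules.
import sqlite3

import pytest


@pytest.fixture()
def conn():
    db = sqlite3.connect(":memory:")
    db.executescript(
        """
        CREATE TABLE dim_member (member_id INTEGER, email TEXT);
        INSERT INTO dim_member VALUES (1, 'a@example.com'), (2, 'b@example.com');
        """
    )
    yield db
    db.close()


@pytest.mark.parametrize("column", ["member_id", "email"])
def test_no_nulls(conn, column):
    nulls = conn.execute(
        f"SELECT COUNT(*) FROM dim_member WHERE {column} IS NULL"
    ).fetchone()[0]
    assert nulls == 0


def test_member_id_unique(conn):
    dupes = conn.execute(
        "SELECT member_id FROM dim_member GROUP BY member_id HAVING COUNT(*) > 1"
    ).fetchall()
    assert dupes == []
```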

Posted 19 hours ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

Remote


Job Summary: We are seeking a highly skilled Lead Data Engineer / Associate Architect to lead the design, implementation, and optimization of scalable data architectures. The ideal candidate will have a deep understanding of data modeling, ETL processes, cloud data solutions, and big data technologies. You will work closely with cross-functional teams to build robust, high-performance data pipelines and infrastructure to enable data-driven decision-making.

Experience: 7-12 years. Work Location: Hyderabad (Hybrid) / Remote. Mandatory skills: AWS, Python, SQL, Airflow, DBT. Must have delivered one or two projects in the clinical domain / clinical industry.

Responsibilities: Design and develop scalable and resilient data architectures that support business needs, analytics, and AI/ML workloads. Data Pipeline Development: Design and implement robust ETL/ELT processes to ensure efficient data ingestion, transformation, and storage. Big Data & Cloud Solutions: Architect data solutions using cloud platforms like AWS, Azure, or GCP, leveraging services such as Snowflake, Redshift, BigQuery, and Databricks. Database Optimization: Ensure performance tuning, indexing strategies, and query optimization for relational and NoSQL databases. Data Governance & Security: Implement best practices for data quality, metadata management, compliance (GDPR, CCPA), and security. Collaboration & Leadership: Work closely with data engineers, analysts, and business stakeholders to translate business requirements into scalable solutions. Technology Evaluation: Stay updated with emerging trends, assess new tools and frameworks, and drive innovation in data engineering.

Required Skills: Education: Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field. Experience: 7-12+ years of experience in data engineering. Cloud Platforms: Strong expertise in AWS data services. Databases: Hands-on experience with SQL, NoSQL, and columnar databases such as PostgreSQL, MongoDB, Cassandra, and Snowflake. Programming: Proficiency in Python, Scala, or Java for data processing and automation. ETL Tools: Experience with tools like Apache Airflow, Talend, DBT, or Informatica. Machine Learning & AI Integration (Preferred): Understanding of how to architect data solutions for AI/ML applications.

Posted 20 hours ago

Apply

5.0 years

0 Lacs

India

Remote


Job Title: Data Engineer. Location: Remote. Experience: 5+ years.

Job Summary: We are seeking a highly skilled Data Engineer with strong experience in ETL development, data replication, and cloud data integration to join our remote team. The ideal candidate will be proficient in Talend, have hands-on experience with IBM Data Replicator and Qlik Replicate, and demonstrate deep knowledge of Snowflake architecture, CDC processes, and data transformation scripting.

Key Responsibilities: Design, develop, and maintain robust ETL pipelines using Talend integrated with Snowflake. Implement and manage real-time data replication solutions using IBM Data Replicator and Qlik Replicate. Work with complex data source systems including DB2 (containerized and traditional), Oracle, and Hadoop. Model and manage slowly changing dimensions (Type 2 SCD) in Snowflake. Optimize data pipelines for scalability, reliability, and performance. Design and implement Change Data Capture (CDC) strategies to support real-time and incremental data flows. Write efficient and maintainable code in SQL, Python, or shell to support data transformations and automation. Collaborate with data architects, analysts, and other engineers to support data-driven initiatives.

Required Skills & Qualifications: Strong proficiency in Talend ETL development and integration with Snowflake. Practical experience with IBM Data Replicator and Qlik Replicate. In-depth understanding of Snowflake architecture and Type 2 SCD data modeling. Familiarity with containerized environments and various data sources such as DB2, Oracle, and Hadoop. Experience implementing CDC and real-time data replication patterns. Proficiency in SQL, Python, and shell scripting. Excellent problem-solving and communication skills. Self-motivated and comfortable working independently in a fully remote environment.

Preferred Qualifications: Snowflake or Talend certification. Experience working in an Agile or DevOps environment. Familiarity with data governance and data quality best practices.
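As background for the "Type 2 SCD in Snowflake" responsibility above, here is a minimal sketch of the common two-step pattern: close out changed current rows, then insert new versions. The Snowflake-flavoured SQL is held in Python strings; the dimension, staging table, and columns are hypothetical, and in practice these statements might run from a Talend job or an orchestration script via a DB-API cursor.

```python
# Sketch of a Type 2 slowly changing dimension load (illustrative names).
CLOSE_CHANGED_ROWS = """
UPDATE dim_customer d
SET    d.is_current = FALSE,
       d.valid_to   = CURRENT_TIMESTAMP()
FROM   stg_customer s
WHERE  d.customer_id = s.customer_id
  AND  d.is_current = TRUE
  AND  (d.name <> s.name OR d.segment <> s.segment);
"""

INSERT_NEW_VERSIONS = """
INSERT INTO dim_customer (customer_id, name, segment, valid_from, valid_to, is_current)
SELECT s.customer_id, s.name, s.segment, CURRENT_TIMESTAMP(), NULL, TRUE
FROM   stg_customer s
LEFT JOIN dim_customer d
       ON d.customer_id = s.customer_id AND d.is_current = TRUE
WHERE  d.customer_id IS NULL;   -- no current row: new key, or just closed above
"""


def run_scd2(cursor) -> None:
    """Execute the two-step SCD2 pattern with any DB-API cursor."""
    cursor.execute(CLOSE_CHANGED_ROWS)
    cursor.execute(INSERT_NEW_VERSIONS)
```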

Posted 22 hours ago

Apply

10.0 years

0 Lacs

Greater Kolkata Area

On-site


We are looking for a Senior Data Lead to lead enterprise-level data modernization and innovation. In this highly strategic role, you will design scalable, secure, and future-ready data architectures, modernize legacy systems, and provide trusted technical leadership across both technology and business teams. This is a unique opportunity to make a company-wide impact by influencing data strategy and enabling smarter, faster decision-making through data.

Key Responsibilities: Architect & Design: Lead the development of robust, scalable data models, data management systems, and integration frameworks to ensure enterprise-wide data accuracy, consistency, and security. Domain Expertise: Act as a subject matter expert across key business functions such as Supply Chain, Product Engineering, Sales & Marketing, Manufacturing, Finance, and Legal. Modernization Leadership: Drive the transformation of legacy systems and manage end-to-end cloud migrations with minimal business disruption. Collaboration: Partner with data engineers, scientists, analysts, and IT leaders to build high-performance, scalable data pipelines and transformation solutions. Governance & Compliance: Establish and maintain data governance frameworks including metadata repositories, data dictionaries, and data lineage documentation. Strategic Advisory: Provide guidance on data architecture best practices, technology selection, and roadmap alignment to senior leadership and cross-functional teams. Mentorship: Serve as a mentor and thought leader to junior data professionals, fostering a culture of innovation, knowledge sharing, and technical excellence. Innovation & Trends: Stay abreast of emerging technologies in cloud, data platforms, and AI/ML to identify and implement innovative solutions. Communication: Translate complex technical concepts into clear, actionable insights for technical and non-technical audiences alike.

Required Qualifications: 10+ years of experience in data architecture, engineering, or enterprise data management roles. Demonstrated success leading large-scale data initiatives in life sciences or other highly regulated industries. Deep expertise in modern data architecture paradigms such as Data Lakehouse, Data Mesh, or Data Fabric. Strong hands-on experience with cloud platforms like AWS, Azure, or Google Cloud Platform (GCP). Proficiency in data modeling, ETL/ELT frameworks, and enterprise integration patterns. Deep understanding of data governance, metadata management, master data management (MDM), and data quality practices. Experience with tools and platforms including, but not limited to: data integration (Informatica, Talend), data governance (Collibra), modeling/transformation (dbt), and cloud platforms (Snowflake, Databricks). Excellent problem-solving skills with the ability to translate business requirements into scalable data solutions. Exceptional communication skills and experience engaging with both executive stakeholders and engineering teams.

Preferred Qualifications (Nice to Have): Experience with AI/ML data pipelines or real-time streaming architectures. Certifications in cloud technologies (e.g., AWS Certified Solutions Architect, Azure Data Engineer). Familiarity with regulatory frameworks such as GxP, HIPAA, or GDPR.

Posted 23 hours ago

Apply

3.0 years

0 Lacs

India

Remote


Title: Data Engineer. Location: Remote. Employment type: Full-time with BayOne.

We're looking for a skilled and motivated Data Engineer to join our growing team and help us build scalable data pipelines, optimize data platforms, and enable real-time analytics.

What You'll Do: Design, develop, and maintain robust data pipelines using tools like Databricks, PySpark, SQL, Fabric, and Azure Data Factory. Collaborate with data scientists, analysts, and business teams to ensure data is accessible, clean, and actionable. Work on modern data lakehouse architectures and contribute to data governance and quality frameworks.

Tech Stack: Azure | Databricks | PySpark | SQL

What We're Looking For: 3+ years of experience in data engineering or analytics engineering. Hands-on experience with cloud data platforms and large-scale data processing. A strong problem-solving mindset and a passion for clean, efficient data design.

Job Description: Minimum 3 years of experience in modern data engineering / data warehousing / data lake technologies on cloud platforms like Azure, AWS, GCP, Databricks, etc. Azure experience is preferred over other cloud platforms. 5 years of proven experience with SQL, schema design, and dimensional data modelling. Solid knowledge of data warehouse best practices, development standards, and methodologies. Experience with ETL/ELT tools like ADF, Informatica, Talend, etc., and data warehousing technologies like Azure Synapse, Microsoft Fabric, Azure SQL, Amazon Redshift, Snowflake, Google BigQuery, etc. Strong experience with big data tools (Databricks, Spark, etc.) and programming skills in PySpark and Spark SQL. An independent self-learner with a "let's get this done" approach and the ability to work in a fast-paced and dynamic environment. Excellent communication and teamwork abilities.

Nice-to-Have Skills: Knowledge of Event Hub, IoT Hub, Azure Stream Analytics, Azure Analysis Services, and Cosmos DB. SAP ECC / S/4 and HANA knowledge. Intermediate knowledge of Power BI. Azure DevOps and CI/CD deployments, cloud migration methodologies and processes.

BayOne is an Equal Opportunity Employer and does not discriminate against any employee or applicant for employment because of race, color, sex, age, religion, sexual orientation, gender identity, status as a veteran, or basis of disability or any federal, state, or local protected class. This job posting represents the general duties and requirements necessary to perform this position and is not an exhaustive statement of all responsibilities, duties, and skills required. Management reserves the right to revise or alter this job description.
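For the PySpark/lakehouse pipeline work this listing describes, here is a minimal PySpark sketch: read raw CSV, clean it, and write a Delta table. The paths, column names, and target table are hypothetical, and writing in Delta format assumes a Databricks / Delta Lake runtime.

```python
# Minimal PySpark ingest-and-clean step for a lakehouse pipeline (illustrative).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example_ingest").getOrCreate()

raw = (
    spark.read.option("header", True)
    .csv("/mnt/landing/sales/*.csv")          # hypothetical landing path
)

clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("order_date").isNotNull())
)

# Assumes a Delta Lake-enabled runtime (e.g., Databricks).
clean.write.format("delta").mode("overwrite").saveAsTable("silver.sales_orders")
```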

Posted 23 hours ago

Apply

0 years

0 Lacs

Bengaluru East, Karnataka, India

On-site


Job Summary: The customer is seeking a highly skilled Data Engineer with FME expertise who is a local resident of Gurugram, Bengaluru, or Nagpur.

Job Responsibilities:
1. Data Integration & Transformation: FME (Safe Software): build ETL pipelines to read from Idox/CCF and transform data to the given schema. FME custom transformers: create reusable rule logic for validations and fixes. Python (in FME or standalone): write custom data-fix logic, date parsers, and validation scripts. Data profiling tools: understand completeness, accuracy, and consistency in batches.
2. Spatial Data Handling: PostgreSQL/PostGIS: store and query spatial data; support dashboard analytics. GeoPackage, GML, GeoJSON, Shapefile: understand source file formats for ingest/export. Geometry validators and fixers: fix overlaps, slivers, and invalid polygons using FME or SQL. Coordinate systems (e.g., EPSG:27700): ensure correct projections and alignment with target systems.
3. Automation & Data Workflow Orchestration: FME Server / FME Cloud: automate batch runs and monitor ETL pipelines. CI/CD, cron jobs, and Python scheduling: trigger ingestion and dashboard refreshes on file upload. Audit trail and logging: log data issues, rule hits, and processing history.
4. Dashboard Integration Support: SQL for views and aggregations: support dashboards showing issue counts, trends, and maps. Power BI / Grafana / Superset (optional): assist in exposing dashboard metrics. Metadata management: tag each batch with status, record counts, and processing stage.
5. Collaborative & Communication Skills: Interpreting validation reports: communicate dashboard findings to Ops and Analysts. Business rule translation: convert requirements into FME transformers or SQL rules. Working with LA and HMLR specs: map internal formats to official schemas accurately.

Essential Skills: Build and maintain FME workflows to transform source data to target data specs. Validate textual and spatial fields using logic embedded in FME or SQL. Support issue triaging and reporting via dashboards. Collaborate with the data provider, Analysts, and Ops for continuous improvement. ETL / Integration: FME, Talend (optional), Python. Spatial DB: PostGIS, Oracle Spatial. GIS Tools: QGIS, ArcGIS. Scripting: Python, SQL. Validation: FME Testers, AttributeValidator, custom SQL views. Format Support: CSV, JSON, GPKG, XML, Shapefiles. Coordination: Jira, Confluence, Git (for rule versioning).

Background check required: no criminal record.

Other: Bachelor of Engineering / Bachelor of Technology (B.E./B.Tech.). Work location: onsite in Gurugram, Bengaluru, or Nagpur. Only local candidates should apply.
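To illustrate the "geometry validators and fixers" responsibility above, here is a short Python sketch that counts and repairs invalid polygons in PostGIS using ST_IsValid and ST_MakeValid. The connection details and the parcels table/geom column are hypothetical; an FME workspace could perform the equivalent step with its GeometryValidator transformer.

```python
# Illustrative geometry validation/repair pass against PostGIS.
import psycopg2

# Hypothetical connection details.
conn = psycopg2.connect(
    host="localhost", dbname="land_registry", user="etl", password="secret"
)

with conn, conn.cursor() as cur:
    # Count invalid polygons (e.g. self-intersections, slivers).
    cur.execute("SELECT COUNT(*) FROM parcels WHERE NOT ST_IsValid(geom)")
    invalid = cur.fetchone()[0]
    print(f"invalid geometries: {invalid}")

    # Repair them in place; ST_MakeValid keeps the original extent.
    cur.execute(
        "UPDATE parcels SET geom = ST_MakeValid(geom) WHERE NOT ST_IsValid(geom)"
    )
print("repair pass committed")
```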

Posted 1 day ago

Apply

5.0 years

0 Lacs

India

On-site

Hiring: Software Support Engineer (ETL & Healthcare Data) Location: Hyderabad Experience: 5+ Years | Full-Time We are looking for a highly skilled Software Support Engineer with strong experience in supporting and troubleshooting ETL pipelines, production issues, and healthcare data systems. If you're passionate about optimizing systems, solving technical challenges, and collaborating with cross-functional teams — we’d love to connect! Key Responsibilities: Provide L2/L3 support for ETL jobs and data pipelines Troubleshoot production issues and perform root cause analysis Automate and monitor workflows using Python, Shell scripting, Talend, and Airflow Collaborate with developers, analysts, and QA for issue resolution Document processes, job flows, and deployment steps Support healthcare data transactions (EDI 834, 837, 999, etc.) Tech Stack & Skills: Python, C#, Shell scripting SQL & NoSQL (MongoDB) Airflow, Talend Studio, Redix Azure Service Bus / Kafka CI/CD (Azure Pipelines), Git/Bitbucket JIRA, Agile methodology Nice to Have: Knowledge of Facets, HealthRules Payor Experience with cloud environments Exposure to monitoring tools Job Type: Full-time Shift: Evening shift Work Days: Monday to Friday Work Location: In person

Posted 1 day ago

Apply

3.0 years

0 Lacs

India

Remote


Title: Data Engineer. Location: Remote. Employment type: Full-time with BayOne.

We're looking for a skilled and motivated Data Engineer to join our growing team and help us build scalable data pipelines, optimize data platforms, and enable real-time analytics.

What You'll Do: Design, develop, and maintain robust data pipelines using tools like Databricks, PySpark, SQL, Fabric, and Azure Data Factory. Collaborate with data scientists, analysts, and business teams to ensure data is accessible, clean, and actionable. Work on modern data lakehouse architectures and contribute to data governance and quality frameworks.

Tech Stack: Azure | Databricks | PySpark | SQL

What We're Looking For: 3+ years of experience in data engineering or analytics engineering. Hands-on experience with cloud data platforms and large-scale data processing. A strong problem-solving mindset and a passion for clean, efficient data design.

Job Description: Minimum 3 years of experience in modern data engineering / data warehousing / data lake technologies on cloud platforms like Azure, AWS, GCP, Databricks, etc. Azure experience is preferred over other cloud platforms. 5 years of proven experience with SQL, schema design, and dimensional data modelling. Solid knowledge of data warehouse best practices, development standards, and methodologies. Experience with ETL/ELT tools like ADF, Informatica, Talend, etc., and data warehousing technologies like Azure Synapse, Microsoft Fabric, Azure SQL, Amazon Redshift, Snowflake, Google BigQuery, etc. Strong experience with big data tools (Databricks, Spark, etc.) and programming skills in PySpark and Spark SQL. An independent self-learner with a "let's get this done" approach and the ability to work in a fast-paced and dynamic environment. Excellent communication and teamwork abilities.

Nice-to-Have Skills: Knowledge of Event Hub, IoT Hub, Azure Stream Analytics, Azure Analysis Services, and Cosmos DB. SAP ECC / S/4 and HANA knowledge. Intermediate knowledge of Power BI. Azure DevOps and CI/CD deployments, cloud migration methodologies and processes.

BayOne is an Equal Opportunity Employer and does not discriminate against any employee or applicant for employment because of race, color, sex, age, religion, sexual orientation, gender identity, status as a veteran, or basis of disability or any federal, state, or local protected class. This job posting represents the general duties and requirements necessary to perform this position and is not an exhaustive statement of all responsibilities, duties, and skills required. Management reserves the right to revise or alter this job description.

Posted 2 days ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Title: Data Testing Engineer. Experience: 8+ years. Location: Hyderabad and Gurgaon (Hybrid). Notice Period: Immediate to 15 days.

Job Description:
Develop, maintain, and execute test cases to validate the accuracy, completeness, and consistency of data across different layers of the data warehouse.
Test ETL processes to ensure that data is correctly extracted, transformed, and loaded from source to target systems while adhering to business rules.
Perform source-to-target data validation to ensure data integrity and identify any discrepancies or data quality issues.
Develop automated data validation scripts using SQL, Python, or testing frameworks to streamline and scale testing efforts.
Conduct testing in cloud-based data platforms (e.g., AWS Redshift, Google BigQuery, Snowflake), ensuring performance and scalability.
Familiarity with ETL testing tools and frameworks (e.g., Informatica, Talend, dbt).
Experience with scripting languages to automate data testing.
Familiarity with data visualization tools like Tableau, Power BI, or Looker.

Posted 2 days ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

Remote


Job Description: Are you ready to make an impact at DTCC? Do you want to work on innovative projects, collaborate with a dynamic and supportive team, and receive investment in your professional development? At DTCC, we are at the forefront of innovation in the financial markets. We're committed to helping our employees grow and succeed. We believe that you have the skills and drive to make a real impact. We foster a thriving internal community and are committed to creating a workplace that looks like the world that we serve.

Pay and Benefits: Competitive compensation, including base pay and annual incentive. Comprehensive health and life insurance and well-being benefits. Pension. Paid Time Off and Personal/Family Care, and other leaves of absence when needed to support your physical, financial, and emotional well-being. DTCC offers a flexible/hybrid model of 3 days onsite and 2 days remote (onsite Tuesdays, Wednesdays, and a third day unique to each team or employee).

The Impact You Will Have in This Role: The Development family is responsible for crafting, designing, deploying, and supporting applications, programs, and software solutions. It may include research, new development, prototyping, modification, reuse, re-engineering, maintenance, or any other activities related to software products used internally or externally on product platforms supported by the firm. The software development process requires in-depth domain expertise in existing and emerging development methodologies, tools, and programming languages. Software Developers work closely with business partners and/or external clients in defining requirements and implementing solutions. The Application Support Engineering role specializes in maintaining and providing technical support for all applications that are beyond the development stage and are running in the daily operations of the firm. It works closely with development teams, infrastructure partners, and internal/external clients to raise and resolve technical support incidents.

Your Primary Responsibilities: Verify analysis performed by team members and implement changes required to prevent recurrence of incidents. Resolve critical application alerts in a timely fashion, including production defects, providing business impact analysis to teams and handling minor enhancements as needed. Review and update knowledge articles and runbooks with application development teams to confirm information is up to date. Collaborate with internal teams to provide answers to application issues and escalate as needed. Validate and submit responses to requests for information from ongoing audits. NOTE: The primary responsibilities of this role are not limited to the details above.

Qualifications: Minimum 8+ years of related experience. Bachelor's degree (preferred) or equivalent experience.

Talents Needed for Success: Minimum of 8+ years of related experience in Application Support. Bachelor's degree preferred or equivalent experience. Solid experience in application support. Hands-on experience in Unix, Linux, Windows, and SQL/PLSQL. Familiarity working with relational databases (DB2, Oracle, Snowflake). Monitoring and data tools experience (Splunk, Dynatrace, ThousandEyes, Grafana, Selenium, HiPam, IBM Zolda). Cloud technologies (AWS services such as S3, EC2, Lambda, SQS, and IAM roles; Azure; OpenShift; RDS Aurora; PostgreSQL). Scheduling tool experience (CA AutoSys, Control-M). Scripting languages (Bash, Python, Ruby, Shell, Perl, JavaScript). Hands-on experience with ETL tools (Informatica Data Hub/IDQ, Talend). Strong problem-solving skills with the ability to think creatively.

We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

Posted 2 days ago

Apply

10.0 years

0 Lacs

Itanagar, Arunachal Pradesh, India

Remote


This job is with Hitachi Digital Services, an inclusive employer and a member of myGwork – the largest global platform for the LGBTQ+ business community. Please do not contact the recruiter directly. Our Company We're Hitachi Digital, a company at the forefront of digital transformation and the fastest growing division of Hitachi Group. We're crucial to the company's strategy and ambition to become a premier global player in the massive and fast-moving digital transformation market. Our group companies, including GlobalLogic, Hitachi Digital Services, Hitachi Vantara and more, offer comprehensive services that span the entire digital lifecycle, from initial idea to full-scale operation and the infrastructure to run it on. Hitachi Digital represents One Hitachi, integrating domain knowledge and digital capabilities, and harnessing the power of the entire portfolio of services, technologies, and partnerships, to accelerate synergy creation and make real-world impact for our customers and society as a whole. Imagine the sheer breadth of talent it takes to unleash a digital future. We don't expect you to 'fit' every requirement - your life experience, character, perspective, and passion for achieving great things in the world are equally as important to us. Preferred job location: Bengaluru, Hyderabad, Pune, New Delhi or Remote The team Hitachi Digital is a leader in digital transformation, leveraging advanced AI and data technologies to drive innovation and efficiency across various operational companies (OpCos) and departments. We are seeking a highly experienced Lead Data Integration Architect to join our dynamic team and contribute to the development of robust data integration solutions. The role Lead the design, development, and implementation of data integration solutions using SnapLogic, MuleSoft, or Pentaho. Develop and optimize data integration workflows and pipelines. Collaborate with cross-functional teams to integrate data solutions into existing systems and workflows. Implement and integrate VectorAI and Agent Workspace for Google Gemini into data solutions. Conduct research and stay updated on the latest advancements in data integration technologies. Troubleshoot and resolve complex issues related to data integration systems and applications. Document development processes, methodologies, and best practices. Mentor junior developers and participate in code reviews, providing constructive feedback to team members. Provide strategic direction and leadership in data integration and technology adoption. What you'll bring Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field. 10+ years of experience in data integration, preferably in the Banking or Finance industry. Extensive experience with SnapLogic, MuleSoft, or Pentaho (at least one is a must). Experience with Talend and Alation is a plus. Strong programming skills in languages such as Python, Java, or SQL. Technical proficiency in data integration tools and platforms. Knowledge of cloud platforms, particularly Google Cloud Platform (GCP). Experience with VectorAI and Agent Workspace for Google Gemini. Comprehensive knowledge of financial products, regulatory reporting, credit risk, and counterparty risk. Prior strategy consulting experience with a focus on change management and program delivery preferred. Excellent problem-solving skills and the ability to work independently and as part of a team. 
Strong communication skills and the ability to convey complex technical concepts to non-technical stakeholders. Proven leadership skills and experience in guiding development projects from conception to deployment.

Preferred Qualifications
Familiarity with data engineering tools and techniques. Previous experience in a similar role within a tech-driven company.

About us
We're a global, 1000-strong, diverse team of professional experts, promoting and delivering Social Innovation through our One Hitachi initiative (OT x IT x Product) and working on projects that have a real-world impact. We're curious, passionate and empowered, blending our 110-year legacy of innovation with shaping our future. Here you're not just another employee; you're part of a tradition of excellence and a community working towards creating a digital future.

Championing diversity, equity, and inclusion
Diversity, equity, and inclusion (DEI) are integral to our culture and identity. Diverse thinking, a commitment to allyship, and a culture of empowerment help us achieve powerful results. We want you to be you, with all the ideas, lived experience, and fresh perspective that brings. We support your uniqueness and encourage people from all backgrounds to apply and realize their full potential as part of our team.

How We Look After You
We help take care of your today and tomorrow with industry-leading benefits, support, and services that look after your holistic health and wellbeing. We're also champions of life balance and offer flexible arrangements that work for you (role and location dependent). We're always looking for new ways of working that bring out our best, which leads to unexpected ideas. So here, you'll experience a sense of belonging, and discover autonomy, freedom, and ownership as you work alongside talented people you enjoy sharing knowledge with. We're proud to say we're an equal opportunity employer and welcome all applicants for employment without attention to race, colour, religion, sex, sexual orientation, gender identity, national origin, veteran, age, disability status or any other protected characteristic. Should you need reasonable accommodations during the recruitment process, please let us know so that we can do our best to set you up for success.
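For candidates gauging the hands-on side of the role above: SnapLogic, MuleSoft, and Pentaho are low-code integration platforms, but the extract-transform-load pattern they orchestrate can be illustrated in plain Python. The sketch below is purely illustrative and not specific to any of those tools; the source file, target table, and column names are hypothetical, and a real engagement would use the platform's own connectors.

```python
# Minimal, illustrative extract-transform-load sketch (hypothetical file,
# table, and column names; not SnapLogic/MuleSoft/Pentaho-specific code).
import csv
import sqlite3

def extract(path):
    """Read raw records from a source CSV file."""
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))

def transform(rows):
    """Simple cleansing: drop rows without a customer_id, normalise names."""
    cleaned = []
    for row in rows:
        if not row.get("customer_id"):
            continue
        cleaned.append((row["customer_id"], row.get("name", "").strip().title()))
    return cleaned

def load(records, db_path="target.db"):
    """Write transformed records into a target table."""
    with sqlite3.connect(db_path) as conn:
        conn.execute("CREATE TABLE IF NOT EXISTS customers (customer_id TEXT, name TEXT)")
        conn.executemany("INSERT INTO customers VALUES (?, ?)", records)

if __name__ == "__main__":
    load(transform(extract("customers_raw.csv")))
```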

Posted 2 days ago

Apply

4.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site


Role Summary
Pfizer’s purpose is to deliver breakthroughs that change patients’ lives. Research and Development is at the heart of fulfilling Pfizer’s purpose as we work to translate advanced science and technologies into the therapies and vaccines that matter most. Whether you are in the discovery sciences, ensuring drug safety and efficacy or supporting clinical trials, you will apply cutting edge design and process development capabilities to accelerate and bring the best in class medicines to patients around the world. Pfizer is seeking a highly skilled and motivated AI Engineer to join our advanced technology team. The successful candidate will be responsible for developing, implementing, and optimizing artificial intelligence models and algorithms to drive innovation and efficiency in our Data Analytics and Supply Chain solutions. This role demands a collaborative mindset, a passion for cutting-edge technology, and a commitment to improving patient outcomes.

Role Responsibilities
Lead data modeling and engineering efforts within advanced data platforms teams to achieve digital outcomes. Provides guidance and may lead/co-lead moderately complex projects. Oversee the development and execution of test plans, creation of test scripts, and thorough data validation processes. Lead the architecture, design, and implementation of Cloud Data Lake, Data Warehouse, Data Marts, and Data APIs. Lead the development of complex data products that benefit PGS and ensure reusability across the enterprise. Collaborate effectively with contractors to deliver technical enhancements. Oversee the development of automated systems for building, testing, monitoring, and deploying ETL data pipelines within a continuous integration environment. Collaborate with backend engineering teams to analyze data, enhancing its quality and consistency. Conduct root cause analysis and address production data issues. Lead the design, development, and implementation of AI models and algorithms to solve sophisticated data analytics and supply chain initiatives. Stay abreast of the latest advancements in AI and machine learning technologies and apply them to Pfizer's projects. Provide technical expertise and guidance to team members and stakeholders on AI-related initiatives. Document and present findings, methodologies, and project outcomes to various stakeholders. Integrate and collaborate with different technical teams across Digital to drive overall implementation and delivery. Ability to work with large and complex datasets, including data cleaning, preprocessing, and feature selection.

Basic Qualifications
A bachelor's or master’s degree in Computer Science, Artificial Intelligence, Machine Learning, or a related discipline. Over 4 years of experience as a Data Engineer, Data Architect, or in Data Warehousing, Data Modeling, and Data Transformations. Over 2 years of experience in AI, machine learning, and large language models (LLMs) development and deployment. Proven track record of successfully implementing AI solutions in a healthcare or pharmaceutical setting is preferred. Strong understanding of data structures, algorithms, and software design principles. Programming Languages: Proficiency in Python, SQL, and familiarity with Java or Scala. AI and Automation: Knowledge of AI-driven tools for data pipeline automation, such as Apache Airflow or Prefect.
Ability to use GenAI or Agents to augment data engineering practices.

Preferred Qualifications
Data Warehousing: Experience with data warehousing solutions such as Amazon Redshift, Google BigQuery, or Snowflake. ETL Tools: Knowledge of ETL tools like Apache NiFi, Talend, or Informatica. Big Data Technologies: Familiarity with Hadoop, Spark, and Kafka for big data processing. Cloud Platforms: Hands-on experience with cloud platforms such as AWS, Azure, or Google Cloud Platform (GCP). Containerization: Understanding of Docker and Kubernetes for containerization and orchestration. Data Integration: Skills in integrating data from various sources, including APIs, databases, and external files. Data Modeling: Understanding of data modeling and database design principles, including graph technologies like Neo4j or Amazon Neptune. Structured Data: Proficiency in handling structured data from relational databases, data warehouses, and spreadsheets. Unstructured Data: Experience with unstructured data sources such as text, images, and log files, and tools like Apache Solr or Elasticsearch. Data Excellence: Familiarity with data excellence concepts, including data governance, data quality management, and data stewardship.

Non-standard Work Schedule, Travel Or Environment Requirements
Occasional travel required. Work Location Assignment: Hybrid

The annual base salary for this position ranges from $96,300.00 to $160,500.00. In addition, this position is eligible for participation in Pfizer’s Global Performance Plan with a bonus target of 12.5% of the base salary and eligibility to participate in our share based long term incentive program. We offer comprehensive and generous benefits and programs to help our colleagues lead healthy lives and to support each of life’s moments. Benefits offered include a 401(k) plan with Pfizer Matching Contributions and an additional Pfizer Retirement Savings Contribution, paid vacation, holiday and personal days, paid caregiver/parental and medical leave, and health benefits to include medical, prescription drug, dental and vision coverage. Learn more at Pfizer Candidate Site – U.S. Benefits | (uscandidates.mypfizerbenefits.com). Pfizer compensation structures and benefit packages are aligned based on the location of hire. The United States salary range provided does not apply to Tampa, FL or any location outside of the United States. Relocation assistance may be available based on business needs and/or eligibility.

Sunshine Act
Pfizer reports payments and other transfers of value to health care providers as required by federal and state transparency laws and implementing regulations. These laws and regulations require Pfizer to provide government agencies with information such as a health care provider’s name, address and the type of payments or other value received, generally for public disclosure. Subject to further legal review and statutory or regulatory clarification, which Pfizer intends to pursue, reimbursement of recruiting expenses for licensed physicians may constitute a reportable transfer of value under the federal transparency law commonly known as the Sunshine Act. Therefore, if you are a licensed physician who incurs recruiting expenses as a result of interviewing with Pfizer that we pay or reimburse, your name, address and the amount of payments made currently will be reported to the government. If you have questions regarding this matter, please do not hesitate to contact your Talent Acquisition representative.
EEO & Employment Eligibility
Pfizer is committed to equal opportunity in the terms and conditions of employment for all employees and job applicants without regard to race, color, religion, sex, sexual orientation, age, gender identity or gender expression, national origin, disability or veteran status. Pfizer also complies with all applicable national, state and local laws governing nondiscrimination in employment as well as work authorization and employment eligibility verification requirements of the Immigration and Nationality Act and IRCA. Pfizer is an E-Verify employer. This position requires permanent work authorization in the United States.

Information & Business Tech
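The posting above names Apache Airflow or Prefect for automating ETL pipelines in a continuous integration environment. As a point of reference only, here is a minimal sketch of what a daily extract-transform-load DAG can look like, assuming Airflow 2.x; the dag_id, task names, and callables are hypothetical and not Pfizer's.

```python
# Minimal, hypothetical Airflow 2.x DAG sketch for a daily ETL run.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Pull raw records from a source system (stubbed out here).
    print("extracting source data")

def transform():
    # Cleanse and reshape the extracted data (stubbed out here).
    print("transforming data")

def load():
    # Write the result to the warehouse (stubbed out here).
    print("loading into target")

with DAG(
    dag_id="daily_supply_chain_etl",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> transform_task >> load_task
```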

Posted 2 days ago

Apply

3.0 years

0 Lacs

India

On-site


Experience: 3 to 5 years of experience in ETL development and data engineering
Job Location: Noida, India

Position Overview: Looking for a Data Engineer specializing in ETL processes to design and implement data pipelines that support data integration and analytics initiatives.

Key Responsibilities: Design, develop, and maintain ETL pipelines to extract, transform, and load data from various sources. Collaborate with data analysts and stakeholders to understand data requirements. Ensure data quality and integrity through validation and cleansing processes. Optimize ETL processes for performance and scalability. Document data workflows and maintain metadata repositories.

Qualifications: Bachelor’s degree in Computer Science, Information Systems, or related field. 3+ years of experience in ETL development and data engineering. Proficiency in SQL and experience with ETL tools like Informatica, Talend, or SSIS. Familiarity with data warehousing concepts and cloud platforms such as AWS or Azure. Strong problem-solving skills and attention to detail.

If you’re passionate about customer success and eager to make an impact in a dynamic organization, we’d love to hear from you! We are an equal opportunity employer.

Demand AI was formed through the strategic merger of Consilior AI and B2B Matters, bringing together advanced AI-driven data solutions and high-performance demand generation expertise under one unified brand. At the core of Demand AI is a uniquely skilled team with deep roots in the lead generation industry. Before joining forces at Consilior AI and B2B Matters, our founding team worked together at Selling Simplified Group, where they built enduring client relationships and consistently delivered measurable growth. This shared history gives us a deep understanding of what clients need and how to deliver it with precision. Our solutions are powered by a proprietary, first-party global database of over 73 million contacts, enabling us to provide smarter, faster, and more connected customer acquisition for modern sales and marketing teams. We believe in fostering a diverse, inclusive, and collaborative workplace. If you’re passionate about transforming how businesses grow through intelligent data and AI, we’d love to hear from you. Learn more at https://demandai.net

How to Apply
Please send your resume, portfolio (if available), and a brief cover letter outlining your experience to jobs-dev@demandai.co or apply through our careers page at https://demandai.net/career/
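The responsibilities above call out ensuring data quality and integrity through validation and cleansing. A minimal sketch of that step in Python with pandas is shown below; the file name, column names, and rules are hypothetical stand-ins for whatever a production pipeline built with Informatica, Talend, or SSIS would enforce.

```python
# Hypothetical validation-and-cleansing step for an ETL pipeline (pandas).
import pandas as pd

def cleanse(path):
    df = pd.read_csv(path)                       # extract (source file is hypothetical)
    df = df.drop_duplicates()                    # remove exact duplicate rows
    df = df.dropna(subset=["order_id"])          # reject rows missing the business key
    df["amount"] = pd.to_numeric(df["amount"], errors="coerce")  # coerce bad numerics to NaN
    invalid = df["amount"].isna() | (df["amount"] < 0)
    rejected = df[invalid]                       # keep rejects for a data-quality report
    print(f"rejected {len(rejected)} invalid rows")
    return df[~invalid]

if __name__ == "__main__":
    clean_df = cleanse("orders_raw.csv")
```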

Posted 2 days ago

Apply

4.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


I am thrilled to share an exciting opportunity with one of our esteemed clients! 🚀 Join me in exploring new horizons and unlocking potential if you're ready for a challenge and growth.

Exp: 7 yrs
Location: Chennai/Hyderabad/Coimbatore
Immediate to 30 days

JD: Bachelor’s or Master’s degree in Computer Science, Information Systems, or a related field. 4+ years of in-depth data engineering experience, with at least 1 year of dedicated experience engineering solutions in an enterprise-scale Snowflake environment. Tactical expertise in ANSI SQL, performance tuning, and data modeling techniques. Strong experience with cloud platforms (preference to Azure) and their data services. Experience in ETL/ELT development using tools such as Azure Data Factory, dbt, Matillion, Talend, or Fivetran. Hands-on experience with scripting languages like Python for data processing. Snowflake SnowPro certification; preference to the engineering course path. Experience with CI/CD pipelines, DevOps practices, and Infrastructure as Code (IaC). Knowledge of streaming data processing frameworks such as Apache Kafka or Spark Streaming. Familiarity with BI and visualization tools such as PowerBI.

Regards,
R Usha
usha@livecjobs.com
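Since the JD above emphasizes ANSI SQL and Python scripting against Snowflake, here is a minimal illustrative sketch using the snowflake-connector-python package; the account, credentials, and table are hypothetical placeholders, and a real job would pull credentials from a secrets manager rather than hard-coding them.

```python
# Hypothetical example: run an ANSI SQL aggregation in Snowflake from Python.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",     # hypothetical account identifier
    user="etl_user",          # hypothetical service user
    password="***",           # use a secrets manager in practice
    warehouse="ANALYTICS_WH",
    database="SALES",
    schema="PUBLIC",
)
try:
    cur = conn.cursor()
    cur.execute(
        """
        SELECT region, SUM(amount) AS total_amount
        FROM orders
        GROUP BY region
        ORDER BY total_amount DESC
        """
    )
    for region, total in cur.fetchall():
        print(region, total)
finally:
    conn.close()
```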

Posted 2 days ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role And Responsibilities
We are seeking a skilled and experienced Cognos TM1 Developer with a strong background in ETL processes and Python development. The ideal candidate will be responsible for designing, developing, and supporting TM1 solutions, integrating data pipelines, and automating processes using Python. This role requires strong problem-solving skills, business acumen, and the ability to work collaboratively with cross-functional teams.

Preferred Education
Master's Degree

Required Technical And Professional Expertise
4+ years of hands-on experience with IBM Cognos TM1 / Planning Analytics. Strong knowledge of TI processes, rules, dimensions, cubes, and TM1 Web. Proven experience in building and managing ETL pipelines (preferably with tools like Informatica, Talend, or custom scripts). Proficiency in Python programming for automation, data processing, and system integration. Experience with REST APIs, JSON/XML data formats, and data extraction from external sources.

Preferred Technical And Professional Experience
Strong SQL knowledge and ability to work with relational databases. Familiarity with Agile methodologies and version control systems (e.g., Git). Excellent analytical, problem-solving, and communication skills.
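To give a concrete flavour of the Python-plus-REST-API work described above: IBM Planning Analytics/TM1 exposes an OData-style REST API, and a minimal hedged sketch of listing cubes with the requests library follows. The host, port, credentials, and authentication mode are hypothetical; many real environments require CAM or token-based authentication rather than basic auth.

```python
# Hypothetical sketch: query the TM1/Planning Analytics REST API from Python.
import requests

BASE_URL = "https://tm1-server.example.com:8010/api/v1"  # hypothetical host and port

# Basic auth shown for simplicity; many deployments use CAM or token auth instead.
session = requests.Session()
session.auth = ("tm1_user", "***")

# List cube names, asking the API to return only the Name property.
resp = session.get(f"{BASE_URL}/Cubes", params={"$select": "Name"}, timeout=30)
resp.raise_for_status()

for cube in resp.json().get("value", []):
    print(cube["Name"])
```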

Posted 2 days ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Job Title: Lead Data Engineer – C12 / Assistant Vice President (India)

The Role
The Data Engineer is accountable for developing high quality data products to support the Bank’s regulatory requirements and data driven decision making. A Data Engineer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence they will contribute to business outcomes on an agile team.

Responsibilities
Developing and supporting scalable, extensible, and highly available data solutions. Deliver on critical business priorities while ensuring alignment with the wider architectural vision. Identify and help address potential risks in the data supply chain. Follow and contribute to technical standards. Design and develop analytical data models.

Required Qualifications & Work Experience
First Class Degree in Engineering/Technology (4-year graduate course). 8 to 12 years’ experience implementing data-intensive solutions using agile methodologies. Experience of relational databases and using SQL for data querying, transformation and manipulation. Experience of modelling data for analytical consumers. Ability to automate and streamline the build, test and deployment of data pipelines. Experience in cloud native technologies and patterns. A passion for learning new technologies, and a desire for personal growth, through self-study, formal classes, or on-the-job training. Excellent communication and problem-solving skills. An inclination to mentor; an ability to lead and deliver medium sized components independently.

Technical Skills (Must Have)
ETL: Hands on experience of building data pipelines. Proficiency in two or more data integration platforms such as Ab Initio, Apache Spark, Talend and Informatica.
Big Data: Experience of ‘big data’ platforms such as Hadoop, Hive or Snowflake for data storage and processing.
Data Warehousing & Database Management: Expertise around Data Warehousing concepts, Relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design.
Data Modeling & Design: Good exposure to data modeling techniques; design, optimization and maintenance of data models and data structures.
Languages: Proficient in one or more programming languages commonly used in data engineering such as Python, Java or Scala.
DevOps: Exposure to concepts and enablers - CI/CD platforms, version control, automated quality control management.
Data Governance: A strong grasp of principles and practice including data quality, security, privacy and compliance.

Technical Skills (Valuable)
Ab Initio: Experience developing Co>Op graphs; ability to tune for performance. Demonstrable knowledge across full suite of Ab Initio toolsets e.g., GDE, Express>IT, Data Profiler and Conduct>IT, Control>Center, Continuous>Flows.
Cloud: Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc. Demonstrable understanding of underlying architectures and trade-offs.
Data Quality & Controls: Exposure to data validation, cleansing, enrichment and data controls.
Containerization: Fair understanding of containerization platforms like Docker, Kubernetes.
File Formats: Exposure in working on Event/File/Table Formats such as Avro, Parquet, Protobuf, Iceberg, Delta.
Others: Experience of using a Job scheduler e.g., Autosys.
Exposure to Business Intelligence tools e.g., Tableau, Power BI.
Certification on any one or more of the above topics would be an advantage.

------------------------------------------------------

Job Family Group: Technology

------------------------------------------------------

Job Family: Digital Software Engineering

------------------------------------------------------

Time Type: Full time

------------------------------------------------------

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
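The must-have ETL skills above include Apache Spark alongside Ab Initio, Talend and Informatica. As a neutral illustration (not Citi's actual stack), here is a minimal PySpark sketch of the kind of pipeline step the role describes; the input path, columns, and output location are hypothetical.

```python
# Hypothetical PySpark sketch: read, transform and write an analytical dataset.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example_etl").getOrCreate()

# Extract: read raw trade records (path and schema are hypothetical).
trades = spark.read.csv("s3://raw-zone/trades/", header=True, inferSchema=True)

# Transform: keep settled trades and aggregate notional per counterparty.
settled = trades.filter(F.col("status") == "SETTLED")
exposure = (
    settled.groupBy("counterparty_id")
    .agg(F.sum("notional").alias("total_notional"))
)

# Load: write the analytical model as Parquet for downstream BI tools.
exposure.write.mode("overwrite").parquet("s3://curated-zone/counterparty_exposure/")

spark.stop()
```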

Posted 2 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies