
790 Talend Jobs - Page 23

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Source: LinkedIn

Introduction
A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio, including Software and Red Hat. Curiosity and a constant quest for knowledge serve as the foundation to success in IBM Consulting. In your role, you'll be encouraged to challenge the norm, investigate ideas outside of your role, and come up with creative solutions resulting in groundbreaking impact for a wide network of clients. Our culture of evolution and empathy centers on long-term career growth and development opportunities in an environment that embraces your unique skills and experience.

Your Role And Responsibilities
Who you are: A highly skilled Data Engineer specializing in data modeling, with experience designing, implementing, and optimizing data structures that support the storage, retrieval, and processing of data in large-scale enterprise environments. You have expertise in conceptual, logical, and physical data modeling, along with a deep understanding of ETL processes, data lake architectures, and modern data platforms. You are proficient in ERwin, PostgreSQL, Apache Iceberg, Cloudera Data Platform, and Denodo, and your ability to work with cross-functional teams, data architects, and business stakeholders ensures that data models align with enterprise data strategies and support analytical use cases effectively.

What you'll do: As a Data Engineer – Data Modeling, you will be responsible for:

Data Modeling & Architecture
- Designing and developing conceptual, logical, and physical data models to support data migration from IIAS to the Cloudera Data Lake.
- Creating and optimizing data models for structured, semi-structured, and unstructured data stored in Apache Iceberg tables on Cloudera.
- Establishing data lineage and metadata management for the new data platform.
- Implementing Denodo-based data virtualization models to ensure seamless data access across multiple sources.

Data Governance & Quality
- Ensuring data integrity, consistency, and compliance with regulatory standards, including banking/regulatory guidelines.
- Implementing Talend Data Quality (DQ) solutions to maintain high data accuracy.
- Defining and enforcing naming conventions, data definitions, and business rules for structured and semi-structured data.

ETL & Data Pipeline Optimization
- Supporting the migration of ETL workflows from IBM DataStage to PySpark, ensuring models align with the new ingestion framework.
- Collaborating with data engineers to define schema evolution strategies for Iceberg tables (see the sketch after this listing).
- Ensuring performance optimization for large-scale data processing on Cloudera.

Collaboration & Documentation
- Working closely with business analysts, architects, and developers to translate business requirements into scalable data models.
- Documenting the data dictionary, entity relationships, and mapping specifications for data migration.
- Supporting reporting and analytics teams (Qlik Sense/Tableau) by providing well-structured data models.

Preferred Education
Master's Degree

Required Technical And Professional Expertise
- Experience in Cloudera migration projects in the banking or financial sector.
- Knowledge of PySpark, Kafka, Airflow, and cloud-native data processing.
- Experience with Talend DQ for data quality monitoring.

Preferred Technical And Professional Experience
- Familiarity with graph databases (DGraph Enterprise) for data relationships.
- Experience with GitLab, Sonatype Nexus, and Checkmarx for CI/CD and security compliance.
- IBM, Cloudera, or AWS/GCP certifications in Data Engineering or Data Modeling.
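
For context on the Iceberg schema-evolution responsibility above, here is a minimal PySpark sketch of the kind of change involved. It assumes a Spark session launched with the Iceberg runtime and SQL extensions configured; the catalog, table, and column names are invented for illustration:

    from pyspark.sql import SparkSession

    # Assumes the Iceberg Spark runtime is on the classpath and a catalog
    # named "lake" is configured; all names below are illustrative only.
    spark = SparkSession.builder.appName("iceberg-schema-evolution").getOrCreate()

    # Iceberg evolves schemas as metadata-only operations: columns can be
    # added, renamed, or safely widened without rewriting data files.
    spark.sql("ALTER TABLE lake.cards.transactions ADD COLUMN channel STRING")
    spark.sql("ALTER TABLE lake.cards.transactions RENAME COLUMN amt TO amount")
    spark.sql("ALTER TABLE lake.cards.transactions ALTER COLUMN amount TYPE decimal(18, 4)")

Because Iceberg tracks columns by ID rather than by name, readers of older snapshots keep working after such changes, which is what makes this evolution pattern practical during a migration.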

Posted 1 month ago

Apply

0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Source: LinkedIn

Introduction
A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio, including Software and Red Hat. Curiosity and a constant quest for knowledge serve as the foundation to success in IBM Consulting. In your role, you'll be encouraged to challenge the norm, investigate ideas outside of your role, and come up with creative solutions resulting in groundbreaking impact for a wide network of clients. Our culture of evolution and empathy centers on long-term career growth and development opportunities in an environment that embraces your unique skills and experience.

Your Role And Responsibilities
What you'll do: As a Data Scientist – Artificial Intelligence, your responsibilities include:

AI & Machine Learning Model Development
- Developing ML models for predictive analytics, fraud detection, and automation.
- Working with deep learning (DL) and natural language processing (NLP) models for text and speech processing.
- Implementing AI-driven anomaly detection for data quality and governance (see the sketch after this listing).

Big Data & Model Deployment
- Building and deploying ML models on Cloudera Machine Learning (CML).
- Utilizing Apache Spark and PySpark for processing large-scale datasets.
- Working with Kafka and Iceberg to integrate AI solutions into real-time data pipelines.

Data Quality & Governance
- Supporting AI-powered data quality monitoring with Talend DQ.
- Assisting in metadata management, data lineage tracking, and automated data validation.
- Utilizing Denodo for AI-driven data virtualization and federated learning.

Security & Compliance
- Ensuring AI models comply with the Bank's data security and governance policies.
- Supporting AI-driven encryption and anomaly detection techniques using Thales CipherTrust.

Collaboration & Documentation
- Working with data engineers and analysts to develop AI solutions aligned with business needs.
- Documenting model architectures, experiment results, and optimization techniques.
- Assisting in AI-driven reporting and visualization using Qlik Sense/Tableau.

Preferred Education
Master's Degree

Required Technical And Professional Expertise
- 4-7 years of experience in AI, ML, and data science.
- Strong programming skills in Python, SQL, and ML frameworks (TensorFlow, PyTorch, scikit-learn).
- Hands-on experience with big data platforms (Cloudera, Apache Spark, Kafka, Iceberg).
- Experience with NLP, deep learning, and AI for automation.
- Understanding of data governance, metadata management, and AI-driven data quality.
- Familiarity with GitLab, Sonatype Nexus, and Checkmarx for AI model deployment.

Preferred Technical And Professional Experience
- Experience with AI/ML solutions for banking and financial services.
- Knowledge of cloud AI platforms (AWS SageMaker, Azure ML, GCP Vertex AI).
- Exposure to AI ethics, explainable AI (XAI), and bias detection in ML models.
- Understanding of graph databases (DGraph Enterprise) for AI-powered insights.
- Certifications in IBM AI Engineering, Cloudera Data Science, or Google/AWS AI.
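
The "AI-driven anomaly detection for data quality" responsibility above can be pictured with a small, self-contained sketch using scikit-learn's IsolationForest; the per-batch metrics below are fabricated for illustration only:

    import pandas as pd
    from sklearn.ensemble import IsolationForest

    # Fabricated data-quality metrics for five ingestion batches.
    metrics = pd.DataFrame({
        "row_count":  [10250, 10180, 10320, 2100, 10290],
        "null_ratio": [0.010, 0.012, 0.009, 0.240, 0.011],
    })

    # An unsupervised isolation forest flags batches whose metrics deviate
    # from the rest; fit_predict returns -1 for anomalies, 1 for normal.
    model = IsolationForest(contamination=0.2, random_state=42)
    metrics["flag"] = model.fit_predict(metrics[["row_count", "null_ratio"]])
    print(metrics[metrics["flag"] == -1])  # the fourth batch stands out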

Posted 1 month ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

The ideal candidate must possess strong communication skills, with an ability to listen, comprehend information, and share it with all key stakeholders, highlighting opportunities for improvement and any concerns. He/she must be able to work collaboratively with teams to execute tasks within defined timeframes while maintaining high-quality standards and superior service levels. The ability to take proactive action and a willingness to take up responsibility beyond the assigned work area are a plus.

Senior Analyst Role And Responsibilities
- Minimum 3+ years of experience in Power BI.
- Design, develop, and implement business intelligence solutions using Power BI.
- Connect to and integrate data from various sources, including databases, spreadsheets, and cloud services.
- Design and create data models, dashboards, reports, and other data visualizations.
- Enhance existing Power BI solutions to meet evolving business requirements.
- Design data models to transform raw data into meaningful insights.
- Create dashboards and interactive visual reports using Power BI.
- Create measures and calculated columns using DAX in Power BI.
- Write SQL queries for best results, using filters and graphs for a better understanding of the data.
- Strong proficiency in SQL, with experience writing complex queries, stored procedures, and views, and in performance tuning.
- Solid understanding of data warehousing concepts and dimensional modeling.
- Develop and implement row-level security.
- Create documentation and user guides to support end-users in navigating and utilizing Power BI reports and dashboards.
- Excellent communication, analytical, and problem-solving skills.
- Experience with ETL tools such as SSIS, Informatica, or Talend.
- Python programming experience, including connecting to APIs and extracting data (see the sketch after this listing).
- Stay current with industry trends and best practices in business intelligence, data visualization, Python programming, and API integration.
- Collaborate with business stakeholders to understand their needs and translate them into technical requirements for API integrations and data visualization.
- Utilize Python scripts to connect to external APIs and extract data for analysis and visualization within Power BI.
- Design, develop, test, and deploy Power BI scripts and perform detailed analytics.
- Experience working in an Agile environment is a plus.

Technical And Functional Skills
- Bachelor's degree in Computer Science.
- Strong communication skills.
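
As a concrete illustration of the Python/API item referenced above, here is a minimal sketch of the pattern Power BI's Python data source supports, where any pandas DataFrame defined by the script can be imported as a table; the endpoint URL and response fields are placeholders:

    import requests
    import pandas as pd

    # Placeholder endpoint; swap in the real API and authentication.
    resp = requests.get("https://api.example.com/v1/sales", timeout=30)
    resp.raise_for_status()

    # Flatten the JSON payload into a tabular frame; in Power BI's
    # "Python script" source, `df` then appears as an importable table.
    df = pd.json_normalize(resp.json()["items"])
    print(df.head())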

Posted 1 month ago

Apply

0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Source: LinkedIn

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Job Description
About the role: As a Senior Data Engineer, you will be responsible for building and supporting large-scale data architectures that provide information to downstream systems and business users. We are seeking an innovative and experienced individual who can aggregate and organize data from multiple sources to streamline business decision-making. In your role, you will collaborate closely with Data Engineer Leads and partners to establish and maintain data platforms that support front-end analytics. Your contributions will inform Takeda's dashboards and reporting, providing insights to stakeholders throughout the business.

In this role, you will be part of the Digital Insights and Analytics team. This team drives business insights through IT data analytics techniques such as pattern recognition, AI/ML, and data modelling, to analyse and interpret the organization's data with the purpose of drawing conclusions about information and trends. This role will work closely with the Tech Delivery Lead and Data Engineer Junior, both located in India. This role will align to the Data & Analytics chapter of the ICC. This position will be part of the PDT Business Intelligence pod and will report to the Data Engineering Lead.

How you will contribute:
- Develop and maintain scalable data pipelines, in line with ETL principles, and build out new integrations using AWS/Azure native technologies to support continuing increases in data source, volume, and complexity.
- Define data requirements, gather and mine data, and validate the efficiency of data tools in the Big Data Environment.
- Lead the evaluation, implementation and deployment of emerging tools and processes to improve productivity.
- Implement processes and systems to provide accurate and available data to key stakeholders, downstream systems, and business processes.
- Partner with Business Analysts and Solution Architects to develop technical architectures for strategic enterprise projects and initiatives.
- Coordinate with Data Scientists to understand data requirements and design solutions that enable advanced analytics, machine learning, and predictive modelling.
- Mentor and coach junior Data Engineers on data standards and practices, promoting the values of learning and growth.
- Foster a culture of sharing, re-use, design for scale and stability, and operational efficiency of data and analytical solutions.

Minimum Requirements/Qualifications:
- Bachelor's degree in Engineering, Computer Science, Data Science, or a related field.
- 5-9 years of experience in software development, data science, data engineering, ETL, and analytics reporting development.
- Experience building and maintaining data and system integrations using dimensional data modelling and optimized ETL pipelines.
- Experience designing and developing ETL pipelines using tools like IICS, DataStage, Ab Initio, Talend, etc.
- Proven track record of designing and implementing complex data solutions.
- Demonstrated understanding and experience using:
  - Data engineering programming languages (e.g., Python, SQL)
  - Distributed data frameworks (e.g., Spark)
  - Cloud platform services (AWS/Azure preferred)
  - Relational databases
  - DevOps and continuous integration
  - AWS services like Lambda, DMS, Step Functions, S3, EventBridge, CloudWatch, Aurora RDS, or related AWS ETL services
  - Azure services like ADF, ADLS, etc.
  - Data lakes, data warehouses, and Databricks/Delta Lakehouse architecture
  - Code management platforms like GitHub, GitLab, etc.
- Understanding of database architecture, data modelling concepts, and administration.
- Hands-on experience with Spark Structured Streaming for building real-time ETL pipelines (see the sketch after this listing).
- Utilizes the principles of continuous integration and delivery to automate the deployment of code changes to elevated environments, fostering enhanced code quality, test coverage, and automation of resilient test cases.
- Proficient in programming languages (e.g., SQL, Python, PySpark) to design, develop, maintain, and optimize data architecture and pipelines that fit business goals.
- Strong organizational skills with the ability to work on multiple projects simultaneously and operate as a leading member across globally distributed teams to deliver high-quality services and solutions.
- Excellent written and verbal communication skills, including storytelling and interacting effectively with multifunctional teams and other strategic partners.
- Strong problem-solving and troubleshooting skills.
- Ability to work in a fast-paced environment and adapt to changing business priorities.

Preferred requirements:
- Master's degree in Engineering specialized in Computer Science, Data Science, or a related field.
- Knowledge of CDK.
- Experience with the IICS data integration tool.
- Job orchestration tools like Tidal, Airflow, or similar.
- Knowledge of NoSQL databases.
- Proficiency in leveraging the Databricks Unity Catalog for effective data governance and implementing robust access control mechanisms is highly advantageous.
- Databricks Certified Data Engineer Associate.
- AWS/Azure Certified Data Engineer.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
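
The Spark Structured Streaming bullet above describes pipelines of the kind sketched below: a hypothetical Kafka topic parsed and landed incrementally in a Delta table. The broker, topic, schema, and paths are invented, and the Kafka and Delta Lake connectors are assumed to be available on the cluster:

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, from_json
    from pyspark.sql.types import StructType, StringType, DoubleType

    spark = SparkSession.builder.appName("orders-stream").getOrCreate()

    schema = StructType().add("order_id", StringType()).add("amount", DoubleType())

    # Read the (hypothetical) "orders" topic and parse each JSON message.
    orders = (spark.readStream
              .format("kafka")
              .option("kafka.bootstrap.servers", "broker:9092")
              .option("subscribe", "orders")
              .load()
              .select(from_json(col("value").cast("string"), schema).alias("o"))
              .select("o.*"))

    # Checkpointing gives the stream restartable, exactly-once writes.
    (orders.writeStream
           .format("delta")
           .option("checkpointLocation", "/chk/orders")
           .start("/lake/orders"))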

Posted 1 month ago

Apply

0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Source: LinkedIn

Introduction
A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio, including Software and Red Hat. Curiosity and a constant quest for knowledge serve as the foundation to success in IBM Consulting. In your role, you'll be encouraged to challenge the norm, investigate ideas outside of your role, and come up with creative solutions resulting in groundbreaking impact for a wide network of clients. Our culture of evolution and empathy centers on long-term career growth and development opportunities in an environment that embraces your unique skills and experience.

Your Role And Responsibilities
Who you are: A senior Data Scientist specializing in advanced analytics, with expertise in machine learning (ML), predictive modeling, and statistical analysis. You have sound experience leveraging big data technologies, AI, and automation to solve complex business problems and enhance decision-making, and you have worked with Cloudera Data Platform, Apache Spark, Kafka, and Iceberg tables, so you understand how to design and deploy scalable AI/ML models. Your role will be instrumental in data modernization efforts, applying AI-driven insights to enhance efficiency, optimize operations, and mitigate risks.

What you'll do: As a Data Scientist – Advanced Analytics, your responsibilities include:

AI & Machine Learning Model Development
- Developing AI/ML models for predictive analytics, fraud detection, and customer segmentation (a segmentation sketch follows this listing).
- Implementing time-series forecasting, anomaly detection, and optimization models.
- Working with deep learning (DL) and natural language processing (NLP) for AI-driven automation.

Big Data & Scalable AI Pipelines
- Processing and analyzing large datasets using Apache Spark, PySpark, and Iceberg tables.
- Deploying real-time models and streaming analytics with Kafka.
- Supporting AI model training and deployment on Cloudera Machine Learning (CML).

Advanced Analytics & Business Impact
- Performing exploratory data analysis (EDA) and statistical modelling.
- Delivering AI-driven insights to improve business decision-making.
- Supporting data quality and governance initiatives using Talend DQ.

Data Governance & Security
- Ensuring AI models comply with the Bank's data governance and security policies.
- Implementing AI-driven anomaly detection and metadata management.
- Utilizing Thales CipherTrust for data encryption and compliance.

Collaboration & Thought Leadership
- Working closely with data engineers, analysts, and business teams to integrate AI-driven solutions.
- Presenting AI insights and recommendations to stakeholders and leadership teams.
- Contributing to the development of best practices for AI and analytics.

Preferred Education
Master's Degree

Required Technical And Professional Expertise
- 3-7 years of experience in AI, ML, and advanced analytics.
- Proficiency in Python, R, SQL, and ML frameworks (scikit-learn, TensorFlow, PyTorch).
- Hands-on experience with big data technologies (Cloudera, Apache Spark, Kafka, the Iceberg table format).
- Strong knowledge of statistical modelling, optimization, and feature engineering.
- Understanding of MLOps practices and AI model deployment.

Preferred Technical And Professional Experience
- Develop and implement advanced analytics models, including predictive, prescriptive, and diagnostic analytics, to solve business challenges and optimize decision-making processes.
- Utilize tools and technologies to work with large and complex datasets to derive analytical solutions.
- Build and deploy machine learning models (supervised and unsupervised), statistical models, and data-driven algorithms for forecasting, segmentation, classification, and anomaly detection.
- Strong hands-on experience in Python, Spark, and cloud computing.
- Able to work independently and deploy deep learning models using various architectures.
- Able to perform exploratory data analysis (EDA) to uncover trends, relationships, and outliers in large, complex datasets.
- Design and create features that improve model accuracy and business relevance.
- Create insightful visualizations and dashboards that communicate findings to stakeholders, and translate complex data insights into clear, actionable recommendations.
- Work closely with business leaders, engineers, and analysts to understand business requirements and translate them into analytical solutions that address strategic goals.
- Exposure to graph AI using DGraph Enterprise.
- Knowledge of cloud-based AI platforms (AWS SageMaker, Azure ML, GCP Vertex AI).
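
To make the customer-segmentation responsibility above concrete, here is a minimal scikit-learn sketch; the features and values are fabricated, and a real model would use features engineered from the platform's Iceberg tables:

    import pandas as pd
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    # Fabricated per-customer features, for illustration only.
    customers = pd.DataFrame({
        "avg_monthly_spend": [120.0, 95.0, 2100.0, 80.0, 1900.0, 110.0],
        "txn_count": [14, 9, 60, 7, 55, 12],
    })

    # Scale features so both contribute equally, then cluster into segments.
    X = StandardScaler().fit_transform(customers)
    customers["segment"] = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    print(customers.sort_values("segment"))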

Posted 1 month ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Source: LinkedIn

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Job Description
About the role: As a Senior Data Engineer, you will be responsible for building and supporting large-scale data architectures that provide information to downstream systems and business users. We are seeking an innovative and experienced individual who can aggregate and organize data from multiple sources to streamline business decision-making. In your role, you will collaborate closely with Data Engineer Leads and partners to establish and maintain data platforms that support front-end analytics. Your contributions will inform Takeda's dashboards and reporting, providing insights to stakeholders throughout the business.

In this role, you will be part of the Digital Insights and Analytics team. This team drives business insights through IT data analytics techniques such as pattern recognition, AI/ML, and data modelling, to analyse and interpret the organization's data with the purpose of drawing conclusions about information and trends. This role will work closely with the Tech Delivery Lead and Data Engineer Junior, both located in India. This role will align to the Data & Analytics chapter of the ICC. This position will be part of the PDT Business Intelligence pod and will report to the Data Engineering Lead.

How you will contribute:
- Develop and maintain scalable data pipelines, in line with ETL principles, and build out new integrations using AWS/Azure native technologies to support continuing increases in data source, volume, and complexity.
- Define data requirements, gather and mine data, and validate the efficiency of data tools in the Big Data Environment.
- Lead the evaluation, implementation and deployment of emerging tools and processes to improve productivity.
- Implement processes and systems to provide accurate and available data to key stakeholders, downstream systems, and business processes.
- Partner with Business Analysts and Solution Architects to develop technical architectures for strategic enterprise projects and initiatives.
- Coordinate with Data Scientists to understand data requirements and design solutions that enable advanced analytics, machine learning, and predictive modelling.
- Mentor and coach junior Data Engineers on data standards and practices, promoting the values of learning and growth.
- Foster a culture of sharing, re-use, design for scale and stability, and operational efficiency of data and analytical solutions.

Minimum Requirements/Qualifications:
- Bachelor's degree in Engineering, Computer Science, Data Science, or a related field.
- 5-9 years of experience in software development, data science, data engineering, ETL, and analytics reporting development.
- Experience building and maintaining data and system integrations using dimensional data modelling and optimized ETL pipelines.
- Experience designing and developing ETL pipelines using tools like IICS, DataStage, Ab Initio, Talend, etc.
- Proven track record of designing and implementing complex data solutions.
- Demonstrated understanding and experience using:
  - Data engineering programming languages (e.g., Python, SQL)
  - Distributed data frameworks (e.g., Spark)
  - Cloud platform services (AWS/Azure preferred)
  - Relational databases
  - DevOps and continuous integration
  - AWS services like Lambda, DMS, Step Functions, S3, EventBridge, CloudWatch, Aurora RDS, or related AWS ETL services
  - Azure services like ADF, ADLS, etc.
  - Data lakes, data warehouses, and Databricks/Delta Lakehouse architecture
  - Code management platforms like GitHub, GitLab, etc.
- Understanding of database architecture, data modelling concepts, and administration.
- Hands-on experience with Spark Structured Streaming for building real-time ETL pipelines.
- Utilizes the principles of continuous integration and delivery to automate the deployment of code changes to elevated environments, fostering enhanced code quality, test coverage, and automation of resilient test cases.
- Proficient in programming languages (e.g., SQL, Python, PySpark) to design, develop, maintain, and optimize data architecture and pipelines that fit business goals.
- Strong organizational skills with the ability to work on multiple projects simultaneously and operate as a leading member across globally distributed teams to deliver high-quality services and solutions.
- Excellent written and verbal communication skills, including storytelling and interacting effectively with multifunctional teams and other strategic partners.
- Strong problem-solving and troubleshooting skills.
- Ability to work in a fast-paced environment and adapt to changing business priorities.

Preferred requirements:
- Master's degree in Engineering specialized in Computer Science, Data Science, or a related field.
- Knowledge of CDK.
- Experience with the IICS data integration tool.
- Job orchestration tools like Tidal, Airflow, or similar.
- Knowledge of NoSQL databases.
- Proficiency in leveraging the Databricks Unity Catalog for effective data governance and implementing robust access control mechanisms is highly advantageous.
- Databricks Certified Data Engineer Associate.
- AWS/Azure Certified Data Engineer.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 1 month ago

Apply

0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Source: LinkedIn

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Job Description
About the role: As a Senior Data Engineer, you will be responsible for building and supporting large-scale data architectures that provide information to downstream systems and business users. We are seeking an innovative and experienced individual who can aggregate and organize data from multiple sources to streamline business decision-making. In your role, you will collaborate closely with Data Engineer Leads and partners to establish and maintain data platforms that support front-end analytics. Your contributions will inform Takeda's dashboards and reporting, providing insights to stakeholders throughout the business.

In this role, you will be part of the Digital Insights and Analytics team. This team drives business insights through IT data analytics techniques such as pattern recognition, AI/ML, and data modelling, to analyse and interpret the organization's data with the purpose of drawing conclusions about information and trends. This role will work closely with the Tech Delivery Lead and Data Engineer Junior, both located in India. This role will align to the Data & Analytics chapter of the ICC. This position will be part of the PDT Business Intelligence pod and will report to the Data Engineering Lead.

How you will contribute:
- Develop and maintain scalable data pipelines, in line with ETL principles, and build out new integrations using AWS/Azure native technologies to support continuing increases in data source, volume, and complexity.
- Define data requirements, gather and mine data, and validate the efficiency of data tools in the Big Data Environment.
- Lead the evaluation, implementation and deployment of emerging tools and processes to improve productivity.
- Implement processes and systems to provide accurate and available data to key stakeholders, downstream systems, and business processes.
- Partner with Business Analysts and Solution Architects to develop technical architectures for strategic enterprise projects and initiatives.
- Coordinate with Data Scientists to understand data requirements and design solutions that enable advanced analytics, machine learning, and predictive modelling.
- Mentor and coach junior Data Engineers on data standards and practices, promoting the values of learning and growth.
- Foster a culture of sharing, re-use, design for scale and stability, and operational efficiency of data and analytical solutions.

Minimum Requirements/Qualifications:
- Bachelor's degree in Engineering, Computer Science, Data Science, or a related field.
- 5-9 years of experience in software development, data science, data engineering, ETL, and analytics reporting development.
- Experience building and maintaining data and system integrations using dimensional data modelling and optimized ETL pipelines.
- Experience designing and developing ETL pipelines using tools like IICS, DataStage, Ab Initio, Talend, etc.
- Proven track record of designing and implementing complex data solutions.
- Demonstrated understanding and experience using:
  - Data engineering programming languages (e.g., Python, SQL)
  - Distributed data frameworks (e.g., Spark)
  - Cloud platform services (AWS/Azure preferred)
  - Relational databases
  - DevOps and continuous integration
  - AWS services like Lambda, DMS, Step Functions, S3, EventBridge, CloudWatch, Aurora RDS, or related AWS ETL services
  - Azure services like ADF, ADLS, etc.
  - Data lakes, data warehouses, and Databricks/Delta Lakehouse architecture
  - Code management platforms like GitHub, GitLab, etc.
- Understanding of database architecture, data modelling concepts, and administration.
- Hands-on experience with Spark Structured Streaming for building real-time ETL pipelines.
- Utilizes the principles of continuous integration and delivery to automate the deployment of code changes to elevated environments, fostering enhanced code quality, test coverage, and automation of resilient test cases.
- Proficient in programming languages (e.g., SQL, Python, PySpark) to design, develop, maintain, and optimize data architecture and pipelines that fit business goals.
- Strong organizational skills with the ability to work on multiple projects simultaneously and operate as a leading member across globally distributed teams to deliver high-quality services and solutions.
- Excellent written and verbal communication skills, including storytelling and interacting effectively with multifunctional teams and other strategic partners.
- Strong problem-solving and troubleshooting skills.
- Ability to work in a fast-paced environment and adapt to changing business priorities.

Preferred requirements:
- Master's degree in Engineering specialized in Computer Science, Data Science, or a related field.
- Knowledge of CDK.
- Experience with the IICS data integration tool.
- Job orchestration tools like Tidal, Airflow, or similar.
- Knowledge of NoSQL databases.
- Proficiency in leveraging the Databricks Unity Catalog for effective data governance and implementing robust access control mechanisms is highly advantageous.
- Databricks Certified Data Engineer Associate.
- AWS/Azure Certified Data Engineer.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 1 month ago

Apply

3 - 5 years

3 - 7 Lacs

Chennai

Work from Office

Source: Naukri

Wipro Limited (NYSE: WIT, BSE: 507685, NSE: WIPRO) is a leading technology services and consulting company focused on building innovative solutions that address clients' most complex digital transformation needs. Leveraging our holistic portfolio of capabilities in consulting, design, engineering, and operations, we help clients realize their boldest ambitions and build future-ready, sustainable businesses. With over 230,000 employees and business partners across 65 countries, we deliver on the promise of helping our customers, colleagues, and communities thrive in an ever-changing world. For additional information, visit us at www.wipro.com.

Mandatory Skills: SQL Server.
Experience: 3-5 years.

Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention: of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA; as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.

Posted 1 month ago

Apply

5 - 10 years

7 - 12 Lacs

Noida

Work from Office

Source: Naukri

Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must-have skills: Microsoft SQL Server Integration Services (SSIS)
Good-to-have skills: NA
Minimum 5 year(s) of experience is required.
Educational Qualification: Graduate required.

Summary: As an Application Lead for Packaged Application Development, you will be responsible for designing, building, and configuring applications using Microsoft SQL Server Integration Services (SSIS). Your typical day will involve leading the effort to deliver high-quality applications, acting as the primary point of contact for the project team, and ensuring timely delivery of project milestones.

Roles & Responsibilities:
- Lead the effort to design, build, and configure applications using Microsoft SQL Server Integration Services (SSIS).
- Act as the primary point of contact for the project team, ensuring timely delivery of project milestones.
- Collaborate with cross-functional teams to ensure the successful delivery of high-quality applications.
- Provide technical guidance and mentorship to junior team members, ensuring their professional growth and development.
- Stay updated with the latest advancements in Packaged Application Development, integrating innovative approaches for sustained competitive advantage.

Professional & Technical Skills:
- Must-have skills: Strong experience in Microsoft SQL Server Integration Services (SSIS).
- Good-to-have skills: Experience in other ETL tools like Informatica, Talend, or DataStage.
- Experience in designing, building, and configuring applications using Microsoft SQL Server Integration Services (SSIS).
- Strong understanding of database concepts and SQL programming.
- Experience in performance tuning and optimization of ETL processes.
- Experience in working with large datasets and complex data structures.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Microsoft SQL Server Integration Services (SSIS).
- The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering impactful data-driven solutions.
- This position is based at our Bengaluru office.

Qualifications: Graduate required.

Posted 1 month ago

Apply

5 - 10 years

7 - 12 Lacs

Bengaluru

Work from Office

Source: Naukri

Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: Talend ETL, Apache Spark, PySpark
Minimum 5 year(s) of experience is required.
Educational Qualification: 15 years full-time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. You will collaborate with teams, make team decisions, and provide solutions to problems. Engaging with multiple teams, you will contribute to key decisions and provide solutions for your immediate team and across multiple teams. In this role, you will have the opportunity to showcase your creativity and technical expertise in designing and building applications.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Design, build, and configure applications to meet business process and application requirements.
- Collaborate with cross-functional teams to gather and define application requirements.
- Develop and implement software solutions using the Databricks Unified Data Analytics Platform.
- Perform code reviews and ensure adherence to coding standards.
- Troubleshoot and debug applications to identify and resolve issues.
- Optimize application performance and ensure scalability.
- Document technical specifications and user manuals for applications.
- Stay updated with emerging technologies and industry trends.
- Train and mentor junior developers to enhance their technical skills.

Professional & Technical Skills:
- Must-have skills: Proficiency in Databricks Unified Data Analytics Platform.
- Good-to-have skills: Experience with PySpark, Apache Spark, Talend ETL.
- Strong understanding of statistical analysis and machine learning algorithms.
- Experience with data visualization tools such as Tableau or Power BI.
- Hands-on experience implementing machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering algorithms.
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity (see the sketch after this listing).

Additional Information:
- The candidate should have a minimum of 5 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.

Qualifications: 15 years full-time education.
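
The data-munging item above (cleaning, transformation, normalization) typically looks like the PySpark sketch below on Databricks; the sample rows and cleaning rules are invented for illustration:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("munging-demo").getOrCreate()

    # Two deliberately messy sample rows: padded names, mixed date
    # separators, and a missing numeric value.
    raw = spark.createDataFrame(
        [(" alice ", "2024-01-05", None), ("BOB", "2024/01/06", 42.0)],
        ["name", "signup", "score"],
    )

    mean_score = raw.select(F.avg("score")).first()[0]
    clean = (raw
             .withColumn("name", F.initcap(F.trim("name")))
             .withColumn("signup",
                         F.to_date(F.regexp_replace("signup", "/", "-"), "yyyy-MM-dd"))
             .fillna({"score": mean_score}))  # simple mean imputation
    clean.show()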

Posted 1 month ago

Apply

7 - 12 years

9 - 14 Lacs

Coimbatore

Work from Office

Source: Naukri

Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: PySpark, Python (Programming Language), Talend ETL
Minimum 7.5 year(s) of experience is required.
Educational Qualification: 15 years full-time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. You will be responsible for utilizing your expertise in the Databricks Unified Data Analytics Platform to develop efficient and effective solutions. Your typical day will involve collaborating with the team, analyzing business requirements, designing and implementing applications, and ensuring the applications meet the desired functionality and performance standards.

Roles & Responsibilities:
- Expected to be an SME; collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Design, build, and configure applications based on business process and application requirements.
- Analyze business requirements and translate them into technical specifications.
- Collaborate with cross-functional teams to ensure the successful implementation of applications.
- Perform code reviews and provide guidance to junior developers.
- Stay updated with the latest industry trends and technologies to continuously improve application development processes.

Professional & Technical Skills:
- Must-have skills: Proficiency in Databricks Unified Data Analytics Platform, Python (Programming Language), Talend ETL, PySpark.
- Strong understanding of statistical analysis and machine learning algorithms.
- Experience with data visualization tools such as Tableau or Power BI.
- Hands-on experience implementing machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering algorithms.
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Pune office.
- 15 years of full-time education is required.

Qualifications: 15 years full-time education.

Posted 1 month ago

Apply

8 - 12 years

17 - 22 Lacs

Mumbai, Hyderabad

Work from Office

Source: Naukri

Principal Data Scientist - NAV02CM
Company: Worley
Primary Location: IND-MM-Navi Mumbai
Other Locations: IND-KR-Bangalore, IND-MM-Mumbai, IND-MM-Pune, IND-TN-Chennai, IND-GJ-Vadodara, IND-AP-Hyderabad, IND-WB-Kolkata
Job: Digital Platforms & Data Science
Schedule: Full-time
Employment Type: Employee
Job Level: Experienced
Job Posting: May 8, 2025
Unposting Date: Jun 7, 2025
Reporting Manager Title: Head of Data Intelligence

Building on our past. Ready for the future.
Worley is a global professional services company of energy, chemicals and resources experts. We partner with customers to deliver projects and create value over the life of their assets. We're bridging two worlds, moving towards more sustainable energy sources, while helping to provide the energy, chemicals and resources needed now.

The Role
As a Data Science Lead with Worley, you will work closely with our existing team to deliver projects for our clients while continuing to develop your skills and experience.
- Conceptualise, build and manage an AI/ML platform (with a focus on unstructured data) by evaluating and selecting best-in-industry AI/ML tools and frameworks.
- Drive and take ownership of developing cognitive solutions for internal stakeholders and external customers.
- Conduct research in areas such as explainable AI, image segmentation, 3D object detection and statistical methods.
- Evaluate not only algorithms and models but also the tools and technologies available in the market to maximize organizational spend.
- Utilize existing frameworks, standards and patterns to create the architectural foundation and services necessary for AI/ML applications that scale from multi-user to enterprise class.
- Analyse marketplace trends (economic, social, cultural and technological) to identify opportunities and create value propositions.
- Offer a global perspective in stakeholder discussions and when shaping solutions and recommendations.

IT Skills & Experience
- Thorough understanding of the complete AI/ML project life cycle, to establish processes and provide guidance and expert support to the team.
- Expert knowledge of emerging technologies in deep learning and reinforcement learning.
- Knowledge of MLOps processes for efficient management of AI/ML projects.
- Must have led project execution with other data scientists/engineers on large and complex data sets.
- Understanding of machine learning algorithms such as k-NN, GBM, neural networks, Naive Bayes, SVM, and decision forests.
- Experience with AI/ML components like JupyterHub, Zeppelin Notebook, Azure ML Studio, Spark MLlib, TensorFlow, Keras, PyTorch and scikit-learn.
- Strong knowledge of deep learning, with special focus on CNN/R-CNN/LSTM/encoder/transformer architectures (a minimal sketch follows this listing).
- Hands-on experience with large networks like Inception-ResNet and ResNeXt-50.
- Demonstrated capability using RNNs for text and speech data, and generative models.
- Working knowledge of NoSQL (GraphX/Neo4j), document, columnar and in-memory database models.
- Working knowledge of ETL tools and techniques, such as Talend, SAP BI Platform/SSIS or MapReduce.
- Experience in building KPI/storytelling dashboards on visualization tools like Tableau/Zoomdata.

People Skills
Professional and open communication with all internal and external interfaces. Ability to communicate clearly and concisely, and a flexible mindset to handle a quickly changing culture. Strong analytical skills.

Industry Specific Experience
10-18 years of experience in AI/ML project execution and AI/ML research.

Education - Qualifications, Accreditation, Training
Master's or Doctorate degree in Computer Science Engineering, Information Technology or Artificial Intelligence.

Moving forward together
We're committed to building a diverse, inclusive and respectful workplace where everyone feels they belong, can bring themselves, and are heard. We provide equal employment opportunities to all qualified applicants and employees without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by law.
We want our people to be energized and empowered to drive sustainable impact. So, our focus is on a values-inspired culture that unlocks brilliance through belonging, connection and innovation. And we're not just talking about it; we're doing it. We're reskilling our people, leveraging transferable skills, and supporting the transition of our workforce to become experts in today's low carbon energy infrastructure and technology.
Whatever your ambition, there's a path for you here. And there's no barrier to your potential career success. Join us to broaden your horizons, explore diverse opportunities, and be part of delivering sustainable change.
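
For a sense of the CNN architectures named above, here is a deliberately minimal tf.keras classifier sketch; the input shape, layer sizes, and class count are arbitrary placeholders, not a production design:

    import tensorflow as tf

    # Tiny illustrative CNN: two conv blocks feeding a softmax classifier.
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(64, 64, 3)),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()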

Posted 1 month ago

Apply

8 - 11 years

20 - 25 Lacs

Bengaluru

Work from Office

Source: Naukri

We are looking for a skilled Guidewire Manager with a strong background in the insurance domain and extensive knowledge of traditional ETL tools. The ideal candidate will have 8-11 years of experience.

### Roles and Responsibilities
- Lead and manage Guidewire implementation projects, ensuring alignment with business objectives and technical requirements.
- Oversee the design, development, and maintenance of data warehousing solutions.
- Collaborate with cross-functional teams to gather and analyze business requirements.
- Develop and implement ETL processes using tools such as Informatica PowerCenter, SSIS, SAP BODS, and Talend.
- Ensure data quality, integrity, and security across all data warehousing and ETL processes.
- Provide technical guidance and mentorship to team members.

### Job Requirements
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Strong background in the insurance domain; knowledge of life insurance is good to have.
- Hands-on experience with ETL tools such as Informatica PowerCenter, SSIS, SAP BODS, and Talend; good exposure to ETL tools in general.
- Excellent understanding of data warehousing architecture and best practices.
- Understanding of Business Intelligence, Data Warehousing, and Data Modelling.
- Proven leadership and project management skills; must have led a team of at least 4 members.
- Strong analytical and problem-solving abilities.
- Experience with Guidewire implementation projects.
- Knowledge of additional ETL tools and technologies; certification in relevant ETL tools or data warehousing technologies.
- Prior client-facing experience; self-motivated and collaborative.

Posted 1 month ago

Apply

8 - 13 years

20 - 30 Lacs

Hyderabad

Work from Office

Naukri logo

Dear candidate, greetings for the day! Candidates interested in the ETL Test Lead position are requested to apply. Job Summary: We are seeking a highly experienced QA professional with over 10 years of experience to join our Quality Assurance team for a data migration project. The ideal candidate will have a strong background in ETL testing, data validation, and migration projects, with expertise in creating test cases and test plans, as well as hands-on experience with data migration to cloud platforms like Snowflake. The role requires leadership capabilities to manage testing efforts, including coordinating with both on-shore and off-shore teams, ensuring seamless collaboration and delivery. Proficiency in ETL tools like Talend (preferred), Informatica PowerCenter, or DataStage is essential, along with a solid understanding of SQL and semi-structured data formats such as JSON and XML. Key Responsibilities: * Develop and implement comprehensive test strategies and plans for data migration projects, ensuring full coverage of functional and non-functional requirements. * Create detailed test cases, test plans, and test scripts for validating data migration processes and transformations. * Conduct thorough data validation and verification testing, leveraging advanced SQL skills to write and execute complex queries for data accuracy, completeness, and consistency. * Utilize ETL tools such as Talend, Informatica PowerCenter, or DataStage to design and execute data integration tests, ensuring successful data transformation and loading into target systems like Snowflake. * Validate semi-structured data formats (JSON, XML), ensuring proper parsing, mapping, and integration within data migration workflows. * Lead testing efforts for data migration to cloud platforms, ensuring seamless data transfer and integrity. * Act as the QA Lead to manage and coordinate testing activities with on-shore and off-shore teams, ensuring alignment, timely communication, and delivery of quality outcomes. * Document and communicate test results, defects, and issues clearly to the development and project teams, ensuring timely resolutions. * Collaborate with cross-functional teams to create and maintain automated testing frameworks for ETL processes, improving testing efficiency and coverage. * Monitor adherence to QA best practices and standards while driving process improvements. * Stay updated on the latest QA tools, technologies, and methodologies to enhance project outcomes. Qualifications: * 10+ years of experience in Quality Assurance, focusing on ETL testing, data validation, and data migration projects. * Proven experience creating detailed test cases, test plans, and test scripts. * Hands-on experience with ETL tools like Talend (preferred), Informatica PowerCenter, or DataStage. * Proficiency in SQL for complex query writing and optimization for data validation and testing. * Experience with cloud data migration projects, specifically working with databases like Snowflake. * Strong understanding of semi-structured data formats like JSON and XML, with hands-on testing experience. * Proven ability to lead QA efforts, manage teams, and coordinate with on-shore and off-shore teams effectively. * Strong analytical and troubleshooting skills for resolving data quality and testing challenges. Preferred Skills: * Experience with automated testing tools and frameworks, particularly for ETL processes. * Knowledge of data governance and data quality best practices. * Familiarity with AWS or other cloud-based ecosystems. * ISTQB or equivalent certification in software testing.
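
To make the data-validation responsibilities above concrete, here is a minimal, hedged sketch of the source-to-target reconciliation a test lead might automate during a Snowflake migration; the connection details, table names, and the use of snowflake-connector-python are illustrative assumptions.

```python
# Illustrative sketch: reconcile row counts and a numeric checksum between
# a source table and its migrated Snowflake copy. All names are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="qa_user", password="***",  # placeholder credentials
    warehouse="QA_WH", database="MIGRATION", schema="PUBLIC",
)
cur = conn.cursor()

checks = {
    # check name -> SQL that should return identical values for source and target
    "row_count": "SELECT COUNT(*) FROM {table}",
    "amount_checksum": "SELECT SUM(HASH(ORDER_ID, AMOUNT)) FROM {table}",
}

for name, sql in checks.items():
    cur.execute(sql.format(table="SRC_ORDERS"))
    source_value = cur.fetchone()[0]
    cur.execute(sql.format(table="TGT_ORDERS"))
    target_value = cur.fetchone()[0]
    status = "PASS" if source_value == target_value else "FAIL"
    print(f"{name}: source={source_value} target={target_value} -> {status}")

cur.close()
conn.close()
```

Checksum-style comparisons scale better than row-by-row diffs on large migrations, though a full test plan would still sample rows for column-level verification.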

Posted 1 month ago

Apply

3 - 5 years

7 - 17 Lacs

Hyderabad, Pune

Work from Office

Naukri logo

PharmaACE is a growing global healthcare consulting firm, headquartered in Princeton, New Jersey. Our expert teams of business analysts, based across the US, Canada, Europe, and India, provide analytics and business solutions using our worldwide delivery models for a wide range of clients. Our clients include established, multinational BioPharma leaders and innovators, as well as entrepreneurial firms on the cutting edge of science. We have deep expertise in Forecasting, Business Analytics, Competitive Intelligence, Sales Analytics, and the Analytics Centre of Excellence Model. Our wealth of therapeutic area experience cuts across Oncology, Immuno-science, CNS, CV-Met, and Rare Diseases. We support our clients' needs in Primary Care, Specialty Care, and Hospital business units, and we have managed portfolios in the Biologics space, Branded Pharmaceuticals, Generics, APIs, Diagnostics, and Packaging & Delivery Systems.

Responsibilities: • Working closely with business teams/stakeholders across the pharmaceutical value chain and developing reports and dashboards that "tell a story". • Recommending KPIs and helping generate custom analyses and insights. • Proposing new visualization ideas for our customers, keeping in mind the audience type. • Designing Tableau dashboards and reports that are self-explanatory. • Keeping the user at the "center" while designing the reports, thereby enhancing the user experience. • Requirement gathering while working closely with our global clients. • Mentoring other developers in the team on Tableau-related technical challenges. • Propagating Tableau best practices within and across the team. • Setting up reports that can be maintained with ease and are scalable to other use cases. • Interacting with the AI/ML team and incorporating new ideas into the final deliverables for the client. • Working closely with cross-functional teams like Advanced Analytics, Competitive Intelligence, and Forecasting. • Developing and fostering client relationships and serving as a point of contact for projects.

Qualifications and Areas of Expertise: • Educational qualification: BE/BTech/MTech/MCA from a reputed institute. • Minimum 3-5 years of experience. • Proficient with tools including Tableau Desktop, Tableau Server, MySQL, MS Excel, and ETL tools (Alteryx, Tableau Prep, or Talend). • Knowledge of SQL. • Experience in advanced LOD (Level of Detail) calculations, custom visualizations, and data cleaning and restructuring. • Strong analytical and problem-solving skills with the ability to question facts. • Excellent written and oral communication skills.

Nice to have: • A valid US business visa. • Hands-on experience in Tableau, Python, and R. • Hands-on experience in Qlik Sense and Power BI. • Experience with Pharma/Healthcare data.
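
As a hedged aside on the "data cleaning and restructuring" skill above: Tableau dashboards usually consume long (tidy) data rather than wide spreadsheets, and the short pandas sketch below shows that reshaping; the columns and values are hypothetical.

```python
# Illustrative sketch: reshape a wide sales extract into the long format
# Tableau dashboards typically consume. Columns and values are hypothetical.
import pandas as pd

# Wide input: one row per product, one column per month
wide = pd.DataFrame({
    "product": ["Drug A", "Drug B"],
    "2024-01": [120, 80],
    "2024-02": [135, 95],
})

# Melt the month columns into (month, sales) pairs; clean types for Tableau
long = wide.melt(id_vars="product", var_name="month", value_name="sales")
long["month"] = pd.to_datetime(long["month"], format="%Y-%m")

long.to_csv("sales_long.csv", index=False)  # ready to use as a Tableau data source
print(long)
```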

Posted 1 month ago

Apply

5 - 10 years

5 - 9 Lacs

Pune

Work from Office

Naukri logo

Project Role: Application Developer. Project Role Description: Design, build and configure applications to meet business process and application requirements. Must-have skills: SAP Native HANA SQL Modeling & Development. Good-to-have skills: SAP BusinessObjects Data Services, Talend ETL. Minimum 5 year(s) of experience is required. Educational Qualification: 15 years of full-time education.

Summary: As an Application Developer, you will be responsible for designing, building, and configuring applications to meet business process and application requirements. Your typical day will involve collaborating with teams to develop innovative solutions and ensure seamless application functionality.

Roles & Responsibilities: Expected to be an SME. Collaborate with and manage the team to perform. Responsible for team decisions. Engage with multiple teams and contribute to key decisions. Provide solutions to problems for the immediate team and across multiple teams. Lead and mentor junior professionals. Conduct regular team meetings to discuss progress and challenges. Stay updated on industry trends and technologies to enhance team performance.

Professional & Technical Skills: Must-have: proficiency in SAP Native HANA SQL Modeling & Development. Good-to-have: experience with Talend ETL and SAP BusinessObjects Data Services. Strong understanding of database management and optimization. Expertise in data modeling and schema design. Hands-on experience with SAP HANA Studio and SAP HANA Cloud Platform. Knowledge of data integration and data warehousing concepts.

Additional Information: The candidate should have a minimum of 5 years of experience in SAP Native HANA SQL Modeling & Development. This position is based at our Pune office. 15 years of full-time education is required.

Posted 1 month ago

Apply

5 - 7 years

0 - 0 Lacs

Kolkata

Work from Office

Naukri logo

Role Proficiency: This role requires proficiency in data pipeline development, including coding and testing data pipelines for ingesting, wrangling, transforming and joining data from various sources. Must be skilled in ETL tools such as Informatica, Glue, Databricks and DataProc, with coding expertise in Python, PySpark and SQL. Works independently and has a deep understanding of data warehousing solutions including Snowflake, BigQuery, Lakehouse and Delta Lake. Capable of calculating costs and understanding performance issues related to data solutions.

Outcomes: Act creatively to develop pipelines and applications by selecting appropriate technical options, optimizing application development, maintenance and performance using design patterns and reusing proven solutions. Interpret requirements to create optimal architecture and design, developing solutions in accordance with specifications. Document and communicate milestones/stages for end-to-end delivery. Code adhering to best coding standards; debug and test solutions to deliver best-in-class quality. Perform performance tuning of code and align it with the appropriate infrastructure to optimize efficiency. Validate results with user representatives, integrating the overall solution seamlessly. Develop and manage data storage solutions including relational databases, NoSQL databases and data lakes. Stay updated on the latest trends and best practices in data engineering, cloud technologies and big data tools. Influence and improve customer satisfaction through effective data solutions.

Measures of Outcomes: Adherence to engineering processes and standards. Adherence to schedule/timelines. Adherence to SLAs where applicable. Number of defects post delivery. Number of non-compliance issues. Reduction of recurrence of known defects. Quick turnaround of production bugs. Completion of applicable technical/domain certifications. Completion of all mandatory training requirements. Efficiency improvements in data pipelines (e.g. reduced resource consumption, faster run times). Average time to detect, respond to and resolve pipeline failures or data issues. Number of data security incidents or compliance breaches.

Outputs Expected: Code Development: Develop data processing code independently, ensuring it meets performance and scalability requirements. Define coding standards, templates and checklists. Review code for team members and peers. Documentation: Create and review templates, checklists, guidelines and standards for design processes and development. Create and review deliverable documents including design documents, architecture documents, infrastructure costing, business requirements, source-target mappings, test cases and results. Configuration: Define and govern the configuration management plan. Ensure compliance within the team. Testing: Review and create unit test cases, scenarios and execution plans. Review the test plan and test strategy developed by the testing team. Provide clarifications and support to the testing team as needed. Domain Relevance: Advise data engineers on the design and development of features and components, demonstrating a deeper understanding of business needs. Learn about customer domains to identify opportunities for value addition. Complete relevant domain certifications to enhance expertise. Project Management: Manage the delivery of modules effectively. Defect Management: Perform root cause analysis (RCA) and mitigation of defects. Identify defect trends and take proactive measures to improve quality. Estimation: Create and provide input for effort and size estimation for projects. Knowledge Management: Consume and contribute to project-related documents, SharePoint libraries and client universities. Review reusable documents created by the team. Release Management: Execute and monitor the release process to ensure smooth transitions. Design Contribution: Contribute to the creation of high-level design (HLD), low-level design (LLD) and system architecture for applications, business components and data models. Customer Interface: Clarify requirements and provide guidance to the development team. Present design options to customers and conduct product demonstrations. Team Management: Set FAST goals and provide constructive feedback. Understand team members' aspirations and provide guidance and opportunities for growth. Ensure team engagement in projects and initiatives. Certifications: Obtain relevant domain and technology certifications to stay competitive and informed.

Skill Examples: Proficiency in SQL, Python or other programming languages used for data manipulation. Experience with ETL tools such as Apache Airflow, Talend, Informatica, AWS Glue, Dataproc and Azure ADF. Hands-on experience with cloud platforms like AWS, Azure or Google Cloud, particularly with data-related services (e.g. AWS Glue, BigQuery). Conduct tests on data pipelines and evaluate results against data quality and performance specifications. Experience in performance tuning of data processes. Expertise in designing and optimizing data warehouses for cost efficiency. Ability to apply and optimize data models for efficient storage, retrieval and processing of large datasets. Capacity to clearly explain and communicate design and development aspects to customers. Ability to estimate time and resource requirements for developing and debugging features or components.

Knowledge Examples: Knowledge of various ETL services offered by cloud providers, including Apache PySpark, AWS Glue, GCP DataProc/DataFlow, Azure ADF and ADLS. Proficiency in SQL for analytics, including windowing functions. Understanding of data schemas and models relevant to various business contexts. Familiarity with domain-related data and its implications. Expertise in data warehousing optimization techniques. Knowledge of data security concepts and best practices. Familiarity with design patterns and frameworks in data engineering.

Additional Comments: Required Skills & Qualifications: A degree (preferably an advanced degree) in Computer Science, Engineering or a related field. Senior developer with 8+ years of hands-on development experience in Azure using ASB and ADF: extensive experience in designing, developing and maintaining data solutions/pipelines in the Azure ecosystem, including Azure Service Bus and ADF. Familiarity with MongoDB and Python is an added advantage. Required Skills: Azure Data Factory, Azure Service Bus, Azure, MongoDB
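
As a hedged illustration of the pipeline work this role describes (ingesting, wrangling, transforming and joining data), the PySpark sketch below shows the pattern end to end; paths, schemas and column names are invented for the example.

```python
# Illustrative ingest/wrangle/transform/join pipeline in PySpark.
# Paths, schemas, and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_pipeline").getOrCreate()

# Ingest: raw CSV orders and a JSON customer reference
orders = spark.read.option("header", True).csv("s3://raw/orders/")
customers = spark.read.json("s3://raw/customers/")

# Wrangle: fix types, drop rows missing the join key
orders = (orders
          .withColumn("amount", F.col("amount").cast("double"))
          .dropna(subset=["customer_id"]))

# Transform + join: daily revenue per customer segment
daily = (orders.join(customers, "customer_id", "left")
               .groupBy("order_date", "segment")
               .agg(F.sum("amount").alias("revenue")))

# Load: partitioned Parquet for the warehouse layer
daily.write.mode("overwrite").partitionBy("order_date").parquet("s3://curated/daily_revenue/")
```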

Posted 1 month ago

Apply

0 - 10 years

0 Lacs

Gurugram, Haryana

Work from Office

Indeed logo

Job Title: Data Architect. Experience: 5–10 years. Job Summary: We are looking for an experienced and highly motivated Data Architect to join our team. The ideal candidate will have a strong background in designing and implementing enterprise data solutions. You will play a critical role in shaping our data infrastructure, ensuring scalability, performance, and security across data platforms.

Key Responsibilities: Design and implement scalable data architectures for enterprise applications. Develop and maintain conceptual, logical, and physical data models. Define data governance policies and ensure data integrity and security. Collaborate with stakeholders to identify data requirements and translate them into architectural solutions. Lead the evaluation and selection of database technologies and tools. Oversee data integration, data warehousing, and ETL/ELT processes. Optimize database performance and manage data storage solutions. Ensure alignment of data architecture with business and technology strategies.

Required Skills & Qualifications: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field. 5–10 years of experience in data architecture and database design. Strong knowledge of relational databases (e.g., SQL Server). Expertise in data warehousing, ETL tools (e.g., Informatica, Talend), and big data platforms (e.g., Hadoop, Spark). Strong understanding of data governance, security, and compliance standards. Experience with cloud data platforms (e.g., AWS Redshift, Azure Synapse, Google BigQuery) is a plus. Excellent communication and stakeholder management skills.

Preferred Certifications (optional): AWS Certified Data Analytics – Specialty. Google Professional Data Engineer. Microsoft Certified: Azure Data Engineer Associate.

Posted 1 month ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu

Work from Office

Indeed logo

Job Description: Data engineers are responsible for building reliable and scalable data infrastructure that enables organizations to derive meaningful insights, make data-driven decisions, and unlock the value of their data assets. Job Description - Grade Specific: The role involves leading and managing a team of data engineers, overseeing data engineering projects, ensuring technical excellence, and fostering collaboration with stakeholders. The role holder plays a critical part in driving the success of data engineering initiatives and ensuring the delivery of reliable, high-quality data solutions to support the organization's data-driven objectives.

Skills (competencies): Ab Initio, Agile (Software Development Framework), Apache Hadoop, AWS Airflow, AWS Athena, AWS Code Pipeline, AWS EFS, AWS EMR, AWS Redshift, AWS S3, Azure ADLS Gen2, Azure Data Factory, Azure Data Lake Storage, Azure Databricks, Azure Event Hub, Azure Stream Analytics, Azure Synapse, Bitbucket, Change Management, Client Centricity, Collaboration, Continuous Integration and Continuous Delivery (CI/CD), Data Architecture Patterns, Data Format Analysis, Data Governance, Data Modeling, Data Validation, Data Vault Modeling, Database Schema Design, Decision-Making, DevOps, Dimensional Modeling, GCP Big Table, GCP BigQuery, GCP Cloud Storage, GCP DataFlow, GCP DataProc, Git, Google Big Table, Google Data Proc, Greenplum, HQL, IBM DataStage, IBM DB2, Industry Standard Data Modeling (FSLDM), Industry Standard Data Modeling (IBM FSDM), Influencing, Informatica IICS, Inmon methodology, JavaScript, Jenkins, Kimball, Linux - Redhat, Negotiation, Netezza, NewSQL, Oracle Exadata, Performance Tuning, Perl, Platform Update Management, Project Management, PySpark, Python, R, RDD Optimization, CentOS, SAS, Scala, Spark, Shell Script, Snowflake, SPARK Code Optimization, SQL, Stakeholder Management, Sun Solaris, Synapse, Talend, Teradata, Time Management, Ubuntu, Vendor Management

Posted 1 month ago

Apply

55 years

0 Lacs

Chennai, Tamil Nadu

Remote

Indeed logo

Capgemini Invent: Capgemini Invent is the digital innovation, consulting and transformation brand of the Capgemini Group, a global business line that combines market-leading expertise in strategy, technology, data science and creative design to help CxOs envision and build what's next for their businesses.

Your role: You act as a contact person for our customers and advise them on data-driven projects. You are responsible for architecture topics and solution scenarios in the areas of Cloud Data Analytics Platform, Data Engineering, Analytics and Reporting. Experience in Cloud and Big Data architecture. Responsibility for designing viable architectures based on Microsoft Azure, AWS, Snowflake, Google (or similar) and implementing analytics. Experience in DevOps, Infrastructure as Code, DataOps and MLOps. Experience in business development (as well as support in the proposal process). Data warehousing, data modelling and data integration for enterprise data environments. Experience in the design of large-scale ETL solutions integrating multiple/heterogeneous systems. Experience in data analysis, modelling (logical and physical data models) and design specific to a data warehouse/Business Intelligence environment (normalized and multi-dimensional modelling). Experience with ETL tools, primarily Talend and/or other data integration tools (open source/proprietary); extensive experience with SQL and SQL scripting (PL/SQL and SQL query tuning and optimization) for relational databases such as PostgreSQL, Oracle, Microsoft SQL Server and MySQL, and with NoSQL/document-based databases like MongoDB. Must be detail-oriented, highly motivated and able to work independently with minimal direction. Excellent written, oral and interpersonal communication skills, with the ability to communicate design solutions to both technical and non-technical audiences. Ideally: experience in agile methods such as SAFe, Scrum, etc. Ideally: experience with programming languages like Python, JavaScript, Java/Scala, etc.

Your Profile: Provides data services for enterprise information strategy solutions - works with business solutions leaders and teams to collect and translate information requirements into data to develop data-centric solutions. Design and develop modern enterprise data-centric solutions (e.g. DWH, Data Lake, Data Lakehouse). Responsible for designing data governance solutions.

What you will love about working here: We recognize the significance of flexible work arrangements to provide support. Be it remote work or flexible work hours, you will get an environment to maintain a healthy work-life balance. At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Generative AI.

About Capgemini: Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs.
It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market leading capabilities in AI, cloud and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.
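
To illustrate the SQL query tuning and optimization skill mentioned above, the hedged sketch below inspects a slow PostgreSQL plan with EXPLAIN ANALYZE and adds a covering index; the table, query and connection string are invented for the example.

```python
# Illustrative sketch: inspect a PostgreSQL query plan before and after indexing.
# Table, columns, and connection details are hypothetical.
import psycopg2

conn = psycopg2.connect("dbname=dwh user=etl password=***")  # placeholder DSN
conn.autocommit = True
cur = conn.cursor()

query = ("SELECT customer_id, SUM(amount) FROM orders "
         "WHERE order_date >= '2024-01-01' GROUP BY customer_id")

def show_plan(label):
    # EXPLAIN ANALYZE executes the query and reports actual row counts and timings
    cur.execute("EXPLAIN ANALYZE " + query)
    print(f"--- {label} ---")
    for (line,) in cur.fetchall():
        print(line)

show_plan("before index")
# A composite index on the filter and grouping columns lets the planner
# avoid a full table scan for this access pattern
cur.execute("CREATE INDEX IF NOT EXISTS idx_orders_date_cust "
            "ON orders (order_date, customer_id)")
show_plan("after index")

cur.close()
conn.close()
```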

Posted 1 month ago

Apply

3 years

0 Lacs

Gandhinagar, Gujarat, India

On-site

Linkedin logo

Role: Snowflake Developer. Experience: 4-6 years. Location: Gandhinagar, Bangalore, Pune, Noida.

Must-Have: 3+ years of hands-on Snowflake development, including features like Time Travel and Zero-Copy Cloning. Expertise in building data pipelines and CI/CD pipelines (tools and processes). Should have a strong grasp of SQL, with knowledge of Python scripting and cloud-based technologies. Hands-on experience with ETL (Extract, Transform, Load) tools such as Informatica, Talend, etc. Good understanding of Data Modeling and Data Lake concepts. Should be well-versed in Snowflake's unique features, like handling of semi-structured data and its approach to data sharing and security. Ability to deal with both on-premises and cloud systems. Good communication skills, analytical thinking and problem-solving capabilities. Team player with the ability to work in a dynamic environment. Certifications in Snowflake and cloud technologies will be preferred.

Responsibility of / Expectations from the Role: Design and implement scalable and efficient data storage solutions using Snowflake. Write, optimize, and troubleshoot SQL queries within the Snowflake environment. Integrate Snowflake with various data sources and third-party tools. Ensure data security and compliance with industry standards. Create authorization frameworks for better access control. Solve performance and scalability issues in the system. Plan and develop data pipelines and data integrations involving multiple sources. Develop and maintain ETL pipelines using Snowflake features. Create views, stored procedures and UDFs for reusable business logic. Experience with cloud technologies like AWS and GCP integration with Snowflake.
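
For readers unfamiliar with the two Snowflake features this role names, here is a minimal, hedged sketch of Time Travel and Zero-Copy Cloning via snowflake-connector-python; the connection parameters and table names are placeholders, not details from the posting.

```python
# Illustrative sketch of Snowflake Time Travel and Zero-Copy Cloning.
# Connection parameters and table names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="dev_user", password="***",
    warehouse="DEV_WH", database="ANALYTICS", schema="PUBLIC",
)
cur = conn.cursor()

# Time Travel: query the table as it looked one hour ago
cur.execute("SELECT COUNT(*) FROM ORDERS AT(OFFSET => -3600)")
print("rows one hour ago:", cur.fetchone()[0])

# Zero-Copy Clone: an instant copy that shares the same underlying
# micro-partitions, so it consumes no extra storage until either table diverges.
# Time Travel also combines with cloning, e.g. BEFORE(STATEMENT => '<query-id>'),
# to restore a table to its state before a bad update.
cur.execute("CREATE TABLE ORDERS_DEV CLONE ORDERS")

cur.close()
conn.close()
```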

Posted 1 month ago

Apply

4 - 8 years

18 - 22 Lacs

Bengaluru

Work from Office

Naukri logo

Explore an Exciting Career at Accenture. Do you believe in creating an impact? Are you a problem solver who enjoys working on transformative strategies for global clients? Are you passionate about being part of an inclusive, diverse and collaborative culture? Then this is the right place for you! Welcome to a host of exciting global opportunities in Accenture Technology Strategy & Advisory.

The Practice - A Brief Sketch: The Technology Strategy & Advisory Practice focuses on clients' most strategic priorities. We help clients achieve growth and efficiency through innovative R&D transformation, aimed at redefining business models using agile methodologies. As part of this high-performing team, you will work on scaling Data & Analytics, and the data that fuels it all, to power every single person and every single process. You will be part of our global team of experts who work on the right scalable solutions and services that help clients achieve their business objectives faster.

Business Transformation: Assessment of Data & Analytics potential and development of use cases that can transform business. Transforming Businesses: Envisioning and designing customized, next-generation data and analytics products and services that help clients shift to new business models designed for today's connected landscape of disruptive technologies. Formulation of Guiding Principles and Components: Assessing impact to the client's technology landscape/architecture and ensuring formulation of relevant guiding principles and platform components. Products and Frameworks: Evaluate existing data and analytics products and frameworks available and develop options for proposed solutions.

Bring your best skills forward to excel in the role: Leverage your knowledge of technology trends across Data & Analytics and how they can be applied to address real-world problems and opportunities. Interact with client stakeholders to understand their Data & Analytics problems and priority use-cases, define a problem statement, understand the scope of the engagement, and drive projects to deliver value to the client. Design and guide development of an enterprise-wide Data & Analytics strategy for our clients that includes Data & Analytics architecture, Data on Cloud, Data Quality, Metadata and Master Data strategy. Establish a framework for effective Data Governance across multispeed implementations. Define data ownership, standards, policies and associated processes. Define a Data & Analytics operating model to manage data across the organization. Establish processes around effective data management, ensuring Data Quality & Governance standards as well as roles for Data Stewards. Benchmark against global research benchmarks and leading industry peers to understand the current state and recommend Data & Analytics solutions. Conduct discovery workshops and design sessions to elicit Data & Analytics opportunities and client pain areas. Develop and drive Data Capability Maturity Assessment, Data & Analytics Operating Model and Data Governance exercises for clients. Collaborate with business experts for business understanding, with other consultants and platform engineers for solutions, and with technology teams for prototyping and client implementations. Create expert content and use advanced presentation, public speaking, content creation and communication skills for C-level discussions. Demonstrate a strong understanding of a specific industry, client or technology and function as an expert to advise senior leadership. Manage budgeting and forecasting activities and build financial proposals.

Qualifications - Your experience counts! MBA from a tier 1 institute. 3+ years of strategy consulting experience at a consulting firm. Experience on projects showcasing skills across any two of these capabilities: Data Capability Maturity Assessment, Data & Analytics Strategy, Data Operating Model & Governance, Data on Cloud Strategy, Data Architecture Strategy. Desirable to have skills in any two of these domains: Data Quality, Master Data (MDM), Metadata, data lineage, data catalog. Experience in one or more technologies in the data governance space is preferred: Collibra, Talend, Informatica, SAP MDG, Stibo, Alteryx, Alation, etc. Mandatory knowledge of IT concepts through practical experience and knowledge of technology trends, e.g. Mobility, Cloud, Digital, Collaboration. A strong understanding of any of the following industries is preferred: Financial Services, Retail, Consumer Goods, Telecommunications, Life Sciences, Transportation, Hospitality, Automotive/Industrial, Mining and Resources or equivalent domains. Cloud Data & AI practitioner certifications (Azure, AWS, Google) desirable but not essential. CDMP certification from DAMA desirable.

Posted 1 month ago

Apply

5 - 10 years

7 - 12 Lacs

Bengaluru

Work from Office

Naukri logo

Project Role: Application Developer. Project Role Description: Design, build and configure applications to meet business process and application requirements. Must-have skills: Data Analysis & Interpretation. Good-to-have skills: Snowflake Data Warehouse. Minimum 5 year(s) of experience is required. Educational Qualification: Minimum 15 years of full-time education.

Key Responsibilities: 1. Get up to speed with Accenture standards and policies for working in a project and client environment. 2. Work with the Project Manager and Project Lead to get client user accounts created. 3. Lead the overall Snowflake transformation journey for the customer. 4. Design and develop the new solution in Snowflake Data Warehouse. 5. Prepare a test strategy and an implementation plan for the solution. 6. Play the role of an end-to-end Data Engineer.

Technical Experience: 1. 2 years of hands-on experience in Snowflake data warehouse design and development projects specifically. 2. 4 years of hands-on experience in SQL programming and PL/SQL. 3. 1 year of experience in JavaScript or other programming languages (Python, ReactJS, Angular). 4. Good understanding of cloud data warehouse and data warehousing concepts and dimensional modelling concepts. 5. 1 year of experience in ETL technologies - Informatica, DataStage, Talend, SAP BODS, Ab Initio, etc.

Professional Attributes: 1. Should be fluent in English communication. 2. Should have handled direct client interactions in the past. 3. Should be clear in written communications. 4. Should have strong interpersonal skills. 5. Should be conscious of European professional etiquette.

Additional Info: Exposure to AWS, Amazon S3 and other Amazon cloud hosting products related to analytics or databases. Qualification: Minimum 15 years of full-time education.

Posted 1 month ago

Apply

Exploring Talend Jobs in India

Talend is a popular data integration and management tool used by many organizations in India. As a result, there is a growing demand for professionals with expertise in Talend across various industries. Job seekers looking to explore opportunities in this field can expect a promising job market in India.

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Mumbai
  5. Delhi

These cities have a high concentration of IT companies and organizations that frequently hire for Talend roles.

Average Salary Range

The average salary range for Talend professionals in India varies based on experience level:
  • Entry-level: INR 4-6 lakhs per annum
  • Mid-level: INR 8-12 lakhs per annum
  • Experienced: INR 15-20 lakhs per annum

Career Path

A typical career progression in the field of Talend may follow this path:
  1. Junior Developer
  2. Developer
  3. Senior Developer
  4. Tech Lead
  5. Architect

As professionals gain experience and expertise, they can move up the ladder to more senior and leadership roles.

Related Skills

In addition to expertise in Talend, professionals in this field are often expected to have knowledge or experience in the following areas:
  • Data Warehousing
  • ETL (Extract, Transform, Load) processes
  • SQL
  • Big Data technologies (e.g., Hadoop, Spark)

Interview Questions

  • What is Talend and how does it differ from traditional ETL tools? (basic)
  • Can you explain the difference between tMap and tJoin components in Talend? (medium)
  • How do you handle errors in Talend jobs? (medium)
  • What is the purpose of a context variable in Talend? (basic)
  • Explain the difference between incremental and full loading in Talend (see the sketch after this list). (medium)
  • How do you optimize Talend jobs for better performance? (advanced)
  • What are the different deployment options available in Talend? (medium)
  • How do you schedule Talend jobs to run at specific times? (basic)
  • Can you explain the use of tFilterRow component in Talend? (basic)
  • What is metadata in Talend and how is it used? (medium)
  • How do you handle complex transformations in Talend? (advanced)
  • Explain the concept of schema in Talend. (basic)
  • How do you handle duplicate records in Talend? (medium)
  • What is the purpose of the tLogRow component in Talend? (basic)
  • How do you integrate Talend with other systems or applications? (medium)
  • Explain the use of tNormalize component in Talend. (medium)
  • How do you handle null values in Talend transformations? (basic)
  • What is the role of the tRunJob component in Talend? (medium)
  • How do you monitor and troubleshoot Talend jobs? (medium)
  • Can you explain the difference between tMap and tMapLookup components in Talend? (medium)
  • How do you handle changing business requirements in Talend jobs? (advanced)
  • What are the best practices for version controlling Talend jobs? (advanced)
  • How do you handle large volumes of data in Talend? (medium)
  • Explain the purpose of the tAggregateRow component in Talend. (basic)
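
To ground the incremental-vs-full-loading question above: a full load truncates and reloads the whole target, while an incremental load moves only rows changed since the last run, usually tracked with a high-water mark. The generic Python sketch below illustrates the pattern (Talend expresses the same idea with components and context variables); all table and column names are hypothetical, and a production incremental load would typically upsert rather than insert.

```python
# Generic illustration of full vs. incremental loading with a high-water mark.
# Table and column names are hypothetical; Talend implements the same pattern
# with components and context variables.
import sqlite3

src = sqlite3.connect("source.db")
tgt = sqlite3.connect("target.db")
for db in (src, tgt):
    db.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER, amount REAL, updated_at TEXT)")

def full_load():
    # Full load: wipe the target and copy everything from the source
    tgt.execute("DELETE FROM orders")
    rows = src.execute("SELECT id, amount, updated_at FROM orders").fetchall()
    tgt.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
    tgt.commit()

def incremental_load():
    # Incremental load: copy only rows newer than the stored high-water mark
    (last_mark,) = tgt.execute("SELECT MAX(updated_at) FROM orders").fetchone()
    rows = src.execute(
        "SELECT id, amount, updated_at FROM orders WHERE updated_at > ?",
        (last_mark or "1970-01-01",),
    ).fetchall()
    tgt.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
    tgt.commit()
```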

Closing Remark

As you explore opportunities in the Talend job market in India, remember to showcase your expertise, skills, and knowledge during the interview process. With preparation and confidence, you can excel in securing a rewarding career in this field. Good luck!
