
244 Data Transformation Jobs

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

2.0 - 4.0 years

8 - 13 Lacs

Hyderabad, Gurugram

Work from Office


Requirements:
- Expertise in analysing data and creating innovative dashboards and visualizations using Power BI.
- Experience conceptualizing, templating, and transforming traditional reports into analytical dashboards as part of a digital transformation process.
- Experience in data acquisition and in performing data transformations and aggregations using SQL and Python.
- Expertise in in-depth data analysis using Microsoft Excel and its advanced functions.
- Experience providing ad-hoc reports that answer specific questions from business leaders.
- Experience conducting and delivering experiments and proofs of concept to validate business ideas and their potential value.
- Knowledge of the Python (or R) programming language.
- Familiarity with Microsoft Azure services and tools is a plus.
- Degree in Computer Science, Mathematics, Statistics, or another related technical field, or equivalent practical experience.
- Strong communication skills.
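
The SQL/Python transformation work this posting describes usually comes down to joins and aggregations feeding a dashboard. As a minimal, hypothetical pandas sketch (file and column names are assumptions, not from the posting):

```python
import pandas as pd

# Hypothetical inputs: a sales extract and a region lookup.
sales = pd.read_csv("sales.csv", parse_dates=["order_date"])
regions = pd.read_csv("regions.csv")

# Typical transformation steps: join, derive, aggregate.
enriched = sales.merge(regions, on="region_id", how="left")
enriched["revenue"] = enriched["quantity"] * enriched["unit_price"]

monthly = (
    enriched
    .assign(month=enriched["order_date"].dt.to_period("M").astype(str))
    .groupby(["month", "region_name"], as_index=False)["revenue"]
    .sum()
)

# The aggregated table would then feed a Power BI dataset or an ad-hoc report.
monthly.to_csv("monthly_revenue_by_region.csv", index=False)
```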

Posted 1 day ago

Apply

1.0 - 5.0 years

2 - 6 Lacs

Nagercoil

Work from Office


Job Summary: We are seeking a skilled Data Migration Specialist to support critical data transition initiatives, particularly involving Salesforce and Microsoft SQL Server. This role is responsible for the end-to-end migration of data between systems, including data extraction, transformation, cleansing, loading, and validation. The ideal candidate has a strong foundation in relational databases, a deep understanding of the Salesforce data model, and proven experience handling large-volume data loads.

Required Skills and Qualifications:
- 1+ years of experience in data migration, ETL, or database development roles.
- Strong hands-on experience with Microsoft SQL Server and T-SQL (complex queries, joins, indexing, and profiling).
- Proven experience using Salesforce Data Loader for bulk data operations.
- Solid understanding of Salesforce CRM architecture, including object relationships and schema design.
- Strong background in data transformation and cleansing techniques.

Nice to Have:
- Experience with large-scale data migration projects involving CRM or ERP systems.
- Exposure to ETL tools such as Talend, Informatica, MuleSoft, or custom scripts.
- Salesforce certifications (e.g., Administrator, Data Architecture & Management Designer).
- Knowledge of Apex, Salesforce Flows, or other declarative tools.

Key Responsibilities:
- Execute end-to-end data migration activities, including data extraction, transformation, and loading (ETL).
- Develop and optimize complex SQL queries, joins, and stored procedures for data profiling, analysis, and validation.
- Use Salesforce Data Loader and/or the Data Loader CLI to manage high-volume data imports and exports.
- Work with the Salesforce data model, including standard/custom objects and relationships (Lookup, Master-Detail).
- Perform data cleansing, de-duplication, and transformation to ensure quality and consistency.
- Independently analyze, troubleshoot, and resolve data-related issues, load failures, and anomalies.
- Collaborate with cross-functional teams to gather data-mapping requirements and ensure accurate system integration.
- Ensure data integrity, adhere to compliance standards, and document migration processes and mappings.
- Follow best practices for data security, performance tuning, and migration efficiency.
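
The cleansing and de-duplication step typically happens before a Data Loader bulk upsert. A hedged pandas sketch of that pre-load pass, with hypothetical file and column names (not from the posting):

```python
import pandas as pd

# Hypothetical contact extract destined for Salesforce.
contacts = pd.read_csv("contacts_extract.csv")

# Basic cleansing: normalize emails, trim names.
contacts["Email"] = contacts["Email"].str.strip().str.lower()
contacts["LastName"] = contacts["LastName"].str.strip()

# De-duplicate on email, keeping the most recently modified row.
contacts = (
    contacts.sort_values("LastModifiedDate")
            .drop_duplicates(subset="Email", keep="last")
)

# Write a clean CSV that Salesforce Data Loader can pick up for a bulk upsert.
contacts.to_csv("contacts_clean.csv", index=False)
```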

Posted 1 day ago

Apply

0.0 - 2.0 years

10 - 14 Lacs

Bengaluru

Work from Office


Essential Duties & Responsibilities:
- Perform basic descriptive data analysis.
- Implement and assess different approaches to data transformation.
- Formulate business problems as math/statistics/machine-learning problems.
- Build math/statistics/machine-learning models in a scientific way.
- Carry out hypothesis tests in a scientific way.
- Perform model selection and model diagnostics.
- Actively participate in research meetings.
- Support the research team by creating the datasets it needs.

Qualifications:
- Good communication skills.
- Excellent analytical and data science skills with strong attention to detail.
- Strong Python programming with good knowledge of pandas/numpy/sklearn.
- Graduate-level math and statistics knowledge is preferred.
- Knowledge of machine learning; NLP knowledge is a plus.
- Ability to communicate complex information clearly.
- Ability to work independently.
- Team player who is open to new ideas.
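
Since the posting centers on pandas/numpy/sklearn and model selection, here is a minimal, self-contained sketch of the fit-and-validate loop it describes (synthetic data; no real dataset is implied):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split

# Synthetic stand-in for a research dataset.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a simple baseline model.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Model selection/diagnostics via cross-validation rather than a single split.
scores = cross_val_score(model, X_train, y_train, cv=5)
print(f"CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
print(f"Held-out accuracy: {model.score(X_test, y_test):.3f}")
```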

Posted Just now

Apply

15.0 - 20.0 years

17 - 22 Lacs

Bengaluru

Work from Office


Project Role: Business Process Architect
Project Role Description: Analyze and design new business processes to create the documentation that guides the implementation of new processes and technologies. Partner with the business to define product requirements and use cases that meet process and functional requirements. Participate in user and task analysis to represent business needs.
Must have skills: SAP CPI for Data Services
Good to have skills: NA
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As a Business Architect, you will define opportunities to create tangible business value for the client by leading current-state assessments and identifying high-level customer requirements. Your typical day will involve collaborating with various stakeholders to understand their needs, analyzing existing processes, and designing innovative solutions that align with the client's strategic goals. You will also develop comprehensive business cases that outline the steps needed to achieve the envisioned outcomes, ensuring that all proposed solutions are practical and beneficial for the organization.

Key Responsibilities:
1. Design, build, and configure IBP CI-DS and RTI applications to meet business process and application requirements.
2. Act as stream lead for individual IBP integration module processes.
3. Drive discussions with the client and conduct workshops.
4. Drive IBP project deliverables and liaise with other teams.
5. Communicate effectively with internal and external stakeholders.
6. Design business processes, including characteristics and key performance indicators (KPIs), to meet process and functional requirements.
7. Collaborate with cross-functional teams to create the process blueprint and establish business process requirements that drive out application requirements and metrics.
8. Assist in quality management reviews, ensuring all business and design requirements are met.
9. Educate stakeholders to ensure a complete understanding of the designs.

Functional Expertise:
1. Must have: proficiency in SAP CPI for Data Services.
2. Good to have: knowledge of SAP IBP functional modules.
3. Strong understanding of integration patterns and data transformation techniques.
4. Experience with process mapping and business process modeling.
5. Ability to communicate complex concepts clearly to diverse audiences.
6. Familiarity with project management methodologies and tools.

Additional Information:
1. The candidate should have a minimum of 3.5 years of experience in SAP CPI for Data Services.
2. 15 years of full-time education is required.

Posted Just now

Apply

5.0 - 10.0 years

7 - 12 Lacs

Bengaluru

Work from Office


Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must have skills: Snowflake Data Warehouse, Functional Testing
Good to have skills: NA
Minimum 7.5 year(s) of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As a Data Platform Engineer, you will assist with the data platform blueprint and design, encompassing the relevant data platform components. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models, while engaging in discussions to refine and enhance the overall data architecture. You will be involved in various stages of the data platform lifecycle, ensuring that all components work harmoniously to support the organization's data needs and objectives.

Key Responsibilities:
a. Overall 12+ years of data experience, including 5+ years on Snowflake and 3+ years on DBT (Core and Cloud).
b. Play a key role in DBT-related discussions with teams and clients to understand business problems and solution requirements.
c. As a DBT SME, liaise with clients on business/technology/data transformation programs; orchestrate the implementation of planned initiatives and the realization of business outcomes.
d. Spearhead teams to translate business goals and challenges into practical data transformation and technology roadmaps and data architecture designs.
e. Strong experience in designing, architecting, and administering Snowflake solutions and deploying data analytics solutions in Snowflake.
f. Strong inclination for practice building, including spearheading thought-leadership discussions and managing team activities.

Technical Experience:
a. Strong experience as a Snowflake-on-cloud DBT Data Architect with thorough knowledge of the different services.
b. Ability to architect solutions from on-premises to cloud and create end-to-end data pipelines using DBT.
c. Excellent process knowledge in one or more of: Finance, Healthcare, Customer Experience.
d. Experience working on client proposals (RFPs), estimation, and POCs/POVs on new Snowflake features.
e. End-to-end DBT (Core and Cloud) migration experience, including refactoring SQL for modularity, DBT modeling (.sql or .py files), and DBT job scheduling on at least 2 projects.
f. Knowledge of the Jinja template language (macros) is an added advantage.
g. Knowledge of special features such as DBT documentation, semantic layer creation, webhooks, etc.
h. DBT and cloud certification is important.
i. Develop, fine-tune, and integrate LLM models (OpenAI, Anthropic, Mistral, etc.) into enterprise workflows via Cortex AI.
j. Deploy AI agents capable of reasoning, tool use, chaining, and task orchestration for knowledge retrieval and decision support.
k. Guide the creation and management of GenAI assets such as prompts, embeddings, semantic indexes, agents, and custom bots.
l. Collaborate with data engineers, ML engineers, and the leadership team to translate business use cases into GenAI-driven solutions.
m. Provide mentorship and technical leadership to a small team of engineers working on GenAI initiatives.
n. Stay current with advancements in Snowflake, LLMs, and generative AI frameworks to continuously enhance solution capabilities.
o. Good understanding of SQL and Python, and a clear grasp of Snowflake's architectural concepts.

Professional Attributes:
a. Client management, stakeholder management, collaboration, and interpersonal and relationship-building skills.
b. Ability to create innovative solutions for key business challenges.
c. Eagerness to learn and develop continuously.
d. Structured written, verbal, and presentational communication.

Educational Qualification:
a. MBA (Technology/Data-related specializations), MCA, or an advanced degree in STEM; 15 years of full-time education.
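
The listing calls out DBT models as .sql or .py files. As a hedged illustration only (not this employer's codebase), a dbt Python model on Snowflake might look like the following; on Snowflake, dbt passes a Snowpark session and `dbt.ref()` returns a Snowpark DataFrame. Model, table, and column names are invented:

```python
# models/monthly_orders.py - a minimal, hypothetical dbt Python model.

def model(dbt, session):
    dbt.config(materialized="table")

    orders = dbt.ref("stg_orders")  # upstream staging model (hypothetical)

    # Aggregate order amounts per customer per month.
    return (
        orders.group_by("customer_id", "order_month")
              .sum("amount")
    )
```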

Posted Just now

Apply

5.0 - 10.0 years

7 - 12 Lacs

Bengaluru

Work from Office


Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must have skills: Snowflake Data Warehouse, Manual Testing
Good to have skills: NA
Minimum 7.5 year(s) of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As a Data Platform Engineer, you will assist with the data platform blueprint and design, encompassing the relevant data platform components. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models, while engaging in discussions to refine and enhance the overall data architecture. You will be involved in various stages of the data platform lifecycle, ensuring that all components work harmoniously to support the organization's data needs and objectives.

Key Responsibilities:
a. Overall 12+ years of data experience, including 5+ years on Snowflake and 3+ years on DBT (Core and Cloud).
b. Play a key role in DBT-related discussions with teams and clients to understand business problems and solution requirements.
c. As a DBT SME, liaise with clients on business/technology/data transformation programs; orchestrate the implementation of planned initiatives and the realization of business outcomes.
d. Spearhead teams to translate business goals and challenges into practical data transformation and technology roadmaps and data architecture designs.
e. Strong experience in designing, architecting, and administering Snowflake solutions and deploying data analytics solutions in Snowflake.
f. Strong inclination for practice building, including spearheading thought-leadership discussions and managing team activities.

Technical Experience:
a. Strong experience as a Snowflake-on-cloud DBT Data Architect with thorough knowledge of the different services.
b. Ability to architect solutions from on-premises to cloud and create end-to-end data pipelines using DBT.
c. Excellent process knowledge in one or more of: Finance, Healthcare, Customer Experience.
d. Experience working on client proposals (RFPs), estimation, and POCs/POVs on new Snowflake features.
e. End-to-end DBT (Core and Cloud) migration experience, including refactoring SQL for modularity, DBT modeling (.sql or .py files), and DBT job scheduling on at least 2 projects.
f. Knowledge of the Jinja template language (macros) is an added advantage.
g. Knowledge of special features such as DBT documentation, semantic layer creation, webhooks, etc.
h. DBT and cloud certification is important.
i. Develop, fine-tune, and integrate LLM models (OpenAI, Anthropic, Mistral, etc.) into enterprise workflows via Cortex AI.
j. Deploy AI agents capable of reasoning, tool use, chaining, and task orchestration for knowledge retrieval and decision support.
k. Guide the creation and management of GenAI assets such as prompts, embeddings, semantic indexes, agents, and custom bots.
l. Collaborate with data engineers, ML engineers, and the leadership team to translate business use cases into GenAI-driven solutions.
m. Provide mentorship and technical leadership to a small team of engineers working on GenAI initiatives.
n. Stay current with advancements in Snowflake, LLMs, and generative AI frameworks to continuously enhance solution capabilities.
o. Good understanding of SQL and Python, and a clear grasp of Snowflake's architectural concepts.

Professional Attributes:
a. Client management, stakeholder management, collaboration, and interpersonal and relationship-building skills.
b. Ability to create innovative solutions for key business challenges.
c. Eagerness to learn and develop continuously.
d. Structured written, verbal, and presentational communication.

Educational Qualification:
a. MBA (Technology/Data-related specializations), MCA, or an advanced degree in STEM; 15 years of full-time education.

Posted Just now

Apply

5.0 - 10.0 years

7 - 12 Lacs

Bengaluru

Work from Office


Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must have skills: Snowflake Data Warehouse
Good to have skills: Data Engineering
Minimum 5 year(s) of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As a Data Platform Engineer, you will assist with the data platform blueprint and design, encompassing the relevant data platform components. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models, while engaging in discussions to refine and enhance the overall data architecture. You will be involved in various stages of the data platform lifecycle, ensuring that all components work harmoniously to support the organization's data needs and objectives.

Key Responsibilities:
a. Overall 12+ years of data experience, including 5+ years on Snowflake and 3+ years on DBT (Core and Cloud).
b. Play a key role in DBT-related discussions with teams and clients to understand business problems and solution requirements.
c. As a DBT SME, liaise with clients on business/technology/data transformation programs; orchestrate the implementation of planned initiatives and the realization of business outcomes.
d. Spearhead teams to translate business goals and challenges into practical data transformation and technology roadmaps and data architecture designs.
e. Strong experience in designing, architecting, and administering Snowflake solutions and deploying data analytics solutions in Snowflake.
f. Strong inclination for practice building, including spearheading thought-leadership discussions and managing team activities.

Technical Experience:
a. Strong experience as a Snowflake-on-cloud DBT Data Architect with thorough knowledge of the different services.
b. Ability to architect solutions from on-premises to cloud and create end-to-end data pipelines using DBT.
c. Excellent process knowledge in one or more of: Finance, Healthcare, Customer Experience.
d. Experience working on client proposals (RFPs), estimation, and POCs/POVs on new Snowflake features.
e. End-to-end DBT (Core and Cloud) migration experience, including refactoring SQL for modularity, DBT modeling (.sql or .py files), and DBT job scheduling on at least 2 projects.
f. Knowledge of the Jinja template language (macros) is an added advantage.
g. Knowledge of special features such as DBT documentation, semantic layer creation, webhooks, etc.
h. DBT and cloud certification is important.
i. Develop, fine-tune, and integrate LLM models (OpenAI, Anthropic, Mistral, etc.) into enterprise workflows via Cortex AI.
j. Deploy AI agents capable of reasoning, tool use, chaining, and task orchestration for knowledge retrieval and decision support.
k. Guide the creation and management of GenAI assets such as prompts, embeddings, semantic indexes, agents, and custom bots.
l. Collaborate with data engineers, ML engineers, and the leadership team to translate business use cases into GenAI-driven solutions.
m. Provide mentorship and technical leadership to a small team of engineers working on GenAI initiatives.
n. Stay current with advancements in Snowflake, LLMs, and generative AI frameworks to continuously enhance solution capabilities.
o. Good understanding of SQL and Python, and a clear grasp of Snowflake's architectural concepts.

Professional Attributes:
a. Client management, stakeholder management, collaboration, and interpersonal and relationship-building skills.
b. Ability to create innovative solutions for key business challenges.
c. Eagerness to learn and develop continuously.

Educational Qualification:
a. MBA (Technology/Data-related specializations), MCA, or an advanced degree in STEM; 15 years of full-time education.

Posted Just now

Apply

8.0 - 13.0 years

8 - 12 Lacs

Bengaluru

Work from Office


Hello Talented Techie! We provide support in Project Services and Transformation, Digital Solutions and Delivery Management. We offer joint operations and digitalization services for Global Business Services and work closely alongside the entire Shared Services organization. We make optimal use of the possibilities of new technologies such as Business Process Management (BPM) and Robotics as enablers for efficient and effective processes. We are looking for a Sr. AWS Cloud Architect.

- Architect and Design: Develop scalable and efficient data solutions using AWS services such as AWS Glue, Amazon Redshift, S3, Kinesis (Apache Kafka), DynamoDB, Lambda, AWS Glue Streaming ETL, and EMR.
- Integration: Integrate real-time data from various Siemens organizations into our data lake, ensuring seamless data flow and processing.
- Data Lake Management: Design and manage a large-scale data lake using AWS services like S3, Glue, and Lake Formation.
- Data Transformation: Apply data transformations to prepare data for analysis and reporting, ensuring data quality and consistency.
- Snowflake Integration: Implement and manage data pipelines that load data into Snowflake, utilizing Iceberg tables for optimal performance and flexibility.
- Performance Optimization: Optimize data processing pipelines for performance, scalability, and cost-efficiency.
- Security and Compliance: Ensure that all solutions adhere to security best practices and compliance requirements.
- Collaboration: Work closely with cross-functional teams, including data engineers, data scientists, and application developers, to deliver end-to-end solutions.
- Monitoring and Troubleshooting: Implement monitoring solutions to ensure the reliability and performance of data pipelines; troubleshoot and resolve any issues that arise.

You'd describe yourself as having:
- Experience: 8+ years of experience in data engineering or cloud solutioning, with a focus on AWS services.
- Technical Skills: Proficiency in AWS services such as the AWS APIs, AWS Glue, Amazon Redshift, S3, Apache Kafka, and Lake Formation; experience with real-time data processing and streaming architectures.
- Big Data Querying Tools: Strong knowledge of big data querying tools (e.g., Hive, PySpark).
- Programming: Strong programming skills in languages such as Python, Java, or Scala for building and maintaining scalable systems.
- Problem-Solving: Excellent problem-solving skills and the ability to troubleshoot complex issues.
- Communication: Strong communication skills, with the ability to work effectively with both technical and non-technical stakeholders.
- Certifications: AWS certifications are a plus.

Create a better #TomorrowWithUs! This role, based in Bangalore, is an individual contributor position. You may be required to visit other locations within India and internationally. In return, you'll have the opportunity to work with teams shaping the future. At Siemens, we are a collection of over 312,000 minds building the future, one day at a time, worldwide. Find out more about Siemens careers at
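
As a rough illustration of the Glue-style lake transformation described above, here is a minimal PySpark sketch. Bucket paths and column names are hypothetical, and a real Glue job would wrap the same logic in the awsglue job boilerplate:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-to-lake").getOrCreate()

# Hypothetical raw zone: JSON events landed in S3.
raw = spark.read.json("s3://example-raw-zone/orders/")

# Typical lake transformations: type casting, derived partitions, quality filter.
curated = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("amount") > 0)
)

# Write partitioned Parquet to the curated zone for downstream Snowflake loads.
(curated.write
        .mode("overwrite")
        .partitionBy("order_date")
        .parquet("s3://example-curated-zone/orders/"))
```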

Posted Just now

Apply

15.0 - 20.0 years

17 - 22 Lacs

Bengaluru

Work from Office


Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Python (Programming Language)
Good to have skills: NA
Minimum 7.5 year(s) of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As a Python developer, you will be responsible for building and designing scalable and open Business Intelligence (BI) architecture to provide cross-enterprise visibility and agility for business innovation. You will create industry and function data models used to build reports and dashboards, ensuring seamless integration with Accenture's Data and AI framework to meet client needs.

Roles & Responsibilities:
1. Lead or drive the migration of legacy SAS data preparation jobs to a modern Python-based data engineering framework.
2. Bring deep experience in both SAS and Python, strong knowledge of data transformation workflows, and a solid understanding of database systems and ETL best practices.
3. Analyze existing SAS data preparation and data feed scripts and workflows to identify logic and dependencies.
4. Translate and re-engineer SAS jobs into scalable, efficient Python-based data pipelines.
5. Collaborate with data analysts, scientists, and engineers to validate and test converted workflows.
6. Optimize the performance of new Python workflows and ensure data quality and consistency.
7. Document migration processes, coding standards, and pipeline configurations.
8. Integrate new pipelines with Google Cloud Platform as required.
9. Provide guidance and support for testing, validation, and production deployment.

Professional & Technical Skills:
- Must have: proficiency in SAS Base & Macros.
- Strong understanding of statistical analysis and machine learning algorithms.
- Experience with data visualization tools such as Tableau or Power BI.
- Hands-on experience implementing machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering.
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization, to ensure data quality and integrity.

Additional Information:
- The candidate should have 8+ years of experience, with a minimum of 3 years in SAS or Python data engineering.
- 15 years of full-time education is required.
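
A typical SAS-to-Python conversion maps a DATA step's filter and derived columns onto pandas. A hedged sketch with invented dataset and column names (not from the posting):

```python
import pandas as pd

# SAS code being replaced (illustrative only):
#   data work.claims_prep;
#     set raw.claims;
#     where status = 'OPEN';
#     net_amount = amount - discount;
#   run;

claims = pd.read_csv("claims.csv")

claims_prep = (
    claims.loc[claims["status"] == "OPEN"]
          .assign(net_amount=lambda df: df["amount"] - df["discount"])
)

claims_prep.to_parquet("claims_prep.parquet", index=False)
```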

Posted 1 hour ago

Apply

15.0 - 20.0 years

5 - 9 Lacs

Hyderabad

Work from Office


Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Electronic Data Interchange (EDI)
Good to have skills: NA
Minimum 7.5 year(s) of experience is required.
Educational Qualification: BTech, MTech, or MCA in IT, CSE, EEE, or ECE; 15 years of full-time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with various teams to understand their needs, developing solutions that align with business objectives, and ensuring that applications are functioning optimally. You will engage in problem-solving activities, participate in team meetings, and contribute to the overall success of projects by leveraging your expertise in application development.

Roles & Responsibilities:
- Experience in integration/B2B solutioning: designing, configuring, developing, testing, and deploying with SEEBURGER BIS Suite products.
- Experience with system administration activities such as system monitoring and maintenance, Keystore Manager, user management, and BIS Landscape Manager activities.
- Strong EDI X12 functional knowledge, EDI requirements analysis, partner-liaison skills, mapping analysis, BIC MD mapping, and setup of communications such as AS2, HTTPS, and SFTP.
- Strong experience with EDI X12, flat-file structures, and OpenInvoice; EDIFACT is nice to have.
- Good experience with Seeburger Suite products such as the BIS/BIC mapper, MFT, Process Designer, BIS Landscape Manager, and the BIS front end.
- Seeburger cloud migration experience (on-premises to cloud) is nice to have.

Professional & Technical Skills:
- Must have experience with Seeburger BIS integration architecture/design, platform architecture, and administration.
- Must have experience developing BIS MD mappings/data transformations, MFT, and front-end setup.
- Must have strong EDI/EDIFACT skills.
- Good to have: logical thinking and problem-solving skills, along with an ability to collaborate.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Electronic Data Interchange (EDI).
- Strong communication and presentation skills; collaborates effectively with team members.
- This position is based at our Hyderabad office.
- A BTech, MTech, or MCA in IT, CSE, EEE, or ECE is required.
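
For context on the mapping work above: EDI X12 is a delimited format in which segments commonly end with `~` and elements are separated by `*` (the actual delimiters are declared in the ISA segment). A toy Python sketch of the structural walk a mapper performs, with illustrative content only:

```python
# Toy X12 fragment (an 850 purchase order; content is illustrative only).
raw = ("GS*PO*SENDER*RECEIVER*20240101*1200*1*X*004010~"
       "ST*850*0001~"
       "BEG*00*SA*PO12345**20240101~")

# Split into segments on '~', then into elements on '*'.
segments = [seg.split("*") for seg in raw.strip("~").split("~")]

for tag, *elements in segments:
    print(tag, elements)

# A mapping tool (e.g., BIC MD in Seeburger) applies the same structural walk,
# then maps elements like BEG03 (the PO number in an 850) onto target fields.
po_number = next(s[3] for s in segments if s[0] == "BEG")
print("PO number:", po_number)
```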

Posted 1 hour ago

Apply

3.0 - 8.0 years

5 - 9 Lacs

Bengaluru

Work from Office


Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Oracle Integration Cloud Service (ICS)
Good to have skills: NA
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As an Application Developer, you will engage in the design, construction, and configuration of applications tailored to fulfill specific business processes and application requirements. Your typical day will involve collaborating with team members to understand project needs, developing innovative solutions, and ensuring that applications function seamlessly to support organizational goals. You will also participate in testing and refining applications to enhance user experience and efficiency, while maintaining a focus on quality and performance throughout the development lifecycle.

Roles & Responsibilities:
- Perform independently and grow into an SME.
- Participate actively in team discussions.
- Contribute solutions to work-related problems.
- Assist in documenting application processes and workflows to ensure clarity and consistency.
- Engage in continuous learning to stay current with the latest technologies and best practices in application development.

Professional & Technical Skills:
- Must have: proficiency in Oracle Integration Cloud Service (ICS).
- Strong understanding of application development methodologies and frameworks.
- Experience with integration patterns and data transformation techniques.
- Familiarity with cloud-based application deployment and management.
- Ability to troubleshoot and resolve application issues effectively.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Oracle Integration Cloud Service (ICS).
- This position is based at our Bengaluru office.
- A 15 years full-time education is required.

Posted 1 hour ago

Apply

4.0 - 9.0 years

6 - 10 Lacs

Hyderabad

Work from Office


Responsibilities:

Data Quality Implementation & Monitoring (Acceldata & DemandTools):
- Design, develop, and implement data quality rules and checks using Acceldata to monitor data accuracy, completeness, consistency, and timeliness.
- Configure and use Acceldata to profile data, identify anomalies, and establish data quality thresholds.
- Investigate and resolve data quality issues identified by Acceldata, working with the relevant teams on remediation.
- Leverage DemandTools within our Salesforce environment to identify, merge, and prevent duplicate records across Leads, Contacts, and Accounts.
- Implement data standardization and cleansing processes within Salesforce using DemandTools.
- Develop and maintain data quality dashboards and reports using Acceldata to provide visibility into data health.

Data Onboarding & Integration Quality:
- Collaborate with engineers and platform teams to understand data sources and pipelines built with Fivetran or a similar ingestion tool.
- Ensure data transformations within Fivetran maintain data integrity and quality.
- Develop and execute test plans and test cases to validate the successful and accurate onboarding of data into our Snowflake environment.

Metadata Management & Data Governance:
- Work with the Atlan platform to understand and contribute to the establishment of a comprehensive data catalog.
- Assist in defining and implementing data governance policies and standards within Atlan.
- Validate the accuracy and completeness of metadata within Atlan to ensure data discoverability and understanding.
- Collaborate on data lineage tracking and impact analysis using Atlan.

Collaboration & Communication:
- Work closely with data engineers, the platform team, data analysts, business stakeholders, and Salesforce administrators.
- Clearly communicate data quality findings, risks, and remediation steps.
- Participate in data governance meetings and contribute to the development of data quality best practices.
- Document data quality rules, processes, and monitoring procedures.

Required Skills & Experience:
- Proven experience (e.g., 3+ years) as a Data Quality Engineer or in a similar role.
- Hands-on experience with Fivetran (or a comparable data ingestion application) and an understanding of its data transformation capabilities.
- Familiarity with Atlan or other modern data catalog and metadata management tools.
- Strong practical experience with Acceldata or similar data quality monitoring and observability platforms.
- Familiarity with DemandTools for data quality management within Salesforce.
- Solid understanding of data quality principles, methodologies, and best practices.
- Strong SQL skills for data querying and analysis.
- Experience with data profiling and data analysis techniques.
- Excellent analytical, problem-solving, and troubleshooting skills.
- Strong communication and collaboration skills.
- Ability to work independently and manage tasks effectively in a remote environment.

Preferred Skills & Experience:
- Experience with other data quality tools or frameworks.
- Knowledge of data warehousing concepts and technologies (e.g., Snowflake, BigQuery).
- Experience with scripting languages like Python for data manipulation and automation.
- Familiarity with the Salesforce data model and administration.
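
The posting names commercial platforms (Acceldata, DemandTools), but the underlying rule types are simple to express. A hedged pandas sketch of completeness, uniqueness, and validity checks with hypothetical column names (this is not those tools' APIs):

```python
import pandas as pd

accounts = pd.read_csv("accounts.csv")  # hypothetical extract

# Rule 1: completeness - key fields must be populated.
missing_email = accounts["email"].isna().mean()

# Rule 2: uniqueness - no duplicate account numbers.
dup_rate = accounts["account_number"].duplicated().mean()

# Rule 3: validity - status must come from an approved domain.
valid_statuses = {"active", "inactive", "pending"}
invalid_status = (~accounts["status"].isin(valid_statuses)).mean()

# Compare each rate against a threshold and report, as a DQ platform's rules would.
checks = {"missing_email": missing_email,
          "duplicate_accounts": dup_rate,
          "invalid_status": invalid_status}
for name, rate in checks.items():
    flag = "FAIL" if rate > 0.01 else "ok"
    print(f"{name}: {rate:.2%} [{flag}]")
```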

Posted 1 hour ago

Apply

6.0 - 9.0 years

18 - 22 Lacs

Bengaluru

Work from Office


Minimum of 6-9 years of development experience in Pega; Pega CSSA certification is mandatory.
- Strong hands-on implementation experience with enterprise class structure, data modelling, application structure design, specialization and extensibility, inheritance, and rule resolution concepts.
- Excellent implementation experience with integrations (SOAP, REST, File Listener, etc.) and their exception handling.
- Strong development experience with Data Pages, Reports, Activities, Data Transforms, declarative and decision rules, Functions, Function Aliases and libraries, and Correspondence features.
- Good hands-on experience with Case Management, User Interface, and authentication and authorization concepts.
- Experience with asynchronous background processing, including Agents, Job Schedulers, Queue Processors, and SLAs.
- Good knowledge of ruleset management, branches, skimming, debugging, and the deployment process.
- Hands-on experience with App Studio and knowledge of Admin Studio.
- Exposure to debugging and resolving performance issues in Pega (leveraging Admin Studio/SMA, key alerts and exceptions, PDC/AES) is an added advantage.
- Working knowledge of Customer Service, SI, SD, CLM/KYC, or healthcare solutions is an added advantage.
- Involvement in any Pega application upgrade/modernization implementation is an added advantage.
- Good knowledge of the latest Pega features in v8.x and Infinity 23.
- Strong communication and presentation skills, and familiarity with Agile methodologies and practices.

Posted 1 hour ago

Apply

7.0 - 12.0 years

4 - 8 Lacs

Bengaluru

Work from Office


About the Role: We are seeking a highly skilled Data Engineer with deep expertise in PySpark and the Cloudera Data Platform (CDP) to join our data engineering team. As a Data Engineer, you will be responsible for designing, developing, and maintaining scalable data pipelines that ensure high data quality and availability across the organization. This role requires a strong background in big data ecosystems, cloud-native tools, and advanced data processing techniques. The ideal candidate has hands-on experience with data ingestion, transformation, and optimization on the Cloudera Data Platform, along with a proven track record of implementing data engineering best practices. You will work closely with other data engineers to build solutions that drive impactful business insights.

Responsibilities:
- Data Pipeline Development: Design, develop, and maintain highly scalable and optimized ETL pipelines using PySpark on the Cloudera Data Platform, ensuring data integrity and accuracy.
- Data Ingestion: Implement and manage data ingestion processes from a variety of sources (e.g., relational databases, APIs, file systems) into the data lake or data warehouse on CDP.
- Data Transformation and Processing: Use PySpark to process, cleanse, and transform large datasets into meaningful formats that support analytical needs and business requirements.
- Performance Optimization: Tune PySpark code and Cloudera components, optimizing resource utilization and reducing the runtime of ETL processes.
- Data Quality and Validation: Implement data quality checks, monitoring, and validation routines to ensure data accuracy and reliability throughout the pipeline.
- Automation and Orchestration: Automate data workflows using tools like Apache Oozie, Airflow, or similar orchestration tools within the Cloudera ecosystem.

Education and Experience:
- Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field.
- 3+ years of experience as a Data Engineer, with a strong focus on PySpark and the Cloudera Data Platform.

Technical Skills:
- PySpark: Advanced proficiency, including working with RDDs, DataFrames, and optimization techniques.
- Cloudera Data Platform: Strong experience with CDP components, including Cloudera Manager, Hive, Impala, HDFS, and HBase.
- Data Warehousing: Knowledge of data warehousing concepts and ETL best practices, and experience with SQL-based tools (e.g., Hive, Impala).
- Big Data Technologies: Familiarity with Hadoop, Kafka, and other distributed computing tools.
- Orchestration and Scheduling: Experience with Apache Oozie, Airflow, or similar orchestration frameworks.
- Scripting and Automation: Strong scripting skills in Linux.
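
A compact PySpark sketch of the ingest-cleanse-write loop described above. Paths, table names, and columns are hypothetical; on CDP the same code would typically run under YARN and write to Hive/HDFS:

```python
from pyspark.sql import SparkSession, functions as F

spark = (SparkSession.builder
         .appName("cdp-etl")
         .enableHiveSupport()
         .getOrCreate())

# Ingest: raw CSV landed on HDFS (hypothetical path).
raw = spark.read.option("header", True).csv("/data/raw/transactions/")

# Cleanse and transform: cast types, drop bad rows, standardize a code column.
clean = (raw.withColumn("amount", F.col("amount").cast("double"))
            .filter(F.col("amount").isNotNull())
            .withColumn("country", F.upper(F.trim("country"))))

# Performance: repartition by the write key to avoid small files in the Hive write.
(clean.repartition("country")
      .write.mode("overwrite")
      .partitionBy("country")
      .saveAsTable("analytics.transactions_clean"))
```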

Posted 1 hour ago

Apply

8.0 - 13.0 years

8 - 12 Lacs

Pune

Work from Office


Must have 5+ years of experience in a data engineer role.
- Strong background in relational databases (Microsoft SQL Server) and strong ETL (Microsoft SSIS) experience.
- Strong hands-on T-SQL programming skills.
- Ability to develop reports using Microsoft Reporting Services (SSRS).
- Familiarity with C# is preferred.
- Strong analytical and logical reasoning skills.
- Able to build processes that support data transformation, workload management, data structures, dependency, and metadata.
- Able to develop data models that answer questions for business users.
- Good at performing root-cause analysis on internal/external data and processes to answer specific business data questions.
- Excellent communication skills; works with business users independently.
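
Much of this role is T-SQL executed from orchestration code. A minimal, hedged sketch using pyodbc; the connection string, schemas, and table names are placeholders, not details from the posting:

```python
import pyodbc

# Placeholder connection string; real values would come from configuration.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=warehouse;Trusted_Connection=yes;"
)

# A typical T-SQL transformation: staging table -> conformed table with derived columns.
transform_sql = """
INSERT INTO dbo.SalesClean (OrderID, OrderDate, NetAmount)
SELECT s.OrderID,
       CAST(s.OrderDate AS date),
       s.Amount - COALESCE(s.Discount, 0)
FROM staging.SalesRaw AS s
WHERE s.Amount IS NOT NULL;
"""

with conn:
    conn.execute(transform_sql)  # pyodbc commits on clean exit from the block
```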

Posted 1 hour ago

Apply

3.0 - 8.0 years

5 - 9 Lacs

Chennai

Work from Office


Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: PySpark
Good to have skills: NA
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As a Software Development Engineer, you will engage in a dynamic work environment where you will analyze, design, code, and test various components of application code across multiple clients. Your day will involve collaborating with team members to ensure the successful implementation of software solutions, while also performing maintenance and enhancements to existing applications. You will be responsible for delivering high-quality code and contributing to the overall success of your projects, ensuring that all components function seamlessly together.

Roles & Responsibilities:
- Perform independently and grow into an SME.
- Participate actively in team discussions.
- Contribute solutions to work-related problems.
- Collaborate with cross-functional teams to gather requirements and translate them into technical specifications.
- Conduct code reviews to ensure adherence to best practices and coding standards.

Professional & Technical Skills:
- Must have: proficiency in PySpark.
- Strong understanding of data processing frameworks and distributed computing.
- Experience with data transformation and ETL processes.
- Familiarity with cloud platforms and services related to data processing.
- Ability to troubleshoot and optimize performance issues in data pipelines.

Additional Information:
- The candidate should have a minimum of 3 years of experience in PySpark.
- This position is based at our Chennai office.
- A 15 years full-time education is required.
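
Troubleshooting and optimizing a PySpark pipeline, as this posting asks, usually starts with the query plan. A small, hedged illustration with invented data: broadcasting the small side of a join avoids shuffling the large side:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("plan-demo").getOrCreate()

big = spark.range(10_000_000).withColumn("key", F.col("id") % 1000)
small = (spark.range(1000)
              .withColumnRenamed("id", "key")
              .withColumn("label", F.col("key") * 2))

# Hinting a broadcast join skips the shuffle of the large side.
joined = big.join(F.broadcast(small), "key")

# explain() prints the physical plan - the first stop when a pipeline is slow.
joined.explain()
print(joined.count())
```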

Posted 1 hour ago

Apply

3.0 - 8.0 years

5 - 9 Lacs

Bengaluru

Work from Office


- Integration Development: Design and implement integration solutions using the MuleSoft Anypoint Platform for various enterprise applications, including ERP, CRM, and third-party systems.
- API Management: Develop and manage APIs using MuleSoft's API Gateway, ensuring best practices for API design, security, and monitoring.
- MuleSoft Anypoint Studio: Develop, deploy, and monitor MuleSoft applications using Anypoint Studio and the Anypoint Management Console.
- Data Transformation: Use MuleSoft's DataWeave to transform data between formats (XML, JSON, CSV, etc.) as part of integration solutions.
- Troubleshooting and Debugging: Support troubleshooting and resolution of integration issues, and ensure solutions are robust and scalable.
- Collaboration: Work closely with other developers, business analysts, and stakeholders to gather requirements and to design and implement integration solutions.
- Documentation: Create and maintain technical documentation for integration solutions, including API specifications, integration architecture, and deployment processes.
- Best Practices: Ensure that integrations follow industry best practices and MuleSoft's guidelines for designing and implementing scalable, secure solutions.

Required Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 3+ years of experience in MuleSoft development and integration projects.
- Proficiency in the MuleSoft Anypoint Platform, including Anypoint Studio, Anypoint Exchange, and the Anypoint Management Console.
- Strong knowledge of API design and management, including REST, SOAP, and web services.
- Proficiency in DataWeave for data transformation.
- Hands-on experience with integration patterns and technologies such as JMS, HTTP/HTTPS, file, database, and cloud integrations.
- Experience with CI/CD pipelines and deployment tools such as Jenkins, Git, and Maven.
- Good understanding of cloud platforms (AWS, Azure, or GCP) and how MuleSoft integrates with cloud services.
- Excellent troubleshooting and problem-solving skills.
- Strong communication skills and the ability to work effectively in a team environment.
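
DataWeave is MuleSoft's own transformation language, so a Python sketch can only approximate it. As a rough analogue of the JSON-to-CSV mapping a DataWeave script might perform (field names are invented):

```python
import csv
import io
import json

# Incoming JSON payload (illustrative).
payload = json.loads("""
[
  {"orderId": "A-1", "customer": {"name": "Acme"}, "total": 120.5},
  {"orderId": "A-2", "customer": {"name": "Globex"}, "total": 80.0}
]
""")

# The equivalent of a DataWeave map: reshape each record for the target system.
rows = [
    {"order_id": o["orderId"],
     "customer_name": o["customer"]["name"],
     "total": o["total"]}
    for o in payload
]

# Serialize to CSV, as an `output application/csv` directive would.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["order_id", "customer_name", "total"])
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```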

Posted 1 hour ago

Apply

3.0 - 8.0 years

5 - 9 Lacs

Navi Mumbai

Work from Office


Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: PySpark
Good to have skills: NA
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As a Software Development Engineer, you will engage in a dynamic work environment where you will analyze, design, code, and test various components of application code across multiple clients. Your day will involve collaborating with team members to ensure the successful implementation of software solutions, while also performing maintenance and enhancements to existing applications. You will be responsible for delivering high-quality code and contributing to the overall success of your projects, ensuring that client requirements are met effectively and efficiently.

Roles & Responsibilities:
- Perform independently and grow into an SME.
- Participate actively in team discussions.
- Contribute solutions to work-related problems.
- Collaborate with cross-functional teams to gather requirements and translate them into technical specifications.
- Conduct code reviews to ensure adherence to best practices and coding standards.

Professional & Technical Skills:
- Must have: proficiency in PySpark.
- Strong understanding of data processing frameworks and distributed computing.
- Experience with data transformation and ETL processes.
- Familiarity with cloud platforms and services related to data processing.
- Ability to troubleshoot and optimize performance issues in data pipelines.

Additional Information:
- The candidate should have a minimum of 3 years of experience in PySpark.
- This position is based at our Mumbai office.
- A 15 years full-time education is required.

Posted 1 hour ago

Apply

3.0 - 8.0 years

5 - 9 Lacs

Hyderabad

Work from Office


Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: PySpark
Good to have skills: NA
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As a Software Development Engineer, you will engage in a dynamic work environment where you will analyze, design, code, and test various components of application code across multiple clients. Your day will involve collaborating with team members to ensure the successful implementation of software solutions, while also performing maintenance and enhancements to existing applications. You will be responsible for delivering high-quality code and contributing to the overall success of your projects, ensuring that client requirements are met effectively and efficiently.

Roles & Responsibilities:
- Perform independently and grow into an SME.
- Participate actively in team discussions.
- Contribute solutions to work-related problems.
- Collaborate with cross-functional teams to gather requirements and translate them into technical specifications.
- Conduct code reviews to ensure adherence to best practices and coding standards.

Professional & Technical Skills:
- Must have: proficiency in PySpark.
- Strong understanding of data processing frameworks and distributed computing.
- Experience with data transformation and ETL processes.
- Familiarity with cloud platforms and services related to data processing.
- Ability to troubleshoot and optimize performance issues in data pipelines.

Additional Information:
- The candidate should have a minimum of 3 years of experience in PySpark.
- This position is based at our Hyderabad office.
- A 15 years full-time education is required.

Posted 1 hour ago

Apply

8.0 - 13.0 years

4 - 8 Lacs

Hyderabad

Work from Office


This role will be instrumental in building and maintaining robust, scalable, and reliable data pipelines using Confluent Kafka, ksqlDB, Kafka Connect, and Apache Flink. The ideal candidate will have a strong understanding of data streaming concepts, experience with real-time data processing, and a passion for building high-performance data solutions. This role requires excellent analytical skills, attention to detail, and the ability to work collaboratively in a fast-paced environment.

Essential Responsibilities:
- Design and develop data pipelines for real-time and batch data ingestion and processing using Confluent Kafka, ksqlDB, Kafka Connect, and Apache Flink.
- Build and configure Kafka Connectors to ingest data from various sources (databases, APIs, message queues, etc.) into Kafka.
- Develop Flink applications for complex event processing, stream enrichment, and real-time analytics.
- Develop and optimize ksqlDB queries for real-time data transformations, aggregations, and filtering.
- Implement data quality checks and monitoring to ensure data accuracy and reliability throughout the pipeline.
- Monitor and troubleshoot data pipeline performance, identify bottlenecks, and implement optimizations.
- Automate data pipeline deployment, monitoring, and maintenance tasks.
- Stay up to date with the latest advancements in data streaming technologies and best practices.
- Contribute to the development of data engineering standards and best practices within the organization.
- Participate in code reviews and contribute to a collaborative and supportive team environment.
- Work closely with other architects and tech leads in India and the US; create POCs and MVPs.
- Provide regular updates on tasks, status, and risks to the project manager.

Required:
- Bachelor's degree or higher from a reputed university.
- 8 to 10 years of total experience, the majority of it related to ETL/ELT, big data, Kafka, etc.
- Proficiency in developing Flink applications for stream processing and real-time analytics.
- Strong understanding of data streaming concepts and architectures.
- Extensive experience with Confluent Kafka, including Kafka brokers, producers, consumers, and Schema Registry.
- Hands-on experience with ksqlDB for real-time data transformations and stream processing.
- Experience with Kafka Connect and building custom connectors.
- Extensive experience implementing large-scale data ingestion and curation solutions.
- Good hands-on experience with a big data technology stack on any cloud platform.
- Excellent problem-solving, analytical, and communication skills.
- Ability to work independently and as part of a team.

Good to have:
- Experience with Google Cloud.
- Healthcare industry experience.
- Experience with Agile.
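
A minimal, hedged sketch of the Kafka side of such a pipeline using the confluent-kafka Python client; the broker address and topic are placeholders, and the posting's ksqlDB/Flink layers would consume the same topic downstream:

```python
import json
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})  # placeholder broker

def on_delivery(err, msg):
    # Delivery callback: surface broker-side failures instead of losing them.
    if err is not None:
        print(f"delivery failed: {err}")

event = {"device_id": "sensor-42", "temperature": 21.7}
producer.produce(
    "telemetry.raw",                       # placeholder topic
    value=json.dumps(event).encode(),
    key=event["device_id"].encode(),
    callback=on_delivery,
)
producer.flush()  # block until the broker acknowledges

# Downstream, a ksqlDB stream over this topic might be declared roughly as:
#   CREATE STREAM telemetry (device_id VARCHAR KEY, temperature DOUBLE)
#     WITH (KAFKA_TOPIC='telemetry.raw', VALUE_FORMAT='JSON');
```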

Posted 1 hour ago

Apply

4.0 - 9.0 years

6 - 11 Lacs

Hyderabad

Work from Office


Responsibilities:
- Design, develop, and maintain ETL processes using Talend.
- Manage and optimize data pipelines on Amazon Redshift.
- Implement data transformation workflows using DBT (Data Build Tool).
- Write efficient, reusable, and reliable code in PySpark.
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver effective solutions.
- Ensure data quality and integrity through rigorous testing and validation.
- Stay updated on the latest industry trends and technologies in data engineering.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Proven experience as a Data Engineer or in a similar role.
- High proficiency in Talend.
- Strong experience with Amazon Redshift.
- Expertise in DBT and PySpark.
- Experience with data modeling, ETL processes, and data warehousing.
- Familiarity with cloud platforms and services.
- Excellent problem-solving skills and attention to detail.
- Strong communication and teamwork abilities.

Preferred Qualifications:
- Experience with other data engineering tools and frameworks.
- Knowledge of machine learning frameworks and libraries.

Posted 1 hour ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Pune

Work from Office


Tech stack: GCP Data Fusion, BigQuery, Dataproc, SQL/T-SQL, Cloud Run, Secret Manager, Git, Ansible Tower / Ansible scripts, Jenkins, Java, Python, Terraform, Cloud Composer/Airflow.

Experience and Skills

Must Have:
- Proven (3+ years) hands-on experience designing, testing, and implementing data ingestion pipelines on GCP Data Fusion, CDAP, or similar tools, including ingestion, parsing, and wrangling of CSV-, JSON-, and XML-formatted data from RESTful and SOAP APIs, SFTP servers, etc.
- In-depth understanding of modern data contract best practices, with proven experience (3+ years) independently directing, negotiating, and documenting best-in-class data contracts.
- Java experience (2+ years) in development, testing, and deployment (ideally custom plugins for Data Fusion).
- Proficiency with Continuous Integration (CI), Continuous Delivery (CD), and continuous testing tools, ideally for cloud-based data solutions.
- Experience working in an Agile environment and toolset.
- Strong problem-solving and analytical skills.
- Enthusiastic willingness to rapidly and independently learn and develop technical and soft skills as needs require.
- Strong organisational and multitasking skills.
- Good team player who embraces teamwork and mutual support.

Nice to Have:
- Hands-on experience with Cloud Composer/Airflow, Cloud Run, and Pub/Sub.
- Hands-on development in Python and Terraform.
- Strong SQL skills for data transformation, querying, and optimization in BigQuery, with a focus on cost- and time-effective SQL coding and concurrency/data integrity (ideally in the BigQuery dialect).
- Data transformation/ETL/ELT pipeline development, testing, and implementation, ideally in BigQuery.
- Experience working in a DataOps model.
- Experience with Data Vault modelling and usage.
- Proficiency with Git for version control and collaboration.
- Proficiency with designing, creating, and maintaining CI/CD processes/pipelines in DevOps tools like Ansible/Jenkins for cloud-based applications (ideally GCP).
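
For the BigQuery SQL-transformation part of this stack, a small, hedged sketch with the google-cloud-bigquery client; project, dataset, and table names are placeholders:

```python
from google.cloud import bigquery

client = bigquery.Client(project="example-project")  # placeholder project

# Cost-conscious pattern: transform and materialize in SQL so no data moves
# to the client; only the job runs in the warehouse.
sql = """
CREATE OR REPLACE TABLE analytics.daily_totals AS
SELECT DATE(event_ts) AS day, SUM(amount) AS total
FROM `example-project.raw.events`
GROUP BY day
"""

job = client.query(sql)   # starts the query job
job.result()              # wait for completion
print(f"Processed {job.total_bytes_processed} bytes")
```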

Posted 1 hour ago

Apply

8.0 - 13.0 years

4 - 8 Lacs

Hyderabad

Work from Office


Combine interface design concepts with digital design and establish milestones to encourage cooperation and teamwork. Develop overall concepts for improving the user experience within a business webpage or product, ensuring all interactions are intuitive and convenient for customers. Collaborate with back-end web developers and programmers to improve usability. Conduct thorough testing of user interfaces on multiple platforms to ensure all designs render correctly and systems function properly.

This role also covers converting jobs from Talend ETL to Python and converting lead SQLs to Snowflake. Developers should be proficient in Python (especially pandas, PySpark, or Dask) for ETL scripting, with strong SQL skills to translate complex queries. They need expertise in Snowflake SQL for migrating and optimizing queries, as well as experience with data pipeline orchestration (e.g., Airflow) and cloud integration for automation and data loading. Familiarity with data transformation, error handling, and logging is also essential.
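
For the Talend-to-Python, SQL-to-Snowflake track, a hedged sketch of loading a transformed pandas frame into Snowflake with the official connector; all credentials, names, and the table are placeholders:

```python
import pandas as pd
import snowflake.connector
from snowflake.connector.pandas_tools import write_pandas

# Transformed output of a converted Talend job (illustrative).
df = pd.DataFrame({"LEAD_ID": [1, 2], "SCORE": [0.8, 0.3]})

conn = snowflake.connector.connect(
    account="example_account",   # placeholders; real values come from secrets/config
    user="etl_user",
    password="...",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)

# write_pandas bulk-loads via an internal stage, far faster than row-by-row INSERTs.
success, nchunks, nrows, _ = write_pandas(conn, df, "LEADS_SCORED", auto_create_table=True)
print(f"loaded {nrows} rows in {nchunks} chunk(s): {success}")
conn.close()
```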

Posted 1 hour ago

Apply

6.0 - 11.0 years

12 - 16 Lacs

Bengaluru

Work from Office


Develop, test, and support future-ready data solutions for customers across industry verticals. Develop, test, and support end-to-end batch and near real-time data flows/pipelines. Demonstrate understanding of data architectures, modern data platforms, big data, analytics, cloud platforms, data governance, information management, and the associated technologies. Communicate risks and ensure they are understood.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Graduate with a minimum of 6+ years of related experience.
- Experience in modelling and business system design.
- Good hands-on experience with DataStage and cloud-based ETL services.
- Strong expertise in writing T-SQL code.
- Well-versed in data warehouse schemas and OLAP techniques.

Preferred technical and professional experience:
- Ability to manage and make decisions about competing priorities and resources, and to delegate where appropriate.
- Must be a strong team player/leader.
- Ability to lead data transformation projects with multiple junior data engineers.
- Strong oral, written, and interpersonal skills for interacting at all levels of the organization.
- Ability to communicate complex business problems and technical solutions.

Posted 1 hour ago

Apply

5.0 - 10.0 years

12 - 16 Lacs

Bengaluru

Work from Office


Develop, test, and support future-ready data solutions for customers across industry verticals. Develop, test, and support end-to-end batch and near real-time data flows/pipelines. Demonstrate understanding of data architectures, modern data platforms, big data, analytics, cloud platforms, data governance, information management, and the associated technologies. Communicate risks and ensure they are understood.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Graduate with a minimum of 5+ years of related experience.
- Experience in modelling and business system design.
- Good hands-on experience with DataStage and cloud-based ETL services.
- Strong expertise in writing T-SQL code.
- Well-versed in data warehouse schemas and OLAP techniques.
- Ability to manage and make decisions about competing priorities and resources, and to delegate where appropriate.
- Must be a strong team player/leader.

Preferred technical and professional experience:
- Ability to lead data transformation projects with multiple junior data engineers.
- Strong oral, written, and interpersonal skills for interacting at all levels of the organization.
- Ability to communicate complex business problems and technical solutions.

Posted 1 hour ago

Apply

Exploring Data Transformation Jobs in India

India has seen a significant rise in the demand for data transformation professionals in recent years. With the increasing importance of data in business decision-making, companies across various industries are actively seeking skilled individuals who can transform raw data into valuable insights. If you are considering a career in data transformation in India, here is a comprehensive guide to help you navigate the job market.

Top Hiring Locations in India

  1. Bangalore
  2. Mumbai
  3. Delhi NCR
  4. Hyderabad
  5. Pune

These cities are known for their thriving tech industries and have a high demand for data transformation professionals.

Average Salary Range

The average salary range for data transformation professionals in India varies based on experience levels. Entry-level positions typically start at INR 4-6 lakhs per annum, while experienced professionals can earn upwards of INR 12-15 lakhs per annum.

Career Path

A typical career path in data transformation may include roles such as Data Analyst, Data Engineer, Data Scientist, and Data Architect. As professionals gain experience and expertise, they may progress to roles like Senior Data Scientist, Lead Data Engineer, and Chief Data Officer.

Related Skills

In addition to data transformation skills, professionals in this field are often expected to have knowledge of programming languages (such as Python, R, or SQL), data visualization tools (like Tableau or Power BI), statistical analysis, and machine learning techniques.

Interview Questions

  • What is data transformation and why is it important? (basic)
  • How do you handle missing data during the transformation process? (basic)
  • Can you explain the difference between ETL and ELT? (medium; see the sketch after this list)
  • How do you ensure the quality and accuracy of transformed data? (medium)
  • Describe a data transformation project you worked on and the challenges you faced. (medium)
  • What are the benefits of using data transformation tools like Apache Spark or Talend? (advanced)
  • How would you optimize a data transformation process for large datasets? (advanced)
  • Explain the concept of data lineage and its significance in data transformation. (advanced)
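
For the ETL-versus-ELT question above, the key difference is where the transformation runs. A schematic Python sketch, purely illustrative, with invented table names and SQLite standing in for a real warehouse:

```python
import pandas as pd
from sqlalchemy import create_engine, text

engine = create_engine("sqlite:///warehouse.db")  # stand-in for a real warehouse

raw = pd.DataFrame({"amount": [100, -5, 40], "country": [" in", "IN", "us "]})

# ETL: transform in application code BEFORE loading into the warehouse.
clean = (raw.assign(country=raw["country"].str.strip().str.upper())
            .query("amount > 0"))
clean.to_sql("sales_clean_etl", engine, if_exists="replace", index=False)

# ELT: load the raw data first, then transform INSIDE the warehouse with SQL.
raw.to_sql("sales_raw", engine, if_exists="replace", index=False)
with engine.begin() as conn:
    conn.execute(text("""
        CREATE TABLE IF NOT EXISTS sales_clean_elt AS
        SELECT TRIM(UPPER(country)) AS country, amount
        FROM sales_raw
        WHERE amount > 0
    """))
```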

Closing Remark

As the demand for data transformation professionals continues to rise in India, now is a great time to explore opportunities in this field. By honing your skills, gaining relevant experience, and preparing for interviews, you can position yourself for a successful career in data transformation. Good luck!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
