
1502 Talend Jobs - Page 6

Set up a Job Alert
JobPe aggregates job listings for easy access, but you apply directly on the original job portal.

6.0 - 11.0 years

20 - 35 Lacs

Hyderabad

Work from Office

Role: Data Analyst
Experience: 6-11 years
Location: Hyderabad
Primary Skills: ETL, Informatica, Python, SQL, BI tools, and the Investment domain
Please share your resume with rajamahender.n@technogenindia.com.
Job Description - The Minimum Qualifications
Education: Bachelor's or Master's degree in Data Science, Statistics, Mathematics, Computer Science, Actuarial Science, or a related field.
Experience: 7-9 years of experience as a Data Analyst, with at least 5 years supporting Finance within the insurance industry. Hands-on experience with Vertica/Teradata for querying, performance optimization, and large-scale data analysis. Advanced SQL skills; proficiency in Python is a strong plus. Proven ability to write detailed source-to-target mapping documents and collaborate with technical teams on data integration. Experience working in hybrid onshore-offshore team environments. Deep understanding of data modelling concepts and experience working with relational and dimensional models. Strong communication skills with the ability to clearly explain technical concepts to non-technical audiences. A strong understanding of statistical concepts, probability, accounting standards, financial statements (balance sheet, income statement, cash flow statement), and financial ratios. Strong understanding of life insurance products and business processes across the policy lifecycle.
Investment Principles: Knowledge of different asset classes, investment strategies, and financial markets.
Quantitative Finance: Understanding of financial modelling, risk management, and derivatives.
Regulatory Framework: Awareness of relevant financial regulations and compliance requirements.
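As context for the quantitative side of this posting (Python, SQL, financial statements, and financial ratios), below is a minimal, hypothetical Python sketch of standard ratio calculations. The line-item names and figures are illustrative assumptions, not part of the job description.

# Illustrative only: a small sketch of the kind of financial-ratio calculations
# the posting alludes to, using simplified balance-sheet and income-statement items.
def financial_ratios(balance_sheet: dict, income_statement: dict) -> dict:
    """Compute a few standard ratios from simplified statement line items."""
    current_ratio = balance_sheet["current_assets"] / balance_sheet["current_liabilities"]
    debt_to_equity = balance_sheet["total_liabilities"] / balance_sheet["shareholders_equity"]
    net_margin = income_statement["net_income"] / income_statement["revenue"]
    return {
        "current_ratio": round(current_ratio, 2),
        "debt_to_equity": round(debt_to_equity, 2),
        "net_profit_margin": round(net_margin, 4),
    }

if __name__ == "__main__":
    # Hypothetical figures for demonstration only.
    bs = {"current_assets": 1_200_000, "current_liabilities": 800_000,
          "total_liabilities": 2_500_000, "shareholders_equity": 1_900_000}
    inc = {"net_income": 310_000, "revenue": 4_100_000}
    print(financial_ratios(bs, inc))
    # e.g. {'current_ratio': 1.5, 'debt_to_equity': 1.32, 'net_profit_margin': 0.0756}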

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About Evernorth Evernorth℠ exists to elevate health for all, because we believe health is the starting point for human potential and progress. As champions for affordable, predictable and simple health care, we solve the problems others don’t, won’t or can’t. Our innovation hub in India will allow us to work with the right talent, expand our global footprint, improve our competitive stance, and better deliver on our promises to stakeholders. We are passionate about making healthcare better by delivering world-class solutions that make a real difference. We are always looking upward. And that starts with finding the right talent to help us get there. Position Overview Excited to grow your career? This position’s primary responsibility will be to translate software requirements into functions using Mainframe , ETL , Data Engineering with expertise in Databricks and Database technologies. This position offers the opportunity to work on modernizing legacy systems, contribute to cloud infrastructure automation, and support production systems in a fast-paced, agile environment. You will work across multiple teams and technologies to ensure reliable, high-performance data solutions that align with business goals. As a Mainframe & ETL Engineer, you will be responsible for the end-to-end development and support of data processing solutions using tools such as Talend, Ab Initio, AWS Glue, and PySpark, with significant work on Databricks and modern cloud data platforms. You will support infrastructure provisioning using Terraform, assist in modernizing legacy systems including mainframe migration, and contribute to performance tuning of complex SQL queries across multiple database platforms including Teradata, Oracle, Postgres, and DB2. You will also be involved in CI/CD practices Responsibilities Support, maintain and participate in the development of software utilizing technologies such as COBOL, DB2, CICS and JCL. Support, maintain and participate in the ETL development of software utilizing technologies such as Talend, Ab-Initio, Python, PySpark using Databricks. Work with Databricks to design and manage scalable data processing solutions. Implement and support data integration workflows across cloud (AWS) and on-premises environments. Support cloud infrastructure deployment and management using Terraform. Participate in the modernization of legacy systems, including mainframe migration. Perform complex SQL queries and performance tuning on large datasets. Contribute to CI/CD pipelines, version control, and infrastructure automation. Provide expertise, tools, and assistance to operations, development, and support teams for critical production issues and maintenance Troubleshoot production issues, diagnose the problem, and implement a solution - First line of defense in finding the root cause Work cross-functionally with the support team, development team and business team to efficiently address customer issues. Active member of high-performance software development and support team in an agile environment Engaged in fostering and improving organizational culture. Qualifications Required Skills: Strong analytical and technical skills. Proficiency in Databricks – including notebook development, Delta Lake, and Spark-based process. Experience with mainframe modernization or migrating legacy systems to modern data platforms. Strong programming skills, particularly in PySpark for data processing. Familiarity with data warehousing concepts and cloud-native architecture. 
Solid understanding of Terraform for managing infrastructure as code on AWS. Familiarity with CI/CD practices and tools (e.g., Git, Jenkins). Strong SQL knowledge on OLAP DB platforms (Teradata, Snowflake) and OLTP DB platforms (Oracle, DB2, Postgres, SingleStore). Strong experience with Teradata SQL and utilities. Strong experience with Oracle, Postgres, and DB2 SQL and utilities. Ability to develop high-quality database solutions. Ability to perform extensive analysis on complex SQL processes, with strong design skills. Ability to analyze existing SQL queries for performance improvements. Experience in software development phases including design, configuration, testing, debugging, implementation, and support of large-scale, business-centric and process-based applications. Proven experience working with diverse teams of technical architects, business users, and IT areas on all phases of the software development life cycle. Exceptional analytical and problem-solving skills. Structured, methodical approach to systems development and troubleshooting. Ability to ramp up fast on a system architecture. Experience in designing and developing process-based solutions or BPM (business process management). Strong written and verbal communication skills with the ability to interact with all levels of the organization. Strong interpersonal/relationship management skills. Strong time and project management skills. Familiarity with agile methodology including SCRUM team leadership. Familiarity with modern delivery practices such as continuous integration, behavior/test-driven development, and specification by example. Desire to work in the application support space. Passion for learning and desire to explore all areas of IT.
Required Experience & Education: Minimum of 8-12 years of experience in an application development role. Bachelor's degree or equivalent in Information Technology, Business Information Systems, Technology Management, or a related field of study.
Location & Hours of Work: Hyderabad and Hybrid (1:00 PM IST to 10:00 PM IST).
Equal Opportunity Statement: Evernorth is an Equal Opportunity Employer actively encouraging and supporting organization-wide involvement of staff in diversity, equity, and inclusion efforts to educate, inform and advance both internal practices and external work with diverse client populations.
About Evernorth Health Services: Evernorth Health Services, a division of The Cigna Group, creates pharmacy, care and benefit solutions to improve health and increase vitality. We relentlessly innovate to make the prediction, prevention and treatment of illness and disease more accessible to millions of people. Join us in driving growth and improving lives.
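Because this role centers on PySpark and Databricks with Delta Lake, a minimal sketch of a typical pipeline step follows; the paths, table, and column names are hypothetical and not taken from the posting.

# Hypothetical PySpark sketch: read raw records, apply a simple transformation,
# and write a partitioned Delta table (as on Databricks).
from pyspark.sql import SparkSession, functions as F

spark = (SparkSession.builder
         .appName("claims_etl_sketch")
         .getOrCreate())

# Read a raw dataset; on Databricks this could be a mounted path or a catalog table.
raw = spark.read.format("parquet").load("/mnt/raw/claims/")  # assumed path

# Basic wrangling: standardize column names, drop bad rows, add a load timestamp.
# The columns clm_amt and claim_date are assumed to exist in the raw data.
cleaned = (raw
           .withColumnRenamed("clm_amt", "claim_amount")
           .filter(F.col("claim_amount").isNotNull())
           .withColumn("load_ts", F.current_timestamp()))

# Write as a Delta table partitioned by claim date for efficient downstream queries.
(cleaned.write
 .format("delta")
 .mode("overwrite")
 .partitionBy("claim_date")
 .save("/mnt/curated/claims/"))  # assumed target path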

Posted 1 week ago

Apply

7.0 - 10.0 years

36 - 48 Lacs

Hyderabad

Work from Office

Must have: Talend expertise, including proficiency with Talend Studio (Open Studio, Cloud, Big Data, MDM, DI, DQ). Good to have: Snowflake.

Posted 1 week ago

Apply

0 years

0 Lacs

New Delhi, Delhi, India

On-site

The purpose of this role is to oversee the development of our database marketing solutions, using database technologies such as Microsoft SQL Server/Azure, Amazon Redshift, and Google BigQuery. The role will be involved in design, development, troubleshooting, and issue resolution. It involves upgrading, enhancing, and optimizing the technical solution, and continuous integration and continuous deployment of changes to the business logic implementation. It requires interacting with internal stakeholders and/or clients to explain technology solutions, and a clear understanding of the client's business requirements in order to guide the optimal design/solution to meet their needs. The ability to communicate to both technical and non-technical audiences is key.
Job Description:
Must Have Skills: Database (SQL Server / Snowflake / Teradata / Redshift / Vertica / Oracle / BigQuery / Azure DW, etc.); ETL (Extract, Transform, Load) tools (Talend, Informatica, SSIS, DataStage, Matillion); Python; UNIX shell scripting; project and resource management; workflow orchestration (Tivoli, Tidal, Stonebranch); client-facing skills.
Good to have Skills: Experience in cloud computing (one or more of AWS, Azure, GCP); AWS preferred.
Key responsibilities: Understanding and practical knowledge of data warehouses, data marts, data modelling, data structures, databases, and data ingestion and transformation. Strong understanding of ETL processes as well as database skills and common IT offerings, i.e. storage, backups, and operating systems. Strong understanding of SQL and database programming languages. Strong knowledge of development methodologies and tools. Contributes to design and oversees code reviews for compliance with development standards. Designs and implements the technical vision for existing clients. Able to convert documented requirements into technical solutions and implement them within the given timeline with quality. Able to quickly identify solutions for production failures and fix them. Documents project architecture, explains detailed design to the team, and creates low-level to high-level designs. Performs mid- to complex-level tasks independently. Supports clients, Data Scientists, and Analytical Consultants working on marketing solutions. Works with cross-functional internal teams and external clients. Strong project management and organization skills. Ability to lead/work on 1-2 projects with a team size of 2-3 members. Code management systems, including code review and deployments.
Location: DGS India - Pune - Baner M- Agile Brand: Merkle Time Type: Full time Contract Type: Permanent
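The posting emphasizes ETL development with SQL, Python, and UNIX scripting; the sketch below shows a watermark-based incremental extract-and-load, a common pattern in database-marketing pipelines. Table names, columns, and the sqlite3 stand-in connections are assumptions, not details from the posting.

# Hypothetical sketch of a watermark-based incremental extract/load step.
# sqlite3 stands in for the source and target warehouse connections so the
# example is self-contained; the tables are assumed to already exist.
import sqlite3

def incremental_load(src: sqlite3.Connection, tgt: sqlite3.Connection) -> int:
    # Read the last successfully loaded watermark from a control table.
    watermark = tgt.execute(
        "SELECT COALESCE(MAX(loaded_until), '1970-01-01') FROM etl_watermark "
        "WHERE table_name = 'customer_events'"
    ).fetchone()[0]

    # Pull only records newer than the watermark from the source.
    rows = src.execute(
        "SELECT event_id, customer_id, event_type, event_ts "
        "FROM customer_events WHERE event_ts > ?", (watermark,)
    ).fetchall()

    if rows:
        # Load into the target and advance the watermark to the newest event seen.
        tgt.executemany(
            "INSERT INTO customer_events (event_id, customer_id, event_type, event_ts) "
            "VALUES (?, ?, ?, ?)", rows
        )
        new_watermark = max(r[3] for r in rows)
        tgt.execute(
            "INSERT INTO etl_watermark (table_name, loaded_until) VALUES (?, ?)",
            ("customer_events", new_watermark),
        )
    tgt.commit()
    return len(rows)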

Posted 1 week ago

Apply

7.0 years

8 - 9 Lacs

Thiruvananthapuram

On-site

7 - 9 Years 4 Openings Trivandrum Role description Role Proficiency: This role requires proficiency in developing data pipelines including coding and testing for ingesting wrangling transforming and joining data from various sources. The ideal candidate should be adept in ETL tools like Informatica Glue Databricks and DataProc with strong coding skills in Python PySpark and SQL. This position demands independence and proficiency across various data domains. Expertise in data warehousing solutions such as Snowflake BigQuery Lakehouse and Delta Lake is essential including the ability to calculate processing costs and address performance issues. A solid understanding of DevOps and infrastructure needs is also required. Outcomes: Act creatively to develop pipelines/applications by selecting appropriate technical options optimizing application development maintenance and performance through design patterns and reusing proven solutions. Support the Project Manager in day-to-day project execution and account for the developmental activities of others. Interpret requirements create optimal architecture and design solutions in accordance with specifications. Document and communicate milestones/stages for end-to-end delivery. Code using best standards debug and test solutions to ensure best-in-class quality. Tune performance of code and align it with the appropriate infrastructure understanding cost implications of licenses and infrastructure. Create data schemas and models effectively. Develop and manage data storage solutions including relational databases NoSQL databases Delta Lakes and data lakes. Validate results with user representatives integrating the overall solution. Influence and enhance customer satisfaction and employee engagement within project teams. Measures of Outcomes: TeamOne's Adherence to engineering processes and standards TeamOne's Adherence to schedule / timelines TeamOne's Adhere to SLAs where applicable TeamOne's # of defects post delivery TeamOne's # of non-compliance issues TeamOne's Reduction of reoccurrence of known defects TeamOne's Quickly turnaround production bugs Completion of applicable technical/domain certifications Completion of all mandatory training requirementst Efficiency improvements in data pipelines (e.g. reduced resource consumption faster run times). TeamOne's Average time to detect respond to and resolve pipeline failures or data issues. TeamOne's Number of data security incidents or compliance breaches. Outputs Expected: Code: Develop data processing code with guidance ensuring performance and scalability requirements are met. Define coding standards templates and checklists. Review code for team and peers. Documentation: Create/review templates checklists guidelines and standards for design/process/development. Create/review deliverable documents including design documents architecture documents infra costing business requirements source-target mappings test cases and results. Configure: Define and govern the configuration management plan. Ensure compliance from the team. Test: Review/create unit test cases scenarios and execution. Review test plans and strategies created by the testing team. Provide clarifications to the testing team. Domain Relevance: Advise data engineers on the design and development of features and components leveraging a deeper understanding of business needs. Learn more about the customer domain and identify opportunities to add value. Complete relevant domain certifications. 
Manage Project: Support the Project Manager with project inputs. Provide inputs on project plans or sprints as needed. Manage the delivery of modules. Manage Defects: Perform defect root cause analysis (RCA) and mitigation. Identify defect trends and implement proactive measures to improve quality. Estimate: Create and provide input for effort and size estimation and plan resources for projects. Manage Knowledge: Consume and contribute to project-related documents SharePoint libraries and client universities. Review reusable documents created by the team. Release: Execute and monitor the release process. Design: Contribute to the creation of design (HLD LLD SAD)/architecture for applications business components and data models. Interface with Customer: Clarify requirements and provide guidance to the Development Team. Present design options to customers. Conduct product demos. Collaborate closely with customer architects to finalize designs. Manage Team: Set FAST goals and provide feedback. Understand team members' aspirations and provide guidance and opportunities. Ensure team members are upskilled. Engage the team in projects. Proactively identify attrition risks and collaborate with BSE on retention measures. Certifications: Obtain relevant domain and technology certifications. Skill Examples: Proficiency in SQL Python or other programming languages used for data manipulation. Experience with ETL tools such as Apache Airflow Talend Informatica AWS Glue Dataproc and Azure ADF. Hands-on experience with cloud platforms like AWS Azure or Google Cloud particularly with data-related services (e.g. AWS Glue BigQuery). Conduct tests on data pipelines and evaluate results against data quality and performance specifications. Experience in performance tuning. Experience in data warehouse design and cost improvements. Apply and optimize data models for efficient storage retrieval and processing of large datasets. Communicate and explain design/development aspects to customers. Estimate time and resource requirements for developing/debugging features/components. Participate in RFP responses and solutioning. Mentor team members and guide them in relevant upskilling and certification. Knowledge Examples: Knowledge Examples Knowledge of various ETL services used by cloud providers including Apache PySpark AWS Glue GCP DataProc/Dataflow Azure ADF and ADLF. Proficient in SQL for analytics and windowing functions. Understanding of data schemas and models. Familiarity with domain-related data. Knowledge of data warehouse optimization techniques. Understanding of data security concepts. Awareness of patterns frameworks and automation practices. Additional Comments: We are seeking a highly experienced Senior Data Engineer to design, develop, and optimize scalable data pipelines in a cloud-based environment. The ideal candidate will have deep expertise in PySpark, SQL, Azure Databricks, and experience with either AWS or GCP. A strong foundation in data warehousing, ELT/ETL processes, and dimensional modeling (Kimball/star schema) is essential for this role. Must-Have Skills 8+ years of hands-on experience in data engineering or big data development. Strong proficiency in PySpark and SQL for data transformation and pipeline development. Experience working in Azure Databricks or equivalent Spark-based cloud platforms. Practical knowledge of cloud data environments – Azure, AWS, or GCP. Solid understanding of data warehousing concepts, including Kimball methodology and star/snowflake schema design. 
Proven experience designing and maintaining ETL/ELT pipelines in production. Familiarity with version control (e.g., Git), CI/CD practices, and data pipeline orchestration tools (e.g., Airflow, Azure Data Factory Skills Azure Data Factory,Azure Databricks,Pyspark,Sql About UST UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world’s best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients’ organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.
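As a companion to the Kimball/star-schema requirement above, here is a brief, hypothetical PySpark sketch that loads a fact table by resolving surrogate keys against a conformed dimension; all table and column names are illustrative, not taken from the posting.

# Hypothetical PySpark sketch: build a star-schema fact table by joining
# staged transactions to a customer dimension to resolve surrogate keys.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("star_schema_sketch").getOrCreate()

stg_sales = spark.read.table("stg.sales")            # staged source data (assumed)
dim_customer = spark.read.table("dw.dim_customer")   # conformed dimension (assumed)

fact_sales = (
    stg_sales.alias("s")
    .join(dim_customer.alias("c"),
          F.col("s.customer_code") == F.col("c.customer_code"),
          "left")
    .select(
        F.col("c.customer_sk").alias("customer_sk"),  # surrogate key from the dimension
        F.to_date("s.sale_ts").alias("date_key"),
        F.col("s.quantity"),
        F.col("s.amount"),
    )
    # Unmatched business keys fall back to a default "unknown member" row (-1).
    .fillna({"customer_sk": -1})
)

fact_sales.write.mode("append").format("delta").saveAsTable("dw.fact_sales")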

Posted 1 week ago

Apply

5.0 - 7.0 years

0 Lacs

Noida

On-site

5 - 7 Years 2 Openings Noida Role description Role Proficiency: This role requires proficiency in data pipeline development including coding and testing data pipelines for ingesting wrangling transforming and joining data from various sources. Must be skilled in ETL tools such as Informatica Glue Databricks and DataProc with coding expertise in Python PySpark and SQL. Works independently and has a deep understanding of data warehousing solutions including Snowflake BigQuery Lakehouse and Delta Lake. Capable of calculating costs and understanding performance issues related to data solutions. Outcomes: Act creatively to develop pipelines and applications by selecting appropriate technical options optimizing application development maintenance and performance using design patterns and reusing proven solutions.rnInterpret requirements to create optimal architecture and design developing solutions in accordance with specifications. Document and communicate milestones/stages for end-to-end delivery. Code adhering to best coding standards debug and test solutions to deliver best-in-class quality. Perform performance tuning of code and align it with the appropriate infrastructure to optimize efficiency. Validate results with user representatives integrating the overall solution seamlessly. Develop and manage data storage solutions including relational databases NoSQL databases and data lakes. Stay updated on the latest trends and best practices in data engineering cloud technologies and big data tools. Influence and improve customer satisfaction through effective data solutions. Measures of Outcomes: Adherence to engineering processes and standards Adherence to schedule / timelines Adhere to SLAs where applicable # of defects post delivery # of non-compliance issues Reduction of reoccurrence of known defects Quickly turnaround production bugs Completion of applicable technical/domain certifications Completion of all mandatory training requirements Efficiency improvements in data pipelines (e.g. reduced resource consumption faster run times). Average time to detect respond to and resolve pipeline failures or data issues. Number of data security incidents or compliance breaches. Outputs Expected: Code Development: Develop data processing code independently ensuring it meets performance and scalability requirements. Define coding standards templates and checklists. Review code for team members and peers. Documentation: Create and review templates checklists guidelines and standards for design processes and development. Create and review deliverable documents including design documents architecture documents infrastructure costing business requirements source-target mappings test cases and results. Configuration: Define and govern the configuration management plan. Ensure compliance within the team. Testing: Review and create unit test cases scenarios and execution plans. Review the test plan and test strategy developed by the testing team. Provide clarifications and support to the testing team as needed. Domain Relevance: Advise data engineers on the design and development of features and components demonstrating a deeper understanding of business needs. Learn about customer domains to identify opportunities for value addition. Complete relevant domain certifications to enhance expertise. Project Management: Manage the delivery of modules effectively. Defect Management: Perform root cause analysis (RCA) and mitigation of defects. Identify defect trends and take proactive measures to improve quality. 
Estimation: Create and provide input for effort and size estimation for projects. Knowledge Management: Consume and contribute to project-related documents SharePoint libraries and client universities. Review reusable documents created by the team. Release Management: Execute and monitor the release process to ensure smooth transitions. Design Contribution: Contribute to the creation of high-level design (HLD) low-level design (LLD) and system architecture for applications business components and data models. Customer Interface: Clarify requirements and provide guidance to the development team. Present design options to customers and conduct product demonstrations. Team Management: Set FAST goals and provide constructive feedback. Understand team members' aspirations and provide guidance and opportunities for growth. Ensure team engagement in projects and initiatives. Certifications: Obtain relevant domain and technology certifications to stay competitive and informed. Skill Examples: Proficiency in SQL Python or other programming languages used for data manipulation. Experience with ETL tools such as Apache Airflow Talend Informatica AWS Glue Dataproc and Azure ADF. Hands-on experience with cloud platforms like AWS Azure or Google Cloud particularly with data-related services (e.g. AWS Glue BigQuery). Conduct tests on data pipelines and evaluate results against data quality and performance specifications. Experience in performance tuning of data processes. Expertise in designing and optimizing data warehouses for cost efficiency. Ability to apply and optimize data models for efficient storage retrieval and processing of large datasets. Capacity to clearly explain and communicate design and development aspects to customers. Ability to estimate time and resource requirements for developing and debugging features or components. Knowledge Examples: Knowledge Examples Knowledge of various ETL services offered by cloud providers including Apache PySpark AWS Glue GCP DataProc/DataFlow Azure ADF and ADLF. Proficiency in SQL for analytics including windowing functions. Understanding of data schemas and models relevant to various business contexts. Familiarity with domain-related data and its implications. Expertise in data warehousing optimization techniques. Knowledge of data security concepts and best practices. Familiarity with design patterns and frameworks in data engineering. Additional Comments: Skills Cloud Platforms ( AWS, MS Azure, GC etc.) Containerization and Orchestration ( Docker, Kubernetes etc..) APIs - Change APIs to APIs development Data Pipeline construction using languages like Python, PySpark, and SQL Data Streaming (Kafka and Azure Event Hub etc..) Data Parsing ( Akka and MinIO etc..) Database Management ( SQL and NoSQL, including Clickhouse, PostgreSQL etc..) Agile Methodology ( Git, Jenkins, or Azure DevOps etc..) JS like Connectors/ framework for frontend/backend Collaboration and Communication Skills Aws Cloud,Azure Cloud,Docker,Kubernetes About UST UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world’s best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients’ organizations. 
With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.
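The skills list above mentions data streaming with Kafka and database management with PostgreSQL; the following is a minimal, hedged Python sketch of a consumer that lands events in a table using the confluent_kafka and psycopg2 libraries. The topic name, connection settings, and table schema are assumptions.

# Hypothetical sketch: consume JSON events from Kafka and insert them into PostgreSQL.
import json
import psycopg2
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",   # assumed broker address
    "group.id": "events-loader",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["customer-events"])       # assumed topic name

conn = psycopg2.connect("dbname=analytics user=etl")  # assumed connection string

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        event = json.loads(msg.value())
        with conn.cursor() as cur:
            # Idempotent insert keyed on event_id (assumed unique constraint).
            cur.execute(
                "INSERT INTO customer_events (event_id, customer_id, payload) "
                "VALUES (%s, %s, %s) ON CONFLICT (event_id) DO NOTHING",
                (event["event_id"], event["customer_id"], json.dumps(event)),
            )
        conn.commit()
finally:
    consumer.close()
    conn.close()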

Posted 1 week ago

Apply

10.0 - 15.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Title: Director – Data Presales Architect
Location: Greater Noida
Experience Required: 10-15 years
Role Overview: We are seeking a highly skilled and experienced professional to lead and support our data warehousing and data center architecture initiatives. The ideal candidate will have deep expertise in Data Warehousing, Data Lakes, Data Integration, and Data Governance, with hands-on experience in ETL tools and cloud platforms such as AWS, Azure, GCP, and Snowflake. This role demands strong presales experience, technical leadership, and the ability to manage complex enterprise deals across multiple geographies.
Key Responsibilities:
Architect and design scalable Data Warehousing and Data Lake solutions
Lead presales engagements, including RFP/RFI/RFQ lifecycle management
Create and present compelling proposals and solution designs to clients
Collaborate with cross-functional teams to deliver end-to-end solutions
Estimate efforts and resources for customer requirements
Drive Managed Services opportunities and enterprise deal closures
Engage with clients across MEA, APAC, US, and UK regions
Ensure alignment of solutions with business goals and technical requirements
Maintain high standards of documentation and presentation for client-facing materials
Must-Have:
Bachelor's or Master's degree in Computer Science, Information Technology, or a related field
Certifications in AWS, Azure, GCP, or Snowflake are a plus
Experience working in consulting or system integrator environments
Strong knowledge of Data Warehousing, Data Lakes, Data Integration, and Data Governance
Hands-on experience with ETL tools (e.g., Informatica, Talend)
Exposure to cloud environments: AWS, Azure, GCP, Snowflake
Minimum 2 years of presales experience with an understanding of presales operating processes
Experience in enterprise-level deals and Managed Services
Proven ability to handle multi-geo engagements
Excellent presentation and communication skills
Strong understanding of effort estimation techniques for customer requirements

Posted 1 week ago

Apply

0.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

Designation: Senior Analyst Level: L2 Experience: 4 to 7 years Location: Chennai Job Description: We are seeking a highly skilled and motivated Senior Data Quality Analyst (DQA) who is responsible for ensuring the accuracy, completeness, and reliability of an organization’s data, enabling informed decision-making. The ideal candidate works with various Business stakeholders to understand business requirements and define data quality standards, developing and enforcing data validation procedures to ensure compliance with the company’s data standards. Responsibilities: Data Quality Monitoring & Validation (40% of Time): Profile Data: Identify anomalies (missing values, duplicates, outliers) Run Data Quality Checks: Validate against business rules. Automate Checks: Schedule scripts (SQL/Python) to flag issues in real time. Issue Resolution & Root Cause Analysis (30% of Time): Triage Errors: Work with IT/data engineers to fix corrupt data Track Defects: Log issues in Jira/Snowflake and prioritize fixes. Root Cause Analysis: Determine if issues stem from ETL bugs, user input, or system failures. Governance & Documentation (20% of Time): Ensuring compliance with data governance frameworks Metadata Management: Document data lineage. Compliance Audits: Ensure adherence to GDPR, HIPAA, or internal policies. Implementing data quality standards and policies Stakeholder Collaboration (10% of Time): Train Teams: Educate data citizens, data owners, data stewards on data quality best practices. Monitoring and reporting on data quality metrics including Reports to Leaderships. Skills: Technical Skills Knowledge of data quality tools and data profiling techniques (e.g., Talend, Informatica, Ataccama, DQOPS, Open Source tool) Familiarity with database management systems and data governance initiatives Proficiency in SQL and data management principles Experience with data integration and ETL tools Understanding of data visualization tools and techniques Knowledge of data governance and metadata management Familiarity with Python/R for automation and scripting Analytical Skills Strong analytical and problem-solving skills Ability to identify data patterns and trends Understanding of statistical analysis and data quality metrics Experience with data cleansing and data validation techniques including data remediation Ability to assess data quality and identify areas needing improvement Experience with conducting data audits and implementing data quality processes Ability to document data quality rules and procedures Job Snapshot Updated Date 25-07-2025 Job ID J_3911 Location Chennai, Tamil Nadu, India Experience 4 - 7 Years Employee Type Permanent
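To make the "Automate Checks" responsibility above concrete, here is a small, hypothetical Python/pandas sketch of rule-based data-quality checks (missing values, duplicates, out-of-range values). The column names and thresholds are assumptions, not details from the posting.

# Hypothetical data-quality check sketch using pandas: profile a dataset and
# return rule violations that would be logged to a tracker such as Jira.
import pandas as pd

def run_quality_checks(df: pd.DataFrame) -> list[str]:
    issues = []

    # Completeness: key columns must not contain nulls.
    for col in ("policy_id", "premium_amount"):
        nulls = int(df[col].isna().sum())
        if nulls:
            issues.append(f"{col}: {nulls} missing value(s)")

    # Uniqueness: policy_id must be unique.
    dupes = int(df["policy_id"].duplicated().sum())
    if dupes:
        issues.append(f"policy_id: {dupes} duplicate row(s)")

    # Validity: premiums must fall within an agreed range (nulls also fail this rule).
    out_of_range = int((~df["premium_amount"].between(0, 1_000_000)).sum())
    if out_of_range:
        issues.append(f"premium_amount: {out_of_range} value(s) out of range")

    return issues

if __name__ == "__main__":
    sample = pd.DataFrame({
        "policy_id": [101, 102, 102, 104],
        "premium_amount": [2500.0, None, 480.0, -10.0],
    })
    for issue in run_quality_checks(sample):
        print("DQ FAIL:", issue)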

Posted 1 week ago

Apply

5.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Greetings from Synergy Resource Solutions, a leading Recruitment Consultancy. Our client is an ISO 27001:2013 and ISO 9001 certified company and a pioneering web design and development company from India. The company has also been voted among the Top 10 mobile app development companies in India. It is a leading IT consulting and web solution provider for custom software, websites, games, custom web applications, enterprise mobility, mobile apps, and cloud-based application design & development. The company is ranked among the fastest growing web design and development companies in India, with 3900+ successfully delivered projects across the United States, UK, UAE, Canada, and other countries. A client retention rate of over 95% demonstrates their level of service and client satisfaction.
Position: Senior Data Engineer
Experience: 5+ years relevant experience
Education Qualification: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
Job Location: Ahmedabad
Shift: 11 AM – 8.30 PM
Key Responsibilities: Our client is seeking an experienced and motivated Senior Data Engineer to join their AI & Automation team. The ideal candidate will have 5–8 years of experience in data engineering, with a proven track record of designing and implementing scalable data solutions. A strong background in database technologies, data modeling, and data pipeline orchestration is essential. Additionally, hands-on experience with generative AI technologies and their applications in data workflows will set you apart. In this role, you will lead data engineering efforts to enhance automation, drive efficiency, and deliver data-driven insights across the organization.
Job Description:
• Design, build, and maintain scalable, high-performance data pipelines and ETL/ELT processes across diverse database platforms.
• Architect and optimize data storage solutions to ensure reliability, security, and scalability.
• Leverage generative AI tools and models to enhance data engineering workflows, drive automation, and improve insight generation.
• Collaborate with cross-functional teams (Data Scientists, Analysts, and Engineers) to understand and deliver on data requirements.
• Develop and enforce data quality standards, governance policies, and monitoring systems to ensure data integrity.
• Create and maintain comprehensive documentation for data systems, workflows, and models.
• Implement data modeling best practices and optimize data retrieval processes for better performance.
• Stay up-to-date with emerging technologies and bring innovative solutions to the team.
Qualifications:
• Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
• 5–8 years of experience in data engineering, designing and managing large-scale data systems.
• Strong expertise in database technologies; the mandatory skills are as follows: SQL; NoSQL (MongoDB, Cassandra, or CosmosDB); one of Snowflake, Redshift, BigQuery, or Microsoft Fabric; Azure.
• Hands-on experience implementing and working with generative AI tools and models in production workflows.
• Proficiency in Python and SQL, with experience in data processing frameworks (e.g., Pandas, PySpark).
• Experience with ETL tools (e.g., Apache Airflow, MS Fabric, Informatica, Talend) and data pipeline orchestration platforms.
• Strong understanding of data architecture, data modeling, and data governance principles.
• Experience with cloud platforms (preferably Azure) and associated data services.
Skills: • Advanced knowledge of Database Management Systems and ETL/ELT processes. • Expertise in data modeling, data quality, and data governance. • Proficiency in Python programming, version control systems (Git), and data pipeline orchestration tools. • Familiarity with AI/ML technologies and their application in data engineering. • Strong problem-solving and analytical skills, with the ability to troubleshoot complex data issues. • Excellent communication skills, with the ability to explain technical concepts to non-technical stakeholders. • Ability to work independently, lead projects, and mentor junior team members. • Commitment to staying current with emerging technologies, trends, and best practices in the data engineering domain. If your profile is matching with the requirement & if you are interested for this job, please share your updated resume with details of your present salary, expected salary & notice period.
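Since the posting lists Apache Airflow among the ETL and orchestration tools, a brief, hypothetical sketch of a daily pipeline DAG follows; the DAG id, task names, and callable bodies are illustrative placeholders.

# Hypothetical Airflow 2.x DAG sketch: extract -> transform -> load, run daily.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**context):
    print("pull data from the source system")    # placeholder body

def transform(**context):
    print("clean and model the extracted data")  # placeholder body

def load(**context):
    print("write results to the warehouse")      # placeholder body

with DAG(
    dag_id="daily_customer_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Run the three steps in sequence.
    t_extract >> t_transform >> t_load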

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

Hyderabad, Telangana

On-site

About Forsys: Forsys Inc. is a leading company specializing in Lead-to-Revenue transformation, utilizing a combination of strategy, technology, and business transformation to foster growth. The company boasts a team of over 500 professionals dispersed across various locations such as the US, India, UK, Colombia, and Brazil, with its headquarters situated in the Bay Area. Forsys is renowned for its commitment to innovation and excellence. As an implementation partner for major vendors like Conga, Salesforce, and Oracle, as well as an incubator for groundbreaking ideas and solutions, Forsys holds a unique position within the consulting industry. The company is dedicated to empowering its clients by uncovering new revenue streams and cultivating a culture of innovation. To learn more about our vision and the impact we are making, visit forsysinc.com. Data Migration Technical Lead: Forsys is currently seeking a full-time Data Migration Technical Lead who is a proficient Salesforce Revenue Cloud Data Migration Specialist. In this role, you will be responsible for overseeing and executing data migration activities as part of Revenue Cloud implementation and transformation projects. As a key member of the Forsys Data Migration team, you will analyze data from multiple source systems, consult with clients on data transformation, and manage end-to-end data and document migration processes. Responsibilities: - Possessing over 8 years of experience as a data migration technical lead, with a proven track record in handling complex migration projects. - Developing and implementing data migration strategies for Salesforce Revenue Cloud, including CPQ (Configure Price Quote), Billing, and related modules. - Collaborating with clients to assess their data requirements, creating data models, and establishing data mappings. - Evaluating source data quality, devising data cleansing strategies, and executing data cleaning processes as needed. - Building ETL/ELT pipelines using tools like Informatica, Talend, or native Salesforce tools. - Adhering to best practices for data migration and following established standards and protocols. - Assessing different source systems to determine optimal data transfer methods and managing large volumes of data effectively. - Designing and conducting data validation procedures pre and post-migration, and generating Data Reconciliation reports. - Implementing testing protocols to ensure data accuracy and consistency with client specifications. - Providing technical support throughout the data migration process to ensure efficiency and smooth operation. - Creating comprehensive documentation of the migration process to guide future projects. - Mentoring team members and fostering collaboration to achieve project deliverables effectively. - Demonstrating the ability to perform effectively in high-pressure environments. Eligibility: - Minimum of 8 years of experience in data migration or ETL roles, with at least 2 years focusing on Salesforce Revenue Cloud (CPQ + Billing). - Proficiency in utilizing ETL Tools such as Pentaho, Mulesoft, Informatica, Data Stage, SSIS, etc. - Strong understanding of the Salesforce data model and experience in various phases of Data Migration. - Advanced SQL skills, familiarity with APIs, and integration patterns. - Experience in data/process mapping for Data Migrations involving Salesforce, Oracle, and Legacy systems is preferred. - Extensive experience working with different databases and SQL queries. 
- Knowledge of Supply Chain/CRM/Quote to Cash/Quote to Order business processes. - Proficiency in handling various data formats (XML, JSON, etc.). - Expertise in SOAP & REST services, API implementation, and cloud services. - Strong communication skills, the ability to work effectively in teams, both onshore and offshore, and the drive to achieve goals. - Self-motivated, goal-oriented individuals with strong analytical and problem-solving skills. - Prior experience with source systems such as NetSuite, SAP, Zuora, or Oracle for migration to Salesforce Revenue Cloud is advantageous.
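For the pre- and post-migration validation and reconciliation reporting described above, a minimal, hypothetical Python sketch is shown below; the object names and comparison rules are assumptions rather than details of any specific migration.

# Hypothetical reconciliation sketch: compare source and target key coverage per
# migrated object and emit a simple reconciliation report.
def reconcile(source: dict[str, set], target: dict[str, set]) -> list[dict]:
    """source/target map object name -> set of business keys extracted from each system."""
    report = []
    for obj in sorted(set(source) | set(target)):
        src_keys = source.get(obj, set())
        tgt_keys = target.get(obj, set())
        missing = src_keys - tgt_keys   # records that did not migrate
        extra = tgt_keys - src_keys     # unexpected records in the target
        report.append({
            "object": obj,
            "source_count": len(src_keys),
            "target_count": len(tgt_keys),
            "missing_in_target": len(missing),
            "unexpected_in_target": len(extra),
            "status": "PASS" if not missing and not extra else "FAIL",
        })
    return report

if __name__ == "__main__":
    # Illustrative keys only: Quote fails (Q-3 missing in target); OrderItem passes.
    src = {"Quote": {"Q-1", "Q-2", "Q-3"}, "OrderItem": {"OI-1", "OI-2"}}
    tgt = {"Quote": {"Q-1", "Q-2"}, "OrderItem": {"OI-1", "OI-2"}}
    for row in reconcile(src, tgt):
        print(row)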

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

Pune, Maharashtra

On-site

As a seasoned Senior ETL/DB Tester, you will be responsible for designing, developing, and executing comprehensive test plans to validate ETL and database processes. Your expertise in SQL and experience with tools like Talend, ADF, Snowflake, and Power BI will be crucial in ensuring data integrity and accuracy across modern data platforms. Your analytical skills, attention to detail, and ability to collaborate with cross-functional teams in a fast-paced data engineering environment will be key in this role. Your main responsibilities will include validating data transformations and integrity, performing manual testing and defect tracking using tools like Zephyr or Tosca, analyzing business and data requirements for test coverage, and writing complex SQL queries for data reconciliation. You will also be expected to identify data-related issues, conduct root cause analysis in collaboration with developers, and track bugs and enhancements using appropriate tools. In addition, you will optimize testing strategies for performance, scalability, and accuracy in ETL processes. Your skills in ETL tools like Talend, ADF, data platforms like Snowflake, and reporting/analytics tools such as Power BI and VPI will be essential for success in this role. Your expertise in API testing and advanced features of Power BI like Dashboards, DAX, and Data Modelling will further strengthen your testing capabilities. Overall, your role as a Senior ETL/DB Tester will require a combination of technical skills, testing proficiency, and collaboration with various teams to ensure the reliability and accuracy of data processes across different data platforms and tools.,

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

Maharashtra

On-site

The role of Digital Finance Manager in India requires you to act as the liaison between the Finance and IT departments. Your primary responsibility will involve utilizing tools like SAP, Power BI, Alteryx, and RPA platforms to identify and execute finance automation projects that are in line with the business requirements. Your key responsibilities will include driving the Digital Finance India Agenda by implementing best practices, evaluating digital technologies, engaging with finance and business stakeholders, streamlining financial processes through automation, and ensuring that finance operations are future-ready with minimal manual intervention. As the Digital Finance Manager, you will play a crucial role in bridging Finance sub-functions with IT services, identifying opportunities for process improvement, and overseeing the successful delivery of Finance IBS projects within the specified timelines and budgets. You will focus on identifying automation opportunities across finance processes, leading end-to-end project delivery, and driving process redesign and software configuration aligned with security and compliance standards. It is important to note that this role requires a strong finance acumen in addition to IT skills. You should have the ability to understand financial reporting, controls, compliance, and analysis needs while integrating digital solutions effectively. Key responsibilities also include developing and implementing digital strategies for Finance India, evaluating and implementing finance automation opportunities, delivering data transformation and visualization solutions, managing digital finance projects, evaluating current finance processes for automation, and training finance teams on emerging tools and technologies. To qualify for this role, you should have a CA or MBA from a reputed university with 8-10 years of progressive experience in finance transformation, analysis, reporting, and forecasting. Demonstrated expertise in digital tools such as SAP, Power BI, RPA, and hands-on experience in data engineering and analytics tools like Alteryx is required. Exposure to finance transformation or consulting, particularly in the FMCG industry, will be an added advantage. If you are looking for a challenging yet rewarding opportunity to drive digital finance initiatives in a leading FMCG company, this role offers a platform to showcase your skills and contribute to the growth and success of the organization.,

Posted 1 week ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Role Description Role Proficiency: This role requires proficiency in data pipeline development including coding and testing data pipelines for ingesting wrangling transforming and joining data from various sources. Must be skilled in ETL tools such as Informatica Glue Databricks and DataProc with coding expertise in Python PySpark and SQL. Works independently and has a deep understanding of data warehousing solutions including Snowflake BigQuery Lakehouse and Delta Lake. Capable of calculating costs and understanding performance issues related to data solutions. Outcomes Act creatively to develop pipelines and applications by selecting appropriate technical options optimizing application development maintenance and performance using design patterns and reusing proven solutions.rnInterpret requirements to create optimal architecture and design developing solutions in accordance with specifications. Document and communicate milestones/stages for end-to-end delivery. Code adhering to best coding standards debug and test solutions to deliver best-in-class quality. Perform performance tuning of code and align it with the appropriate infrastructure to optimize efficiency. Validate results with user representatives integrating the overall solution seamlessly. Develop and manage data storage solutions including relational databases NoSQL databases and data lakes. Stay updated on the latest trends and best practices in data engineering cloud technologies and big data tools. Influence and improve customer satisfaction through effective data solutions. Measures Of Outcomes Adherence to engineering processes and standards Adherence to schedule / timelines Adhere to SLAs where applicable # of defects post delivery # of non-compliance issues Reduction of reoccurrence of known defects Quickly turnaround production bugs Completion of applicable technical/domain certifications Completion of all mandatory training requirements Efficiency improvements in data pipelines (e.g. reduced resource consumption faster run times). Average time to detect respond to and resolve pipeline failures or data issues. Number of data security incidents or compliance breaches. Outputs Expected Code Development: Develop data processing code independently ensuring it meets performance and scalability requirements. Define coding standards templates and checklists. Review code for team members and peers. Documentation Create and review templates checklists guidelines and standards for design processes and development. Create and review deliverable documents including design documents architecture documents infrastructure costing business requirements source-target mappings test cases and results. Configuration Define and govern the configuration management plan. Ensure compliance within the team. Testing Review and create unit test cases scenarios and execution plans. Review the test plan and test strategy developed by the testing team. Provide clarifications and support to the testing team as needed. Domain Relevance Advise data engineers on the design and development of features and components demonstrating a deeper understanding of business needs. Learn about customer domains to identify opportunities for value addition. Complete relevant domain certifications to enhance expertise. Project Management Manage the delivery of modules effectively. Defect Management Perform root cause analysis (RCA) and mitigation of defects. Identify defect trends and take proactive measures to improve quality. 
Estimation Create and provide input for effort and size estimation for projects. Knowledge Management Consume and contribute to project-related documents SharePoint libraries and client universities. Review reusable documents created by the team. Release Management Execute and monitor the release process to ensure smooth transitions. Design Contribution Contribute to the creation of high-level design (HLD) low-level design (LLD) and system architecture for applications business components and data models. Customer Interface Clarify requirements and provide guidance to the development team. Present design options to customers and conduct product demonstrations. Team Management Set FAST goals and provide constructive feedback. Understand team members' aspirations and provide guidance and opportunities for growth. Ensure team engagement in projects and initiatives. Certifications Obtain relevant domain and technology certifications to stay competitive and informed. Skill Examples Proficiency in SQL Python or other programming languages used for data manipulation. Experience with ETL tools such as Apache Airflow Talend Informatica AWS Glue Dataproc and Azure ADF. Hands-on experience with cloud platforms like AWS Azure or Google Cloud particularly with data-related services (e.g. AWS Glue BigQuery). Conduct tests on data pipelines and evaluate results against data quality and performance specifications. Experience in performance tuning of data processes. Expertise in designing and optimizing data warehouses for cost efficiency. Ability to apply and optimize data models for efficient storage retrieval and processing of large datasets. Capacity to clearly explain and communicate design and development aspects to customers. Ability to estimate time and resource requirements for developing and debugging features or components. Knowledge Examples Knowledge Examples Knowledge of various ETL services offered by cloud providers including Apache PySpark AWS Glue GCP DataProc/DataFlow Azure ADF and ADLF. Proficiency in SQL for analytics including windowing functions. Understanding of data schemas and models relevant to various business contexts. Familiarity with domain-related data and its implications. Expertise in data warehousing optimization techniques. Knowledge of data security concepts and best practices. Familiarity with design patterns and frameworks in data engineering. Additional Comments Skills Cloud Platforms ( AWS, MS Azure, GC etc.) Containerization and Orchestration ( Docker, Kubernetes etc..) APIs - Change APIs to APIs development Data Pipeline construction using languages like Python, PySpark, and SQL Data Streaming (Kafka and Azure Event Hub etc..) Data Parsing ( Akka and MinIO etc..) Database Management ( SQL and NoSQL, including Clickhouse, PostgreSQL etc..) Agile Methodology ( Git, Jenkins, or Azure DevOps etc..) JS like Connectors/ framework for frontend/backend Collaboration and Communication Skills Aws Cloud,Azure Cloud,Docker,Kubernetes

Posted 1 week ago

Apply

5.0 - 10.0 years

22 - 27 Lacs

Kochi

Work from Office

Create Solution Outline and Macro Design to describe end to end product implementation in Data Platforms including, System integration, Data ingestion, Data processing, Serving layer, Design Patterns, Platform Architecture Principles for Data platform Contribute to pre-sales, sales support through RfP responses, Solution Architecture, Planning and Estimation Contribute to reusable components / asset / accelerator development to support capability development Participate in Customer presentations as Platform Architects / Subject Matter Experts on Big Data, Azure Cloud and related technologies Participate in customer PoCs to deliver the outcomes Participate in delivery reviews / product reviews, quality assurance and work as design authority Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Experience in designing of data products providing descriptive, prescriptive, and predictive analytics to end users or other systems Experience in data engineering and architecting data platforms Experience in architecting and implementing Data Platforms Azure Cloud Platform Experience on Azure cloud is mandatory (ADLS Gen 1 / Gen2, Data Factory, Databricks, Synapse Analytics, Azure SQL, Cosmos DB, Event hub, Snowflake), Azure Purview, Microsoft Fabric, Kubernetes, Terraform, Airflow Experience in Big Data stack (Hadoop ecosystem Hive, HBase, Kafka, Spark, Scala PySpark, Python etc.) with Cloudera or Hortonworks Preferred technical and professional experience Experience in architecting complex data platforms on Azure Cloud Platform and On-Prem Experience and exposure to implementation of Data Fabric and Data Mesh concepts and solutions like Microsoft Fabric or Starburst or Denodo or IBM Data Virtualisation or Talend or Tibco Data Fabric Exposure to Data Cataloging and Governance solutions like Collibra, Alation, Watson Knowledge Catalog, dataBricks unity Catalog, Apache Atlas, Snowflake Data Glossary etc
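Given the emphasis above on Azure Databricks, Event Hub/Kafka, and Spark-based ingestion, here is a short, hedged PySpark Structured Streaming sketch that lands a Kafka topic into a Delta table; the broker address, topic, and storage paths are assumptions.

# Hypothetical Structured Streaming sketch: Kafka topic -> Delta table on a data platform.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("stream_ingest_sketch").getOrCreate()

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")  # assumed broker
          .option("subscribe", "telemetry")                   # assumed topic
          .option("startingOffsets", "latest")
          .load())

# Kafka delivers key/value as binary; cast the value to string for downstream parsing.
parsed = events.select(
    F.col("value").cast("string").alias("payload"),
    F.col("timestamp").alias("event_ts"),
)

query = (parsed.writeStream
         .format("delta")
         .option("checkpointLocation", "/mnt/checkpoints/telemetry")  # assumed path
         .outputMode("append")
         .start("/mnt/bronze/telemetry"))                             # assumed path

query.awaitTermination()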

Posted 1 week ago

Apply

3.0 - 7.0 years

2 - 10 Lacs

India

Remote

Job Title: ETL Automation Tester (SQL, Python, Cloud) Location: [On-site / Remote / Hybrid – City, State or “Anywhere, USA”] Employment Type: [Full-time / Contract / C2C / Part Time ] NOTE : Candidate has to work US Night Shifts Job Summary: We are seeking a highly skilled ETL Automation Tester with expertise in SQL , Python scripting , and experience working with Cloud technologies such as Azure, AWS, or GCP . The ideal candidate will be responsible for designing and implementing automated testing solutions to ensure the accuracy, performance, and reliability of ETL pipelines and data integration processes. Key Responsibilities: Design and implement test strategies for ETL processes and data pipelines. Develop automated test scripts using Python and integrate them into CI/CD pipelines. Validate data transformations and data integrity across source, staging, and target systems. Write complex SQL queries for test data creation, validation, and result comparison. Perform cloud-based testing on platforms such as Azure Data Factory, AWS Glue, or GCP Dataflow/BigQuery. Collaborate with data engineers, analysts, and DevOps teams to ensure seamless data flow and test coverage. Log, track, and manage defects through tools like JIRA, Azure DevOps, or similar. Participate in performance and volume testing for large-scale datasets. Required Skills and Qualifications: 3–7 years of experience in ETL/data warehouse testing. Strong hands-on experience in SQL (joins, CTEs, window functions, aggregation). Proficient in Python for automation scripting and data manipulation. Solid understanding of ETL tools such as Informatica, Talend, SSIS, or custom Python-based ETL. Experience with at least one Cloud Platform : Azure : Data Factory, Synapse, Blob Storage AWS : Glue, Redshift, S3 GCP : Dataflow, BigQuery, Cloud Storage Familiarity with data validation , data quality , and data profiling techniques. Experience with CI/CD tools such as Jenkins, GitHub Actions, or Azure DevOps. Excellent problem-solving, communication, and documentation skills. Preferred Qualifications: Knowledge of Apache Airflow , PySpark , or Databricks . Experience with containerization (Docker) and orchestration tools (Kubernetes). ISTQB or similar testing certification. Familiarity with Agile methodologies and Scrum ceremonies . Job Types: Part-time, Contractual / Temporary, Freelance Contract length: 6 months Pay: ₹18,074.09 - ₹86,457.20 per month Expected hours: 40 per week Benefits: Work from home
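To illustrate the "automated test scripts using Python" responsibility, below is a small, hypothetical pytest-style sketch that validates row counts and one transformation rule between staging and target tables; an in-memory sqlite3 database stands in for the real source and target systems, and the tables, columns, and rule are assumptions.

# Hypothetical ETL test sketch (pytest style): validate completeness and a
# transformation rule between a staging table and a target table.
import sqlite3
import pytest

@pytest.fixture
def db():
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE stg_orders (order_id INTEGER, amount_usd REAL);
        CREATE TABLE tgt_orders (order_id INTEGER, amount_cents INTEGER);
        INSERT INTO stg_orders VALUES (1, 10.50), (2, 3.00);
        INSERT INTO tgt_orders VALUES (1, 1050), (2, 300);
    """)
    yield conn
    conn.close()

def test_row_counts_match(db):
    src = db.execute("SELECT COUNT(*) FROM stg_orders").fetchone()[0]
    tgt = db.execute("SELECT COUNT(*) FROM tgt_orders").fetchone()[0]
    assert src == tgt

def test_amount_transformation(db):
    # Assumed rule: the target stores amounts in cents (amount_usd * 100).
    mismatches = db.execute("""
        SELECT COUNT(*)
        FROM stg_orders s
        JOIN tgt_orders t ON s.order_id = t.order_id
        WHERE CAST(ROUND(s.amount_usd * 100) AS INTEGER) <> t.amount_cents
    """).fetchone()[0]
    assert mismatches == 0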

Posted 1 week ago

Apply

0 years

0 Lacs

Chennai

On-site

Talend: Designing, developing, and documenting existing Talend ETL processes, technical architecture, data pipelines, and performance scaling, using tools to integrate Talend data and ensure data quality in a big data environment.
Snowflake SQL: Writing SQL queries against Snowflake and developing scripts (Unix, Python, etc.) to extract, load, and transform data. Hands-on experience with Snowflake utilities such as SnowSQL, Snowpipe, Python, Tasks, Streams, Time Travel, Optimizer, Metadata Manager, data sharing, and stored procedures.
Perform data analysis, troubleshoot data issues, and provide technical support to end-users.
Develop and maintain data warehouse and ETL processes, ensuring data quality and integrity.
Complex problem-solving capability and a continuous-improvement approach.
Desirable: Talend / Snowflake certification.
Excellent SQL coding skills.
Excellent communication and documentation skills.
Familiar with the Agile delivery process.
Must be analytical, creative, and self-motivated.
Work effectively within a global team environment.
About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth, one that seeks to provide you with exciting projects, opportunities and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
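The Snowflake portion above mentions Streams, Tasks, and stored-procedure-style automation; the following hedged sketch uses the Snowflake Python connector to set up a simple stream-plus-task merge pattern on a hypothetical staging table. Object names, credentials, warehouse, and schedule are assumptions, not details from the posting.

# Hypothetical sketch: create a stream on a staging table and a task that merges
# captured changes into a target table, via the Snowflake Python connector.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",      # assumed account locator and credentials
    user="etl_user",
    password="***",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)

statements = [
    # Capture inserts/updates landing in the staging table.
    "CREATE OR REPLACE STREAM stg_orders_stream ON TABLE stg_orders",

    # A task that periodically merges the stream contents into the target table.
    """
    CREATE OR REPLACE TASK merge_orders_task
      WAREHOUSE = ETL_WH
      SCHEDULE = '15 MINUTE'
    AS
      MERGE INTO dim_orders d
      USING stg_orders_stream s ON d.order_id = s.order_id
      WHEN MATCHED THEN UPDATE SET d.amount = s.amount
      WHEN NOT MATCHED THEN INSERT (order_id, amount) VALUES (s.order_id, s.amount)
    """,

    # Tasks are created suspended; resume to start the schedule.
    "ALTER TASK merge_orders_task RESUME",
]

cur = conn.cursor()
try:
    for stmt in statements:
        cur.execute(stmt)
finally:
    cur.close()
    conn.close()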

Posted 1 week ago

Apply

8.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Role Description
Role Proficiency: This role requires proficiency in developing data pipelines, including coding and testing for ingesting, wrangling, transforming, and joining data from various sources. The ideal candidate should be adept in ETL tools like Informatica, Glue, Databricks, and DataProc, with strong coding skills in Python, PySpark, and SQL. This position demands independence and proficiency across various data domains. Expertise in data warehousing solutions such as Snowflake, BigQuery, Lakehouse, and Delta Lake is essential, including the ability to calculate processing costs and address performance issues. A solid understanding of DevOps and infrastructure needs is also required.

Outcomes
Act creatively to develop pipelines/applications by selecting appropriate technical options, optimizing application development, maintenance, and performance through design patterns and reusing proven solutions.
Support the Project Manager in day-to-day project execution and account for the developmental activities of others.
Interpret requirements and create optimal architecture and design solutions in accordance with specifications.
Document and communicate milestones/stages for end-to-end delivery.
Code using best standards; debug and test solutions to ensure best-in-class quality.
Tune performance of code and align it with the appropriate infrastructure, understanding cost implications of licenses and infrastructure.
Create data schemas and models effectively.
Develop and manage data storage solutions, including relational databases, NoSQL databases, Delta Lakes, and data lakes.
Validate results with user representatives, integrating the overall solution.
Influence and enhance customer satisfaction and employee engagement within project teams.

Measures of Outcomes
Adherence to engineering processes and standards.
Adherence to schedule/timelines.
Adherence to SLAs where applicable.
Number of defects post delivery.
Number of non-compliance issues.
Reduction of recurrence of known defects.
Quick turnaround of production bugs.
Completion of applicable technical/domain certifications.
Completion of all mandatory training requirements.
Efficiency improvements in data pipelines (e.g., reduced resource consumption, faster run times).
Average time to detect, respond to, and resolve pipeline failures or data issues.
Number of data security incidents or compliance breaches.

Outputs Expected
Code: Develop data processing code with guidance, ensuring performance and scalability requirements are met. Define coding standards, templates, and checklists. Review code for team and peers.
Documentation: Create/review templates, checklists, guidelines, and standards for design/process/development. Create/review deliverable documents, including design documents, architecture documents, infra costing, business requirements, source-target mappings, test cases, and results.
Configure: Define and govern the configuration management plan. Ensure compliance from the team.
Test: Review/create unit test cases, scenarios, and execution. Review test plans and strategies created by the testing team. Provide clarifications to the testing team.
Domain Relevance: Advise data engineers on the design and development of features and components, leveraging a deeper understanding of business needs. Learn more about the customer domain and identify opportunities to add value. Complete relevant domain certifications.
Manage Project: Support the Project Manager with project inputs. Provide inputs on project plans or sprints as needed. Manage the delivery of modules.
Manage Defects: Perform defect root cause analysis (RCA) and mitigation. Identify defect trends and implement proactive measures to improve quality.
Estimate: Create and provide input for effort and size estimation and plan resources for projects.
Manage Knowledge: Consume and contribute to project-related documents, SharePoint libraries, and client universities. Review reusable documents created by the team.
Release: Execute and monitor the release process.
Design: Contribute to the creation of design (HLD, LLD, SAD)/architecture for applications, business components, and data models.
Interface with Customer: Clarify requirements and provide guidance to the development team. Present design options to customers. Conduct product demos. Collaborate closely with customer architects to finalize designs.
Manage Team: Set FAST goals and provide feedback. Understand team members' aspirations and provide guidance and opportunities. Ensure team members are upskilled. Engage the team in projects. Proactively identify attrition risks and collaborate with BSE on retention measures.
Certifications: Obtain relevant domain and technology certifications.

Skill Examples
Proficiency in SQL, Python, or other programming languages used for data manipulation.
Experience with ETL tools such as Apache Airflow, Talend, Informatica, AWS Glue, Dataproc, and Azure ADF.
Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud, particularly with data-related services (e.g., AWS Glue, BigQuery).
Conduct tests on data pipelines and evaluate results against data quality and performance specifications.
Experience in performance tuning.
Experience in data warehouse design and cost improvements.
Apply and optimize data models for efficient storage, retrieval, and processing of large datasets.
Communicate and explain design/development aspects to customers.
Estimate time and resource requirements for developing/debugging features/components.
Participate in RFP responses and solutioning.
Mentor team members and guide them in relevant upskilling and certification.

Knowledge Examples
Knowledge of various ETL services used by cloud providers, including Apache PySpark, AWS Glue, GCP DataProc/Dataflow, and Azure ADF and ADLF.
Proficient in SQL for analytics and windowing functions.
Understanding of data schemas and models.
Familiarity with domain-related data.
Knowledge of data warehouse optimization techniques.
Understanding of data security concepts.
Awareness of patterns, frameworks, and automation practices.

Additional Comments
We are seeking a highly experienced Senior Data Engineer to design, develop, and optimize scalable data pipelines in a cloud-based environment. The ideal candidate will have deep expertise in PySpark, SQL, and Azure Databricks, and experience with either AWS or GCP. A strong foundation in data warehousing, ELT/ETL processes, and dimensional modeling (Kimball/star schema) is essential for this role.

Must-Have Skills
8+ years of hands-on experience in data engineering or big data development.
Strong proficiency in PySpark and SQL for data transformation and pipeline development.
Experience working in Azure Databricks or equivalent Spark-based cloud platforms.
Practical knowledge of cloud data environments: Azure, AWS, or GCP.
Solid understanding of data warehousing concepts, including the Kimball methodology and star/snowflake schema design.
Proven experience designing and maintaining ETL/ELT pipelines in production.
Familiarity with version control (e.g., Git), CI/CD practices, and data pipeline orchestration tools (e.g., Airflow, Azure Data Factory).

Skills: Azure Data Factory, Azure Databricks, PySpark, SQL
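
A minimal PySpark sketch of the kind of Databricks/Delta Lake pipeline step described above; the landing-zone path, column names, and target table are assumptions for illustration.

# Minimal sketch, assuming a Databricks (or any Spark + Delta Lake) runtime
# and hypothetical paths/table names.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_pipeline").getOrCreate()

raw = spark.read.json("/mnt/landing/orders/")   # assumption: JSON files land here

cleaned = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
       .filter(F.col("amount") > 0)
)

# Write a partitioned Delta table for downstream star-schema loads.
(cleaned.write
        .format("delta")
        .mode("overwrite")
        .partitionBy("order_date")
        .saveAsTable("silver.orders"))   # assumption: a 'silver' schema exists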

Posted 1 week ago

Apply

10.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Hi, greetings from Peoplefy Infosolutions! We are hiring for one of our reputed MNC clients based in Pune. We are looking for candidates with 10+ years of experience who are currently working as a Data Architect.

Job Description:
We are seeking a highly skilled and experienced Cloud Data Architect to design, implement, and manage scalable, secure, and efficient cloud-based data solutions. The ideal candidate will possess a strong combination of technical expertise, analytical skills, and the ability to collaborate effectively with cross-functional teams to translate business requirements into technical solutions.

Key Responsibilities:
Design and implement data architectures, including data pipelines, data lakes, and data warehouses, on cloud platforms.
Develop and optimize data models (e.g., star schema, snowflake schema) to support business intelligence and analytics.
Leverage big data technologies (e.g., Hadoop, Spark, Kafka) to process and analyze large-scale datasets.
Manage and optimize relational and NoSQL databases for performance and scalability.
Develop and maintain ETL/ELT workflows using tools like Apache NiFi, Talend, or Informatica.
Ensure data security and compliance with regulations such as GDPR and CCPA.
Automate infrastructure deployment using CI/CD pipelines and Infrastructure as Code (IaC) tools (e.g., Terraform, CloudFormation).
Collaborate with analytics teams to integrate machine learning frameworks and visualization tools (e.g., Tableau, Power BI).
Provide technical leadership and mentorship to team members.

Interested candidates for the above position, kindly share your CV at sneh.ne@peoplefy.com.
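
For illustration, a minimal Spark Structured Streaming sketch of the Kafka-based ingestion mentioned in the responsibilities; the broker address, topic, and storage paths are assumptions, and the Kafka source connector must be on the Spark classpath.

# Minimal sketch of a Kafka -> data lake ingestion stream, assuming Spark
# Structured Streaming with the spark-sql-kafka connector installed.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events_ingest").getOrCreate()

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker1:9092")   # assumption
         .option("subscribe", "clickstream")                  # assumption
         .load()
         .select(F.col("value").cast("string").alias("payload"),
                 F.col("timestamp").alias("event_ts"))
)

query = (
    events.writeStream
          .format("parquet")
          .option("path", "/data/lake/clickstream/")          # assumption
          .option("checkpointLocation", "/data/chk/clickstream/")
          .trigger(processingTime="1 minute")
          .start()
)
query.awaitTermination()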

Posted 1 week ago

Apply

6.0 years

0 Lacs

Delhi, India

Remote

Job Title: Senior Data Modeler
Experience Required: 6+ Years
Location: Remote
Employment Type: Full-time / Contract (Remote)
Domain: Data Engineering / Analytics / Data Warehousing

Job Summary:
We are seeking an experienced and detail-oriented Data Modeler with a strong background in conceptual, logical, and physical data modeling. The ideal candidate will have in-depth knowledge of Snowflake architecture, data modeling best practices (Star/Snowflake schema), and advanced SQL scripting. You will be responsible for designing robust, scalable data models and working closely with data engineers, analysts, and business stakeholders.

Key Responsibilities:
1. Data Modeling:
Design conceptual, logical, and physical data models.
Create and maintain star and snowflake schemas for analytical reporting.
Perform normalization and denormalization based on performance and reporting requirements.
Work closely with business stakeholders to translate requirements into optimized data structures.
Maintain data model documentation and the data dictionary.
2. Snowflake Expertise:
Design and implement Snowflake schemas with optimal partitioning and clustering strategies.
Perform performance tuning for complex queries and storage optimization.
Implement Time Travel, Streams, and Tasks for data recovery and pipeline automation.
Manage and secure data using Secure Views and Materialized Views.
Optimize usage of Virtual Warehouses and storage costs.
3. SQL & Scripting:
Write and maintain advanced SQL queries, including Common Table Expressions (CTEs), window functions, and recursive queries.
Build automation scripts for data loading, transformation, and validation.
Troubleshoot and optimize SQL queries for performance and accuracy.
Support data migration and integration projects.

Required Skills & Qualifications:
6+ years of experience in Data Modeling and Data Warehouse design.
Proven experience with the Snowflake platform (min. 2 years).
Strong hands-on experience in Dimensional Modeling (Star/Snowflake schemas).
Expert in SQL and scripting for automation and performance optimization.
Familiarity with tools like Erwin, PowerDesigner, or similar data modeling tools.
Experience working in Agile/Scrum environments.
Strong analytical and problem-solving skills.
Excellent communication and stakeholder engagement skills.

Preferred Skills (Nice to Have):
Experience with ETL/ELT tools like dbt, Informatica, Talend, etc.
Exposure to Cloud Platforms like AWS, Azure, or GCP.
Familiarity with Data Governance and Data Quality frameworks.
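
A small sketch of the advanced SQL pattern this role references (a CTE combined with a window function). sqlite3 is used only so the example runs anywhere (it needs SQLite 3.25+ for window functions); on the job the same pattern would be written against Snowflake, and the table is a hypothetical example.

# Latest order per customer via CTE + ROW_NUMBER(), a common modeling/QA query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer_id TEXT, order_date TEXT, amount REAL);
    INSERT INTO orders VALUES
        ('C1', '2024-01-01', 100.0),
        ('C1', '2024-02-01', 110.0),
        ('C2', '2024-01-15', 250.0);
""")

latest_order_sql = """
WITH ranked AS (
    SELECT customer_id,
           order_date,
           amount,
           ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY order_date DESC) AS rn
    FROM orders
)
SELECT customer_id, order_date, amount
FROM ranked
WHERE rn = 1
"""
for row in conn.execute(latest_order_sql):
    print(row)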

Posted 1 week ago

Apply

12.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Organization: Leading Global Management Consulting Organization (one of the Big 3 consulting organizations)
Role: Sr Data Architect
Experience: 10+ Yrs

WHAT YOU'LL DO
Define and design future-state data architecture for HR reporting, forecasting, and analysis products.
Partner with Technology, Data Stewards, and various product teams in an Agile work stream while meeting program goals and deadlines.
Engage with line of business, operations, and project partners to gather process improvements.
Lead the design/build of new models to efficiently deliver financial results to senior management.
Evaluate data-related tools and technologies and recommend appropriate implementation patterns and standard methodologies to ensure our data ecosystem is always modern.
Collaborate with Enterprise Data Architects in establishing and adhering to enterprise standards while also performing POCs to ensure those standards are implemented.
Provide technical expertise and mentorship to Data Engineers and Data Analysts in the data architecture.
Develop and maintain processes, standards, policies, guidelines, and governance to ensure that a consistent framework and set of standards is applied across the company.
Create and maintain conceptual/logical data models to identify key business entities and visual relationships.
Work with business and IT teams to understand data requirements.
Maintain a data dictionary consisting of table and column definitions.
Review data models with both technical and business audiences.

YOU'RE GOOD AT
Designing, documenting, and training the team on the overall processes and process flows for the data architecture.
Resolving technical challenges in critical situations that require immediate resolution.
Developing relationships with external stakeholders to maintain awareness of data and security issues and trends.
Reviewing work from other tech team members and providing feedback for growth.
Implementing data security policies that align with governance objectives and regulatory requirements.

YOU BRING (EXPERIENCE & QUALIFICATIONS)
Essential Education
Minimum of a Bachelor's degree in Computer Science, Engineering, or a similar field.
Additional certification in Data Management or cloud data platforms like Snowflake preferred.
Essential Experience & Job Requirements
12+ years of IT experience with a major focus on data warehouse/database related projects.
Expertise in cloud databases like Snowflake, Redshift, etc.
Expertise in Data Warehousing Architecture, BI/analytical systems, data cataloguing, MDM, etc.
Proficient in conceptual, logical, and physical data modelling.
Proficient in documenting all the architecture-related work performed.
Proficient in data storage, ETL/ELT, and data analytics tools like AWS Glue, dbt/Talend, Fivetran, APIs, Tableau, Power BI, Alteryx, etc.
Experience in building data solutions to support Comp Benchmarking, Pay Transparency/Pay Equity, and Total Rewards use cases preferred.
Experience with cloud big data technologies such as AWS, Azure, GCP, and Snowflake a plus.
Experience working with agile methodologies (Scrum, Kanban) and Meta Scrum with cross-functional teams (Product Owners, Scrum Master, Architects, and data SMEs) a plus.
Excellent written, oral communication, and presentation skills to present architecture, features, and solution recommendations is a must.
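
As a hedged illustration of the data-dictionary responsibility above, the sketch below uses SQLAlchemy's inspector to dump table and column definitions to a CSV. The in-memory SQLite engine and the dim_employee table are stand-ins; pointing the engine URL at a warehouse such as Snowflake or Redshift is an assumption, not a prescribed approach.

# Minimal data-dictionary extract using SQLAlchemy reflection.
import csv
from sqlalchemy import create_engine, inspect, text

engine = create_engine("sqlite:///:memory:")
with engine.begin() as conn:
    conn.execute(text(
        "CREATE TABLE dim_employee (employee_id INTEGER, hire_date DATE, grade TEXT)"
    ))

inspector = inspect(engine)
with open("data_dictionary.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["table", "column", "type", "nullable"])
    for table in inspector.get_table_names():
        for col in inspector.get_columns(table):
            writer.writerow([table, col["name"], str(col["type"]), col["nullable"]])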

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

panchkula, haryana

On-site

We are seeking a skilled and experienced Lead/Senior ETL Engineer with 4-8 years of experience to join our dynamic data engineering team. As a Lead/Sr. ETL Engineer, you will play a crucial role in designing and developing high-performing ETL solutions, managing data pipelines, and ensuring seamless integration across systems. Your expertise in ETL tools, cloud platforms, scripting, and data modeling principles will be pivotal in building efficient, scalable, and reliable data solutions for enterprise-level implementations.

Key Skills:
- Proficiency in ETL tools such as SSIS, DataStage, Informatica, or Talend.
- In-depth understanding of Data Warehousing concepts, including Data Marts, Star/Snowflake schemas, and Fact & Dimension tables.
- Strong experience with relational databases like SQL Server, Oracle, Teradata, DB2, or MySQL.
- Solid scripting/programming skills in Python.
- Hands-on experience with cloud platforms like AWS or Azure.
- Knowledge of middleware architecture and enterprise data integration strategies.
- Familiarity with reporting/BI tools such as Tableau and Power BI.
- Ability to write and review high and low-level design documents.
- Excellent communication skills and the ability to work effectively with cross-cultural, distributed teams.

Roles and Responsibilities:
- Design and develop ETL workflows and data integration strategies.
- Collaborate with cross-functional teams to deliver enterprise-grade middleware solutions.
- Coach and mentor junior engineers to support skill development and performance.
- Ensure timely delivery, escalate issues proactively, and manage QA and validation processes.
- Participate in planning, estimations, and recruitment activities.
- Work on multiple projects simultaneously, ensuring quality and consistency in delivery.
- Experience in Sales and Marketing data domains.
- Strong problem-solving abilities with a data-driven mindset.
- Ability to work independently and collaboratively in a fast-paced environment.
- Prior experience in global implementations and managing multi-location teams is a plus.

If you are a passionate Lead/Sr. ETL Engineer looking to make a significant impact in a dynamic environment, we encourage you to apply for this exciting opportunity. Thank you for considering a career with us. We look forward to receiving your application! For further inquiries, please contact us at careers@grazitti.com.

Location: Panchkula, India
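
A minimal sketch of a scripted ETL step of the kind this role develops: extract from a source database, aggregate to the reporting grain, and load a target table. The table names are hypothetical and sqlite3 stands in for the real source and target systems.

# Extract -> transform -> load with pandas; connections are stand-ins.
import sqlite3
import pandas as pd

source = sqlite3.connect(":memory:")
target = sqlite3.connect(":memory:")
source.execute("CREATE TABLE sales (region TEXT, amount REAL)")
source.executemany("INSERT INTO sales VALUES (?, ?)",
                   [("North", 120.0), ("North", 80.0), ("South", 200.0)])

# Extract
df = pd.read_sql("SELECT region, amount FROM sales", source)

# Transform: aggregate to the grain of the reporting mart
summary = df.groupby("region", as_index=False)["amount"].sum()

# Load
summary.to_sql("fact_sales_by_region", target, if_exists="replace", index=False)
print(pd.read_sql("SELECT * FROM fact_sales_by_region", target))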

Posted 1 week ago

Apply

6.0 years

12 - 18 Lacs

Delhi, India

Remote

Skills: Data Modeling, Snowflake, Schemas, Star Schema Design, SQL, Data Integration

Job Title: Senior Data Modeler
Experience Required: 6+ Years
Location: Remote
Employment Type: Full-time / Contract (Remote)
Domain: Data Engineering / Analytics / Data Warehousing

Job Summary
We are seeking an experienced and detail-oriented Data Modeler with a strong background in conceptual, logical, and physical data modeling. The ideal candidate will have in-depth knowledge of Snowflake architecture, data modeling best practices (Star/Snowflake schema), and advanced SQL scripting. You will be responsible for designing robust, scalable data models and working closely with data engineers, analysts, and business stakeholders.

Key Responsibilities
Data Modeling:
Design conceptual, logical, and physical data models.
Create and maintain star and snowflake schemas for analytical reporting.
Perform normalization and denormalization based on performance and reporting requirements.
Work closely with business stakeholders to translate requirements into optimized data structures.
Maintain data model documentation and the data dictionary.
Snowflake Expertise:
Design and implement Snowflake schemas with optimal partitioning and clustering strategies.
Perform performance tuning for complex queries and storage optimization.
Implement Time Travel, Streams, and Tasks for data recovery and pipeline automation.
Manage and secure data using Secure Views and Materialized Views.
Optimize usage of Virtual Warehouses and storage costs.
SQL & Scripting:
Write and maintain advanced SQL queries, including Common Table Expressions (CTEs), window functions, and recursive queries.
Build automation scripts for data loading, transformation, and validation.
Troubleshoot and optimize SQL queries for performance and accuracy.
Support data migration and integration projects.

Required Skills & Qualifications
6+ years of experience in Data Modeling and Data Warehouse design.
Proven experience with the Snowflake platform (min. 2 years).
Strong hands-on experience in Dimensional Modeling (Star/Snowflake schemas).
Expert in SQL and scripting for automation and performance optimization.
Familiarity with tools like Erwin, PowerDesigner, or similar data modeling tools.
Experience working in Agile/Scrum environments.
Strong analytical and problem-solving skills.
Excellent communication and stakeholder engagement skills.

Preferred Skills (Nice to Have)
Experience with ETL/ELT tools like dbt, Informatica, Talend, etc.
Exposure to Cloud Platforms like AWS, Azure, or GCP.
Familiarity with Data Governance and Data Quality frameworks.
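
To illustrate the recursive-query skill listed above, here is a minimal sketch of a recursive CTE walking a small hierarchy. sqlite3 keeps the example self-contained and runnable; the same WITH RECURSIVE pattern is valid Snowflake SQL, and the org table is a hypothetical example.

# Walk a manager/employee hierarchy with a recursive CTE.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE org (emp_id INTEGER, manager_id INTEGER, name TEXT);
    INSERT INTO org VALUES (1, NULL, 'CEO'), (2, 1, 'VP Data'), (3, 2, 'Data Modeler');
""")

hierarchy_sql = """
WITH RECURSIVE chain(emp_id, name, depth) AS (
    SELECT emp_id, name, 0 FROM org WHERE manager_id IS NULL
    UNION ALL
    SELECT o.emp_id, o.name, c.depth + 1
    FROM org o JOIN chain c ON o.manager_id = c.emp_id
)
SELECT name, depth FROM chain ORDER BY depth
"""
for row in conn.execute(hierarchy_sql):
    print(row)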

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

haryana

On-site

You will be responsible for preparing data, developing models, testing them, and deploying them. This includes designing machine learning systems and self-running artificial intelligence (AI) software to automate predictive models. Your role will involve ensuring that algorithms generate accurate user recommendations. Additionally, you will work on turning unstructured data into useful information by auto-tagging images and converting text to speech. Solving complex problems with multi-layered data sets and optimizing existing machine learning libraries and frameworks will be part of your daily tasks. Your responsibilities will also include developing machine learning algorithms to analyze large volumes of historical data for making predictions. You will run tests, perform statistical analysis, interpret the results, and document machine learning processes.

As a Lead Engineer in ML and Data Engineering, you will oversee the technologies, tools, and techniques used within the team. Collaborating with the team on designs based on business requirements is essential. You will ensure that development standards, policies, and procedures are adhered to, and drive change to implement efficient and effective strategies. Working closely with peers in the business to fully understand the business processes and requirements is crucial. Maintenance, debugging, and problem-solving will also be part of your job responsibilities. Ensuring that all software developed within your team meets the specified business requirements, and showing flexibility to respond to the changing needs of the business, are key aspects of the role.

Your technical skills should include 4+ years of experience in Python, API development using Flask/Django, and proficiency in libraries such as Pandas, NumPy, Keras, SciPy, scikit-learn, PyTorch, TensorFlow, and Theano. Hands-on experience in Machine Learning (supervised and unsupervised) and familiarity with data analytics tools and libraries are required. Experience in cloud data pipelines and engineering (Azure/AWS) as well as familiarity with ETL pipelines/Databricks/Apache NiFi/Kafka/Talend will be beneficial. The ability to work independently on projects, good written and verbal communication skills, and a Bachelor's Degree in Computer Science/Engineering/BCA/MCA are essential qualifications for this role. Desirable skills include 2+ years of experience in Java.
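
As a rough sketch of the supervised-learning workflow described above (prepare data, train a model, evaluate it, keep it ready to serve behind a Flask/Django API), the example below uses scikit-learn with a bundled dataset so it runs as-is; the model choice and parameters are illustrative assumptions.

# Train and evaluate a simple classifier with scikit-learn.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

preds = model.predict(X_test)
print(f"hold-out accuracy: {accuracy_score(y_test, preds):.3f}")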

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

karnataka

On-site

As an Associate Architect (IND) at Elevance Health, you will be responsible for designing and implementing scalable, high-performance ETL solutions for data ingestion, transformation, and loading. You will define and maintain data architecture standards, best practices, and governance policies while collaborating with data engineers, analysts, and business stakeholders to understand data requirements. Your role will involve optimizing existing ETL pipelines for performance, reliability, and scalability, and ensuring data quality, consistency, and security across all data flows.

In this position, you will lead the evaluation and selection of ETL tools and technologies, and provide technical leadership and mentorship to junior data engineers. Additionally, you will be expected to document data flows, architecture diagrams, and technical specifications. Experience with Snowflake and Oracle would be beneficial.

To qualify for this role, you should hold a Bachelor's or Master's degree in Computer Science, Information Systems, or a related field, along with at least 8 years of experience in data engineering or ETL development. Strong expertise in ETL tools such as Informatica, Talend, Apache NiFi, SSIS, or similar is essential, as well as proficiency in SQL and experience with relational and NoSQL databases. Experience with cloud platforms like AWS, Azure, or Google Cloud, and familiarity with data modeling, data warehousing, and big data technologies are also required.

The ideal candidate will possess strong problem-solving skills and good business communication skills. You should be committed, accountable, and able to communicate status to stakeholders in a timely manner. Collaboration and leadership skills are vital for this role, as you will be working with global teams.

At Carelon, we promise a world of limitless opportunities to our associates, fostering an environment that promotes growth, well-being, purpose, and a sense of belonging. Our focus on learning and development, innovative culture, comprehensive rewards, and competitive benefits make Carelon an equal opportunity employer dedicated to delivering the best results for our customers. If you require reasonable accommodation during the application process, please request the Reasonable Accommodation Request Form. This is a full-time position based in Bangalore.
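
A minimal sketch of the kind of data-quality gate an ETL pipeline might enforce before loading, using pandas; the column names, the sample batch, and the checks themselves are hypothetical.

# Basic pre-load data-quality checks: nulls and duplicate business keys.
import pandas as pd

def quality_report(df: pd.DataFrame, key: str) -> dict:
    return {
        "row_count": len(df),
        "null_counts": df.isna().sum().to_dict(),
        "duplicate_keys": int(df[key].duplicated().sum()),
    }

batch = pd.DataFrame({
    "member_id": [101, 102, 103, 104],
    "plan_code": ["A", "B", "B", None],
})
report = quality_report(batch, key="member_id")
print(report)

# In a real pipeline, a failed expectation would stop the load.
assert report["duplicate_keys"] == 0, "duplicate member_id values in batch"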

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

karnataka

On-site

As an Infoscion, your primary responsibility will be to interface with the client for quality assurance, issue resolution, and ensuring high customer satisfaction. You will be involved in understanding requirements, creating and reviewing designs, validating architecture, and delivering high levels of service offerings to clients in the technology domain. Your role will also include participating in project estimation, providing inputs for solution delivery, conducting technical risk planning, and performing code reviews and unit test plan reviews. Leading and guiding your teams towards developing optimized, high-quality code deliverables, ensuring continual knowledge management, and adhering to organizational guidelines and processes are key aspects of your job. If you are passionate about building efficient programs and systems, and helping clients navigate their digital transformation journey, this is the perfect opportunity for you.

In addition to the primary responsibilities, you are expected to have knowledge of more than one technology, understand the basics of architecture and design fundamentals, be familiar with testing tools, and have knowledge of agile methodologies. Understanding project life cycle activities on development and maintenance projects, estimation methodologies, quality processes, and the basics of the business domain to comprehend business requirements is essential. Your analytical abilities, strong technical skills, good communication skills, and understanding of technology and domain will be crucial in this role. You should be able to demonstrate a sound understanding of software quality assurance principles, SOLID design principles, and modeling methods, as well as be aware of the latest technologies and trends. Excellent problem-solving, analytical, and debugging skills will be beneficial for excelling in this position.

Preferred Skills:
- Technology: Data Management - Data Integration - Talend

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies