10.0 - 15.0 years
30 - 40 Lacs
Mumbai
Work from Office
Hi, we have an opening for a Senior Manager - SAP BW4HANA + Azure Lead at our Mumbai location.

Job Summary: We are looking for a candidate with 10+ years of experience in SAP BW / BW4HANA to lead our dynamic team as lead data engineer and lead SAP BW / BW4HANA consultant. The role involves working closely with business stakeholders to understand requirements, translating them into technical specifications, and ensuring successful deployment. The candidate will drive the data engineering and SAP BW4HANA initiatives, follow best practices, and design the Azure cloud landscape.

Areas of Responsibility:
Major experience in end-to-end implementation of the SAP data warehouse platform
Major experience in end-to-end implementation of the Azure data and analytics platform
End-to-end setup of the BW4HANA landscape
Hands-on experience in BW application areas such as SD, MM, PP, VC, PM and FICO
Hands-on experience in newer technologies such as HANA and SQL
Strong knowledge of Azure platform development alongside the SAP data warehouse
Knowledge of analytical platforms is good to have
In-depth knowledge of InfoProviders such as CP, ADSO and Open ODS
Knowledge of ETL from SAP transactional systems
Hands-on experience with BW ABAP/AMDP scripts used in routines, transformations and customer exits
Resolving issues in process chains and user reports
Developing queries in BW4HANA and analytics using Analysis for Office
Knowledge of BO report development is good to have
Preparation of technical documents
Monitoring system performance and making adjustments as needed

Educational Qualification: BSc. IT, BSc. CS, BE
Specific Certification: Good to have - SAP BW4HANA, Azure
Experience: Minimum 10 years
Skills (Functional & Behavioural): Good communication skills, analytical ability
Posted 2 days ago
5.0 - 8.0 years
9 - 13 Lacs
Bengaluru
Work from Office
Azure backend expert (ADLS, ADF and Azure SQL DW) - 4+ years, immediate joiners only.
One Azure backend expert (strong SC or Specialist Senior). Should have hands-on experience working with ADLS, ADF and Azure SQL DW, and a minimum of 3 years' experience delivering Azure projects.

Must have:
3 to 8 years of experience designing, developing, and deploying ETL processes on Databricks to support data integration and transformation (see the illustrative sketch below)
Ability to optimize and tune Databricks jobs for performance and scalability
Experience with Scala and/or Python programming languages
Proficiency in SQL for querying and managing data
Expertise in ETL (Extract, Transform, Load) processes
Knowledge of data modeling and data warehousing concepts
Implementation of best practices for data pipelines, including monitoring, logging, and error handling
Excellent problem-solving skills and attention to detail
Excellent written and verbal communication skills
Strong analytical and problem-solving abilities
Experience with version control systems (e.g., Git) to manage and track changes to the codebase
Documentation of technical designs, processes, and procedures related to Databricks development
Staying current with Databricks platform updates and recommending improvements to existing processes

Good to have:
Agile delivery experience
Experience with cloud services, particularly Azure (Azure Databricks), AWS (AWS Glue, EMR), or Google Cloud Platform (GCP)
Knowledge of Agile and Scrum software development methodologies
Understanding of data lake architectures
Familiarity with tools like Apache NiFi, Talend, or Informatica
Skills in designing and implementing data models

Skills: ADF, SQL, ADLS, Azure, Azure SQL DW
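For illustration only (not part of the posting): a minimal PySpark sketch of the kind of Databricks ETL step described above, assuming an ADLS-backed workspace; the storage paths and column names are hypothetical.

```python
# Illustrative PySpark sketch: read raw sales data from ADLS, apply a simple
# transformation, and write a curated table. Paths and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sales_etl_example").getOrCreate()

raw = (spark.read
       .option("header", "true")
       .csv("abfss://raw@examplelake.dfs.core.windows.net/sales/"))  # hypothetical ADLS path

curated = (raw
           .withColumn("amount", F.col("amount").cast("double"))
           .filter(F.col("amount").isNotNull())            # basic error handling: drop bad rows
           .withColumn("load_date", F.current_date()))

(curated.write
 .mode("overwrite")
 .partitionBy("load_date")                                 # partitioning supports performance tuning
 .parquet("abfss://curated@examplelake.dfs.core.windows.net/sales/"))
```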
Posted 2 days ago
2.0 - 5.0 years
2 - 6 Lacs
Gurugram
Work from Office
An ETL Tester (4+ years required) is responsible for testing and validating the accuracy and completeness of data being extracted, transformed, and loaded (ETL) from various sources into the target systems; these target systems can be on cloud or on premise. They work closely with ETL developers, data engineers, data analysts, and other stakeholders to ensure the quality of data and the reliability of the ETL processes, and must understand cloud architecture and design test strategies for data moving in and out of cloud systems.

Roles and responsibilities:
Strong in data warehouse testing - ETL and BI
Strong database knowledge: Oracle / SQL Server / Teradata / Snowflake
Strong SQL skills with experience in writing complex data validation SQLs (see the sketch below)
Experience working in an Agile environment
Experience creating test strategies, release-level test plans and test cases
Develop and maintain test data for ETL testing
Design and execute test cases for ETL processes and data integration
Good knowledge of Rally, Jira and HP ALM
Experience in automation testing and data validation using Python
Document test results and communicate with stakeholders on the status of ETL testing

Skills: Rally, Agile environment, automation testing, data validation, Jira, ETL and BI, HP ALM, ETL testing, test strategy, data warehouse, data integration testing, test case creation, Python, Oracle / SQL Server / Teradata / Snowflake, SQL, data warehouse testing, database knowledge, test data maintenance
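For illustration only (not part of the posting): a minimal sketch of the kind of SQL-based data validation described above, assuming a DB-API connection to any of the listed databases; the table and column names are hypothetical.

```python
# Minimal ETL validation sketch: row-count and NULL checks between a source
# staging table and a warehouse target. Table names are hypothetical.

def row_count(conn, table):
    """Return the row count of a table."""
    cur = conn.cursor()
    cur.execute(f"SELECT COUNT(*) FROM {table}")
    return cur.fetchone()[0]

def null_count(conn, table, column):
    """Return the number of NULLs in a column (completeness check)."""
    cur = conn.cursor()
    cur.execute(f"SELECT COUNT(*) FROM {table} WHERE {column} IS NULL")
    return cur.fetchone()[0]

def validate_load(conn, source_table, target_table, key_column):
    """Compare source vs. target counts and check the key column for NULLs."""
    results = {
        "source_rows": row_count(conn, source_table),
        "target_rows": row_count(conn, target_table),
        "target_null_keys": null_count(conn, target_table, key_column),
    }
    results["counts_match"] = results["source_rows"] == results["target_rows"]
    return results

# Example usage (connection creation omitted; it depends on the database driver):
# print(validate_load(conn, "stg_orders", "dw_orders", "order_id"))
```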
Posted 2 days ago
2.0 - 5.0 years
2 - 6 Lacs
Hyderabad
Work from Office
An ETL Tester (4+ years required) is responsible for testing and validating the accuracy and completeness of data being extracted, transformed, and loaded (ETL) from various sources into the target systems; these target systems can be on cloud or on premise. They work closely with ETL developers, data engineers, data analysts, and other stakeholders to ensure the quality of data and the reliability of the ETL processes, and must understand cloud architecture and design test strategies for data moving in and out of cloud systems.

Roles and responsibilities:
Strong in data warehouse testing - ETL and BI
Strong database knowledge: Oracle / SQL Server / Teradata / Snowflake
Strong SQL skills with experience in writing complex data validation SQLs
Experience working in an Agile environment
Experience creating test strategies, release-level test plans and test cases
Develop and maintain test data for ETL testing
Design and execute test cases for ETL processes and data integration
Good knowledge of Rally, Jira and HP ALM
Experience in automation testing and data validation using Python
Document test results and communicate with stakeholders on the status of ETL testing
Posted 2 days ago
5.0 - 9.0 years
4 - 7 Lacs
Gurugram
Work from Office
Primary skills:
SQL (advanced level)
SSAS (SQL Server Analysis Services) - multidimensional and/or tabular model
MDX / DAX (strong querying capabilities)
Data modeling (star schema, snowflake schema)

Secondary skills:
ETL processes (SSIS or similar tools)
Power BI / reporting tools
Azure Data Services (optional but a plus)

Role & responsibilities:
Design, develop, and deploy SSAS models (both tabular and multidimensional).
Write and optimize MDX/DAX queries for complex business logic.
Work closely with business analysts and stakeholders to translate requirements into robust data models.
Design and implement ETL pipelines for data integration.
Build reporting datasets and support BI teams in developing insightful dashboards (Power BI preferred).
Optimize existing cubes and data models for performance and scalability.
Ensure data quality, consistency, and governance standards.

Top skill set:
SSAS (tabular + multidimensional modeling)
Strong MDX and/or DAX query writing
SQL - advanced level for data extraction and transformations
Data modeling concepts (fact/dimension, slowly changing dimensions, etc.; see the toy example below)
ETL tools (SSIS preferred)
Power BI or similar BI tools
Understanding of OLAP & OLTP concepts
Performance tuning (SSAS/SQL)

Skills: analytical skills, ETL processes (SSIS or similar tools), collaboration, MDX, Power BI / reporting tools, SQL (advanced level), DAX, SSAS (multidimensional and tabular model), ETL, data modeling (star schema, snowflake schema), communication, Azure Data Services, data visualization
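For illustration only (not part of the posting): a toy example of the fact/dimension (star schema) idea referenced above, built with pandas; all data and column names are hypothetical.

```python
# Toy star-schema illustration: a fact table joined to two dimension tables to
# answer "sales amount by product category and city". All data is made up.
import pandas as pd

fact_sales = pd.DataFrame({
    "product_id": [1, 2, 1, 3],
    "store_id":   [10, 10, 20, 20],
    "amount":     [250.0, 99.0, 175.0, 60.0],
})
dim_product = pd.DataFrame({
    "product_id": [1, 2, 3],
    "category":   ["Electronics", "Grocery", "Apparel"],
})
dim_store = pd.DataFrame({
    "store_id": [10, 20],
    "city":     ["Pune", "Mumbai"],
})

# Star-schema query: join the fact table to its dimensions, then aggregate.
report = (fact_sales
          .merge(dim_product, on="product_id")
          .merge(dim_store, on="store_id")
          .groupby(["category", "city"], as_index=False)["amount"].sum())
print(report)
```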
Posted 2 days ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

RCE_Risk Data Engineer/Leads

Job Description:
Our Technology team builds innovative digital solutions rapidly and at scale to deliver the next generation of financial and non-financial services across the globe. The position is a senior technical, hands-on delivery role, requiring knowledge of data engineering, cloud infrastructure and platform engineering, platform operations and production support using ground-breaking cloud and big data technologies. The ideal candidate, with 3-6 years of relevant experience, will possess strong technical skills, an eagerness to learn, a keen interest in the three key pillars that our team supports (Financial Crime, Financial Risk and Compliance technology transformation), the ability to work collaboratively in a fast-paced environment, and an aptitude for picking up new tools and techniques on the job, building on existing skillsets as a foundation.

In this role you will:
Ingest and provision raw datasets, enriched tables, and/or curated, re-usable data assets to enable a variety of use cases.
Drive improvements in the reliability and frequency of data ingestion, including increasing real-time coverage.
Support and enhance data ingestion infrastructure and pipelines.
Design and implement data pipelines that collect data from disparate sources across the enterprise and from external sources, and deliver it to our data platform.
Build Extract, Transform and Load (ETL) workflows, using both advanced data manipulation tools and programmatic data manipulation throughout our data flows, ensuring data is available at each stage in the data flow, and in the form needed for each system, service and customer along that flow.
Identify and onboard data sources using existing schemas and, where required, conduct exploratory data analysis to investigate and provide solutions.
Evaluate modern technologies, frameworks, and tools in the data engineering space to drive innovation and improve data processing capabilities.

Core/must-have skills:
3-8 years of expertise in designing and implementing data warehouses and data lakes using the Oracle tech stack (DB: PL/SQL).
At least 4 years of experience in database design and dimensional modelling using Oracle PL/SQL.
Experience working with advanced PL/SQL concepts (materialized views, global temporary tables, partitions, PL/SQL packages).
Experience in SQL tuning, tuning of PL/SQL solutions, and physical optimization of databases.
Experience in writing and tuning SQL scripts including tables, views, indexes and complex PL/SQL objects (procedures, functions, triggers and packages) in Oracle Database 11g or higher.
Experience in developing ETL processes - ETL control tables, error logging, auditing, data quality, etc. (see the sketch below); should be able to implement reusability, parameterization, workflow design, etc.
Advanced working SQL knowledge and experience working with relational and NoSQL databases, as well as working familiarity with a variety of databases (Oracle, SQL Server, Neo4j).
Strong analytical and critical thinking skills, with the ability to identify and resolve issues in data pipelines and systems.
Strong understanding of ETL methodologies and best practices.
Ability to collaborate with cross-functional teams to ensure successful implementation of solutions.
Experience with OLAP and OLTP databases, and data structuring/modelling with an understanding of key data points.

Good to have:
Experience working in Financial Crime, Financial Risk and Compliance technology transformation domains.
Certification on any cloud tech stack.
Experience building and optimizing data pipelines on AWS Glue or Oracle Cloud.
Design and development of systems for the maintenance of the Azure/AWS lakehouse, ETL processes, business intelligence and data ingestion pipelines for AI/ML use cases.
Experience with data visualization (Power BI/Tableau) and SSRS.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
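For illustration only (not part of the posting): a minimal sketch of an ETL step with control-table logging and error auditing of the kind described above, assuming the python-oracledb driver; the DSN, credentials and table names are hypothetical.

```python
# Minimal sketch of an Oracle ETL step with control-table logging and error
# auditing. The connection details and table names are hypothetical.
import oracledb

conn = oracledb.connect(user="etl_user", password="***", dsn="dbhost/orclpdb")  # hypothetical DSN
cur = conn.cursor()

try:
    # Load step: insert new rows from a staging table into the target table.
    cur.execute("""
        INSERT INTO dw_orders (order_id, customer_id, amount, load_ts)
        SELECT order_id, customer_id, amount, SYSTIMESTAMP
        FROM stg_orders s
        WHERE NOT EXISTS (SELECT 1 FROM dw_orders d WHERE d.order_id = s.order_id)
    """)
    # Audit: record the run and row count in an ETL control table.
    cur.execute(
        "INSERT INTO etl_control_log (job_name, rows_loaded, status, run_ts) "
        "VALUES (:1, :2, 'SUCCESS', SYSTIMESTAMP)",
        ["load_dw_orders", cur.rowcount],
    )
    conn.commit()
except Exception as exc:
    conn.rollback()
    # Error logging: capture the failure for auditing before re-raising.
    cur.execute(
        "INSERT INTO etl_control_log (job_name, rows_loaded, status, error_msg, run_ts) "
        "VALUES (:1, 0, 'FAILED', :2, SYSTIMESTAMP)",
        ["load_dw_orders", str(exc)[:4000]],
    )
    conn.commit()
    raise
```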
Posted 2 days ago
9.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Description:
We are seeking a highly experienced and innovative Senior Data Engineer with a strong background in hybrid cloud data integration, pipeline orchestration, and AI-driven data modeling. This role is responsible for designing, building, and optimizing robust, scalable, and production-ready data pipelines across both AWS and Azure platforms, supporting modern data architectures such as CEDM and Data Vault 2.0.

Responsibilities:
Design and develop hybrid ETL/ELT pipelines using AWS Glue and Azure Data Factory (ADF).
Process files from AWS S3 and Azure Data Lake Gen2, including schema validation and data profiling (see the sketch below).
Implement event-based orchestration using AWS Step Functions and Apache Airflow (Astronomer).
Develop and maintain bronze → silver → gold data layers using DBT or Coalesce.
Create scalable ingestion workflows using Airbyte, AWS Transfer Family, and Rivery.
Integrate with metadata and lineage tools like Unity Catalog and OpenMetadata.
Build reusable components for schema enforcement, EDA, and alerting (e.g., MS Teams).
Work closely with QA teams to integrate test automation and ensure data quality.
Collaborate with cross-functional teams including data scientists and business stakeholders to align solutions with AI/ML use cases.
Document architectures, pipelines, and workflows for internal stakeholders.

Requirements

Essential skills (job):
Experience with cloud platforms: AWS (Glue, Step Functions, Lambda, S3, CloudWatch, SNS, Transfer Family) and Azure (ADF, ADLS Gen2, Azure Functions, Event Grid).
Skilled in transformation and ELT tools: Databricks (PySpark), DBT, Coalesce, and Python.
Proficient in data ingestion using Airbyte, Rivery, SFTP/Excel files, and SQL Server extracts.
Strong understanding of data modeling techniques including CEDM, Data Vault 2.0, and dimensional modeling.
Hands-on experience with orchestration tools such as AWS Step Functions, Airflow (Astronomer), and ADF triggers.
Expertise in monitoring and logging with CloudWatch, AWS Glue metrics, MS Teams alerts, and Azure Data Explorer (ADX).
Familiar with data governance and lineage tools: Unity Catalog, OpenMetadata, and schema drift detection.
Proficient in version control and CI/CD using GitHub, Azure DevOps, CloudFormation, Terraform, and ARM templates.
Experienced in data validation and exploratory data analysis with pandas profiling, AWS Glue Data Quality, and Great Expectations.

Essential skills (personal):
Excellent communication and interpersonal skills, with the ability to engage with teams.
Strong problem-solving, decision-making, and conflict-resolution abilities.
Proven ability to work independently and lead cross-functional teams.
Ability to work in a fast-paced, dynamic environment and handle sensitive issues with discretion and professionalism.
Ability to maintain confidentiality and handle sensitive information with attention to detail and discretion.
The candidate must have strong work ethics and trustworthiness.
Must be highly collaborative and team-oriented with a commitment to excellence.

Preferred skills (job):
Proficiency in SQL and at least one programming language (e.g., Python, Scala).
Experience with cloud data platforms (e.g., AWS, Azure, GCP) and their data and AI services.
Knowledge of ETL tools and frameworks (e.g., Apache NiFi, Talend, Informatica).
Deep understanding of AI/Generative AI concepts and frameworks (e.g., TensorFlow, PyTorch, Hugging Face, OpenAI APIs).
Experience with data modeling, data structures, and database design.
Proficiency with data warehousing solutions (e.g., Redshift, BigQuery, Snowflake).
Hands-on experience with big data technologies (e.g., Hadoop, Spark, Kafka).

Preferred skills (personal):
Demonstrates proactive thinking.
Strong interpersonal relations, expert business acumen and mentoring skills.
Ability to work under stringent deadlines and demanding client conditions.
Ability to work under pressure to achieve multiple daily deadlines for client deliverables with a mature approach.

Other relevant information:
Bachelor's in Engineering with specialization in Computer Science, Artificial Intelligence, Information Technology or a related field.
9+ years of experience in data engineering and data architecture.

LeewayHertz is an equal opportunity employer and does not discriminate based on race, color, religion, sex, age, disability, national origin, sexual orientation, gender identity, or any other protected status. We encourage a diverse range of applicants.
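For illustration only (not part of the posting): a minimal sketch of a schema-validation check of the kind described above for files landing in the bronze layer before promotion to silver; the expected schema, file name and alerting behaviour are hypothetical.

```python
# Minimal sketch of a schema-validation gate for an incoming file before it is
# promoted from the bronze to the silver layer. Columns and path are hypothetical.
import pandas as pd

EXPECTED_SCHEMA = {          # hypothetical contract for the incoming dataset
    "order_id": "int64",
    "customer_id": "int64",
    "amount": "float64",
    "order_date": "datetime64[ns]",
}

def validate_schema(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable schema violations (empty list = pass)."""
    problems = []
    for column, expected_dtype in EXPECTED_SCHEMA.items():
        if column not in df.columns:
            problems.append(f"missing column: {column}")
        elif str(df[column].dtype) != expected_dtype:
            problems.append(
                f"{column}: expected {expected_dtype}, got {df[column].dtype}")
    return problems

bronze = pd.read_csv("orders_2024-06-01.csv", parse_dates=["order_date"])  # hypothetical file
issues = validate_schema(bronze)
if issues:
    # In a real pipeline this could trigger an alert (e.g., an MS Teams webhook).
    raise ValueError("Schema drift detected: " + "; ".join(issues))
```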
Posted 2 days ago
5.0 - 10.0 years
16 - 25 Lacs
Pune, Bengaluru
Hybrid
POSITION SUMMARY:
Responsible for developing and maintaining the financial analytics datamart for the organization, with the opportunity to:
work directly with the Finance leadership team on key strategic initiatives
be involved with financial planning and analysis operations
work with skilled professionals from diverse backgrounds while supporting multiple stakeholders

The SQL Developer, Financial Planning will report to the Sr Manager, FP&A and will be dedicated to developing and ensuring alignment of all models and analytics with direct financial impact on the organization. In addition to managing the development and oversight of the analytics data mart within Finance, this role will partner with teams outside of Finance that maintain responsibility for development and maintenance of analytics systems affecting their business unit or department.

JOB FUNCTION AND RESPONSIBILITIES:
Develop and maintain the analytics datamart to accommodate the needs of the Finance group.
Work with a cross-section of leadership throughout the organization to centralize data requirements for analytics and models forecasting originations volume and margins, capacity and staffing requirements, customer retention (recapture), portfolio performance and delinquency, prepayment, and advance forecasts.
Detailed duties will include developing and maintaining automated DataMart processes, creating and validating data used for analytics, and ensuring data and metric accuracy.
Contribute to multiple projects through all phases in a timely and effective manner through collaboration with various departments.
Identify and implement process improvements and efficiencies.
Provide the highest level of internal and external customer service.
Participate in gathering and understanding requirements, and document existing and new processes.

QUALIFICATION:
Minimum of 5 years of relevant experience in SQL (T-SQL/PL SQL); understanding of data lineage and governance and experience developing SQL stored procedures and ETL solutions is required (see the sketch below).
In-depth understanding of RDBMS concepts.
Proficiency in developing SSIS / ETL packages and working with MS SQL Server.
Knowledge of Hyperion is a plus.
Bachelor's degree in engineering or technology, or equivalent experience, required.
Ability to work with teams, especially across locations and time zones, in execution of projects and identification/resolution of deficiencies or opportunities for improvement.
Strong analytical ability and understanding of data lineage and process dependencies.
Strong written and verbal communication skills.
Advanced proficiency with SQL and managing SQL processes and data structures.
Competencies in problem solving, creativity, and organizational skills while showing good judgment and execution within a fast-paced environment.

WORK SCHEDULE OR TRAVEL REQUIREMENTS: Late shift
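For illustration only (not part of the posting): a minimal sketch of refreshing a datamart table by calling a T-SQL stored procedure from Python, assuming the pyodbc driver; the server, database, procedure and table names are hypothetical.

```python
# Minimal sketch of a datamart refresh via a T-SQL stored procedure, followed by
# a basic row-count validation. All names and the connection string are hypothetical.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=finance-sql.example.com;DATABASE=FinanceMart;Trusted_Connection=yes;"
)
cur = conn.cursor()

# Execute the refresh procedure for a given reporting month (hypothetical proc).
cur.execute("{CALL dbo.usp_refresh_originations_forecast (?)}", "2024-06")
conn.commit()

# Basic validation after the refresh: confirm rows were produced for the month.
cur.execute(
    "SELECT COUNT(*) FROM dbo.fact_originations_forecast WHERE report_month = ?",
    "2024-06",
)
rows = cur.fetchone()[0]
print(f"Rows loaded for 2024-06: {rows}")
conn.close()
```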
Posted 2 days ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
We are committed to simplifying HR processes through digital transformation and simplification. We believe in harnessing technology to enhance the employee experience and drive organizational success. As an HR Digitization and Simplification Specialist, you will play a pivotal role in shaping our digital HR landscape and streamlining operations for maximum efficiency and effectiveness.

Key Responsibilities:
Digital HR strategy development and implementation: Collaborate with cross-functional teams to develop and execute a comprehensive HR digitization and simplification strategy aligned with organizational goals. Identify opportunities to leverage technology for process optimization, automation, and enhanced data analytics.
HR systems evaluation and integration: Conduct thorough assessments of existing HR systems, tools, and platforms. Lead efforts to integrate and optimize HRIS, ATS, LMS, and other relevant software solutions. Ensure seamless data flow between systems to support unified HR operations.
Process streamlining and standardization: Analyze current HR processes and identify areas for simplification and standardization. Develop and implement standardized workflows, ensuring consistency across the organization. Continuously monitor and refine processes to drive operational efficiency.
Change management and training: Act as a change agent to promote a digital mindset within the HR team and across the organization. Develop and deliver training programs to upskill HR staff on new tools, systems, and processes.
Compliance and security: Ensure HR digitization efforts comply with relevant data protection laws and regulations. Implement security measures to safeguard sensitive HR information.
Stakeholder engagement and communication: Collaborate with HR leadership to effectively communicate the benefits and progress of digitization initiatives to stakeholders. Foster a culture of transparency and open communication regarding HR digitization efforts.

Qualifications:
Degree, preferably in HR, Business, Engineering or other analytical and/or technology-related fields, with high academic achievement required; advanced degree preferred.
Preferred: Proficiency in HR technology platforms such as Oracle HCM Cloud, Workday, Taleo, iCIMS, ServiceNow; advanced knowledge of automation tools like Power Automate, Python, R, and other programming languages or tools necessary to implement digitization, automation and simplification.
Problem-solving, communication, and interpersonal ability to anticipate, identify, and solve critical problems.
Demonstrates keen attention to detail and rigorous data management practices.
Knowledge of large, complex data sets.
Knowledge of ETL (Access/Excel templates and Tableau Prep), databases (MS Access), and reporting & analytics (Tableau) will be a plus.
Management and business development of existing and new solutions.
Must maintain confidentiality of highly sensitive information.

Schedule: Full-time
Req: 009GNG
Posted 2 days ago
7.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Data Engineer
Location: Bangalore

About us:
FICO, originally known as Fair Isaac Corporation, is a leading analytics and decision management company that empowers businesses and individuals around the world with data-driven insights. Known for pioneering the FICO® Score, a standard in consumer credit risk assessment, FICO combines advanced analytics, machine learning, and sophisticated algorithms to drive smarter, faster decisions across industries. From financial services to retail, insurance, and healthcare, FICO's innovative solutions help organizations make precise decisions, reduce risk, and enhance customer experiences. With a strong commitment to ethical use of AI and data, FICO is dedicated to improving financial access and inclusivity, fostering trust, and driving growth for a digitally evolving world.

The opportunity:
As a Data Engineer on our newly formed Generative AI team, you will work at the frontier of language model applications, developing novel solutions for various areas of the FICO platform, including fraud investigation, decision automation, process flow automation, and optimization. You will play a critical role in the implementation of data warehousing and data lake solutions. You will have the opportunity to make a meaningful impact on FICO's platform by infusing it with next-generation AI capabilities. You'll work with a dedicated team, leveraging your skills in the data engineering area to build solutions and drive innovation forward.

What you'll contribute:
Perform hands-on analysis, technical design, solution architecture, prototyping, proofs-of-concept, development, unit and integration testing, debugging, documentation, deployment/migration, updates, maintenance, and support on data platform technologies.
Design, develop, and maintain robust, scalable data pipelines for batch and real-time processing using modern tools like Apache Spark, Kafka, Airflow, or similar.
Build efficient ETL/ELT workflows to ingest, clean, and transform structured and unstructured data from various sources into a well-organized data lake or warehouse.
Manage and optimize cloud-based data infrastructure on platforms such as AWS (e.g., S3, Glue, Redshift, RDS) or Snowflake.
Collaborate with cross-functional teams to understand data needs and deliver reliable datasets that support analytics, reporting, and machine learning use cases.
Implement and monitor data quality, validation, and profiling processes to ensure the accuracy and reliability of downstream data.
Design and enforce data models, schemas, and partitioning strategies that support performance and cost-efficiency.
Develop and maintain data catalogs and documentation, ensuring data assets are discoverable and governed.
Support DevOps/DataOps practices by automating deployments, tests, and monitoring for data pipelines using CI/CD tools.
Proactively identify data-related issues and drive continuous improvements in pipeline reliability and scalability.
Contribute to data security, privacy, and compliance efforts, implementing role-based access controls and encryption best practices.
Design scalable architectures that support FICO's analytics and decisioning solutions.
Partner with Data Science, Analytics, and DevOps teams to align architecture with business needs.

What we're seeking:
7+ years of hands-on experience as a Data Engineer working on production-grade systems.
Proficiency in programming languages such as Python or Scala for data processing.
Strong SQL skills, including complex joins, window functions, and query optimization techniques.
Experience with cloud platforms such as AWS, GCP, or Azure, and relevant services (e.g., S3, Glue, BigQuery, Azure Data Lake).
Familiarity with data orchestration tools like Airflow, Dagster, or Prefect (see the sketch below).
Hands-on experience with data warehousing technologies like Redshift, Snowflake, BigQuery, or Delta Lake.
Understanding of stream processing frameworks such as Apache Kafka, Kinesis, or Flink is a plus.
Knowledge of data modeling concepts (e.g., star schema, normalization, denormalization).
Comfortable working in version-controlled environments using Git and managing workflows with GitHub Actions or similar tools.
Strong analytical and problem-solving skills, with the ability to debug and resolve pipeline and performance issues.
Excellent written and verbal communication skills, with an ability to collaborate across engineering, analytics, and business teams.
Demonstrated technical curiosity and passion for learning, with the ability to quickly adapt to new technologies, development platforms, and programming languages as needed.
Bachelor's in computer science or a related field.
Exposure to MLOps pipelines (MLflow, Kubeflow, feature stores) is a plus but not mandatory.
Engineers with certifications will be preferred.

Our offer to you:
An inclusive culture strongly reflecting our core values: Act Like an Owner, Delight Our Customers and Earn the Respect of Others.
The opportunity to make an impact and develop professionally by leveraging your unique strengths and participating in valuable learning experiences.
Highly competitive compensation, benefits and rewards programs that encourage you to bring your best every day and be recognized for doing so.
An engaging, people-first work environment offering work/life balance, employee resource groups, and social events to promote interaction and camaraderie.
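For illustration only (not part of the posting): a minimal Apache Airflow sketch of the batch orchestration referenced above, assuming a recent Airflow 2.x installation; the DAG, task names and schedule are hypothetical placeholders.

```python
# Minimal Airflow sketch: a daily ingest -> transform pipeline of the kind
# described above. DAG and task names are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest_raw_data(**context):
    # Placeholder: pull a daily extract from a source system into the data lake.
    print("ingesting raw data for", context["ds"])

def transform_to_warehouse(**context):
    # Placeholder: clean and load the day's data into the warehouse.
    print("transforming data for", context["ds"])

with DAG(
    dag_id="daily_sales_pipeline",     # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="ingest_raw_data", python_callable=ingest_raw_data)
    transform = PythonOperator(task_id="transform_to_warehouse", python_callable=transform_to_warehouse)

    ingest >> transform                # run the transform only after ingest succeeds
```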
Posted 2 days ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description:
We have an exciting and rewarding opportunity for you to take your software engineering career to the next level. As a Software Engineer III at JPMorgan Chase within the Consumer and Community Banking team, you serve as a seasoned member of an agile team to design and deliver trusted market-leading technology products in a secure, stable, and scalable way. You are responsible for carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm's business objectives.

Job responsibilities:
Executes software solutions, design, development, and technical troubleshooting with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems.
Creates secure and high-quality production code and maintains algorithms that run synchronously with appropriate systems.
Produces architecture and design artifacts for complex applications while being accountable for ensuring design constraints are met by software code development.
Gathers, analyzes, synthesizes, and develops visualizations and reporting from large, diverse data sets in service of continuous improvement of software applications and systems.
Proactively identifies hidden problems and patterns in data and uses these insights to drive improvements to coding hygiene and system architecture.
Contributes to software engineering communities of practice and events that explore new and emerging technologies.
Adds to the team culture of diversity, equity, inclusion, and respect.

Required qualifications, capabilities, and skills:
Formal training or certification on software engineering concepts and 3+ years of applied experience.
Hands-on practical experience in system design, application development, testing, and operational stability.
Proficient in coding in one or more languages.
Experience in developing, debugging, and maintaining code in a large corporate environment with one or more modern programming languages and database querying languages.
Overall knowledge of the Software Development Life Cycle.
Experience with ETL.
Solid understanding of agile methodologies such as CI/CD, application resiliency, and security, plus AWS and Terraform.
Demonstrated knowledge of software applications and technical processes within a technical discipline (e.g., cloud, artificial intelligence, machine learning, mobile, etc.).

Preferred qualifications, capabilities, and skills:
Familiarity with Python.
Good understanding of data lake concepts.
Familiarity with modern front-end technologies.
Exposure to cloud technologies.
Posted 2 days ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description:
We have an exciting and rewarding opportunity for you to take your software engineering career to the next level. As a Software Engineer III at JPMorgan Chase within Consumer and Community Banking - Data Technology, you serve as a seasoned member of an agile team to design and deliver trusted market-leading technology products in a secure, stable, and scalable way. You are responsible for carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm's business objectives.

Job responsibilities:
Executes creative software solutions, design, development, and technical troubleshooting with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems.
Develops secure, high-quality production code, and reviews and debugs code written by others.
Identifies opportunities to eliminate or automate remediation of recurring issues to improve the overall operational stability of software applications and systems.
Leads evaluation sessions with external vendors, startups, and internal teams to drive outcomes-oriented probing of architectural designs, technical credentials, and applicability for use within existing systems and information architecture.
Leads communities of practice across Software Engineering to drive awareness and use of new and leading-edge technologies.

Required qualifications, capabilities, and skills:
Formal training or certification on software engineering concepts and 3+ years of applied experience.
Experience in software engineering, including hands-on expertise in ETL/data pipelines and data lake platforms like Teradata and Snowflake.
Hands-on practical experience delivering system design, application development, testing, and operational stability.
Proficiency in AWS services, especially Aurora Postgres RDS.
Proficiency in automation and continuous delivery methods.
Proficient in all aspects of the Software Development Life Cycle.
Advanced understanding of agile methodologies such as CI/CD, application resiliency, and security.
Demonstrated proficiency in software applications and technical processes within a technical discipline (e.g., cloud, artificial intelligence, machine learning, mobile, etc.).
In-depth knowledge of the financial services industry and its IT systems.

Preferred qualifications, capabilities, and skills:
Experience in re-engineering and migrating on-premises data solutions to and for the cloud.
Experience in Infrastructure as Code (Terraform) for cloud-based data infrastructure.
Experience in building on emerging cloud serverless managed services to minimize or eliminate the physical/virtual server footprint.
Advanced in Java, plus Python (nice to have).
Posted 2 days ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Data Engineer

About us:
FICO, originally known as Fair Isaac Corporation, is a leading analytics and decision management company that empowers businesses and individuals around the world with data-driven insights. Known for pioneering the FICO® Score, a standard in consumer credit risk assessment, FICO combines advanced analytics, machine learning, and sophisticated algorithms to drive smarter, faster decisions across industries. From financial services to retail, insurance, and healthcare, FICO's innovative solutions help organizations make precise decisions, reduce risk, and enhance customer experiences. With a strong commitment to ethical use of AI and data, FICO is dedicated to improving financial access and inclusivity, fostering trust, and driving growth for a digitally evolving world.

The opportunity:
As a Data Engineer on our newly formed Generative AI team, you will work at the frontier of language model applications, developing novel solutions for various areas of the FICO platform, including fraud investigation, decision automation, process flow automation, and optimization. You will play a critical role in the implementation of data warehousing and data lake solutions. You will have the opportunity to make a meaningful impact on FICO's platform by infusing it with next-generation AI capabilities. You'll work with a dedicated team, leveraging your skills in the data engineering area to build solutions and drive innovation forward.

What you'll contribute:
Perform hands-on analysis, technical design, solution architecture, prototyping, proofs-of-concept, development, unit and integration testing, debugging, documentation, deployment/migration, updates, maintenance, and support on data platform technologies.
Design, develop, and maintain robust, scalable data pipelines for batch and real-time processing using modern tools like Apache Spark, Kafka, Airflow, or similar.
Build efficient ETL/ELT workflows to ingest, clean, and transform structured and unstructured data from various sources into a well-organized data lake or warehouse.
Manage and optimize cloud-based data infrastructure on platforms such as AWS (e.g., S3, Glue, Redshift, RDS) or Snowflake.
Collaborate with cross-functional teams to understand data needs and deliver reliable datasets that support analytics, reporting, and machine learning use cases.
Implement and monitor data quality, validation, and profiling processes to ensure the accuracy and reliability of downstream data.
Design and enforce data models, schemas, and partitioning strategies that support performance and cost-efficiency.
Develop and maintain data catalogs and documentation, ensuring data assets are discoverable and governed.
Support DevOps/DataOps practices by automating deployments, tests, and monitoring for data pipelines using CI/CD tools.
Proactively identify data-related issues and drive continuous improvements in pipeline reliability and scalability.
Contribute to data security, privacy, and compliance efforts, implementing role-based access controls and encryption best practices.
Design scalable architectures that support FICO's analytics and decisioning solutions.
Partner with Data Science, Analytics, and DevOps teams to align architecture with business needs.

What we're seeking:
7+ years of hands-on experience as a Data Engineer working on production-grade systems.
Proficiency in programming languages such as Python or Scala for data processing.
Strong SQL skills, including complex joins, window functions, and query optimization techniques.
Experience with cloud platforms such as AWS, GCP, or Azure, and relevant services (e.g., S3, Glue, BigQuery, Azure Data Lake).
Familiarity with data orchestration tools like Airflow, Dagster, or Prefect.
Hands-on experience with data warehousing technologies like Redshift, Snowflake, BigQuery, or Delta Lake.
Understanding of stream processing frameworks such as Apache Kafka, Kinesis, or Flink is a plus.
Knowledge of data modeling concepts (e.g., star schema, normalization, denormalization).
Comfortable working in version-controlled environments using Git and managing workflows with GitHub Actions or similar tools.
Strong analytical and problem-solving skills, with the ability to debug and resolve pipeline and performance issues.
Excellent written and verbal communication skills, with an ability to collaborate across engineering, analytics, and business teams.
Demonstrated technical curiosity and passion for learning, with the ability to quickly adapt to new technologies, development platforms, and programming languages as needed.
Bachelor's in computer science or a related field.
Exposure to MLOps pipelines (MLflow, Kubeflow, feature stores) is a plus but not mandatory.
Engineers with certifications will be preferred.

Our offer to you:
An inclusive culture strongly reflecting our core values: Act Like an Owner, Delight Our Customers and Earn the Respect of Others.
The opportunity to make an impact and develop professionally by leveraging your unique strengths and participating in valuable learning experiences.
Highly competitive compensation, benefits and rewards programs that encourage you to bring your best every day and be recognized for doing so.
An engaging, people-first work environment offering work/life balance, employee resource groups, and social events to promote interaction and camaraderie.
Posted 2 days ago
12.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Role Overview:
As the Value Cluster Lead for Stores (Techno Functional Stores) in our Retail domain, you will own the strategy, growth, and delivery excellence for a store cluster including analytics and operations solutions. You’ll bridge business and technology—leading pre-sales, solution architecture, and program delivery—while building a high-performing team of data engineers, analysts, and consultants.

Primary Skill: SIM – Store Inventory Management & Business Process Retail & Distribution
Location: Hyderabad
Experience: 12+ years

Key Responsibilities:
• Cluster strategy & roadmap: Define the stores analytics value cluster vision, identifying use cases across inventory optimization, footfall analysis, planogram compliance, and workforce scheduling using various technology solutions.
• Techno-functional leadership: Translate store operations and merchandising requirements into technical designs; oversee data modeling, pipeline orchestration (Spark Structured Streaming), and performance tuning.
• Pre-sales & proposals: Collaborate with Sales and Solutioning teams to craft RFP responses, architecture diagrams, TCO/ROI analyses, and executive presentations tailored to store-centric use cases.
• Delivery oversight: Manage multiple retail store engagements—govern project health, risks, budgets, and timelines; ensure agile delivery, CI/CD for notebooks, test automation, and security best practices.
• Capability building: Recruit, mentor, and upskill a global team; establish Centers of Excellence for various store solutions.
• Stakeholder management: Act as the primary advisor to store operations leaders, CIOs, and merchandising heads; drive governance forums, steering committee updates, and change management.
• Performance & metrics: Define and track cluster KPIs; continuously optimize processes and resource allocation.

Required Qualifications & Skills:
• 12–15 years of IT experience, with at least 5 years in techno-functional leadership of project implementations in retail stores.
• Strong understanding of store operations processes—POS transactions, inventory management, planogram compliance, footfall analytics, workforce scheduling.
• Hands-on experience with real-time data ingestion (Kafka, Kinesis), ETL frameworks, and data modeling for high-volume retail data.
• Proven track record in pre-sales: solution workshops, PoCs, business case development, and executive-level demos.
• Excellent leadership skills—able to build and manage distributed, cross-functional teams and oversee P&L for a solution cluster.
• Outstanding communication and stakeholder management capabilities, with experience engaging C-level and store operations executives.
Posted 2 days ago
5.0 - 9.0 years
13 - 17 Lacs
Pune
Work from Office
Diacto is looking for a highly capable Data Architect with 5 to 9 years of experience to lead cloud data platform initiatives with a primary focus on Snowflake and Azure Data Hub. This individual will play a key role in defining the data architecture strategy, implementing robust data pipelines, and enabling enterprise-grade analytics solutions. This is an on-site role based in our Baner, Pune office.

Qualifications: B.E./B.Tech in Computer Science, IT, or related discipline; MCS/MCA or equivalent preferred

Key Responsibilities:
Design and implement enterprise-level data architecture with a strong focus on Snowflake and Azure Data Hub
Define standards and best practices for data ingestion, transformation, and storage
Collaborate with cross-functional teams to develop scalable, secure, and high-performance data pipelines
Lead Snowflake environment setup, configuration, performance tuning, and optimization
Integrate Azure Data Services with Snowflake to support diverse business use cases
Implement governance, metadata management, and security policies
Mentor junior developers and data engineers on cloud data technologies and best practices

Experience and Skills Required:
5-9 years of overall experience in data architecture or data engineering roles
Strong, hands-on expertise in Snowflake, including design, development, and performance tuning
Solid experience with Azure Data Hub and Azure Data Services (Data Lake, Synapse, etc.)
Understanding of cloud data integration techniques and ELT/ETL frameworks
Familiarity with data orchestration tools such as DBT, Airflow, or Azure Data Factory
Proven ability to handle structured, semi-structured, and unstructured data
Strong analytical, problem-solving, and communication skills

Nice to Have:
Certifications in Snowflake and/or Microsoft Azure
Experience with CI/CD tools like GitHub for code versioning and deployment
Familiarity with real-time or near-real-time data ingestion

Why Join Diacto Technologies:
Work with a cutting-edge tech stack and cloud-native architectures
Be part of a data-driven culture with opportunities for continuous learning
Collaborate with industry experts and build transformative data solutions
Posted 2 days ago
5.0 - 9.0 years
14 - 17 Lacs
Pune
Work from Office
Diacto is seeking an experienced and highly skilled Data Architect to lead the design and development of scalable and efficient data solutions. The ideal candidate will have strong expertise in Azure Databricks, Snowflake (with DBT, GitHub, Airflow), and Google BigQuery. This is a full-time, on-site role based out of our Baner, Pune office.

Qualifications: B.E./B.Tech in Computer Science, IT, or related discipline; MCS/MCA or equivalent preferred

Key Responsibilities:
Design, build, and optimize robust data architecture frameworks for large-scale enterprise solutions
Architect and manage cloud-based data platforms using Azure Databricks, Snowflake, and BigQuery
Define and implement best practices for data modeling, integration, governance, and security
Collaborate with engineering and analytics teams to ensure data solutions meet business needs
Lead development using tools such as DBT, Airflow, and GitHub for orchestration and version control
Troubleshoot data issues and ensure system performance, reliability, and scalability
Guide and mentor junior data engineers and developers
Posted 2 days ago
7.0 - 10.0 years
17 - 32 Lacs
Bengaluru
Remote
Remote - Expertise in Snowflake development and coding; Azure Data Factory (ADF); knowledge of CI/CD; proficient in ETL design across platforms like Denodo, Data Services and CPI-DS; React, Node.js, and REST API integration; solid understanding of cloud platforms.
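For illustration only (not part of the posting): a minimal sketch of running a Snowflake query from Python, assuming the snowflake-connector-python package; the account, credentials, warehouse and table are hypothetical.

```python
# Minimal sketch of querying Snowflake from Python. Account, credentials,
# warehouse, database and table names are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345.ap-south-1",   # hypothetical account identifier
    user="etl_user",
    password="***",
    warehouse="ANALYTICS_WH",
    database="SALES_DB",
    schema="PUBLIC",
)
try:
    cur = conn.cursor()
    cur.execute(
        "SELECT region, SUM(amount) AS total_sales "
        "FROM orders GROUP BY region ORDER BY total_sales DESC"
    )
    for region, total_sales in cur.fetchall():
        print(region, total_sales)
finally:
    conn.close()
```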
Posted 2 days ago
14.0 - 20.0 years
35 - 45 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Data Architect:
Design and implement enterprise data models, ensuring data integrity, consistency, and scalability.

Responsibilities:
• Design and implement enterprise data models, ensuring data integrity, consistency, and scalability.
• Analyse business needs and translate them into technical requirements for data storage, processing, and access.
• In-memory cache: optimize query performance by storing frequently accessed data in memory.
• Query engine: process and execute complex data queries efficiently.
• Business Rules Engine (BRE): enforce data access control and compliance with business rules.
• Select and implement appropriate data management technologies, including databases and data warehouses.
• Collaborate with data engineers, developers, and analysts to ensure seamless integration of data across various systems.
• Monitor and optimize data infrastructure performance, identifying and resolving bottlenecks.
• Stay up to date on emerging data technologies and trends, recommending and implementing solutions.
• Document data architecture and processes for clear communication and knowledge sharing, including the integration.

Qualifications:
• Proven experience in designing and implementing enterprise data models.
• Expertise in SQL and relational databases (e.g., Oracle, MySQL, PostgreSQL).
• Experience with cloud-based data platforms (e.g., AWS, Azure, GCP) is mandatory.
• Working experience with ETL tools and data ingestion leveraging real-time solutions (e.g., Kafka, streaming) is required.
• Strong understanding of data warehousing concepts and technologies.
• Familiarity with data governance principles and best practices.
• Excellent communication, collaboration, and problem-solving skills.
• Ability to work independently and as part of a team.
• Strong analytical and critical thinking skills.
• Experience with data visualization and UI development is a plus.
• Bachelor's degree in Computer Science, Information Technology, or a related field.
Posted 2 days ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Requirements:
10+ years of experience as a data warehouse developer/tester or ETL tester
Graduate degree preferred in Computer Science, Statistics, Informatics, Information Systems or another quantitative field
Experience with testing tools like Tosca or the Python DataCompy library, or similar (see the sketch below)
Expert in understanding and implementing the QA/testing lifecycle
Expertise in DB-related testing and ETL testing
Testing experience with reporting and BI tools
Writing basic SQLs to check record counts, data truncation, invalid data types, NULLs and data integrity, data transformation, and aggregate-function testing
Expertise preferred in testing our current or a similar analytics stack
Experience in transforming complex business logic into SQL or PL/SQL queries
Must be a self-starter with the ability to work on complex projects and analyze business requirements independently
Must be able to identify and document testing issues and quality risks, and participate in defect remediation
Willingness to follow Agile testing practices and guidelines, including defect management
Actively participate in all test management activities like assessment, test strategies and test plans
Knowledge of database objects and relational data models
Experience in scripting
Test automation experience is a huge plus
Understanding of ETL requirements (source-to-target mappings)
Testing experience with different file types
Experience with Snowflake is a plus
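For illustration only (not part of the posting): a minimal sketch of the Python DataCompy comparison named in the requirements above; the source/target DataFrames and join key are hypothetical.

```python
# Minimal DataCompy sketch: compare a source extract against a target load.
# The DataFrames and join key here are hypothetical.
import datacompy
import pandas as pd

source = pd.DataFrame({"id": [1, 2, 3], "amount": [10.0, 20.0, 30.0]})
target = pd.DataFrame({"id": [1, 2, 3], "amount": [10.0, 20.0, 31.0]})  # one mismatch

compare = datacompy.Compare(
    source,
    target,
    join_columns="id",      # key used to align rows between source and target
    df1_name="source",
    df2_name="target",
)

print(compare.matches())    # False: the frames do not match exactly
print(compare.report())     # human-readable summary of row and column differences
```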
Posted 2 days ago
4.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
At EY, we’re all in to shape your future with confidence. We’ll help you succeed in a globally connected powerhouse of diverse teams and take your career wherever you want it to go. Join EY and help to build a better working world.

Senior Analyst – Data Engineering

The Data and Analytics team is a multi-disciplinary technology team delivering client projects and solutions across data management, visualization, business analytics and automation. The assignments cover a wide range of countries and industry sectors.

The opportunity:
We are looking for a Senior Analyst - Data Engineering. The main purpose of the role is to support cloud and on-prem platform analytics and data engineering projects initiated across engagement teams. The role will primarily involve conceptualizing, designing, developing, deploying and maintaining complex technology solutions which help EY solve business problems for clients. This role will work closely with technical architects, product and business subject matter experts (SMEs), back-end developers and other solution architects, and is also onshore-facing. This role will be instrumental in designing, developing, and evolving modern data warehousing solutions and data integration build-outs using cutting-edge tools and platforms for both on-prem and cloud architectures. In this role you will come up with design specifications, documentation, and development of data migration mappings and transformations for a modern data warehouse setup/data mart creation, and define robust ETL processing to collect and scrub both structured and unstructured data, providing self-serve capabilities (OLAP) in order to create impactful decision analytics reporting.

Discipline: Information Management & Analysis
Role Type: Data Architecture & Engineering

A Data Architect & Engineer at EY:
Uses agreed-upon methods, processes and technologies to design, build and operate scalable on-premises or cloud data architecture and modelling solutions that facilitate data storage, integration, management, validation and security, supporting the entire data asset lifecycle.
Designs, builds and operates data integration solutions that optimize data flows by consolidating disparate data from multiple sources into a single solution.
Works with other Information Management & Analysis professionals, the program team, management and stakeholders to design and build analytics solutions in a way that will deliver business value.

Skills: Cloud computing, business requirements definition, analysis and mapping, data modelling, data fabric, data integration, data quality, database management, semantic layer, effective client communication, problem solving / critical thinking, interest and passion for technology, analytical thinking, collaboration

Your key responsibilities:
Evaluating and selecting data warehousing tools for business intelligence, data population, data management, metadata management and warehouse administration for both on-prem and cloud-based engagements
Strong working knowledge across the technology stack, including ETL, ELT, data analysis, metadata, data quality, audit and design
Design, develop, and test in an ETL tool environment (GUI/canvas-driven tools to create workflows)
Experience in design documentation (data mapping, technical specifications, production support, data dictionaries, test cases, etc.)
Provide technical guidance to a team of data warehouse and business intelligence developers
Coordinate with other technology users to design and implement matters of data governance, data harvesting, cloud implementation strategy, privacy, and security
Adhere to ETL/data warehouse development best practices
Responsible for data orchestration, ingestion, ETL and reporting architecture for both on-prem and cloud (MS Azure/AWS/GCP)
Assist the team with performance tuning for ETL and database processes

Skills and attributes for success:
Minimum of 4 years of total experience in the data warehousing / business intelligence field
Solid hands-on 3+ years of professional experience with creation and implementation of data warehouses on client engagements, and helping create enhancements to a data warehouse
Strong knowledge of data architecture for staging and reporting schemas, data models and cutover strategies using industry-standard tools and technologies
Architecture design and implementation experience with medium to complex on-prem to cloud migrations with any of the major cloud platforms (preferably AWS/Azure/GCP)
Minimum 3+ years of experience in Azure database offerings (relational, NoSQL, data warehouse)
2+ years of hands-on experience in various Azure services preferred: Azure Data Factory, Kafka, Azure Data Explorer, Storage, Azure Data Lake, Azure Synapse Analytics, Azure Analysis Services & Databricks
Minimum of 3 years of hands-on database design, modelling and integration experience with relational data sources, such as SQL Server databases, Oracle/MySQL, Azure SQL and Azure Synapse
Knowledge and direct experience using business intelligence reporting tools (Power BI, Alteryx, OBIEE, Business Objects, Cognos, Tableau, MicroStrategy, SSAS cubes, etc.)
Strong creative instincts related to data analysis and visualization; curiosity to learn the business methodology, data model and user personas
Strong understanding of BI and DWH best practices, analysis, visualization, and latest trends
Experience with the software development life cycle (SDLC) and rules of product development, such as installation, upgrade and namespace management
Solid technical and problem-solving skills
Excellent written and verbal communication skills

To qualify for the role, you must have:
Bachelor's or equivalent degree in computer science or a related field, required; advanced degree or equivalent business experience preferred
A fact-driven, thoughtful approach with excellent attention to detail
Hands-on experience with data engineering tasks such as building analytical data records, and experience manipulating and analyzing large volumes of data
Relevant work experience of a minimum of 4 to 6 years in a Big 4 or technology/consulting setup

Ideally, you'll also have:
Ability to think strategically/end-to-end with a result-oriented mindset
Ability to build rapport within the firm and win the trust of clients
Willingness to travel extensively and to work on client sites / practice office locations

What we look for:
A team of people with commercial acumen, technical experience and enthusiasm to learn new things in this fast-moving environment
An opportunity to be a part of a market-prominent, multi-disciplinary team of 1400+ professionals, in the only integrated global transaction business worldwide
Opportunities to work with EY SaT practices globally with prominent businesses across a range of industries

What we offer:
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network.
We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career. Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next. Success, as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way. Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs. Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs. EY | Building a better working world EY is building a better working world by creating new value for clients, people, society and the planet, while building trust in capital markets. Enabled by data, AI and advanced technology, EY teams help clients shape the future with confidence and develop answers for the most pressing issues of today and tomorrow. EY teams work across a full spectrum of services in assurance, consulting, tax, strategy and transactions. Fueled by sector insights, a globally connected, multi-disciplinary network and diverse ecosystem partners, EY teams can provide services in more than 150 countries and territories.
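The ETL scrubbing work described in the posting above can be pictured with a short, hedged example. The sketch below is not EY's implementation; the file paths, column names (order_id, order_date, net_amount) and quality rules are illustrative assumptions, showing how a raw staging extract might be cleansed before it reaches a reporting layer.

```python
# A minimal sketch, assuming a CSV staging extract and an assumed business key.
import pandas as pd

def scrub_staging_extract(path: str) -> pd.DataFrame:
    """Load a raw staging extract and apply basic data-quality rules."""
    df = pd.read_csv(path)

    # Standardise column names and trim whitespace in text fields
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
    text_cols = df.select_dtypes(include="object").columns
    df[text_cols] = df[text_cols].apply(lambda s: s.str.strip())

    # Drop exact duplicates and rows missing the business key (assumed: order_id)
    df = df.drop_duplicates()
    df = df.dropna(subset=["order_id"])

    # Type conversions for downstream modelling
    df["order_date"] = pd.to_datetime(df["order_date"], errors="coerce")
    df["net_amount"] = pd.to_numeric(df["net_amount"], errors="coerce").fillna(0.0)
    return df

if __name__ == "__main__":
    clean = scrub_staging_extract("staging/orders_extract.csv")
    clean.to_parquet("reporting/orders_clean.parquet", index=False)
```

In a real engagement the same rules would typically live inside the ETL tool or pipeline framework rather than a standalone script, but the cleansing steps are the same in spirit.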
Posted 2 days ago
0 years
0 Lacs
Navi Mumbai, Maharashtra, India
On-site
Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.
Your Role And Responsibilities
As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support and guidance to project teams on complex coding, issue resolution and execution. Your primary responsibilities include:
Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements.
Strive for continuous improvement by testing the built solution and working under an agile framework.
Discover and implement the latest technology trends to build creative solutions.
Preferred Education
Master's Degree
Required Technical And Professional Expertise
Experience with Apache Spark (PySpark): in-depth knowledge of Spark’s architecture, core APIs, and PySpark for distributed data processing.
Big Data Technologies: familiarity with Hadoop, HDFS, Kafka, and other big data tools.
Data Engineering Skills: strong understanding of ETL pipelines, data modeling, and data warehousing concepts.
Strong proficiency in Python: expertise in Python programming with a focus on data processing and manipulation.
Data Processing Frameworks: knowledge of data processing libraries such as Pandas and NumPy.
SQL Proficiency: experience writing optimized SQL queries for large-scale data analysis and transformation.
Cloud Platforms: experience working with cloud platforms like AWS, Azure, or GCP, including using cloud storage systems.
Preferred Technical And Professional Experience
Define, drive, and implement an architecture strategy and standards for end-to-end monitoring.
Partner with the rest of the technology teams, including application development, enterprise architecture, testing services and network engineering.
Good to have: familiarity with detection and prevention tooling for company products, the platform and customer-facing systems.
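To make the Spark expectations above concrete, here is a minimal PySpark sketch of a distributed batch transformation. It is only an illustration, not an IBM deliverable: the input path, field names (event_ts, status, amount, country) and the daily roll-up are assumptions.

```python
# A minimal sketch, assuming JSON events landed on HDFS and an assumed schema.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-daily-rollup").getOrCreate()

# Read raw events (e.g. landed from Kafka or batch files) into a DataFrame
events = spark.read.json("hdfs:///landing/events/")

# Core-API style transformation: filter, derive a date column, aggregate
daily = (
    events
    .filter(F.col("status") == "completed")
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "country")
    .agg(F.count("*").alias("events"), F.sum("amount").alias("revenue"))
)

# Write a partitioned, columnar output for downstream warehousing or BI
daily.write.mode("overwrite").partitionBy("event_date").parquet("hdfs:///curated/daily_rollup/")

spark.stop()
```

The same pattern scales from a laptop to a cluster because the DataFrame operations are executed lazily and distributed by Spark at write time.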
Posted 2 days ago
5.0 years
0 Lacs
Greater Kolkata Area
On-site
We are 3PILLAR GLOBAL
We build breakthrough software products that power digital businesses. We are an innovative product development partner whose solutions drive rapid revenue, market share, and customer growth for industry leaders in Software and SaaS, Media and Publishing, Information Services, and Retail. Our key differentiator is our Product Mindset. Our development teams focus on building for outcomes, and all of our team members around the globe are trained on the Product Mindset’s core values – Minimize Time to Value, Solve For Need, and Excel at Change. Our teams apply this mindset to build digital products that are customer-facing and revenue-generating. Our business-minded approach to agile development ensures that we align to client goals from the earliest conceptual stages through market launch and beyond. In 2024, 3Pillar Global India was named a “Great Place to Work” for the seventh year in a row based on how our employees feel about our company, collaborative culture, and work/life balance. Come join our growing team.
Key Responsibilities
Provide L2 and L3 support for tickets reported from production environments
Monitor, analyze, and troubleshoot ETL/data pipelines across data lakes and distributed systems
Conduct in-depth root cause analysis using SQL queries, system logs, and monitoring tools
Support microservices-based applications running in Docker and Kubernetes environments
Diagnose and resolve Linux server issues related to disk usage, memory, networking, and permissions
Collaborate with DevOps and CloudOps teams on system scaling, performance optimization, and configuration changes
Maintain and automate system health using cron jobs, shell scripts, and cloud-native tools
Drive end-to-end incident resolution, create detailed RCA reports, and implement preventive measures
Work with cross-functional teams to identify long-term solutions and enhance system stability
Ensure SLA compliance, maintain accurate documentation, and continuously improve support processes
Minimum Qualifications (Must Have)
Minimum 5 years of experience in Technical/Application/Production Support in a fast-paced environment
Strong hands-on experience with SQL, Linux, and cloud platforms (preferably AWS)
Familiarity with monitoring and log management tools such as Datadog, Sumo Logic, Zabbix or similar platforms
Practical experience with Docker and Kubernetes
Good understanding of microservices architecture, APIs, and log debugging
Strong analytical and problem-solving skills with keen attention to detail
Excellent communication and collaboration skills to work across technical and non-technical teams
Benefits
A competitive annual salary based on experience and market demands
Flexi-timings
Work From Anywhere
Medical insurance with the option to purchase a premium plan or HSA option for your entire family
Regular health check-up camps arranged by the company
Recreational activities (Pool, TT, Wii, PS2)
Business casual atmosphere
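As an illustration of the "automate system health using cron jobs" responsibility above, here is a minimal sketch of a cron-able check for a Linux host. It is an assumption-laden example, not part of 3Pillar's stack: the thresholds, mount points and the way alerts are delivered are placeholders.

```python
#!/usr/bin/env python3
# A minimal sketch, assuming a Linux host; thresholds and paths are placeholders.
import shutil
from typing import Optional

DISK_THRESHOLD = 0.85          # alert when a filesystem is more than 85% full
MEM_THRESHOLD_KB = 512 * 1024  # alert when less than ~512 MB is available

def check_disk(path: str = "/") -> Optional[str]:
    usage = shutil.disk_usage(path)
    used_ratio = usage.used / usage.total
    if used_ratio > DISK_THRESHOLD:
        return f"DISK: {path} is {used_ratio:.0%} full"
    return None

def check_memory() -> Optional[str]:
    # /proc/meminfo is Linux-specific; each line looks like "MemAvailable: 123456 kB"
    with open("/proc/meminfo") as fh:
        info = dict(line.split(":", 1) for line in fh)
    available_kb = int(info["MemAvailable"].strip().split()[0])
    if available_kb < MEM_THRESHOLD_KB:
        return f"MEMORY: only {available_kb // 1024} MB available"
    return None

if __name__ == "__main__":
    for alert in filter(None, (check_disk("/"), check_disk("/var"), check_memory())):
        print(alert)  # in practice this would page on-call or post to a monitoring tool
```

Scheduled from cron every few minutes, a script like this complements, rather than replaces, platforms such as Datadog or Zabbix.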
Posted 2 days ago
10.0 years
0 Lacs
Pune, Maharashtra, India
Remote
About Fusemachines
Fusemachines is a 10+ year old AI company, dedicated to delivering state-of-the-art AI products and solutions to a diverse range of industries. Founded by Sameer Maskey, Ph.D., an Adjunct Associate Professor at Columbia University, our company is on a steadfast mission to democratize AI and harness the power of global AI talent from underserved communities. With a robust presence in four countries and a dedicated team of over 400 full-time employees, we are committed to fostering AI transformation journeys for businesses worldwide. At Fusemachines, we not only bridge the gap between AI advancement and its global impact but also strive to deliver the most advanced technology solutions to the world.
About The Role
This is a remote, full-time contractual position in the Travel & Hospitality industry, responsible for designing, building, testing, optimizing and maintaining the infrastructure and code required for data integration, storage, processing, pipelines and analytics (BI, visualization and advanced analytics) from ingestion to consumption, implementing data flow controls, and ensuring high data quality and accessibility for analytics and business intelligence purposes. This role requires a strong foundation in programming and a keen understanding of how to integrate and manage data effectively across various storage systems and technologies. We're looking for someone who can quickly ramp up, contribute right away and work independently, as well as with junior team members, with minimal oversight.
We are looking for a skilled Sr. Data Engineer with a strong background in Python, SQL, PySpark, Redshift, and AWS cloud-based large-scale data solutions, with a passion for data quality, performance and cost optimization. The ideal candidate will develop in an Agile environment. This role is perfect for an individual passionate about leveraging data to drive insights, improve decision-making, and support the strategic goals of the organization through innovative data engineering solutions.
Qualification / Skill Set Requirement:
Must have a full-time Bachelor's degree in Computer Science, Information Systems, Engineering, or a related field
5+ years of real-world data engineering development experience in AWS (certifications preferred)
Strong expertise in Python, SQL, PySpark and AWS in an Agile environment, with a proven track record of building and optimizing data pipelines, architectures, and datasets, and proven experience in data storage, modelling, management, lakes, warehousing, processing/transformation, integration, cleansing, validation and analytics
A senior person who can understand requirements and design end-to-end solutions with minimal oversight
Strong programming skills in one or more languages such as Python or Scala, and proficiency in writing efficient and optimized code for data integration, storage, processing and manipulation
Strong knowledge of SDLC tools and technologies, including project management software (Jira or similar), source code management (GitHub or similar), CI/CD systems (GitHub Actions, AWS CodeBuild or similar) and binary repository managers (AWS CodeArtifact or similar)
Good understanding of data modelling and database design principles, being able to design and implement efficient database schemas that meet the requirements of the data architecture to support data solutions
Strong SQL skills and experience working with complex data sets, enterprise data warehouses and writing advanced SQL queries
Proficient with relational databases (RDS, MySQL, Postgres, or similar) and NoSQL databases (Cassandra, MongoDB, Neo4j, etc.)
Skilled in data integration from different sources such as APIs, databases, flat files, and event streaming
Strong experience in implementing data pipelines and efficient ELT/ETL processes, batch and real-time, in AWS and using open-source solutions, being able to develop custom integration solutions as needed, including data integration from different sources such as APIs (PoS integrations is a plus), ERP (Oracle and Allegra are a plus), databases, flat files, Apache Parquet, and event streaming, including cleansing, transformation and validation of the data
Strong experience with scalable and distributed data technologies such as Spark/PySpark, DBT and Kafka, to be able to handle large volumes of data
Experience with stream-processing systems (Storm, Spark Streaming, etc.) is a plus
Strong experience in designing and implementing data warehousing solutions in AWS with Redshift. Demonstrated experience in designing and implementing efficient ELT/ETL processes that extract data from source systems, transform it (DBT), and load it into the data warehouse
Strong experience in orchestration using Apache Airflow (a minimal DAG sketch is shown after this posting)
Expert in cloud computing on AWS, including deep knowledge of a variety of AWS services like Lambda, Kinesis, S3, Lake Formation, EC2, EMR, ECS/ECR, IAM, CloudWatch, etc.
Good understanding of data quality and governance, including implementation of data quality checks and monitoring processes to ensure that data is accurate, complete, and consistent
Good understanding of BI solutions, including Looker and LookML (Looker Modelling Language)
Strong knowledge and hands-on experience of DevOps principles, tools and technologies (GitHub and AWS DevOps), including continuous integration, continuous delivery (CI/CD), infrastructure as code (IaC – Terraform), configuration management, automated testing, performance tuning, and cost management and optimization
Good problem-solving skills: being able to troubleshoot data processing pipelines and identify performance bottlenecks and other issues
Possesses strong leadership skills with a willingness to lead, create ideas, and be assertive
Strong project management and organizational skills
Excellent communication skills to collaborate with cross-functional teams, including business users, data architects, DevOps/DataOps/MLOps engineers, data analysts, data scientists, developers, and operations teams.
Essential to convey complex technical concepts and insights to non-technical stakeholders effectively
Ability to document processes, procedures, and deployment configurations
Responsibilities:
Design, implement, deploy, test and maintain highly scalable and efficient data architectures, defining and maintaining standards and best practices for data management independently with minimal guidance
Ensuring the scalability, reliability, quality and performance of data systems
Mentoring and guiding junior/mid-level data engineers
Collaborating with Product, Engineering, Data Scientists and Analysts to understand data requirements and develop data solutions, including reusable components
Evaluating and implementing new technologies and tools to improve data integration, data processing and analysis
Design architecture, observability and testing strategies, and build reliable infrastructure and data pipelines
Take ownership of the storage layer and data management tasks, including schema design, indexing, and performance tuning
Swiftly address and resolve complex data engineering issues and incidents, and resolve bottlenecks in SQL queries and database operations
Conduct a discovery of the existing data infrastructure and proposed architecture
Evaluate and implement cutting-edge technologies and methodologies, and continue learning and expanding skills in data engineering and cloud platforms, to improve and modernize existing data systems
Evaluate, design, and implement data governance solutions: cataloguing, lineage, quality and data governance frameworks that are suitable for a modern analytics solution, considering industry-standard best practices and patterns
Define and document data engineering architectures, processes and data flows
Assess best practices and design schemas that match business needs for delivering a modern analytics solution (descriptive, diagnostic, predictive, prescriptive)
Be an active member of our Agile team, participating in all ceremonies and continuous improvement activities
Fusemachines is an Equal Opportunity Employer, committed to diversity and inclusion. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or any other characteristic protected by applicable federal, state, or local laws.
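The Airflow orchestration experience called for above can be pictured with a minimal, hedged sketch. The DAG below is only an illustration, not Fusemachines' pipeline: the dag_id, the task split (API extract to S3, then a Redshift load) and the daily schedule are assumptions, and the task bodies are left as stubs.

```python
# A minimal Airflow 2.x sketch, assuming placeholder task logic and names.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_to_s3(**context):
    # placeholder: call the source API and write raw JSON to the landing bucket
    print("extracting to s3://example-landing-bucket/bookings/raw/")

def load_redshift(**context):
    # placeholder: issue a COPY from the staged S3 prefix into the warehouse table
    print("loading staged files into analytics.bookings")

with DAG(
    dag_id="bookings_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+ style; older versions use schedule_interval
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_to_s3", python_callable=extract_to_s3)
    load = PythonOperator(task_id="load_redshift", python_callable=load_redshift)

    extract >> load
```

In a production DAG the stubs would call reusable ingestion and load components, with retries, SLAs and alerting configured on the operators.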
Posted 2 days ago
2.0 - 5.0 years
4 - 8 Lacs
Kolkata
Hybrid
Type: Contract-to-Hire (C2H)
Job Summary
We are looking for a skilled PySpark Developer with hands-on experience in building scalable data pipelines and processing large datasets. The ideal candidate will have deep expertise in Apache Spark, Python, and working with modern data engineering tools in cloud environments such as AWS.
Key Skills & Responsibilities
Strong expertise in PySpark and Apache Spark for batch and real-time data processing.
Experience in designing and implementing ETL pipelines, including data ingestion, transformation, and validation.
Proficiency in Python for scripting, automation, and building reusable components.
Hands-on experience with scheduling tools like Airflow or Control-M to orchestrate workflows.
Familiarity with the AWS ecosystem, especially S3 and related file system operations.
Strong understanding of Unix/Linux environments and shell scripting.
Experience with Hadoop, Hive, and platforms like Cloudera or Hortonworks.
Ability to handle CDC (Change Data Capture) operations on large datasets.
Experience in performance tuning, optimizing Spark jobs, and troubleshooting.
Strong knowledge of data modeling, data validation, and writing unit test cases.
Exposure to real-time and batch integration with downstream/upstream systems.
Working knowledge of Jupyter Notebook, Zeppelin, or PyCharm for development and debugging.
Understanding of Agile methodologies, with experience in CI/CD tools (e.g., Jenkins, Git).
Preferred Skills
Experience in building or integrating APIs for data provisioning.
Exposure to ETL or reporting tools such as Informatica, Tableau, Jasper, or QlikView.
Familiarity with AI/ML model development using PySpark in cloud environments.
Skills: PySpark, Apache Spark, Python, SQL, ETL pipelines and tools, AWS (S3), Airflow, Control-M, Unix/Linux, shell scripting, Hadoop, Hive, Cloudera, Hortonworks, CDC, data modeling, data validation, unit test cases, performance tuning, batch and real-time integration, API integration, Jupyter Notebook, Zeppelin, PyCharm, Agile methodologies, CI/CD, Jenkins, Git, Informatica, Tableau, Jasper, QlikView, AI/ML model development
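One way to picture the CDC handling mentioned above is the common "keep the latest change per key" consolidation pattern. The sketch below is an assumption-heavy illustration, not this client's pipeline: the S3 path, key column (customer_id), ordering column (change_ts), op flag and target table are all placeholders.

```python
# A minimal PySpark sketch, assuming CDC records landed on S3 with assumed columns.
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("cdc-consolidation")
    .enableHiveSupport()
    .getOrCreate()
)

# Raw change records: one row per insert/update/delete event for a customer
changes = spark.read.parquet("s3a://example-landing-bucket/customers/cdc/")

# Rank changes per business key, newest first
latest = Window.partitionBy("customer_id").orderBy(F.col("change_ts").desc())

current = (
    changes
    .withColumn("rn", F.row_number().over(latest))
    .filter((F.col("rn") == 1) & (F.col("op") != "DELETE"))  # keep latest, drop deletes
    .drop("rn")
)

# Materialise the consolidated snapshot as a managed table (placeholder name)
current.write.mode("overwrite").saveAsTable("curated.customers")

spark.stop()
```

Repartitioning on the key column and caching the change set before the window step are the usual first tuning levers if this job becomes a bottleneck.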
Posted 2 days ago
6.0 - 8.0 years
8 - 11 Lacs
Bengaluru
Work from Office
Requirements
6+ years of experience
Proficiency in Java, Spring Boot, object databases, ElasticSearch/Solr; hands-on practice with AWS cloud, Docker & Kubernetes, and REST APIs
Experience in building scalable and high-performance systems
Strong communication skills in English (B2+)
Nice-to-have: knowledge of Python, ETL experience, and big data solutions
Responsibilities
Maintenance of a large, modern search platform
Handling production issues and incidents
Optimizing and maintaining existing code for performance and availability
Ensuring high performance and availability of the system
Engage in the release process
Team Information
Work within SAFe, using a Scrum/Kanban methodology and an agile approach
Collaborative and friendly atmosphere
Utilization of microservice architecture and extensive CI/CD automation
Tools used: Git, IntelliJ, Jira, Confluence, i3 by Tieto as the search backend
Interview process: the first round will be virtual and the second round will be face-to-face (F2F).
Skills: AWS, Spring Boot, Elasticsearch, REST APIs, Kubernetes, object databases, Docker, Java
Posted 2 days ago
The ETL (Extract, Transform, Load) job market in India is thriving with numerous opportunities for job seekers. ETL professionals play a crucial role in managing and analyzing data effectively for organizations across various industries. If you are considering a career in ETL, this article will provide you with valuable insights into the job market in India.
Major Indian tech hubs such as Bengaluru, Hyderabad, Pune, Chennai, and Mumbai are known for their thriving tech industries and often have a high demand for ETL professionals.
The average salary range for ETL professionals in India varies based on experience levels. Entry-level positions typically start at around ₹3-5 lakhs per annum, while experienced professionals can earn upwards of ₹10-15 lakhs per annum.
In the ETL field, a typical career path may include roles such as:
- Junior ETL Developer
- ETL Developer
- Senior ETL Developer
- ETL Tech Lead
- ETL Architect
As you gain experience and expertise, you can progress to higher-level roles within the ETL domain.
Alongside ETL, professionals in this field are often expected to have skills in:
- SQL
- Data Warehousing
- Data Modeling
- ETL Tools (e.g., Informatica, Talend)
- Database Management Systems (e.g., Oracle, SQL Server)
Having a strong foundation in these related skills can enhance your capabilities as an ETL professional.
Here are 25 interview questions that you may encounter in ETL job interviews:
As you explore ETL jobs in India, remember to showcase your skills and expertise confidently during interviews. With the right preparation and a solid understanding of ETL concepts, you can embark on a rewarding career in this dynamic field. Good luck with your job search!