6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Client: Our Client is a global IT services company headquartered in Southborough, Massachusetts, USA. Founded in 1996, it has revenue of $1.8B and 35,000+ associates worldwide, specializing in digital engineering and IT services that help clients modernize their technology infrastructure, adopt cloud and AI solutions, and accelerate innovation. It partners with major firms in banking, healthcare, telecom, and media, and is known for combining deep industry expertise with agile development practices, enabling scalable and cost-effective digital transformation. The company operates in over 50 locations across more than 25 countries, has delivery centers in Asia, Europe, and North America, and is backed by Baring Private Equity Asia.

Job Title: Data Engineer
Key Skills: Python, ETL, Snowflake, Apache Airflow
Job Locations: Pan India
Experience: 6-7 years
Education Qualification: Any Graduation
Work Mode: Hybrid
Employment Type: Contract
Notice Period: Immediate

Job Description:
6 to 10 years of experience in data engineering roles with a focus on building scalable data solutions.
Proficiency in Python for ETL, data manipulation, and scripting.
Hands-on experience with Snowflake or equivalent cloud-based data warehouses.
Strong knowledge of orchestration tools such as Apache Airflow or similar.
Expertise in implementing and managing messaging queues like Kafka, AWS SQS, or similar.
Demonstrated ability to build and optimize data pipelines at scale, processing terabytes of data.
Experience in data modeling, data warehousing, and database design.
Proficiency in working with cloud platforms like AWS, Azure, or GCP.
Strong understanding of CI/CD pipelines for data engineering workflows.
Experience working in an Agile development environment, collaborating with cross-functional teams.

Preferred Skills:
Familiarity with other programming languages like Scala or Java for data engineering tasks.
Knowledge of containerization and orchestration technologies (Docker, Kubernetes).
Experience with stream processing frameworks like Apache Flink.
Experience with Apache Iceberg for data lake optimization and management.
Exposure to machine learning workflows and integration with data pipelines.

Soft Skills:
Strong problem-solving skills with a passion for solving complex data challenges.
Excellent communication and collaboration skills to work with cross-functional teams.
Ability to thrive in a fast-paced, innovative environment.
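For context on the Apache Airflow orchestration skill this posting asks for, here is a minimal sketch of a daily ETL DAG in Python. The task names, data, and load target are hypothetical and not taken from the posting; a real pipeline would connect to actual sources and to Snowflake.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Placeholder: pull raw records from a source system (API, S3 drop, etc.).
    return [{"id": 1, "amount": 250.0}]


def transform(ti):
    # Airflow 2 passes the task instance when the callable accepts "ti";
    # pull the extracted rows from XCom and apply a simple transformation.
    rows = ti.xcom_pull(task_ids="extract")
    return [{**r, "amount_usd": round(r["amount"], 2)} for r in rows]


def load(ti):
    # Placeholder: write transformed rows to the warehouse (e.g., Snowflake).
    rows = ti.xcom_pull(task_ids="transform")
    print(f"Loading {len(rows)} rows")


with DAG(
    dag_id="daily_sales_etl",          # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                 # Airflow 2.4+ style schedule parameter
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> transform_task >> load_task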
Posted 3 days ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Client: Our Client is a multinational IT services and consulting company headquartered in the USA, with revenues of 19.7 billion USD, a global workforce of 350,000, and a NASDAQ listing. It is one of the leading IT services firms globally, known for its work in digital transformation, technology consulting, and business process outsourcing. Its business focus spans Digital Engineering, Cloud Services, AI and Data Analytics, Enterprise Applications (SAP, Oracle, Salesforce), IT Infrastructure, and Business Process Outsourcing. It has major delivery centers in India, including Chennai, Pune, Hyderabad, and Bengaluru, with offices in over 35 countries; India is a major operational hub alongside its U.S. headquarters.

Job Title: Python with ETL Testing
Key Skills: Python, ETL Testing, MySQL, SQL, Oracle
Job Locations: Hyderabad, Pune, Bangalore, Chennai
Experience: 5+ years
Education Qualification: Any Graduation
Work Mode: Hybrid
Employment Type: Contract
Notice Period: Immediate

Job Description:
5 to 10 years of experience in relevant areas.
At least 3+ years of working knowledge in Python.
At least 2+ years of hands-on experience in data testing / ETL testing.
At least 2+ years of hands-on experience with databases such as MySQL, SQL Server, and Oracle.
Must be well versed with Agile methodology.
Hands-on framework creation and improvement of test frameworks for automation.
Experience in creating self-serve tools; good experience working with Robot Framework.
Should be able to work with Git.
Demonstrated knowledge of CI/CD tooling (GitLab, Jenkins, etc.).
Demonstrated knowledge of RDBMS and SQL queries.
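As an illustration of the data-testing skill described above, here is a minimal sketch of an ETL reconciliation check in Python. It uses in-memory SQLite purely to stay self-contained; a real test would connect to MySQL/SQL Server/Oracle, and the table name and checks are invented for the example.

import sqlite3

# Stand-ins for the source and target databases.
source = sqlite3.connect(":memory:")
target = sqlite3.connect(":memory:")

source.executescript(
    "CREATE TABLE orders (id INTEGER, amount REAL);"
    "INSERT INTO orders VALUES (1, 100.0), (2, 250.5);"
)
target.executescript(
    "CREATE TABLE orders (id INTEGER, amount REAL);"
    "INSERT INTO orders VALUES (1, 100.0), (2, 250.5);"
)


def test_row_counts_match():
    # Basic ETL validation: the load should not drop or duplicate rows.
    src = source.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
    tgt = target.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
    assert src == tgt, f"Row count mismatch: source={src}, target={tgt}"


def test_amount_totals_match():
    # Column-level check: aggregate values should survive the transformation.
    src = source.execute("SELECT ROUND(SUM(amount), 2) FROM orders").fetchone()[0]
    tgt = target.execute("SELECT ROUND(SUM(amount), 2) FROM orders").fetchone()[0]
    assert src == tgt, f"Amount mismatch: source={src}, target={tgt}"


if __name__ == "__main__":
    test_row_counts_match()
    test_amount_totals_match()
    print("ETL reconciliation checks passed")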
Posted 3 days ago
5.0 - 8.0 years
0 Lacs
India
On-site
Minimum of 5-8 years of experience in implementing Oracle EPBCS or similar EPM (Enterprise Performance Management) solutions. Experience in financial planning, budgeting, forecasting, or similar finance-related functions. Strong knowledge of EPBCS modules, especially Planning, Budgeting, and Forecasting features. Experience with Oracle Cloud solutions (ERP, HCM, SCM, etc.) is a plus.

Technical Skills:
Proficiency in Oracle EPBCS configuration, setup, and administration.
Strong understanding of financial models, driver-based forecasting, and budgeting processes.
Experience in integrating EPBCS with external systems using FDMEE, Data Management, or other ETL tools.
Knowledge of reporting tools such as Smart View, Financial Reporting Studio, and BI Publisher.
Experience with scripting (e.g., Groovy, calculation scripts) to automate business rules and processes.
Familiarity with Oracle SQL, relational databases, and data modeling is advantageous.

Preferred Skills:
Oracle Certification in EPBCS or Oracle Cloud EPM solutions is a plus.
Knowledge of Agile project management practices.
Experience with multi-currency, multi-company planning in a global organization.
Posted 3 days ago
15.0 years
0 Lacs
India
On-site
SAS Solution Designer

We are seeking a highly experienced SAS Solution Designer to join our team in a solution engineering lead capacity. This role requires in-depth knowledge of SAS technologies, cloud-based platforms, and data solutions. The ideal candidate will be responsible for end-to-end solution design aligned with enterprise architecture standards and business objectives, providing technical leadership across squads and development teams. Mitra AI is currently looking for experienced SAS Solution Designers who are based in India and are open to relocating. This is a hybrid opportunity in Sydney, Australia.

JOB SPECIFIC DUTIES & RESPONSIBILITIES
Own and define the end-to-end solution architecture for data platforms, ensuring alignment with business objectives, enterprise standards, and architectural best practices.
Design reliable, stable, and scalable SAS-based solutions that support long-term operational effectiveness.
Lead solution engineers and Agile squads to ensure the delivery of high-quality, maintainable data solutions.
Collaborate independently with business and technical stakeholders to understand requirements and translate them into comprehensive technical designs.
Provide high-level estimates for proposed features and technical initiatives to support business planning and prioritization.
Conduct and participate in solution governance forums to secure approval for data designs and strategies.
Drive continuous improvement by identifying technical gaps and implementing best practices, emerging technologies, and enhanced processes.
Facilitate work breakdown sessions and actively participate in Agile ceremonies such as sprint planning and backlog grooming.
Ensure quality assurance through rigorous code reviews, test case validation, and enforcement of coding and documentation standards.
Troubleshoot complex issues by performing root cause analysis, log reviews, and coordination with relevant teams for resolution.
Provide mentoring and coaching to solution engineers and technical leads to support skills growth and consistency in solution delivery.

REQUIRED COMPETENCIES AND SKILLS
Deep expertise in SAS technologies and the SAS ecosystem.
Strong proficiency in cloud-based technologies and data platforms (e.g., Azure, Hadoop, Teradata).
Solid understanding of RDBMS, ETL/ELT tools (e.g., Informatica), and real-time data streaming.
Ability to work across relational and NoSQL databases and integrate with various data and analytics tools.
Familiarity with BI and reporting tools such as Tableau and Power BI.
Experience guiding Agile delivery teams, supporting full-stack solution development through DevOps and CI/CD practices.
Capability to define and implement secure, scalable, and performant data solutions.
Strong knowledge of metadata management, reference data, and data lineage concepts.
Ability to communicate effectively with both technical and non-technical stakeholders.
Problem-solving mindset with attention to detail and an emphasis on delivering high-quality solutions.

REQUIRED EXPERIENCE AND QUALIFICATIONS
Minimum of 15+ years of experience in solution design and development roles, including leadership responsibilities.
Strong exposure to SAS and enterprise data platforms in the financial services industry.
Prior experience working within risk, compliance, or credit risk domains is highly desirable.
Practical experience with Agile methodologies and DevOps principles.
Bachelor's or Master's degree in Computer Science, Engineering, Information Technology, or a related field.
Experience working in cross-functional teams with a focus on business alignment and technology delivery.
Posted 3 days ago
5.0 years
0 Lacs
India
On-site
THIS IS A LONG-TERM CONTRACT POSITION WITH ONE OF THE LARGEST GLOBAL TECHNOLOGY LEADERS.

Our large Fortune client is ranked as one of the best companies to work for in the world. The client fosters a progressive culture, creativity, and a flexible work environment. They use cutting-edge technologies to keep themselves ahead of the curve. Diversity in all aspects is respected. Integrity, experience, honesty, people, humanity, and passion for excellence are some other adjectives that define this global technology leader.

Key Responsibilities:
Design and maintain robust and scalable data pipeline architecture.
Assemble complex data sets that meet both functional and non-functional requirements.
Implement internal process improvements, such as automation of manual tasks, optimization of data delivery, and re-designing infrastructure for scale.
Build infrastructure to support efficient data extraction, transformation, and loading (ETL) using SQL, dbt, and AWS Big Data technologies.
Develop analytics tools to provide actionable insights on employee experience, operational efficiency, and other business performance metrics.
Collaborate with stakeholders to resolve data-related issues and support infrastructure needs.
Create and manage processes for data transformation, metadata management, and workload orchestration.
Stay up to date with emerging cloud technologies (AWS/Azure) and propose opportunities for integration.
Partner with Data Scientists and Analysts to enhance data systems and ensure maximum usability.

Minimum Qualifications:
Bachelor's or graduate degree in Computer Science, Information Systems, Statistics, or a related quantitative field.
5+ years of experience in a Data Engineering role.
Extensive hands-on experience with Snowflake and dbt (including advanced concepts like macros and Jinja templating).
Proficient in SQL and familiar with various relational databases.
Experience in Python and big data frameworks like PySpark.
Hands-on experience with AWS services such as S3, EC2, Glue, Lambda, RDS, and Redshift.
Experience working with APIs for data ingestion and integration.
Proven track record in optimizing data pipelines and solving performance issues.
Strong analytical and problem-solving skills, with experience conducting root cause analysis.

Preferred Qualifications:
Experience with AWS CloudFormation templates.
Familiarity with Agile/SCRUM methodologies.
Exposure to Power BI for dashboard development.
Experience working with unstructured datasets and deriving business value from them.
Previous experience with globally distributed teams in agile environments.

Primary Skills (Must-have): Data Engineering, Data Quality, Data Analysis
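This posting calls out dbt's Jinja templating. As a rough illustration of the underlying idea (not dbt itself), the sketch below renders a parameterized SQL model with the jinja2 library, the way dbt compiles a model before submitting it to the warehouse; the model text, relation, and date are invented for the example.

from jinja2 import Template

# dbt models are SQL files with Jinja placeholders; dbt compiles them before
# running against Snowflake. This mimics that compile step in plain Python.
MODEL_SQL = """
SELECT
    order_id,
    customer_id,
    amount
FROM {{ source_table }}
WHERE order_date >= '{{ start_date }}'
"""

rendered = Template(MODEL_SQL).render(
    source_table="raw.orders",   # hypothetical source relation
    start_date="2024-01-01",
)
print(rendered)  # the compiled SQL that would be submitted to the warehouse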
Posted 3 days ago
7.0 years
0 Lacs
India
Remote
Job Title: Senior Data Engineer – Azure & API Development
Location: Remote
Experience Required: 7+ Years

Job Summary:
We are looking for an experienced Data Engineer with strong expertise in Azure cloud architecture, API development, and modern data engineering tools. The ideal candidate will have in-depth experience in building and maintaining scalable data pipelines and API integrations using Azure services like Azure Data Factory (ADF), Databricks, Azure Functions, and Service Bus, along with infrastructure provisioning using Terraform.

Key Responsibilities:
Design and implement scalable, secure, and high-performance data solutions on Azure.
Develop, deploy, and manage RESTful APIs to support data access and integration.
Build and maintain ETL/ELT data pipelines using Azure Data Factory, Databricks, and Azure Functions.
Integrate data workflows with Azure Service Bus and other messaging services.
Define and implement cloud infrastructure using Terraform and Infrastructure-as-Code (IaC) best practices.
Collaborate with stakeholders to understand data requirements and develop technical solutions.
Ensure best practices for data governance, security, monitoring, and performance optimization.
Work closely with DevOps and Data Architects to implement CI/CD pipelines and production-grade deployments.

Must-Have Skills:
7+ years of professional experience in Data Engineering or related roles.
Strong hands-on experience with Azure services, particularly Azure Data Factory (ADF), Databricks (Spark-based processing), Azure Functions, and Azure Service Bus.
Proficient in API development (RESTful APIs using Python, .NET, or Node.js).
Good command of SQL, Spark SQL, and data transformation techniques.
Experience with Terraform for IaC and provisioning Azure resources.
Excellent understanding of data architecture, cloud security, and governance models.
Strong problem-solving skills and experience working in Agile environments.

Preferred Skills:
Familiarity with CI/CD tools like Azure DevOps, GitHub Actions, or Jenkins.
Exposure to event-driven architecture and real-time data streaming.
Knowledge of containerization (Docker/Kubernetes) is a plus.
Experience in performance tuning and cost optimization in Azure environments.
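To illustrate the Python REST API development this role mentions, here is a minimal FastAPI sketch exposing a data-access endpoint. The route, model, and in-memory data are hypothetical; in practice the handler would query a real store such as a Databricks table or Azure SQL database.

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="Data Access API")

# Stand-in for a real backing store.
_PIPELINE_RUNS = {
    "daily_sales": {"status": "Succeeded", "rows_loaded": 120000},
}


class PipelineRun(BaseModel):
    pipeline: str
    status: str
    rows_loaded: int


@app.get("/pipeline-runs/{pipeline}", response_model=PipelineRun)
def get_pipeline_run(pipeline: str) -> PipelineRun:
    # Return the latest run metadata for a pipeline, or 404 if unknown.
    run = _PIPELINE_RUNS.get(pipeline)
    if run is None:
        raise HTTPException(status_code=404, detail="Pipeline not found")
    return PipelineRun(pipeline=pipeline, **run)

Run locally with, for example, "uvicorn module_name:app" and query /pipeline-runs/daily_sales.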
Posted 3 days ago
5.0 years
0 Lacs
India
Remote
Job Title: Data Engineer
Location: Remote
Experience: 5+ years

Job Summary:
We are seeking a highly skilled Data Engineer with strong experience in ETL development, data replication, and cloud data integration to join our remote team. The ideal candidate will be proficient in Talend, have hands-on experience with IBM Data Replicator and Qlik Replicate, and demonstrate deep knowledge of Snowflake architecture, CDC processes, and data transformation scripting.

Key Responsibilities:
Design, develop, and maintain robust ETL pipelines using Talend integrated with Snowflake.
Implement and manage real-time data replication solutions using IBM Data Replicator and Qlik Replicate.
Work with complex data source systems including DB2 (containerized and traditional), Oracle, and Hadoop.
Model and manage slowly changing dimensions (Type 2 SCD) in Snowflake.
Optimize data pipelines for scalability, reliability, and performance.
Design and implement Change Data Capture (CDC) strategies to support real-time and incremental data flows.
Write efficient and maintainable code in SQL, Python, or Shell to support data transformations and automation.
Collaborate with data architects, analysts, and other engineers to support data-driven initiatives.

Required Skills & Qualifications:
Strong proficiency in Talend ETL development and integration with Snowflake.
Practical experience with IBM Data Replicator and Qlik Replicate.
In-depth understanding of Snowflake architecture and Type 2 SCD data modeling.
Familiarity with containerized environments and various data sources such as DB2, Oracle, and Hadoop.
Experience implementing CDC and real-time data replication patterns.
Proficiency in SQL, Python, and Shell scripting.
Excellent problem-solving and communication skills.
Self-motivated and comfortable working independently in a fully remote environment.

Preferred Qualifications:
Snowflake certification or Talend certification.
Experience working in an Agile or DevOps environment.
Familiarity with data governance and data quality best practices.
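As a rough sketch of the Type 2 SCD pattern named in this posting, the Python snippet below holds a Snowflake-style MERGE that expires changed dimension rows, followed by an INSERT that adds the new current versions; the table and column names are hypothetical, and a production job would run these through the Snowflake connector or a Talend flow rather than ad hoc.

# Hypothetical dimension and staging tables; adjust names and keys to the real model.
EXPIRE_CHANGED_ROWS = """
MERGE INTO dim_customer AS tgt
USING stg_customer AS src
    ON tgt.customer_id = src.customer_id AND tgt.is_current = TRUE
WHEN MATCHED AND tgt.address <> src.address THEN UPDATE SET
    is_current = FALSE,
    valid_to   = CURRENT_TIMESTAMP()
"""

# Runs after the MERGE: customers with no remaining current row (brand-new ones,
# plus those whose old row was just expired) get a fresh current version.
INSERT_NEW_VERSIONS = """
INSERT INTO dim_customer (customer_id, address, valid_from, valid_to, is_current)
SELECT src.customer_id, src.address, CURRENT_TIMESTAMP(), NULL, TRUE
FROM stg_customer src
LEFT JOIN dim_customer tgt
    ON tgt.customer_id = src.customer_id AND tgt.is_current = TRUE
WHERE tgt.customer_id IS NULL
"""


def run_scd2(cursor):
    # cursor is assumed to be any DB-API cursor, e.g. from snowflake.connector.
    cursor.execute(EXPIRE_CHANGED_ROWS)
    cursor.execute(INSERT_NEW_VERSIONS)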
Posted 3 days ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Company Overview
Team Geek Solutions is a forward-thinking company based in India, dedicated to delivering innovative IT services and solutions across various sectors. We pride ourselves on our culture of collaboration and continuous improvement, valuing creativity, integrity, and exceptional customer service. Our mission is to empower businesses through cutting-edge technology and tailored solutions that meet their unique needs. As we continue to grow, we seek talented individuals who resonate with our values and are passionate about making a difference in the tech landscape.

Backend Technical Skillset Required:
Java 8+
Spring Boot, Spring MVC, Spring Web Services, Spring Data
Hibernate
JasperReports
Oracle SQL, PL/SQL development
Pentaho Kettle (ETL tool)
Basic Linux scripting and troubleshooting
Git (version control)
Strong grasp of design patterns

Frontend:
Angular 8+
React 16+ (good to have)
Angular Material
Bootstrap 4
HTML5, CSS3, SCSS
JavaScript & TypeScript

Job Responsibilities:
Design, develop, and maintain web applications using Java and Angular frameworks.
Develop scalable backend services using Spring Boot and integrate with the frontend.
Collaborate with cross-functional teams for end-to-end delivery.
Write clean, testable, and efficient code following best practices.
Perform code reviews and contribute to team knowledge sharing.
Troubleshoot and resolve production issues as needed.
Use version control systems like Git for collaborative development.
Experience working with Pentaho Kettle or similar ETL tools.

Nice to Have:
Exposure to basic DevOps and deployment practices.
Familiarity with Agile/Scrum methodologies.

Skills: full stack development, Java 8+, Spring Boot, Spring MVC, Spring Web Services, Spring Data, Hibernate, JasperReports, Oracle SQL, PL/SQL development, Pentaho Kettle, ETL, basic Linux scripting, Git, design patterns, Angular 8+, Angular Material, React 16+, Bootstrap 4, HTML5, CSS3, SCSS, JavaScript, TypeScript, DevOps
Posted 3 days ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
The role involves building and managing data pipelines, troubleshooting issues, and ensuring data accuracy across various platforms such as Azure Synapse Analytics, Azure Data Lake Gen2, and SQL environments. This position requires extensive SQL experience and a strong background in PySpark development.

Responsibilities

Data Engineering:
Work with Azure Synapse Pipelines and PySpark for data transformation and pipeline management.
Perform data integration and schema updates in Delta Lake environments, ensuring smooth data flow and accurate reporting.
Work with our Azure DevOps team on CI/CD processes for deployment of Infrastructure as Code (IaC) and workspace artifacts.
Develop custom solutions for our customers as defined by our Data Architect and assist in improving our data solution patterns over time.

Documentation:
Document ticket resolutions, testing protocols, and data validation processes.
Collaborate with other stakeholders to provide specifications and quotations for enhancements requested by customers.

Ticket Management:
Monitor the Jira ticket queue and respond to tickets as they are raised.
Understand ticket issues, utilizing extensive SQL, Synapse Analytics, and other tools to troubleshoot them.
Communicate effectively with the customer users who raised the tickets and collaborate with other teams (e.g., FinOps, Databricks) as needed to resolve issues.

Troubleshooting and Support:
Handle issues related to ETL pipeline failures, Delta Lake processing, or data inconsistencies in Synapse Analytics.
Provide prompt resolution to data pipeline and validation issues, ensuring data integrity and performance.

Desired Skills & Requirements
We are seeking a candidate with 5+ years of Dynamics 365 ecosystem experience and a strong PySpark development background. While various profiles may apply, we highly value a strong person-organization fit. Our ideal candidate possesses the following attributes and qualifications:
Extensive experience with SQL, including query writing and troubleshooting in Azure SQL, Synapse Analytics, and Delta Lake environments.
Strong understanding and experience in implementing and supporting ETL processes, Data Lakes, and data engineering solutions.
Proficiency in using Azure Synapse Analytics, including workspace management, pipeline creation, and data flow management.
Hands-on experience with PySpark for data processing and automation.
Ability to use VPNs, MFA, RDP, jump boxes/jump hosts, etc., to operate within the customer's secure environments.
Some experience with Azure DevOps CI/CD, IaC, and release pipelines.
Ability to communicate effectively both verbally and in writing, with strong problem-solving and analytical skills.
Understanding of the operation and underlying data structure of D365 Finance and Operations, Business Central, and Customer Engagement.
Experience with data engineering in Microsoft Fabric.
Experience with Delta Lake and Azure data engineering concepts (e.g., ADLS, ADF, Synapse, AAD, Databricks).
Certifications in Azure Data Engineering.

Why Join Us?
Opportunity to work with innovative technologies in a dynamic, progressive work culture with a global perspective, where your ideas truly matter and growth opportunities are endless.
Work with the latest Microsoft technologies alongside Dynamics professionals committed to driving customer success.
Enjoy the flexibility to work from anywhere, with a work-life balance that suits your lifestyle.
Competitive salary and comprehensive benefits package.
Career growth and professional development opportunities.
A collaborative and inclusive work culture.
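To illustrate the Delta Lake integration work described in this posting's responsibilities, here is a minimal PySpark upsert sketch. It assumes a Spark session with the Delta Lake extensions enabled (for example, a Synapse Spark pool or Databricks cluster); the storage path and columns are hypothetical.

from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.appName("delta-upsert-example").getOrCreate()

# Incoming change records; in practice these would come from a pipeline stage.
updates = spark.createDataFrame(
    [(1, "shipped"), (2, "delivered")],
    ["order_id", "status"],
)

# Hypothetical Delta table in ADLS Gen2.
target = DeltaTable.forPath(
    spark, "abfss://lake@account.dfs.core.windows.net/orders"
)

# Upsert: update matching orders, insert new ones.
(
    target.alias("t")
    .merge(updates.alias("s"), "t.order_id = s.order_id")
    .whenMatchedUpdate(set={"status": "s.status"})
    .whenNotMatchedInsertAll()
    .execute()
)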
Posted 3 days ago
0 years
0 Lacs
Gurgaon, Haryana, India
On-site
At Aramya, we’re redefining fashion for India’s underserved Gen X/Y women, offering size-inclusive, comfortable, and stylish ethnic wear at affordable prices. Launched in 2024, we’ve already achieved ₹40 Cr in revenue in our first year, driven by a unique blend of data-driven design, in-house manufacturing, and a proprietary supply chain. Today, with an ARR of ₹100 Cr, we’re scaling rapidly with ambitious growth plans for the future. Our vision is bold: to build the most loved fashion and lifestyle brands across the world while empowering individuals to express themselves effortlessly. Backed by marquee investors like Accel and Z47, we’re on a mission to make high-quality ethnic wear accessible to every woman. We’ve built a community of loyal customers who love our weekly design launches, impeccable quality, and value-for-money offerings. With a fast-moving team driven by creativity, technology, and customer obsession, Aramya is more than a fashion brand; it’s a movement to celebrate every woman’s unique journey.

We’re looking for a passionate Data Engineer with a strong foundation in data engineering. The ideal candidate should have a solid understanding of D2C or e-commerce platforms and be able to work across the stack to build high-performing, user-centric digital experiences.

Key Responsibilities
Design, build, and maintain scalable ETL/ELT pipelines using tools like Apache Airflow, Databricks, and Spark.
Own and manage data lakes/warehouses on AWS Redshift (or Snowflake/BigQuery).
Optimize SQL queries and data models for analytics, performance, and reliability.
Develop and maintain backend APIs using Python (FastAPI/Django/Flask) or Node.js.
Integrate external data sources (APIs, SFTP, third-party connectors) and ensure data quality and validation.
Implement monitoring, logging, and alerting for data pipeline health.
Collaborate with stakeholders to gather requirements and define data contracts.
Maintain infrastructure-as-code (Terraform/CDK) for data workflows and services.

Must-Have Skills
Strong in SQL and data modeling (OLTP and OLAP).
Solid programming experience in Python, preferably for both ETL and backend.
Hands-on experience with Databricks, Redshift, or Spark.
Experience with building and managing ETL pipelines using tools like Airflow, dbt, or similar.
Deep understanding of REST APIs, microservices architecture, and backend design patterns.
Familiarity with Docker, Git, and CI/CD pipelines.
Good grasp of cloud platforms (preferably AWS) and services like S3, Lambda, ECS/Fargate, and CloudWatch.

Nice-to-Have Skills
Exposure to streaming platforms like Kafka, Kinesis, or Flink.
Experience with Snowflake, BigQuery, or Delta Lake.
Proficiency in data governance, security best practices, and PII handling.
Familiarity with GraphQL, gRPC, or event-driven systems.
Knowledge of data observability tools (Monte Carlo, Great Expectations, Datafold).
Experience working in a D2C/eCommerce or analytics-heavy product environment.
Posted 3 days ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
We are looking for an experienced Azure Data Warehouse Engineer to join our Chennai-based data and analytics team. You will be responsible for designing and developing a unified data platform that integrates critical business systems and enables self-service analytics across departments. This is a strategic role aimed at building a single source of truth for customer and transaction data across the organization.

Key Goals of the Project:
Build a unified data warehouse on Azure, integrating data from Salesforce and QuickBooks.
Create department-specific flat views for business reporting.
Enable self-service dashboards using Tableau.
Deliver a centralized, accurate, and reliable data source for customer and transaction insights.

Proposed Technology Stack:
Cloud Platform: Azure (existing environment)
Data Warehouse: Azure Synapse or Snowflake (to be finalized)
ETL / Orchestration: Azure Data Factory, Python, Spark
Reporting Tools: Tableau

Key Responsibilities:
Design and implement scalable data models and architecture on Azure Synapse or Snowflake.
Develop ETL pipelines using Azure Data Factory, Python, and Spark to ingest and transform data from Salesforce, QuickBooks, and other sources.
Create robust, reusable data views for different business departments.
Collaborate with business analysts and stakeholders to deliver reliable datasets for Tableau dashboards.
Ensure data accuracy, consistency, security, and governance across the platform.
Optimize performance of large-scale data processing jobs.
Maintain documentation, data catalogs, and version control for data pipelines.

Qualifications & Skills:
5+ years of experience in data warehousing, ETL development, and cloud-based analytics.
Strong expertise in Azure Data Factory, Azure Synapse, or Snowflake.
Experience with Salesforce and QuickBooks data integration is highly desirable.
Proficiency in SQL, Python, and distributed data processing (e.g., PySpark).
Hands-on experience with Tableau or similar BI tools.
Understanding of data modeling (dimensional/star schema), warehousing best practices, and performance tuning.
Familiarity with cloud security, access management, and data governance in Azure.
Excellent problem-solving, communication, and collaboration skills.

Nice to Have:
Experience with DevOps and CI/CD practices in a data engineering context.
Familiarity with Data Vault or modern data stack architectures.
Knowledge of API integrations and data sync between cloud systems.
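One possible way to land Salesforce data for such a warehouse (an assumption on our part, not something the posting specifies) is the simple-salesforce Python library. A minimal extraction sketch with placeholder credentials and a hypothetical object query:

from simple_salesforce import Salesforce

# Placeholder credentials; in practice these would come from Azure Key Vault
# or another secrets store rather than being hard-coded.
sf = Salesforce(
    username="integration.user@example.com",
    password="********",
    security_token="********",
)

# Hypothetical extract of account records for staging into the warehouse.
result = sf.query_all("SELECT Id, Name, Industry, AnnualRevenue FROM Account")
rows = result["records"]

for row in rows[:5]:
    print(row["Id"], row["Name"])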
Posted 3 days ago
10.0 - 15.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Position: Cloudera Data Engineer
Location: Chennai
Notice Period: 0-30 days / immediate joiners
Experience: 10 to 15 years

The Cloudera Data Engineer will focus on designing, building, and maintaining scalable data pipelines and platforms within the Cloudera Hadoop ecosystem. Key skills include expertise in data warehousing, ETL processes, and strong programming abilities in languages like Python and SQL. Proficiency in Cloudera tools, including Spark, Hive, and potentially Airflow for orchestration, is also required.

Thank you
Posted 3 days ago
5.0 - 8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Dear Candidate,
Greetings from LTIMindtree! Your profile has been shortlisted for the technical round of interview. I hope you have a great day.

Skills: Data Analyst
Location: Hyderabad, Pune, Mumbai, Kolkata, Bangalore, Chennai
Notice: Immediate to 15 days

Please find the JD below for your reference:
5 to 8 years of experience in information technology / business analysis.
Data coverage analysis and identification of data gaps.
Understanding of product and channel hierarchies, data transformation, and aggregations.
Strong functional and technical knowledge of the retail industry (sales, online/offline, CRM).
Good understanding of ETL, SQL Server, and BI tools.
An ability to align and influence stakeholders and build working relationships.
A confident and articulate communicator capable of inspiring strong collaboration.
Good knowledge of IT systems and processes.
Strong analytical, problem-solving, and project management skills.
Attention to detail and complex processes.
Business engagement and stakeholder management.
Partner with business teams to identify needs and analytics opportunities.
Supervise and guide vendor partners to develop and maintain a data warehouse platform and BI reporting.
Work with management to prioritize business and information needs.
Mine data from sources, then reorganize the data into the target format.
Perform data analyses between the L'Oreal database and business requirements.
Interpret data, analyze results using statistical techniques, and provide ongoing reports.
Identify mappings and gaps, and provide transformation logic.
Research and verify the logic and relationships between datasets and KPIs.
Filter and clean data by reviewing reports and performance indicators to locate and correct code problems.

If interested, kindly share your updated resume and fill in the link below:
https://forms.office.com/r/EdFKPCNVaA
We shall get back to you soon regarding the further steps.
Posted 3 days ago
10.0 years
0 Lacs
Greater Kolkata Area
On-site
We are looking for a Senior Data Lead to lead enterprise-level data modernization and innovation. In this highly strategic role, you will design scalable, secure, and future-ready data architectures, modernize legacy systems, and provide trusted technical leadership across both technology and business teams. This is a unique opportunity to make a company-wide impact by influencing data strategy and enabling smarter, faster decision-making through data.

Key Responsibilities
Architect & Design: Lead the development of robust, scalable data models, data management systems, and integration frameworks to ensure enterprise-wide data accuracy, consistency, and security.
Domain Expertise: Act as a subject matter expert across key business functions such as Supply Chain, Product Engineering, Sales & Marketing, Manufacturing, Finance, and Legal.
Modernization Leadership: Drive the transformation of legacy systems and manage end-to-end cloud migrations with minimal business disruption.
Collaboration: Partner with data engineers, scientists, analysts, and IT leaders to build high-performance, scalable data pipelines and transformation solutions.
Governance & Compliance: Establish and maintain data governance frameworks including metadata repositories, data dictionaries, and data lineage documentation.
Strategic Advisory: Provide guidance on data architecture best practices, technology selection, and roadmap alignment to senior leadership and cross-functional teams.
Mentorship: Serve as a mentor and thought leader to junior data professionals, fostering a culture of innovation, knowledge sharing, and technical excellence.
Innovation & Trends: Stay abreast of emerging technologies in cloud, data platforms, and AI/ML to identify and implement innovative solutions.
Communication: Translate complex technical concepts into clear, actionable insights for technical and non-technical audiences alike.

Required Qualifications
10+ years of experience in data architecture, engineering, or enterprise data management roles.
Demonstrated success leading large-scale data initiatives in life sciences or other highly regulated industries.
Deep expertise in modern data architecture paradigms such as Data Lakehouse, Data Mesh, or Data Fabric.
Strong hands-on experience with cloud platforms like AWS, Azure, or Google Cloud Platform (GCP).
Proficiency in data modeling, ETL/ELT frameworks, and enterprise integration patterns.
Deep understanding of data governance, metadata management, master data management (MDM), and data quality practices.
Experience with tools and platforms including but not limited to: data integration (Informatica, Talend), data governance (Collibra), modeling/transformation (dbt), and cloud platforms (Snowflake, Databricks).
Excellent problem-solving skills with the ability to translate business requirements into scalable data solutions.
Exceptional communication skills and experience engaging with both executive stakeholders and engineering teams.

Preferred Qualifications (Nice to Have)
Experience with AI/ML data pipelines or real-time streaming architectures.
Certifications in cloud technologies (e.g., AWS Certified Solutions Architect, Azure Data Engineer).
Familiarity with regulatory frameworks such as GxP, HIPAA, or GDPR.
Posted 3 days ago
0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
Work Level: Individual
Core: Responsible
Leadership: Team Alignment
Industry Type: Information Technology
Function: Database Administrator
Key Skills: MySQL, SQL Writing, PL/SQL
Education: Graduate

Note: This is a requirement for one of the Workassist hiring partners. This is a remote position.

Responsibilities
Write, optimize, and maintain SQL queries, stored procedures, and functions.
Assist in designing and managing relational databases.
Perform data extraction, transformation, and loading (ETL) tasks.
Ensure database integrity, security, and performance.
Work with developers to integrate databases into applications.
Support data analysis and reporting by writing complex queries.
Document database structures, processes, and best practices.

Requirements
Currently pursuing or recently completed a degree in Computer Science, Information Technology, or a related field.
Strong understanding of SQL and relational database concepts.
Experience with databases such as MySQL, PostgreSQL, SQL Server, or Oracle.
Ability to write efficient and optimized SQL queries.
Basic knowledge of indexing, stored procedures, and triggers.
Understanding of database normalization and design principles.
Good analytical and problem-solving skills.
Ability to work independently and in a team in a remote setting.

Company Description
Workassist is an online recruitment and employment solution platform based in Lucknow, India. We provide relevant profiles to employers and connect job seekers with the best opportunities across various industries. With a network of over 10,000+ recruiters, we help employers recruit talented individuals from sectors such as Banking & Finance, Consulting, Sales & Marketing, HR, IT, Operations, and Legal. We have adapted to the new normal and strive to provide a seamless job search experience for job seekers worldwide. Our goal is to enhance the job seeking experience by leveraging technology and matching job seekers with the right employers.

For a seamless job search experience, visit our website: https://bit.ly/3QBfBU2
(Note: There are many more opportunities apart from this on the portal. Depending on your skills, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 3 days ago
10.0 - 12.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Position Summary:
IT - Lead Architect/Associate Principal – Azure Lake - D&IT DATA
This profile will lead a team of architects and engineers focused on strategic Azure architecture and AI projects.

Job Responsibility:

Strategic Data Architecture and Roadmap:
Develop and maintain the company's data architecture strategy aligned with business objectives.
Lead design and/or architecture validation reviews with all stakeholders, assess projects against architectural principles and target solutions, and organize and lead architecture committees.
Select new solutions that meet business needs, aligned with existing recommendations and solutions, and broadly with IS strategy.
Model the company's information systems and business processes.
Define a clear roadmap to modernize and optimize data management practices and technologies.

Emerging Technologies and Innovation:
Drive the adoption of new technologies (AI/ML) and assess their impact on the organization's data strategy.
Conduct technological watch in both company activity domains and IT technologies, promoting innovative solutions adapted to the company.
Define principles, standards, and tools for system modeling and process management.

Platform Design and Implementation:
Architect scalable data flows, storage solutions, and analytics platforms in cloud and hybrid environments.
Ensure secure, high-performing, and cost-effective data solutions.

Data Governance and Quality:
Establish data governance frameworks ensuring data accuracy, availability, and security.
Promote and enforce best practices for data quality management.
Ensure compliance with enterprise architecture standards and principles.

Technical Leadership:
Act as a technical advisor on complex data-related projects and proofs of concept.

Stakeholder Collaboration:
Collaborate with business stakeholders to understand data needs and translate them into architectural solutions.
Work with relevant stakeholders in defining project scope, planning development, and validating milestones throughout project execution.

Exposure to a wide range of technologies related to data lakes: SQL, Synapse, Databricks, Power BI, Fabric, Python.
Tools: Visual Studio & TFS, Git.
Database: SQL Server, NoSQL.
Methodologies: Agile (SCRUM).
SAP BW / SAC.

Required Skills:
Expert in Azure, Databricks, and Synapse.
Proven experience leading technical teams and strategic projects.
Deep knowledge of cloud data platforms (Microsoft Azure, Fabric, Databricks, or AWS).
Proven experience in designing and implementing AI solutions within data architectures.
Understanding of SAP-based technologies (SAP BW, SAP DataSphere, HANA, S/4, ECC).
Experience with analytics, visualization, reporting, and self-service tools (Power BI, Tableau, SAP Analytics Cloud).
Expert understanding of data modeling, ETL/ELT technologies, and big data.
Experience with relational and NoSQL databases.
Deep knowledge of data security and compliance best practices.
Strong experience in solution architecture.
Proven ability to lead AI/ML projects from conception to deployment.
Familiarity with data mesh and data fabric architectural approaches.

Qualification and Experience:
Experience: 10-12 years.
Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
Experience in data architecture, with at least 5 years in a leadership role.
Experience with AI/ML projects.
Certifications in data architecture, cloud technologies, or project management.
5 years of experience in AI model design and deployment.
Excellent communication and presentation skills for both technical and non-technical audiences.
Strong problem-solving skills, stakeholder management, and the ability to navigate complexity.
Posted 3 days ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
5 years of experience as a Data Engineer or in a similar role.
Bachelor's degree in Computer Science, Data Engineering, Information Technology, or a related field.
Strong knowledge of data engineering tools and technologies (e.g., SQL, ETL, data warehousing).
Experience with data pipeline frameworks and data processing platforms (e.g., Apache Kafka, Apache Spark).
Proficiency in programming languages such as Python, Java, or Scala.
Experience with cloud platforms (e.g., AWS, Google Cloud Platform, Azure).
Knowledge of data modeling, database design, and data governance.
MongoDB experience is a must.
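As an illustration of the Apache Kafka experience this listing asks for, here is a minimal consumer sketch using the kafka-python package; the broker address, topic, and consumer group are placeholders, and a real deployment would also configure security (SASL/SSL) and error handling.

import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "orders",                              # hypothetical topic
    bootstrap_servers="localhost:9092",    # placeholder broker
    group_id="etl-loader",
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # A real pipeline would transform the event and write it to MongoDB
    # or a warehouse here instead of printing it.
    print(f"partition={message.partition} offset={message.offset} event={event}")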
Posted 3 days ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Company: Our Client is a multinational IT services and consulting company headquartered in the USA, with revenues of 19.7 billion USD, a global workforce of 350,000, and a NASDAQ listing. It is one of the leading IT services firms globally, known for its work in digital transformation, technology consulting, and business process outsourcing. Its business focus spans Digital Engineering, Cloud Services, AI and Data Analytics, Enterprise Applications (SAP, Oracle, Salesforce), IT Infrastructure, and Business Process Outsourcing. It has major delivery centers in India, including Chennai, Pune, Hyderabad, and Bengaluru, with offices in over 35 countries; India is a major operational hub alongside its U.S. headquarters.

Job Title: Tosca Automation
Location: Hyderabad
Experience: 6+ years
Job Type: Contract
Notice Period: Immediate
Mandatory Skills: Tosca Automation, CI/CD, ETL

Job Description:
A strong Tosca test automation profile is urgently needed.
Minimum 6+ years of experience.
Strong testing experience.
Experience in integrating Tosca with CI/CD pipelines.
Knowledge of other test automation tools; Selenium/Python knowledge.
Strong SQL knowledge; ETL/DWH testing.
Experience in developing an automation framework from scratch.
Ability to drive work collaboratively within cross-functional teams.
Agile methodologies.
Posted 3 days ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: SAP BusinessObjects Data Services
Good-to-have skills: NA
Minimum 3 years of experience is required.
Educational Qualification: 15 years of full-time education

Summary:
As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with team members to understand project needs, developing application features, and ensuring that the solutions align with business objectives. You will also engage in testing and troubleshooting to enhance application performance and user experience, while continuously seeking opportunities for improvement and innovation in application development.

Roles & Responsibilities:
Expected to perform independently and become an SME.
Required active participation/contribution in team discussions.
Contribute to providing solutions to work-related problems.
Assist in the documentation of application processes and workflows.
Engage in code reviews to ensure quality and adherence to best practices.

Professional & Technical Skills:
Must-have skills: Proficiency in SAP BusinessObjects Data Services.
Strong understanding of data integration and transformation processes.
Experience with ETL (Extract, Transform, Load) methodologies.
Familiarity with database management systems and SQL.
Ability to troubleshoot and resolve application issues effectively.

Additional Information:
The candidate should have a minimum of 3 years of experience in SAP BusinessObjects Data Services.
This position is based at our Hyderabad office.
A 15-year full-time education is required.
Posted 3 days ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Job

About Client: Our Client is a global IT services company headquartered in Southborough, Massachusetts, USA. Founded in 1996, it has revenue of $1.8B and 35,000+ associates worldwide, specializing in digital engineering and IT services that help clients modernize their technology infrastructure, adopt cloud and AI solutions, and accelerate innovation. It partners with major firms in banking, healthcare, telecom, and media, and is known for combining deep industry expertise with agile development practices, enabling scalable and cost-effective digital transformation. The company operates in over 50 locations across more than 25 countries, has delivery centers in Asia, Europe, and North America, and is backed by Baring Private Equity Asia.

Job Title: ETL Developer
Key Skills: DataStage (ETL), R language (must have), Linux scripting, SQL, Control-M
Locations: Hyderabad, Pune
Experience: 6-8 years
Education Qualification: Any Graduation
Work Mode: Hybrid
Employment Type: Contract to Hire
Notice Period: Immediate - 10 days

Job Description:
The candidate should have 5-7 years or more of experience in ETL development.
Understanding of data modeling concepts.
Passionate about sophisticated data structures and problem solutions; quickly learns new data tools and ideas.
Proficient in DataStage (ETL), R language (must have), Linux scripting, SQL, and Control-M; GCP knowledge would be an added advantage.
The candidate should be well aware of Agile ways of working.
Knowledge of different SQL/NoSQL data storage techniques and Big Data technologies.
Posted 3 days ago
50.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Client: Our client is a French multinational information technology (IT) services and consulting company, headquartered in Paris, France. Founded in 1967, it has been a leader in business transformation for over 50 years, leveraging technology to address a wide range of business needs, from strategy and design to managing operations. The company is committed to unleashing human energy through technology for an inclusive and sustainable future, helping organizations accelerate their transition to a digital and sustainable world. They provide a variety of services, including consulting, technology, professional, and outsourcing services.

Job Details:
Location: Hyderabad
Mode of Work: Hybrid
Notice Period: Immediate joiners
Experience: 6-8 years
Type of Hire: Contract to Hire

Job Description:
• Understanding of Spark core concepts like RDDs, DataFrames, DataSets, Spark SQL, and Spark Streaming.
• Experience with Spark optimization techniques.
• Deep knowledge of Delta Lake features like time travel, schema evolution, and data partitioning.
• Ability to design and implement data pipelines using Spark and Delta Lake as the data storage layer.
• Proficiency in Python/Scala/Java for Spark development and integration with the ETL process.
• Knowledge of data ingestion techniques from various sources (flat files, CSV, API, database).
• Understanding of data quality best practices and data validation techniques.

Other Skills:
• Understanding of data warehouse concepts and data modeling techniques.
• Expertise in Git for code management.
• Familiarity with CI/CD pipelines and containerization technologies.
• Nice to have: experience using data integration tools like DataStage/Prophecy/Informatica/Ab Initio.
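To illustrate the Delta Lake time-travel and schema-evolution features named above, here is a brief PySpark sketch. It assumes a Spark session with the Delta Lake extensions enabled; the table path, columns, and timestamp are hypothetical.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-time-travel").getOrCreate()

path = "/mnt/lake/silver/customers"  # hypothetical Delta table path

# Time travel: read the table as of an earlier version number...
v0 = spark.read.format("delta").option("versionAsOf", 0).load(path)

# ...or as of a timestamp.
snapshot = (
    spark.read.format("delta")
    .option("timestampAsOf", "2024-01-01 00:00:00")
    .load(path)
)

# Schema evolution: when appending data that carries new columns, mergeSchema
# tells Delta to evolve the table schema instead of failing the write.
incoming = spark.createDataFrame(
    [(1, "IN", "gold")], ["customer_id", "country", "tier"]
)
incoming.write.format("delta").mode("append").option("mergeSchema", "true").save(path)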
Posted 3 days ago
10.0 years
0 Lacs
Kochi, Kerala, India
On-site
Introduction
We are looking for candidates with 10+ years of experience in a data architect role.

Responsibilities include:
Design and implement scalable, secure, and cost-effective data architectures using GCP.
Lead the design and development of data pipelines with BigQuery, Dataflow, and Cloud Storage.
Architect and implement data lakes, data warehouses, and real-time data processing solutions on GCP.
Ensure data architecture aligns with business goals, governance, and compliance requirements.
Collaborate with stakeholders to define data strategy and roadmap.
Design and deploy BigQuery solutions for optimized performance and cost efficiency.
Build and maintain ETL/ELT pipelines for large-scale data processing.
Leverage Cloud Pub/Sub, Dataflow, and Cloud Functions for real-time data integration.
Implement best practices for data security, privacy, and compliance in cloud environments.
Integrate machine learning workflows with data pipelines and analytics tools.
Define data governance frameworks and manage data lineage.
Lead data modeling efforts to ensure consistency, accuracy, and performance across systems.
Optimize cloud infrastructure for scalability, performance, and reliability.
Mentor junior team members and ensure adherence to architectural standards.
Collaborate with DevOps teams to implement Infrastructure as Code (Terraform, Cloud Deployment Manager).
Ensure high availability and disaster recovery solutions are built into data systems.
Conduct technical reviews, audits, and performance tuning for data solutions.
Design solutions for multi-region and multi-cloud data architecture.
Stay updated on emerging technologies and trends in data engineering and GCP.
Drive innovation in data architecture, recommending new tools and services on GCP.

Certifications: Google Cloud certification is preferred.

Primary Skills:
7+ years of experience in data architecture, with at least 3 years in GCP environments.
Expertise in BigQuery, Cloud Dataflow, Cloud Pub/Sub, Cloud Storage, and related GCP services.
Strong experience in data warehousing, data lakes, and real-time data pipelines.
Proficiency in SQL, Python, or other data processing languages.
Experience with cloud security, data governance, and compliance frameworks.
Strong problem-solving skills and ability to architect solutions for complex data environments.
Google Cloud Certification (Professional Data Engineer, Professional Cloud Architect) preferred.
Leadership experience and ability to mentor technical teams.
Excellent communication and collaboration skills.
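As a small illustration of the BigQuery work this role describes, here is a sketch using the google-cloud-bigquery Python client; the project, dataset, table, and query are placeholders, and credentials are assumed to come from application-default authentication.

from google.cloud import bigquery

client = bigquery.Client(project="my-analytics-project")  # placeholder project

query = """
    SELECT order_date, SUM(amount) AS daily_revenue
    FROM `my-analytics-project.sales.orders`
    WHERE order_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY)
    GROUP BY order_date
    ORDER BY order_date
"""

# Submit the query job and iterate over the result rows.
for row in client.query(query).result():
    print(row.order_date, row.daily_revenue)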
Posted 3 days ago
10.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Introduction
We are looking for candidates with 10+ years of experience in a data architect role.

Responsibilities include:
Design and implement scalable, secure, and cost-effective data architectures using GCP.
Lead the design and development of data pipelines with BigQuery, Dataflow, and Cloud Storage.
Architect and implement data lakes, data warehouses, and real-time data processing solutions on GCP.
Ensure data architecture aligns with business goals, governance, and compliance requirements.
Collaborate with stakeholders to define data strategy and roadmap.
Design and deploy BigQuery solutions for optimized performance and cost efficiency.
Build and maintain ETL/ELT pipelines for large-scale data processing.
Leverage Cloud Pub/Sub, Dataflow, and Cloud Functions for real-time data integration.
Implement best practices for data security, privacy, and compliance in cloud environments.
Integrate machine learning workflows with data pipelines and analytics tools.
Define data governance frameworks and manage data lineage.
Lead data modeling efforts to ensure consistency, accuracy, and performance across systems.
Optimize cloud infrastructure for scalability, performance, and reliability.
Mentor junior team members and ensure adherence to architectural standards.
Collaborate with DevOps teams to implement Infrastructure as Code (Terraform, Cloud Deployment Manager).
Ensure high availability and disaster recovery solutions are built into data systems.
Conduct technical reviews, audits, and performance tuning for data solutions.
Design solutions for multi-region and multi-cloud data architecture.
Stay updated on emerging technologies and trends in data engineering and GCP.
Drive innovation in data architecture, recommending new tools and services on GCP.

Certifications: Google Cloud certification is preferred.

Primary Skills:
7+ years of experience in data architecture, with at least 3 years in GCP environments.
Expertise in BigQuery, Cloud Dataflow, Cloud Pub/Sub, Cloud Storage, and related GCP services.
Strong experience in data warehousing, data lakes, and real-time data pipelines.
Proficiency in SQL, Python, or other data processing languages.
Experience with cloud security, data governance, and compliance frameworks.
Strong problem-solving skills and ability to architect solutions for complex data environments.
Google Cloud Certification (Professional Data Engineer, Professional Cloud Architect) preferred.
Leadership experience and ability to mentor technical teams.
Excellent communication and collaboration skills.
Posted 3 days ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
About Client: Our Client is a global IT services company headquartered in Southborough, Massachusetts, USA. Founded in 1996, it has revenue of $1.8B and 35,000+ associates worldwide, specializing in digital engineering and IT services that help clients modernize their technology infrastructure, adopt cloud and AI solutions, and accelerate innovation. It partners with major firms in banking, healthcare, telecom, and media, and is known for combining deep industry expertise with agile development practices, enabling scalable and cost-effective digital transformation. The company operates in over 50 locations across more than 25 countries, has delivery centers in Asia, Europe, and North America, and is backed by Baring Private Equity Asia.

Job Title: ETL Developer
Key Skills: DataStage (ETL), R language (must have), Linux scripting, SQL, Control-M, GCP knowledge
Job Locations: Pune
Experience: 4-6 years
Education Qualification: Any Graduation
Work Mode: Hybrid
Employment Type: Contract
Notice Period: Immediate - 10 days
Payroll: People Prime Worldwide

Job Description:
The candidate should have 8 or more years of experience in ETL development.
Understanding of data modeling concepts.
Passionate about sophisticated data structures and problem solutions; quickly learns new data tools and ideas.
Proficient in DataStage (ETL), R language (must have), Linux scripting, SQL, and Control-M; GCP knowledge would be an added advantage.
The candidate should be well aware of Agile ways of working.
Knowledge of different SQL/NoSQL data storage techniques and Big Data technologies.
Posted 3 days ago
7.0 years
0 Lacs
Gurugram, Haryana, India
Remote
K&K Talents is an international recruiting agency that has been providing technical resources globally since 1993. This position is with one of our clients in India, who is actively hiring candidates to expand their teams.

Title: Technical Analyst - BI Developer/Report Writer
Location: Gurugram, India (remote for the first six months, onsite thereafter)
Employment Type: Full-time Permanent / C2H
Must-Have Skills: Power BI, Dundas BI, Tableau, Cognos

Required Skills:
7+ years of experience as a Report Writer, BI Developer, or SQL Developer.
Advanced proficiency in SQL (MySQL, PostgreSQL, or similar RDBMS).
Experience developing and maintaining reports using BI tools such as Dundas BI, Power BI, Tableau, or Cognos.
Strong knowledge of data modeling techniques and relational database design.
Familiarity with ETL processes, data warehousing concepts, and performance tuning.
Exposure to cloud platforms (Azure, AWS) is a plus.
Experience working in Agile/Scrum environments.
Strong analytical and problem-solving skills.
Excellent communication skills and ability to work in a team environment.
Bachelor's degree in Computer Science, Information Systems, Engineering, or a related field.
Posted 3 days ago
The ETL (Extract, Transform, Load) job market in India is thriving with numerous opportunities for job seekers. ETL professionals play a crucial role in managing and analyzing data effectively for organizations across various industries. If you are considering a career in ETL, this article will provide you with valuable insights into the job market in India.
The top hiring cities are known for their thriving tech industries and often have a high demand for ETL professionals.
The average salary range for ETL professionals in India varies based on experience levels. Entry-level positions typically start at around ₹3-5 lakhs per annum, while experienced professionals can earn upwards of ₹10-15 lakhs per annum.
In the ETL field, a typical career path may include roles such as:
- Junior ETL Developer
- ETL Developer
- Senior ETL Developer
- ETL Tech Lead
- ETL Architect
As you gain experience and expertise, you can progress to higher-level roles within the ETL domain.
Alongside ETL, professionals in this field are often expected to have skills in:
- SQL
- Data Warehousing
- Data Modeling
- ETL Tools (e.g., Informatica, Talend)
- Database Management Systems (e.g., Oracle, SQL Server)
Having a strong foundation in these related skills can enhance your capabilities as an ETL professional.
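To make the extract-transform-load idea concrete, here is a small, self-contained Python sketch of an ETL flow. The CSV content and the SQLite target are invented for illustration; a production pipeline would read from real source systems and load a warehouse instead.

import csv
import io
import sqlite3

# Extract: in a real pipeline this might come from an API, a file drop, or a source DB.
RAW_CSV = """order_id,amount,currency
1,100.00,INR
2,250.50,INR
"""
rows = list(csv.DictReader(io.StringIO(RAW_CSV)))

# Transform: cast types so downstream queries behave correctly.
transformed = [
    {
        "order_id": int(r["order_id"]),
        "amount": float(r["amount"]),
        "currency": r["currency"],
    }
    for r in rows
]

# Load: write into a target table (SQLite stands in for the warehouse here).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER, amount REAL, currency TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (:order_id, :amount, :currency)", transformed
)
print(conn.execute("SELECT COUNT(*), SUM(amount) FROM orders").fetchone())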
ETL job interviews commonly probe these same areas: SQL querying, data warehousing concepts, transformation logic, and hands-on experience with specific ETL tools.
As you explore ETL jobs in India, remember to showcase your skills and expertise confidently during interviews. With the right preparation and a solid understanding of ETL concepts, you can embark on a rewarding career in this dynamic field. Good luck with your job search!