Home
Jobs

13457 ETL Jobs - Page 48

Filter
Filter Interviews
Min: 0 years
Max: 25 years
Min: ₹0
Max: ₹10000000
Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 - 12.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

It was nice visiting your profile on the portal. One of our top MNC clients has a critical opening for an Artificial Intelligence (AI) Engineer in Pune. Please apply with relevant profiles.

Required Skill: Artificial Intelligence (AI) Engineer
Years of Experience: 5 to 12 years
CTC: Can be discussed
Notice Period: Immediate joiners, 15-20 days, or can be discussed
Work Location: Pune
Interview: Online
Candidates should have AI experience.

Job Description

About the Role: In this role, you will be at the forefront of developing and deploying cutting-edge AI solutions that directly impact our business. You will leverage your expertise in data and machine learning engineering, natural language processing (NLP), computer vision, and agentic AI to build scalable and robust systems that drive innovation and efficiency. You will be responsible for the entire AI lifecycle, from data acquisition and preprocessing to model development, deployment, and monitoring.

Responsibilities

Data and ML Engineering: Design and implement robust data pipelines to extract, transform, and load (ETL) data from diverse structured and unstructured sources (e.g., databases, APIs, text documents, images, videos). Develop and maintain scalable data storage and processing solutions. Perform comprehensive data cleaning, validation, and feature engineering to prepare data for machine learning models. Build and deploy machine learning models for a variety of business applications, including but not limited to process optimization and enterprise efficiency.

Web Scraping and Document Processing: Implement web scraping solutions and utilize document processing libraries to extract and process data from various sources.

NLP and Computer Vision: Develop and implement NLP models for tasks such as text classification, sentiment analysis, entity recognition, and language generation. Implement computer vision models for image classification, object detection, and image segmentation.

Agentic AI Development: Design and develop highly scalable, production-ready code for agentic AI systems. Implement and integrate agentic AI solutions into existing workflows to automate complex tasks and improve decision-making. Develop and maintain agentic systems for data wrangling, supply chain optimization, and enterprise efficiency projects. Work with LLMs and other related technologies to create agentic workflows. Integrate NLP and computer vision capabilities into agentic workflows to enhance their ability to understand and interact with diverse data sources.

Model Development and Deployment: Design and develop machine learning models and algorithms to solve business problems. Evaluate and optimize model performance through rigorous testing and experimentation. Deploy and monitor machine learning models in production environments. Implement best practices for model versioning, reproducibility, and explainability. Optimize and deploy NLP and computer vision models for real-time inference.

Communication and Collaboration: Clearly articulate complex technical concepts to both technical and non-technical audiences. Demonstrate live coding proficiency and effectively explain your code and design decisions. Collaborate with cross-functional teams, including product managers, data scientists, and software engineers. Document code, models, and processes for knowledge sharing and maintainability.

Qualifications: Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Machine Learning, Natural Language Processing, Computer Vision, or a related field. Proven experience in developing and deploying machine learning models, NLP models, computer vision models, and data pipelines. Strong programming skills in Python and experience with relevant libraries (e.g., TensorFlow, PyTorch, scikit-learn, pandas, NumPy, Hugging Face Transformers, OpenCV, Pillow). Experience with cloud computing platforms (e.g., AWS, GCP, Azure). Experience with database technologies (e.g., SQL, NoSQL). Experience with agentic AI development and LLMs is highly desirable. Excellent problem-solving and analytical skills. A product engineering background. Ability to demonstrate live coding proficiency. Experience in productionizing ML models.

Preferred Qualifications: Experience with containerization and orchestration technologies (e.g., Docker, Kubernetes). Experience with MLOps practices and tools. Experience with building RAG systems. Experience with deploying and optimizing models for edge devices. Experience with video processing and analysis.

This job is provided by Shine.com
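As context for the NLP responsibilities above, here is a minimal, hedged sketch of a text-classification step using Hugging Face Transformers, one of the libraries the posting lists. The model checkpoint, example text, and labels are illustrative assumptions, not requirements of the role.

```python
# Minimal sketch: zero-shot text classification with Hugging Face Transformers.
# The checkpoint and candidate labels are illustrative assumptions only.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="facebook/bart-large-mnli",  # assumed publicly available checkpoint
)

ticket = "The invoice total does not match the purchase order."
labels = ["finance", "logistics", "customer support"]  # hypothetical categories

result = classifier(ticket, candidate_labels=labels)
print(result["labels"][0], round(result["scores"][0], 3))
```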

Posted 4 days ago

Apply

5.0 - 12.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Source: LinkedIn

It was nice visiting your profile on the portal. One of our top MNC clients has a critical opening for an Artificial Intelligence (AI) Engineer in Pune. Please apply with relevant profiles.

Required Skill: Artificial Intelligence (AI) Engineer
Years of Experience: 5 to 12 years
CTC: Can be discussed
Notice Period: Immediate joiners, 15-20 days, or can be discussed
Work Location: Pune
Interview: Online
Candidates should have AI experience.

Job Description

About the Role: In this role, you will be at the forefront of developing and deploying cutting-edge AI solutions that directly impact our business. You will leverage your expertise in data and machine learning engineering, natural language processing (NLP), computer vision, and agentic AI to build scalable and robust systems that drive innovation and efficiency. You will be responsible for the entire AI lifecycle, from data acquisition and preprocessing to model development, deployment, and monitoring.

Responsibilities

Data and ML Engineering: Design and implement robust data pipelines to extract, transform, and load (ETL) data from diverse structured and unstructured sources (e.g., databases, APIs, text documents, images, videos). Develop and maintain scalable data storage and processing solutions. Perform comprehensive data cleaning, validation, and feature engineering to prepare data for machine learning models. Build and deploy machine learning models for a variety of business applications, including but not limited to process optimization and enterprise efficiency.

Web Scraping and Document Processing: Implement web scraping solutions and utilize document processing libraries to extract and process data from various sources.

NLP and Computer Vision: Develop and implement NLP models for tasks such as text classification, sentiment analysis, entity recognition, and language generation. Implement computer vision models for image classification, object detection, and image segmentation.

Agentic AI Development: Design and develop highly scalable, production-ready code for agentic AI systems. Implement and integrate agentic AI solutions into existing workflows to automate complex tasks and improve decision-making. Develop and maintain agentic systems for data wrangling, supply chain optimization, and enterprise efficiency projects. Work with LLMs and other related technologies to create agentic workflows. Integrate NLP and computer vision capabilities into agentic workflows to enhance their ability to understand and interact with diverse data sources.

Model Development and Deployment: Design and develop machine learning models and algorithms to solve business problems. Evaluate and optimize model performance through rigorous testing and experimentation. Deploy and monitor machine learning models in production environments. Implement best practices for model versioning, reproducibility, and explainability. Optimize and deploy NLP and computer vision models for real-time inference.

Communication and Collaboration: Clearly articulate complex technical concepts to both technical and non-technical audiences. Demonstrate live coding proficiency and effectively explain your code and design decisions. Collaborate with cross-functional teams, including product managers, data scientists, and software engineers. Document code, models, and processes for knowledge sharing and maintainability.

Qualifications: Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Machine Learning, Natural Language Processing, Computer Vision, or a related field. Proven experience in developing and deploying machine learning models, NLP models, computer vision models, and data pipelines. Strong programming skills in Python and experience with relevant libraries (e.g., TensorFlow, PyTorch, scikit-learn, pandas, NumPy, Hugging Face Transformers, OpenCV, Pillow). Experience with cloud computing platforms (e.g., AWS, GCP, Azure). Experience with database technologies (e.g., SQL, NoSQL). Experience with agentic AI development and LLMs is highly desirable. Excellent problem-solving and analytical skills. A product engineering background. Ability to demonstrate live coding proficiency. Experience in productionizing ML models.

Preferred Qualifications: Experience with containerization and orchestration technologies (e.g., Docker, Kubernetes). Experience with MLOps practices and tools. Experience with building RAG systems. Experience with deploying and optimizing models for edge devices. Experience with video processing and analysis.

This job is provided by Shine.com

Posted 4 days ago

Apply

0.0 - 2.0 years

0 Lacs

Raipur, Chhattisgarh

On-site

Source: Indeed

Company Name: Interbiz Consulting Pvt Ltd
Position/Designation: Data Engineer
Job Location: Raipur (C.G.)
Mode: Work from office
Experience: 2 to 5 Years

We are seeking a talented and detail-oriented Data Engineer to join our growing Data & Analytics team. You will be responsible for building and maintaining robust, scalable data pipelines and infrastructure to support data-driven decision-making across the organization.

Key Responsibilities: Design and implement ETL/ELT data pipelines for structured and unstructured data using Azure Data Factory, Databricks, or Apache Spark. Work with Azure Blob Storage, Data Lake, and Synapse Analytics to build scalable data lakes and warehouses. Develop real-time data ingestion pipelines using Apache Kafka, Apache Flink, or Apache Beam. Build and schedule jobs using orchestration tools like Apache Airflow or Dagster. Perform data modeling using the Kimball methodology for building dimensional models in Snowflake or other data warehouses. Implement data versioning and transformation using DBT and Apache Iceberg or Delta Lake. Manage data cataloging and lineage using tools like Marquez or Collibra. Collaborate with DevOps teams to containerize solutions using Docker, manage infrastructure with Terraform, and deploy on Kubernetes. Set up and maintain monitoring and alerting systems using Prometheus and Grafana for performance and reliability.

Required Skills and Qualifications: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field. [1–5+] years of experience in data engineering or related roles. Proficiency in Python, with strong knowledge of OOP and data structures & algorithms. Comfortable working in Linux environments for development and deployment. Strong command of SQL and understanding of relational (DBMS) and NoSQL databases. Solid experience with Apache Spark (PySpark/Scala). Familiarity with real-time processing tools like Kafka, Flink, or Beam. Hands-on experience with Airflow, Dagster, or similar orchestration tools. Deep experience with Microsoft Azure, especially Azure Data Factory, Blob Storage, Synapse, Azure Functions, etc. AZ-900 or other Azure certifications are a plus. Knowledge of dimensional modeling, Snowflake, Apache Iceberg, and Delta Lake. Understanding of modern Lakehouse architecture and related best practices. Familiarity with Marquez, Collibra, or other cataloging tools. Experience with Terraform, Docker, Kubernetes, and Jenkins or equivalent CI/CD tools. Proficiency in setting up dashboards and alerts with Prometheus and Grafana.

Interested candidates may share their CV at swapna.rani@interbizconsulting.com or visit www.interbizconsulting.com. Note: Immediate joiners will be preferred.

Job Type: Full-time
Pay: From ₹25,000.00 per month
Benefits: Food provided, Health insurance, Leave encashment, Provident Fund
Supplemental Pay: Yearly bonus

Application Question(s): Do you have at least 2 years of work experience in Python? Do you have at least 2 years of work experience in Data Science? Are you from Raipur, Chhattisgarh? Are you willing to work for more than 2 years? What is your notice period? What are your current and expected salaries?

Work Location: In person
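For illustration, here is a minimal sketch of the kind of scheduled ETL job this posting describes, using Apache Airflow, one of the orchestration tools it names. The DAG id and the task logic are placeholders, not details from the listing.

```python
# Minimal sketch of a daily ETL job orchestrated with Apache Airflow (2.4+ style).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_and_load():
    # Placeholder for real extract/transform/load logic
    # (e.g., read from Azure Blob Storage, write to Snowflake).
    print("ETL step executed")


with DAG(
    dag_id="daily_sales_etl",        # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    etl_task = PythonOperator(
        task_id="extract_and_load",
        python_callable=extract_and_load,
    )
```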

Posted 4 days ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Title: Data Engineer
Location: Baner, Pune (Hybrid)
Duration: 6 to 12 months contract

Responsibilities: Design, develop, and execute robust, scalable data pipelines to extract, transform, and load data from on-premises SQL Server databases to GCP Cloud SQL for PostgreSQL. Analyze existing SQL Server schemas, data types, and stored procedures, and plan for their conversion and optimization for the PostgreSQL environment. Implement and support data migration strategies from on-premise or legacy systems to cloud environments, primarily GCP. Implement rigorous data validation and quality checks before, during, and after migration to ensure data integrity and consistency. Collaborate closely with Database Administrators, application developers, and business analysts to understand source data structures and target requirements. Develop and maintain scripts (primarily Python or Java) for automating migration tasks, data validation, and post-migration data reconciliation. Identify and resolve data discrepancies, performance bottlenecks, and technical challenges encountered during the migration process. Document migration strategies, data mapping, transformation rules, and post-migration validation procedures. Support cutover activities and ensure minimal downtime during the transition phase. Apply data governance, security, and privacy standards across data assets in the cloud. Refactor SQL Server stored procedures and business logic for implementation in PostgreSQL or the application layer where applicable. Leverage schema conversion tools (e.g., pgLoader, custom scripts) to automate and validate schema translation from SQL Server to PostgreSQL. Develop automated data validation and reconciliation scripts to ensure row-level parity and business logic integrity post-migration. Implement robust monitoring, logging, and alerting mechanisms to ensure pipeline reliability and quick failure resolution using GCP-native tools (e.g., Stackdriver/Cloud Monitoring).

Must-Have Skills: Expert-level SQL proficiency across T-SQL (SQL Server) and PostgreSQL, with strong hands-on experience in data transformation, query optimization, and relational database design. Solid understanding of and hands-on experience with relational databases. Strong experience in data engineering, with hands-on cloud work, preferably on GCP. Experience with data migration techniques and strategies between different relational database platforms. Hands-on experience with cloud data and monitoring services (relational database services, data pipeline services, logging and monitoring services) on one of the major cloud providers: GCP, AWS, or Azure. Experience with Python or Java for building and managing data pipelines, with proficiency in data manipulation, scripting, and automation of data processes. Familiarity with ETL/ELT processes and orchestration tools like Cloud Composer (Airflow). Understanding of data modeling and schema design. Strong analytical and problem-solving skills, with a keen eye for data quality and integrity. Experience with version control systems like Git.

Good-to-Have Skills: Exposure to database migration tools or services (e.g., AWS DMS, GCP Database Migration Service, or similar). Experience with real-time data processing using Pub/Sub. Experience with shell scripting. Exposure to CI/CD pipelines for deploying and maintaining data workflows. Familiarity with NoSQL databases and other GCP data services (e.g., Firestore, Bigtable).
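As a rough illustration of the post-migration reconciliation scripts described above, here is a minimal Python sketch that compares row counts between SQL Server and PostgreSQL. The connection strings and table names are placeholders, and pyodbc and psycopg2 are assumed to be installed; the listing does not prescribe these libraries.

```python
# Minimal sketch: row-count reconciliation between SQL Server (source)
# and PostgreSQL (target). All connection details are placeholders.
import pyodbc
import psycopg2

TABLES = ["customers", "orders"]  # hypothetical tables to reconcile

mssql = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=src-host;DATABASE=src_db;UID=user;PWD=secret"
)
pg = psycopg2.connect(host="tgt-host", dbname="tgt_db", user="user", password="secret")

for table in TABLES:
    src = mssql.cursor().execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    with pg.cursor() as cur:
        cur.execute(f"SELECT COUNT(*) FROM {table}")
        tgt = cur.fetchone()[0]
    status = "OK" if src == tgt else "MISMATCH"
    print(f"{table}: source={src} target={tgt} {status}")
```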

Posted 4 days ago

Apply

5.0 - 10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

EPM Developer Requirement
Experience: 5-10 Years
Location: Hyderabad/Bangalore/Noida/Pune/Chennai

Job Description: Over 5 years of experience in EPM consulting and implementation, specifically with EPM Cloud products. Proficient in developing custom integrations using EPM Data Integration, EPM Integration Agent, Pipeline, Groovy Business Rules, and EPM Automate. Design and implement data integration processes to ensure seamless data flow between Oracle EPM and other enterprise systems. Development experience in end-to-end implementations of EPM on-premise and cloud applications. In-depth functional knowledge of financial processes and related functionalities within the EPM domain. Practical experience in scripting languages such as Batch, Python, and PowerShell. Capable of thriving in a fast-paced environment and quickly mastering new concepts with minimal supervision. Familiarity with Service Requests (SRs) and My Oracle Support. Committed to staying current with the latest Oracle EPM technologies and best practices through ongoing professional development. Ability to work collaboratively with cross-functional teams and stakeholders. Strong problem-solving and analytical skills. Excellent communication and interpersonal skills. Ability to work independently and as part of a team.

Technical Skills: Oracle EPM Cloud, Hyperion Planning, Financial Management, Data Integration (ETL), SQL, PL/SQL, Business Analysis

Posted 4 days ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Job Description We are seeking a Tableau Developer with 5 years of experience in designing, developing, and deploying business intelligence solutions. The ideal candidate will have a deep understanding of data visualization principles, experience in data analysis, and a proven track record of delivering impactful dashboards and reports. You will work closely with stakeholders across the organization to turn complex data sets into easy-to-understand visual stories. Key Responsibilities Dashboard Development: Design, develop, and maintain interactive Tableau dashboards, reports, and visualizations that meet business needs and provide actionable insights. Ensure the scalability and performance of Tableau solutions by optimizing queries and visualizations. Data Analysis: Analyze and interpret complex data sets from multiple sources to identify trends, patterns, and insights. Collaborate with data analysts and data engineers to define data requirements, create data models, and prepare datasets for visualization. Stakeholder Collaboration: Work closely with business stakeholders to gather requirements, understand business objectives, and translate them into effective visual solutions. Present findings and recommendations to non-technical audiences, ensuring that insights are accessible and actionable. Create backlogs , stories and manage sprints in Jira. Data Integration: Connect Tableau to various data sources, including databases, cloud services, and APIs, ensuring data accuracy and consistency. Develop and maintain data extraction, transformation, and loading (ETL) processes as needed. Best Practices & Training: Stay up-to-date with Tableau’s latest features and industry trends to continuously improve the organization’s BI capabilities. Provide training and support to end-users to maximize the adoption and usage of Tableau across the organization. Qualifications Education: Bachelor’s degree in Computer Science, Information Systems, Data Science, or a related field. Experience: 5+ years of hands-on experience as a Tableau Developer. Proven experience in creating complex dashboards and visualizations using Tableau Desktop and Tableau Server. Strong understanding of SQL and experience in writing complex queries. Experience with data warehousing concepts, ETL processes, and data modeling. Skills: Proficiency in Tableau Desktop, Tableau Server, and Tableau Prep. Strong analytical skills with the ability to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy. Familiarity with other BI tools (e.g., Power BI, QlikView) is a plus. Excellent communication skills, with the ability to convey technical information to non-technical stakeholders. Experience with Agile project management tools like Jira, gathering requirements and creating stories. Preferred Qualifications Experience with programming languages like Python or R for advanced data manipulation. Knowledge of cloud platforms (e.g., AWS, Azure) and experience with cloud-based data sources. Tableau certification (e.g., Tableau Desktop Specialist, Tableau Server Certified Associate) is a plus. Skills: data visualization,data analysis,jira,azure,aws,tableau desktop,agile project management,tableau,sql,r,tableau prep,etl processes,tableau server,python Show more Show less

Posted 4 days ago

Apply

8.0 years

0 Lacs

India

Remote

Source: LinkedIn

🚀 We’re Hiring: AEP Data Engineer
📍 Location: Remote | 💼 Experience: 6–8 Years | 🕒 Contract: 6 Months (Extendable)

Prior experience with Adobe Experience Platform (AEP) is a plus! We’re looking for an experienced Data Engineer with strong GCP (Google Cloud Platform) skills and a background in ETL development and data warehouse migration.

🔧 What You’ll Do: Design and build scalable ETL pipelines and data workflows in GCP. Migrate on-premise data warehouses to BigQuery and other GCP tools. Collaborate with architects, data scientists, and stakeholders to deliver reliable data solutions. Optimize performance, maintain data quality, and ensure smooth operations. Participate in code reviews, CI/CD workflows, and Agile ceremonies.

🎯 What You Bring: 6–8 years in Data Engineering. 3–4 years of hands-on experience with GCP tools: BigQuery, Dataflow, Cloud Composer, Pub/Sub. Strong SQL and Python (or a similar language). Solid experience with ETL frameworks and data migration strategies. Proficiency in version control (Git) and working in remote agile teams. Excellent communication and the ability to work independently. AEP knowledge is a big plus.

📩 Apply now or share your resume at Recruiter@systembender.com
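For context, here is a minimal sketch of loading an extract into BigQuery with the official google-cloud-bigquery client, the kind of GCP pipeline step this role involves. The project, bucket path, and table names are hypothetical.

```python
# Minimal sketch: load a CSV extract from Cloud Storage into BigQuery.
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")  # hypothetical project id

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,  # schema inference for the sketch; real pipelines pin schemas
)

load_job = client.load_table_from_uri(
    "gs://my-bucket/exports/orders.csv",   # hypothetical GCS path
    "my-gcp-project.analytics.orders",     # hypothetical target table
    job_config=job_config,
)
load_job.result()  # wait for the load job to finish

table = client.get_table("my-gcp-project.analytics.orders")
print(f"Loaded {table.num_rows} rows")
```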

Posted 4 days ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Our Purpose Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential. Title And Summary Manager, Data Engineering / Analytics & Business Analysis Overview Mastercard is the global technology company behind the world’s fastest payments processing network. We are a vehicle for commerce, a connection to financial systems for the previously excluded, a technology innovation lab, and the home of Priceless®. We ensure every employee can be a part of something bigger and change lives. We believe as our company grows, so should you. We believe in connecting everyone to endless, priceless possibilities. Role As a business analyst in the Program Office/Program Operations Team, you will develop data & analytics solutions that sit atop vast datasets gathered by Operational Applications in support of the Technology Division and broader enterprise. The challenge will be to create high-performance algorithms, cutting-edge analytical techniques and intuitive workflows that allow our users to derive insights from big data that in turn drive their businesses. All About You Knowledge in data engineering, with a strong background in ETL architecture, and data modeling. Working closely with data scientists and analysts to provide clean, accessible data for analysis. Proficiency in performance tuning across databases, ETL jobs, and SQL scripts. Skilled in enterprise metrics/monitoring using tools like Splunk, Grafana, Domo, SMSS, Other BI Related Tools Data-driven analytical mindset with a proven problem-solving track record. Agile methodologies experience. Proven track record in collaborating with cross-functional teams, stakeholders, and senior management Experience triaging, troubleshooting, and resolving technical issues to prevent & reduce consumer downtime. Bachelor’s degree in Computer Science, Software Engineering, or a related field. Equivalent practical experience will be considered. Ability to guide, mentor, and build skillset of junior team members. Advising on best practices and solution to blockers. Desire to drive for automation to reduce manual toil/risk for continuous data consumption/Always-on reporting Excellent written and verbal communication skills with the ability to convey complex technical concepts to a diverse audience. Ability to deal with multiple and competing priorities, structure and manage work streams, and set clear expectations and deadlines. Corporate Security Responsibility All activities involving access to Mastercard assets, information, and networks comes with an inherent risk to the organization and, therefore, it is expected that every person working for, or on behalf of, Mastercard is responsible for information security and must: Abide by Mastercard’s security policies and practices; Ensure the confidentiality and integrity of the information being accessed; Report any suspected information security violation or breach, and Complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines. R-249374 Show more Show less

Posted 4 days ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Job Title: Data Migration Specialist – EWM 9.5 to Decentralized EWM on SAP S/4HANA
Location: Pune | Experience: 5+ Years

Job Summary: We are seeking a skilled Data Migration Specialist to lead the migration from SAP EWM 9.5 (Business Suite) to decentralized EWM on SAP S/4HANA. The role involves planning, executing, and validating the entire migration process, ensuring data integrity and seamless system integration.

Key Responsibilities: Develop and execute a comprehensive data migration plan. Extract, transform, and load data from EWM 9.5 to S/4HANA EWM. Prepare systems, ensure ERP integration, and conduct validation tests. Collaborate with cross-functional teams and provide post-migration support. Document all migration processes and data mappings.

Requirements: 5+ years of data migration experience, with 2+ years in SAP EWM and S/4HANA. Proficient in ETL processes and the SAP Migration Cockpit. Strong understanding of EWM and S/4HANA architecture. Excellent problem-solving and communication skills.

Preferred: SAP certification (EWM or S/4HANA). Experience with SAP Fiori/UI5 and the SAP BTP ABAP environment.

Posted 4 days ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Job Summary
Skill Name: Power BI with GCP Developer
Experience: 7-10 yrs
Mandatory Skills: Power BI + GCP (BigQuery)

Required Skills & Qualifications: Power BI Expertise: Strong hands-on experience in Power BI development, including report/dashboard creation, DAX, Power Query, and custom visualizations. Semantic Model Knowledge: Proficiency in building and managing semantic models within Power BI to ensure consistency and user-friendly data exploration. GCP Tools: Practical experience with Google Cloud Platform tools, particularly BigQuery, Dataflow, and Cloud Storage, for managing large datasets and data integration. ETL Processes: Experience in designing and managing ETL (Extract, Transform, Load) processes using GCP services. SQL & Data Modeling: Solid skills in SQL and data modeling, particularly for BI solutions and creating relationships between different data sources. Cloud Data Integration: Familiarity with integrating cloud-based data sources into Power BI, including knowledge of best practices for handling cloud storage and data pipelines. Data Analysis & Troubleshooting: Strong problem-solving abilities, including diagnosing and resolving issues in data models, reports, or data integration pipelines. Communication & Collaboration: Excellent communication skills to work effectively with cross-functional teams.

Posted 4 days ago

Apply

7.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

Dear Candidates, TCS is looking for S4 HANA BW Data Sphere consultant Experience: 7-10 years Location: Gurgaon Role : - Strong technical skills in SAP BW/4HANA, SAP Datasphere, and related technologies like SAP Analytics Cloud (SAC). Ability to design, implement and support data warehousing and reporting solutions, including data integration, modeling, and visualization. Key Skills and Responsibilities Data Modeling and Design: Designing and implementing data models, data flows, and data integration processes within SAP Datasphere, including creating CDS views and query views. Data Integration: Implementing data extraction, transformation, and loading (ETL) processes, handling data from various sources (SAP and non-SAP), and ensuring data quality and consistency. Data Sphere Expertise: Developing and maintaining Spaces, Local Tables, Views, Data Flow, Replication Flow, Transformation Flow, DAC, Task Chain, and Intelligent Lookup within SAP Datasphere. Reporting and Analytics: Building KPIs, creating analytical models, and developing dashboards and reports using SAP Analytics Cloud (SAC) and other relevant tools. Technical Skills: Strong ABAP, AMDP, SQL, and Python skills, with experience in HANA views, AMDP procedures, and hybrid architecture. Solution Design and Implementation: Designing and implementing data warehousing and reporting solutions, collaborating with cross-functional teams to gather functional requirements, and leading solution architecture workshops. Problem-solving and Communication: Excellent problem-solving, communication, and collaboration skills to troubleshoot issues, document solutions, and work effectively within teams. Experience: 2-4 years of hands-on experience with SAP Datasphere, 5+ years with SAP BW/4HANA, and experience with SAP S/4HANA or ECC functional areas and data models. Other Skills: Knowledge of BW Bridge, SAP Business Objects, and various SAP S/4HANA or ECC functional areas. Show more Show less

Posted 4 days ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Job Title: SAS Monitoring Specialist Experience Level: 5 Years Location: Hyderabad Job Type: Full-time Job Summary: We are seeking a skilled and detail-oriented SAS Monitoring Specialist with 5 years of hands-on experience in SAS environments. The ideal candidate will be responsible for monitoring, maintaining, and optimizing SAS platforms to ensure continuous performance, availability, and data integrity. You will work closely with IT, data engineering, and analytics teams to ensure smooth operations of all SAS systems and processes. Key Responsibilities: Monitor SAS servers and environments (SAS 9.4, SAS Grid, Viya) for performance, stability, and capacity. Analyze logs and system alerts to proactively identify potential issues and resolve them promptly. Manage and troubleshoot scheduled SAS jobs and batch processes. Support daily health checks, user access issues, and performance tuning. Collaborate with SAS Admins and Infrastructure teams to manage upgrades, patches, and migrations. Automate monitoring tasks using scripts (Shell, Python, or SAS-based). Create dashboards and reports to track system performance and job success/failure rates. Document system procedures, incidents, and resolution steps. Maintain compliance with internal policies and external regulations regarding data usage and security. Required Qualifications: Bachelor’s degree in Computer Science, Information Technology, or a related field. 5+ years of experience in SAS monitoring or administration. Strong knowledge of SAS tools (SAS 9.4, Viya, SAS Management Console, Enterprise Guide). Experience with SAS job scheduling tools like LSF, Control-M, or similar. Familiarity with operating systems (Linux/UNIX/Windows) and system-level monitoring. Proficiency in scripting languages for automation (Shell, Python, PowerShell, or SAS Macros). Solid understanding of performance tuning and root cause analysis. Excellent problem-solving and communication skills. Preferred Skills: Experience with cloud-based SAS platforms (AWS, Azure). Understanding of data integration and ETL processes in SAS. Knowledge of monitoring tools like Splunk, Nagios, or Prometheus. ITIL certification or knowledge of ITSM tools (ServiceNow, BMC Remedy). Show more Show less
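As a small illustration of the scripted monitoring automation this listing mentions, here is a hedged Python sketch that scans a SAS job log for errors and warnings. The log path and the ERROR:/WARNING: message prefixes are assumptions about a typical SAS 9.4 log layout, not details from the posting.

```python
# Minimal sketch: scan a SAS job log and summarize errors/warnings.
from pathlib import Path

LOG_FILE = Path("/sas/logs/nightly_load.log")  # hypothetical log location

errors, warnings = [], []
if LOG_FILE.exists():
    for line in LOG_FILE.read_text(errors="ignore").splitlines():
        if line.startswith("ERROR:"):
            errors.append(line)
        elif line.startswith("WARNING:"):
            warnings.append(line)
    print(f"{LOG_FILE.name}: {len(errors)} errors, {len(warnings)} warnings")
    for line in errors[:5]:
        print("  ", line)
else:
    print(f"Log not found: {LOG_FILE}")
```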

Posted 4 days ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

About Client: Our Client is a global IT services company headquartered in Southborough, Massachusetts, USA. Founded in 1996, with revenue of $1.8B and 35,000+ associates worldwide, it specializes in digital engineering and IT services, helping clients modernize their technology infrastructure, adopt cloud and AI solutions, and accelerate innovation. It partners with major firms in banking, healthcare, telecom, and media. Our Client is known for combining deep industry expertise with agile development practices, enabling scalable and cost-effective digital transformation. The company operates in over 50 locations across more than 25 countries, has delivery centers in Asia, Europe, and North America, and is backed by Baring Private Equity Asia.

Roles & Responsibilities: Good understanding of and writing skill in SQL code. Perform data analytics and ETL development. Perform descriptive analytics and reporting. Perform peer code reviews and prepare design documents and test cases. Support systems currently live and deployed for customers. Build the knowledge repository and cloud capabilities. Excellent troubleshooting and attention to detail in a fast-paced setting. Excellent communication skills are mandatory to work directly with the client. Work as part of a team of Engineers/Consultants that globally ensures customer support. Understanding of Agile.

Job Title: GCP Python
Key Skills: GCP Cloud Storage, Dataproc, BigQuery, SQL (strong and advanced SQL), Spark/PySpark writing skills, DWH, Python, Git, any GCP certification
Job Locations: Any Virtusa location
Experience: 4 - 6 years
Education Qualification: Any Graduation
Work Mode: Hybrid
Employment Type: Contract
Notice Period: Immediate - 10 Days
Payroll: People Prime Worldwide

Posted 4 days ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Job Role: SAP Data Analyst
Job Location: Noida/Gurgaon/Hyderabad/Bangalore/Pune
Experience: 5 Years

Job Roles & Responsibilities: Collaborate with Finance & FBT Teams: Drive all data-related activities for the finance SAP deployment, ensuring alignment between business and technical teams. Lead Data Cleansing & Enrichment: Analyze finance data, identify gaps, and guide enrichment initiatives to prepare data for migration. Define Data Design & Requirements: Partner with central data and process teams to establish source-to-target mapping and validation criteria. Coordinate ETL & DC Cycles: Work closely with central program resources to execute ETL processes and ensure data is loaded accurately during Data Center cycles.

Job Skills & Requirements: Excellent communication and stakeholder management abilities, particularly in translating business needs into data solutions. Deep understanding of SAP finance processes and data structures (e.g., GL, AR, AP, asset accounting, FI‑CO). Minimum 5 years’ hands-on experience in SAP data migration projects. Proven track record in large-scale, global SAP deployments, coordinating multiple stakeholders and partners.

Posted 4 days ago

Apply

5.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site

Source: LinkedIn

Experience: 5+ yrs
Notice Period: Immediate to 15 days
Rounds: 3 rounds (virtual)
Mandatory Skills: Apache Spark, Hive, Hadoop, Scala, Databricks

Job Description
The Role: Designing and building optimized data pipelines using cutting-edge technologies in a cloud environment to drive analytical insights. Constructing infrastructure for efficient ETL processes from various sources and storage systems. Leading the implementation of algorithms and prototypes to transform raw data into useful information. Architecting, designing, and maintaining database pipeline architectures, ensuring readiness for AI/ML transformations. Creating innovative data validation methods and data analysis tools. Ensuring compliance with data governance and security policies. Interpreting data trends and patterns to establish operational alerts. Developing analytical tools, programs, and reporting mechanisms. Conducting complex data analysis and presenting results effectively. Preparing data for prescriptive and predictive modeling. Continuously exploring opportunities to enhance data quality and reliability. Applying strong programming and problem-solving skills to develop scalable solutions.

Requirements: Experience with Big Data technologies (Hadoop, Spark, NiFi, Impala). 5+ years of hands-on experience designing, building, deploying, testing, maintaining, monitoring, and owning scalable, resilient, and distributed data pipelines. High proficiency in Scala/Java and Spark for applied large-scale data processing. Expertise with big data technologies, including Spark, Data Lake, and Hive.

Posted 4 days ago

Apply

6.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Company Overview: NRoot Labs is a cutting-edge technology company that specializes in delivering innovative data and cloud solutions for businesses worldwide. Our team of talented engineers and developers work collaboratively to create scalable and robust applications that drive our clients' success. Our contemporary work culture coupled with an employee-friendly environment makes NRoot Labs an amazing place to work. Job Title: Data Engineer Location: Chennai Experience: 4–6 Years Employment Type: Full-time Educational Qualification: B.Tech/M.Tech/B.Sc/MCA/MS Degree Role Overview: We are looking for a hands-on and technically strong Data Engineers with 4–6 years of experience in data engineering, ETL development and cloud platforms. The ideal candidate should have deep SQL expertise and a solid background in building scalable data pipelines and architectures. This role involves leading a small team and working closely with cross-functional stakeholders. Key Responsibilities: Lead the design and development of scalable, secure, and high-performance data pipelines. Develop and maintain efficient SQL queries, procedures, and database solutions. Design and manage ETL processes to integrate data from various sources. Work with cloud platforms (Azure or AWS) to implement and support cloud-based data solutions. Ensure data accuracy, consistency, and quality across all systems. Collaborate with business and technical teams to translate requirements into data solutions. Provide technical guidance and mentorship to junior team members. Drive best practices in data modelling, coding, performance tuning, and data governance. Required Skills: Strong SQL skills with experience in performance optimization and complex query writing. 4–6 years of experience in ETL development (e.g., SSIS or equivalent tools). Hands-on experience with cloud data services (Azure or AWS). Solid understanding of data warehousing, data modelling, and architecture principles. Experience in scripting or automation using Python or similar languages is a plus. Proven ability to manage tasks independently and lead small teams. Excellent problem-solving and communication skills. Preferred: Cloud certifications (e.g., Azure Data Engineer, AWS Data Analytics). Familiarity with CI/CD practices for data pipelines. Exposure to data lake, delta lake, or big data ecosystems Perks of working at NRoot Labs: From competitive salary recognizing your hard work and talent to flexible work life balance, comprehensive health insurance coverage, wellness programs, limitless growth opportunities, fun filled work environment, yummy snacks and beverages to casual dress codes, we strive to create an environment where you can thrive both personally and professionally! Show more Show less

Posted 4 days ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Job Title: Data Engineer – Databricks, Delta Live Tables, Data Pipelines Location: Bhopal / Hyderabad / Pune (On-site) Experience Required: 5+ Years Employment Type: Full-Time Job Summary: We are seeking a skilled and experienced Data Engineer with a strong background in designing and building data pipelines using Databricks and Delta Live Tables. The ideal candidate should have hands-on experience in managing large-scale data engineering workloads and building scalable, reliable data solutions in cloud environments. Key Responsibilities: Design, develop, and manage scalable and efficient data pipelines using Databricks and Delta Live Tables . Work with structured and unstructured data to enable analytics and reporting use cases. Implement data ingestion , transformation , and cleansing processes. Collaborate with Data Architects, Analysts, and Data Scientists to ensure data quality and integrity. Monitor data pipelines and troubleshoot issues to ensure high availability and performance. Optimize queries and data flows to reduce costs and increase efficiency. Ensure best practices in data security, governance, and compliance. Document architecture, processes, and standards. Required Skills: Minimum 5 years of hands-on experience in data engineering . Proficient in Apache Spark , Databricks , Delta Lake , and Delta Live Tables . Strong programming skills in Python or Scala . Experience with cloud platforms such as Azure , AWS , or GCP . Proficient in SQL for data manipulation and analysis. Experience with ETL/ELT pipelines , data wrangling , and workflow orchestration tools (e.g., Airflow, ADF). Understanding of data warehousing , big data ecosystems , and data modeling concepts. Familiarity with CI/CD processes in a data engineering context. Nice to Have: Experience with real-time data processing using tools like Kafka or Kinesis. Familiarity with machine learning model deployment in data pipelines. Experience working in an Agile environment. Show more Show less
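For illustration, here is a minimal sketch of a Delta Live Tables pipeline step of the kind this role builds. The `dlt` module and the `spark` session are supplied by the Databricks DLT runtime; the storage path, table names, and expectation rule are assumptions, not details from the listing.

```python
# Minimal sketch of a Delta Live Tables pipeline (runs inside a Databricks
# DLT pipeline, where `spark` is provided by the runtime).
import dlt
from pyspark.sql import functions as F


@dlt.table(comment="Raw orders ingested from cloud storage")
def orders_raw():
    return (
        spark.readStream.format("cloudFiles")      # Auto Loader
        .option("cloudFiles.format", "json")
        .load("/mnt/landing/orders/")              # hypothetical landing path
    )


@dlt.table(comment="Cleansed orders ready for analytics")
@dlt.expect_or_drop("valid_amount", "amount > 0")  # simple data-quality rule
def orders_clean():
    return (
        dlt.read_stream("orders_raw")
        .withColumn("ingested_at", F.current_timestamp())
    )
```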

Posted 4 days ago

Apply

4.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Source: LinkedIn

About the Role: We are seeking talented and detail-oriented Data Engineers with expertise in Informatica MDM to join our fast-growing data engineering team. Depending on your experience, you’ll join as a Software Engineer or Senior Software Engineer, contributing to the design, development, and maintenance of enterprise data management solutions that support our business objectives. As a key player, you will be responsible for building reliable data pipelines, working with master data management, and ensuring data quality, governance, and integration across systems. Responsibilities: Design, develop, and implement data pipelines using ETL tools like Informatica PowerCenter, IICS, etc., and MDM solutions using Informatica MDM. Develop and maintain batch and real-time data integration workflows. Collaborate with data architects, business analysts, and stakeholders to understand data requirements. Perform data profiling, data quality assessments, and master data matching/merging. Implement governance, stewardship, and metadata management practices. Optimize the performance of Informatica MDM Hub, IDD, and associated components. Write complex SQL queries and stored procedures as needed. Senior Software Engineer – Additional Responsibilities: Lead design discussions and code reviews; mentor junior engineers. Architect scalable data integration solutions using Informatica and complementary tools. Drive adoption of best practices in data modeling, governance, and engineering. Work closely with cross-functional teams to shape the data strategy. Required Qualifications: Software Engineer: Bachelor’s degree in Computer Science, Information Systems, or related field. 2–4 years of experience with Informatica MDM (Customer 360, Business Entity Services, Match/Merge rules). Strong SQL and data modeling skills. Familiarity with ETL concepts, REST APIs, and data integration tools. Understanding of data governance and quality frameworks. Senior Software Engineer: Bachelor’s or Master’s in Computer Science, Data Engineering, or related field. 4+ years of experience in Informatica MDM, with at least 2 years in a lead role. Proven track record of designing scalable MDM solutions in large-scale environments. Strong leadership, communication, and stakeholder management skills. Hands-on experience with data lakes, cloud platforms (AWS, Azure, or GCP), and big data tools is a plus. Preferred Skills (Nice to Have): Experience with other Informatica products (IDQ, PowerCenter). Exposure to cloud MDM platforms or cloud data integration tools. Agile/Scrum development experience. Knowledge of industry-standard data security and compliance practices. Show more Show less

Posted 4 days ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Source: LinkedIn

Job Role: SAP Data Analyst
Job Location: Noida/Gurgaon/Hyderabad/Bangalore/Pune
Experience: 5 Years

Job Roles & Responsibilities: Collaborate with Finance & FBT Teams: Drive all data-related activities for the finance SAP deployment, ensuring alignment between business and technical teams. Lead Data Cleansing & Enrichment: Analyze finance data, identify gaps, and guide enrichment initiatives to prepare data for migration. Define Data Design & Requirements: Partner with central data and process teams to establish source-to-target mapping and validation criteria. Coordinate ETL & DC Cycles: Work closely with central program resources to execute ETL processes and ensure data is loaded accurately during Data Center cycles.

Job Skills & Requirements: Excellent communication and stakeholder management abilities, particularly in translating business needs into data solutions. Deep understanding of SAP finance processes and data structures (e.g., GL, AR, AP, asset accounting, FI‑CO). Minimum 5 years’ hands-on experience in SAP data migration projects. Proven track record in large-scale, global SAP deployments, coordinating multiple stakeholders and partners.

Posted 4 days ago

Apply

4.0 - 6.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Source: LinkedIn

Job Title: Sr. Data Engineer Location: Office-Based (Ahmedabad, India) About Hitech Hitech is a leading provider of Data, Engineering Services, and Business Process Solutions. With robust delivery centers in India and global sales offices in the USA, UK, and the Netherlands, we enable digital transformation for clients across industries including Manufacturing, Real Estate, and e-Commerce. Our Data Solutions practice integrates automation, digitalization, and outsourcing to deliver measurable business outcomes. We are expanding our engineering team and looking for an experienced Sr. Data Engineer to design scalable data pipelines, support ML model deployment, and enable insight-driven decisions. Position Summary We are seeking a Data Engineer / Lead Data Engineer with deep experience in data architecture, ETL pipelines, and advanced analytics support. This role is crucial for designing robust pipelines to process structured and unstructured data, integrate ML models, and ensure data reliability. The ideal candidate will be proficient in Python, R, SQL, and cloud-based tools, and possess hands-on experience in creating end-to-end data engineering solutions that support data science and analytics teams. Key Responsibilities Design and optimize data pipelines to ingest, transform, and load data from diverse sources. Build programmatic ETL pipelines using SQL and related platforms. Understand complex data structures and perform data transformation effectively. Develop and support ML models such as Random Forest, SVM, Clustering, Regression, etc. Create and manage scalable, secure data warehouses and data lakes. Collaborate with data scientists to structure data for analysis and modeling. Define solution architecture for layered data stacks ensuring high data quality. Develop design artifacts including data flow diagrams, models, and functional documents. Work with technologies such as Python, R, SQL, MS Office, and SageMaker. Conduct data profiling, sampling, and testing to ensure reliability. Collaborate with business stakeholders to identify and address data use cases. Qualifications & Experience 4 to 6 years of experience in data engineering, ETL development, or database administration. Bachelor’s degree in Mathematics, Computer Science, or Engineering (B.Tech/B.E.). Postgraduate qualification in Data Science or related discipline preferred. Strong proficiency in Python, SQL, Advanced MS Office tools, and R. Familiarity with ML concepts and integrating models into pipelines. Experience with NoSQL systems like MongoDB, Cassandra, or HBase. Knowledge of Snowflake, Databricks, and other cloud-based data tools. ETL tool experience and understanding of data integration best practices. Data modeling skills for relational and NoSQL databases. Knowledge of Hadoop, Spark, and scalable data processing frameworks. Experience with SciKit, TensorFlow, Pytorch, GPT, PySpark, etc. Ability to build web scrapers and collect data from APIs. Experience with Airflow or similar tools for pipeline automation. Strong SQL performance tuning skills in large-scale environments. What We Offer Competitive compensation package based on skills and experience. Opportunity to work with international clients and contribute to high-impact data projects. Continuous learning and professional growth within a tech-forward organization. Collaborative and inclusive work environment. If you're passionate about building data-driven infrastructure to fuel analytics and AI applications, we look forward to connecting with you. 
Anand Soni, Hitech Digital Solutions
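As context for the ML models named above (e.g., Random Forest), here is a minimal scikit-learn sketch on synthetic data; in practice the features would come from the ETL pipelines this role builds, and the dataset here is purely illustrative.

```python
# Minimal sketch: train and evaluate a Random Forest classifier on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for features produced by a real data pipeline
X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

preds = model.predict(X_test)
print(f"Hold-out accuracy: {accuracy_score(y_test, preds):.3f}")
```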

Posted 4 days ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Job Title: QA Tester Data Job Type: Full-time Location: On-site - Hyderabad, Pune or New Delhi Job Summary: Join our customer’s team as a dedicated ETL Tester where your expertise will drive the quality and reliability of crucial business data solutions. As an integral part of our testing group, you will focus on ETL Testing while engaging in automation, API, and MDM testing to support robust, end-to-end data validation and integration. We value professionals who demonstrate strong written and verbal communication and a passion for delivering high-quality solutions. Key Responsibilities: Design, develop, and execute comprehensive ETL test cases, scenarios, and scripts to validate data extraction, transformation, and loading processes. Collaborate with data engineers, business analysts, and QA peers to clarify requirements and ensure accurate data mapping, lineage, and transformations. Perform functional, automation, API, and MDM testing to support a holistic approach to quality assurance. Utilize tools such as Selenium to drive automation efforts for repeatable and scalable ETL testing processes. Identify, document, and track defects while proactively communicating risks and issues to stakeholders with clarity and detail. Work on continuous improvement initiatives to enhance test coverage, efficiency, and effectiveness within the ETL testing framework. Create and maintain detailed documentation for test processes and outcomes, supporting both internal knowledge sharing and compliance requirements. Required Skills and Qualifications: Strong hands-on experience in ETL testing, including understanding of ETL tools and processes. Proficiency in automation testing using Selenium or similar frameworks. Experience in API testing, functional testing, and MDM testing. Excellent written and verbal communication skills, with an ability to articulate technical concepts clearly to diverse audiences. Solid analytical and problem-solving abilities to troubleshoot data and process issues. Attention to detail and a commitment to high-quality deliverables. Ability to thrive in a collaborative, fast-paced team environment on-site at Hyderabad. Preferred Qualifications: Prior experience working in large-scale data environments or within MDM projects. Familiarity with data warehousing concepts, SQL, and data migration best practices. ISTQB or related QA/testing certification. Show more Show less
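For illustration, here is a minimal, hedged sketch of an automated ETL check written with pytest and SQLAlchemy, comparing a source and a target table. The connection URLs and table names are placeholders, and the posting itself does not prescribe these particular tools.

```python
# Minimal sketch: automated ETL checks as pytest tests (row counts and a sum).
import pytest
from sqlalchemy import create_engine, text

SOURCE_URL = "postgresql://user:pass@source-host/source_db"  # hypothetical
TARGET_URL = "postgresql://user:pass@target-host/warehouse"  # hypothetical


def scalar(url: str, query: str):
    engine = create_engine(url)
    with engine.connect() as conn:
        return conn.execute(text(query)).scalar()


@pytest.mark.parametrize("table", ["customers", "orders"])
def test_row_counts_match(table):
    src = scalar(SOURCE_URL, f"SELECT COUNT(*) FROM {table}")
    tgt = scalar(TARGET_URL, f"SELECT COUNT(*) FROM {table}")
    assert src == tgt, f"{table}: source {src} rows vs target {tgt} rows"


def test_amount_totals_match():
    src = scalar(SOURCE_URL, "SELECT SUM(amount) FROM orders")
    tgt = scalar(TARGET_URL, "SELECT SUM(amount) FROM orders")
    assert src == pytest.approx(tgt)
```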

Posted 4 days ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

About Client: Our Client is a multinational IT services and consulting company headquartered in the USA, with revenues of 19.7 billion USD, a global workforce of 350,000, and a NASDAQ listing. It is one of the leading IT services firms globally, known for its work in digital transformation, technology consulting, and business process outsourcing. Its business focus spans Digital Engineering, Cloud Services, AI and Data Analytics, Enterprise Applications (SAP, Oracle, Salesforce), IT Infrastructure, and Business Process Outsourcing. It has major delivery centers in India, including Chennai, Pune, Hyderabad, and Bengaluru, and offices in over 35 countries; India is a major operational hub.

Job Title: Quality Assurance Engineer – Python with Robot Framework, SQL, Unix
Key Skills: Python with Robot Framework, SQL, Unix
Job Locations: PAN India
Experience: 5+ Years
Education Qualification: Any Graduation
Work Mode: Hybrid
Employment Type: Contract
Notice Period: Immediate - 15 Days

Job Description: Python with Robot Framework, SQL, Unix. 5 to 10 years of hands-on experience in Robot Framework. At least 5+ years of working knowledge of Python. At least 4+ years of hands-on experience in data testing / ETL testing. At least 4+ years of hands-on experience with databases like MySQL, SQL Server, and Oracle. Must be well versed with Agile methodology. Proficiency in coding in Python. Hands-on framework creation and improvement of test frameworks for automation. Experience in creating self-serve tools. Good experience working with Robot Framework. Should be able to work with Git. Demonstrated knowledge of a CI/CD tool (GitLab, Jenkins, etc.). Demonstrated knowledge of RDBMS and SQL queries.

Posted 4 days ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Position Title: Data Engineer
Experience Range: 4+ Years
Location: Hyderabad and Gurgaon (Hybrid)
Notice Period: Immediate to 15 days
Primary Skills Required: Kafka, Spark, Python, SQL, Shell Scripting, Databricks, Snowflake, AWS and Azure Cloud

What you will do: 1. Provide expertise and guidance as a senior experienced engineer in solution design and strategy for Data Lake and analytics-oriented data operations. 2. Design, develop, and implement end-to-end data solutions (storage, integration, processing, access) on hyperscaler platforms like AWS and Azure. 3. Architect and implement integration, ETL, and data movement solutions using SQL Server Integration Services (SSIS)/C#, AWS Glue, MSK and/or Confluent, and other COTS technologies. 4. Prepare documentation and designs for data solutions and applications. 5. Design and implement distributed analytics platforms for analyst teams. 6. Design and implement streaming solutions using Snowflake, Kafka, and Confluent. 7. Migrate data from traditional relational database systems (e.g., SQL Server, Postgres) to AWS relational databases such as Amazon RDS, Aurora, Redshift, DynamoDB, Cloudera, Snowflake, Databricks, etc.

Who you are: 1. Bachelor's degree in Computer Science or Software Engineering. 2. 4+ years of experience in the data domain as an engineer and architect. 3. Demonstrated sense of ownership and accountability in delivering high-quality data solutions independently or with minimal handholding. 4. Ability to thrive in a dynamic environment, adapting to evolving requirements and challenges. 5. A solid understanding of AWS and Azure storage solutions such as S3, EFS, and EBS. 6. A solid understanding of AWS and Azure compute solutions such as EC2. 7. Experience implementing solutions on AWS and Azure relational databases such as MSSQL, SSIS, Amazon Redshift, RDS, and Aurora. 8. Experience implementing solutions leveraging ElastiCache and DynamoDB. 9. Experience designing and implementing Enterprise Data Warehouses and Data Marts/Lakes. 10. Experience with Star or Snowflake schema. 11. Experience with R or Python and other emerging technologies in D&A. 12. Understanding of Slowly Changing Dimensions and the Data Vault model. AWS and Azure certifications are preferred.
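As a rough sketch of the Kafka-plus-Spark streaming work described above, here is a minimal Structured Streaming job that reads a Kafka topic and writes to a Delta table. The broker, topic, and storage paths are placeholders, and the Kafka and Delta connectors are assumed to be available on the cluster.

```python
# Minimal sketch: Spark Structured Streaming from Kafka into a Delta table.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("orders-stream").getOrCreate()

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "orders")                     # hypothetical topic
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers key/value as bytes; cast to strings for downstream parsing
events = raw.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)", "timestamp")

query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/orders")  # placeholder path
    .start("/mnt/delta/orders")                                # placeholder path
)
query.awaitTermination()
```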

Posted 4 days ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Job Title: Data Testing Engineer
Experience: 8+ years
Location: Hyderabad and Gurgaon (Hybrid)
Notice Period: Immediate to 15 days

Job Description: Develop, maintain, and execute test cases to validate the accuracy, completeness, and consistency of data across different layers of the data warehouse. Test ETL processes to ensure that data is correctly extracted, transformed, and loaded from source to target systems while adhering to business rules. Perform source-to-target data validation to ensure data integrity and identify any discrepancies or data quality issues. Develop automated data validation scripts using SQL, Python, or testing frameworks to streamline and scale testing efforts. Conduct testing in cloud-based data platforms (e.g., AWS Redshift, Google BigQuery, Snowflake), ensuring performance and scalability. Familiarity with ETL testing tools and frameworks (e.g., Informatica, Talend, dbt). Experience with scripting languages to automate data testing. Familiarity with data visualization tools like Tableau, Power BI, or Looker.
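For illustration, here is a minimal pandas sketch of the source-to-target validation described above: rows are joined on a business key and value mismatches are reported. The sample frames are synthetic stand-ins for source and warehouse extracts.

```python
# Minimal sketch: compare source and target extracts on a business key.
import pandas as pd

source = pd.DataFrame({"order_id": [1, 2, 3], "amount": [100.0, 250.0, 75.0]})
target = pd.DataFrame({"order_id": [1, 2, 3], "amount": [100.0, 249.0, 75.0]})

merged = source.merge(target, on="order_id", suffixes=("_src", "_tgt"))
mismatches = merged[merged["amount_src"] != merged["amount_tgt"]]

if mismatches.empty:
    print("All rows reconcile")
else:
    print("Discrepancies found:")
    print(mismatches)
```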

Posted 4 days ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Our Client is a multinational IT services and consulting company headquartered in the USA, with revenues of 19.7 billion USD, a global workforce of 350,000, and a NASDAQ listing. It is one of the leading IT services firms globally, known for its work in digital transformation, technology consulting, and business process outsourcing. Its business focus spans Digital Engineering, Cloud Services, AI and Data Analytics, Enterprise Applications (SAP, Oracle, Salesforce), IT Infrastructure, and Business Process Outsourcing. It has major delivery centers in India, including Chennai, Pune, Hyderabad, and Bengaluru, and offices in over 35 countries; India is a major operational hub.

Job Title: Python with ETL Testing
Location: Hyderabad
Experience: 5+ Years
Job Type: Contract to hire
Notice Period: Immediate joiners

Mandatory Skills: 5 to 10 years of experience in relevant areas. At least 3+ years of working knowledge of Python. At least 2+ years of hands-on experience in data testing / ETL testing. At least 2+ years of hands-on experience with databases like MySQL, SQL Server, and Oracle. Must be well versed with Agile methodology.

Posted 4 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies