4.0 years
15 - 30 Lacs
Visakhapatnam, Andhra Pradesh, India
Remote
Experience: 4.00+ years
Salary: INR 1,500,000 - 3,000,000 per year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by NuStudio.AI)
(Note: This is a requirement for one of Uplers' clients - an AI-first, API-powered Data Platform.)
What do you need for this opportunity?
Must-have skills: Databricks, dbt, Delta Lake, Spark, Unity Catalog, AI, Airflow, Cloud Functions, Cloud Storage, Databricks Workflows, Dataflow, ETL/ELT, GCP (BigQuery, Pub/Sub), PySpark, AWS, Hadoop
The AI-first, API-powered Data Platform is looking for: We're scaling our platform and seeking Data Engineers who are passionate about building high-performance data pipelines, data products, and analytical workloads in the cloud to power real-time AI systems.
As a Data Engineer, you'll:
- Build scalable ETL/ELT and streaming data pipelines using GCP (BigQuery, Pub/Sub, PySpark, Dataflow, Cloud Storage, Cloud Functions) - see the illustrative sketch after this posting
- Orchestrate data workflows with Airflow, Cloud Functions, or Databricks Workflows
- Work across batch and real-time architectures that feed LLMs and AI/ML systems
- Own feature engineering pipelines that power production models and intelligent agents
- Collaborate with platform and ML teams to design observable, lineage-aware, cost-efficient, and performant solutions
- Bonus: experience with AWS, Databricks, or Hadoop (Delta Lake, Spark, dbt, Unity Catalog), or interest in building on them
Why us?
- Build production-grade data and AI solutions
- Your pipelines directly impact mission-critical and client-facing interactions
- Lean team, no red tape: build, own, ship
- Remote-first with an async culture that respects your time
- Competitive compensation and benefits
Our stack: Python, SQL, GCP/Azure/AWS, Spark, Kafka, Airflow, Databricks, dbt, Kubernetes, LangChain, LLMs
How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview.
About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
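The pipeline work described above is the core of the role. Purely as an illustration (not part of the posting), a minimal PySpark batch job that reads raw JSON from Cloud Storage, de-duplicates it, and appends to BigQuery might look like the sketch below. It assumes a Spark runtime with the spark-bigquery connector available (for example Dataproc or Databricks); the bucket, dataset, and column names are hypothetical.

```python
# Illustrative only: minimal PySpark ETL from Cloud Storage to BigQuery.
# Assumes the spark-bigquery connector is on the cluster; bucket, dataset,
# and column names below are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events_daily_load").getOrCreate()

# Read a day of raw JSON events from Cloud Storage.
raw = spark.read.json("gs://example-raw-bucket/events/dt=2025-08-01/*.json")

# Basic cleaning: drop duplicate events and rows without a valid timestamp.
cleaned = (
    raw.dropDuplicates(["event_id"])
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .filter(F.col("event_ts").isNotNull())
)

# Append to a BigQuery table via the connector's indirect write path.
(cleaned.write
    .format("bigquery")
    .option("table", "example_dataset.events_daily")
    .option("temporaryGcsBucket", "example-temp-bucket")
    .mode("append")
    .save())
```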
Posted 1 day ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About the Position: We are conducting an in-person hiring drive for the position of Data Engineer (Azure Databricks) in Pune and Bengaluru on 2nd August 2025.
Interview locations:
- Pune: Persistent Systems, 9A, Aryabhata-Pingala, 12, Kashibai Khilare Path, Erandwane, Pune, Maharashtra 411004.
- Bangalore: Persistent Systems, The Cube at Karle Town Center Rd, Dada Mastan Layout, Manayata Tech Park, Nagavara, Bengaluru, Karnataka 560024.
We are looking for an experienced Azure Data Engineer to join our growing team. The ideal candidate will have a strong background in Azure Databricks, dbt, Python/PySpark, and SQL. You will work closely with our engineers and business teams to ensure optimal performance, scalability, and availability of our data pipelines.
Role: Data Engineer (Azure Databricks)
Job Location: Pune & Bengaluru
Experience: 4+ years
Job Type: Full-time employment
What You'll Do:
- Design and implement complex, scalable data pipelines for ingestion, processing, and transformation using Azure technologies (a minimal sketch follows this posting)
- Collaborate with architects, data analysts, and business analysts to understand data requirements and develop efficient workflows
- Develop and manage data storage solutions, including Azure SQL Database, Data Lake, and Blob Storage
- Leverage Azure Data Factory and other cloud-native tools to build and maintain ETL processes
- Conduct unit testing and ensure the quality of data pipelines; mentor junior engineers and review their deliverables
- Monitor pipeline performance, troubleshoot issues, and provide regular status updates
- Optimize data workflows for performance and cost efficiency; implement automation to reduce manual effort
Expertise You'll Bring:
- Strong experience with Azure and Databricks
- Experience with Python/PySpark
- Experience with SQL databases
- Good to have: experience with dbt and Dremio
Benefits:
- Competitive salary and benefits package
- Culture focused on talent development, with quarterly promotion cycles and company-sponsored higher education and certifications
- Opportunity to work with cutting-edge technologies
- Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards
- Annual health check-ups
- Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents
Inclusive Environment: Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. We offer hybrid work options and flexible working hours to accommodate various needs and preferences. Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities. If you are a person with disabilities and have specific requirements, please inform us during the application process or at any time during your employment. We are committed to creating an inclusive environment where all employees can thrive.
Our company fosters a values-driven and people-centric work environment that enables our employees to:
- Accelerate growth, both professionally and personally
- Impact the world in powerful, positive ways, using the latest technologies
- Enjoy collaborative innovation, with diversity and work-life wellbeing at the core
- Unlock global opportunities to work and learn with the industry's best
Let's unleash your full potential at Persistent. "Persistent is an Equal Opportunity Employer and prohibits discrimination and harassment of any kind."
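Not part of the Persistent posting, but as a rough sketch of the Azure Databricks pipeline work it describes: the snippet below reads CSV files from an ADLS Gen2 landing zone and writes a partitioned Delta table. It assumes a Databricks cluster already configured with access to the storage account; the account, container, and column names are invented.

```python
# Illustrative sketch only: CSV landing zone in ADLS Gen2 -> curated Delta table.
# Assumes a Databricks cluster with storage-account access configured;
# storage account, container, and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

landing_path = "abfss://landing@examplestorageacct.dfs.core.windows.net/sales/2025/08/"

# Parse the raw CSV files and normalise the key columns.
sales = (
    spark.read.option("header", "true").csv(landing_path)
         .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
         .withColumn("order_date", F.to_date("order_date"))
)

# Write a curated, date-partitioned Delta table.
(sales.write
     .format("delta")
     .mode("append")
     .partitionBy("order_date")
     .save("abfss://curated@examplestorageacct.dfs.core.windows.net/sales_delta/"))
```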
Posted 1 day ago
4.0 - 8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job ID: Pyt-ETP-Ban-1095
Location: Pune
Company Overview
Bridgenext is a global consulting company that provides technology-empowered business solutions for world-class organizations. Our global workforce of over 800 consultants provides best-in-class services to our clients to realize their digital transformation journey. Our clients span the emerging, mid-market, and enterprise space. With multiple offices worldwide, we are uniquely positioned to deliver digital solutions to our clients leveraging Microsoft, Java, and open source, with a focus on Mobility, Cloud, Data Engineering, and Intelligent Automation. Emtec's singular mission is to create "Clients for Life": long-term relationships that deliver rapid, meaningful, and lasting business value.
At Bridgenext, we have a unique blend of corporate and entrepreneurial cultures. This is where you will have an opportunity to drive business value for clients while you innovate, continue to grow, and have fun doing it. You will work with team members who are vibrant, smart, and passionate, and who bring their passion to all that they do, whether it's learning, giving back to our communities, or always going the extra mile for our clients.
Position Description
We are looking for members with hands-on data engineering experience who will work on internal and customer-based projects for Bridgenext. We are looking for someone who cares about code quality and is passionate about providing the best solution to meet client needs, anticipating their future needs based on an understanding of the market. Someone who has worked on Hadoop projects, including data processing and representation using various AWS services.
Must-Have Skills
- 4-8 years of overall experience
- Strong programming experience with Python and the ability to write modular code following best practices, backed by unit tests with a high degree of coverage (an illustrative test sketch follows this posting)
- Knowledge of source control (Git/GitLab)
- Understanding of deployment patterns, along with knowledge of CI/CD and build tools
- Knowledge of Kubernetes concepts and commands is a must
- Knowledge of monitoring and alerting tools such as Grafana and OpenTelemetry is a must
- Knowledge of Astro/Airflow is a plus
- Knowledge of data governance is a plus
- Experience with cloud providers, preferably AWS
- Experience with PySpark, Snowflake, and dbt is good to have
Professional Skills
- Solid written, verbal, and presentation communication skills
- Strong team and individual player
- Maintains composure in all types of situations and is collaborative by nature
- High standards of professionalism, consistently producing high-quality results
- Self-sufficient and independent, requiring very little supervision or intervention
- Demonstrates flexibility and openness to bring creative solutions to address issues
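The must-have list above stresses modular Python backed by unit tests with high coverage. As an illustration only (not from the posting), a small transformation function and its pytest test might look like this; the function and field names are invented.

```python
# Illustrative only: a small, testable transformation plus a pytest unit test.
# Function and field names are hypothetical, not from the posting.
def normalise_user(record: dict) -> dict:
    """Lower-case the email and default a missing country to 'unknown'."""
    return {
        "user_id": record["user_id"],
        "email": record["email"].strip().lower(),
        "country": record.get("country") or "unknown",
    }


def test_normalise_user_defaults_country():
    raw = {"user_id": 42, "email": "  Alice@Example.COM "}
    out = normalise_user(raw)
    assert out == {"user_id": 42, "email": "alice@example.com", "country": "unknown"}
```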
Posted 1 day ago
4.0 - 7.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Skills: Big Data, PySpark, Hive, Spark optimization
Good to have: GCP
Experience: 4 to 7 years
Roles & Responsibilities: Work hands-on with Big Data, PySpark, and Hive, with a focus on Spark optimization; GCP exposure is good to have. (A brief optimization sketch follows this posting.)
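The posting above is terse, but "Spark optimization" alongside Hive typically involves things like pruning partitions and broadcasting small dimension tables. A hedged sketch, with hypothetical table and column names:

```python
# Illustrative sketch: join a large Hive fact table to a small dimension table,
# broadcasting the dimension to avoid a shuffle. Table/column names are made up.
from pyspark.sql import SparkSession, functions as F

spark = (SparkSession.builder
         .appName("sales_enrichment")
         .enableHiveSupport()
         .getOrCreate())

# The partition filter lets Spark prune Hive partitions instead of scanning all of them.
facts = spark.table("warehouse.sales_facts").where(F.col("sale_date") == "2025-08-01")
stores = spark.table("warehouse.store_dim")   # small lookup table

enriched = facts.join(F.broadcast(stores), on="store_id", how="left")

enriched.write.mode("overwrite").parquet("/tmp/enriched_sales/2025-08-01")
```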
Posted 1 day ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Company: Qualcomm India Private Limited
Job Area: Engineering Group > Software Engineering
General Summary: As a leading technology innovator, Qualcomm pushes the boundaries of what's possible to enable next-generation experiences and drives digital transformation to help create a smarter, connected future for all. As a Qualcomm Software Engineer, you will design, develop, create, modify, and validate embedded and cloud edge software, applications, and/or specialized utility programs that launch cutting-edge, world-class products that meet and exceed customer needs. Qualcomm Software Engineers collaborate with systems, hardware, architecture, test engineers, and other teams to design system-level software solutions and obtain information on performance requirements and interfaces.
Minimum Qualifications: Bachelor's degree in Engineering, Information Systems, Computer Science, or a related field.
Senior Engineer: Job Title: Senior Machine Learning & Data Engineer
We are looking for a highly skilled and experienced Machine Learning & Data Engineer to join our team. This hybrid role blends the responsibilities of a data engineer and a machine learning engineer, with a strong emphasis on Python development. You will be instrumental in designing scalable data pipelines, building and deploying ML/NLP models, and enabling data-driven decision-making across the organization.
Key Responsibilities
Data Engineering & Infrastructure:
- Design and implement robust ETL pipelines and data integration workflows using SQL, NoSQL, and big data technologies (e.g., Spark, Hadoop)
- Optimize data storage and retrieval using relational and non-relational databases (e.g., PostgreSQL, MongoDB, Cassandra)
- Ensure data quality, validation, and governance across systems
- Develop and maintain data models and documentation for data flows and architecture
Machine Learning & NLP:
- Build, fine-tune, and deploy ML/NLP models using frameworks like TensorFlow, PyTorch, and Scikit-learn
- Apply advanced NLP techniques including Transformers, BERT, and LLM fine-tuning
- Implement Retrieval-Augmented Generation (RAG) pipelines using LangChain, LlamaIndex, and vector databases (e.g., FAISS, Milvus)
- Operationalize ML models using APIs, model registries (e.g., Hugging Face), and cloud services (e.g., SageMaker, Azure ML)
Python Development:
- Develop scalable backend services using Python frameworks such as FastAPI, Flask, or Django (a minimal service sketch follows this posting)
- Automate data workflows and model training pipelines using Python libraries (e.g., Pandas, NumPy, SQLAlchemy)
- Collaborate with cross-functional teams to integrate ML solutions into production systems
Collaboration & Communication:
- Work closely with data scientists, analysts, and software engineers in Agile/Scrum teams
- Translate business requirements into technical solutions
- Maintain clean, well-documented code and contribute to knowledge sharing
Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- Proven experience in both data engineering and machine learning roles
- Strong Python programming skills and experience with modern Python libraries and frameworks
- Deep understanding of ML/NLP concepts and practical experience with LLMs and RAG architectures
- Proficiency in SQL and experience with both SQL and NoSQL databases
- Experience with big data tools (e.g., Spark, PySpark) and cloud platforms (AWS, Azure)
- Familiarity with data visualization tools like Power BI or Tableau
- Excellent problem-solving, communication, and collaboration skills
Engineer: Job Title: Automation Engineer
Job Description: We are seeking a skilled and experienced Automation Engineer to join our team. As a C#/Python developer, you will play a pivotal role in developing and deploying advanced solutions to drive our product test automation. You will collaborate closely with testers, product managers, and stakeholders to ensure the successful implementation and operation of automation solutions. The ideal candidate will have a strong background in API development with C# and Python programming, with experience in deploying scalable solutions.
Responsibilities:
- Design, develop, and maintain core APIs, mainly using C#
- Collaborate with cross-functional teams to understand requirements and implement API solutions
- Create and execute unit tests for APIs to ensure software quality
- Identify, analyze, and troubleshoot issues in API development and testing
- Continuously improve and optimize API development processes
- Document API specifications, procedures, and results
- Stay updated with the latest industry trends and technologies in API development
Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field
- Proven experience in developing APIs and scripts/apps using C#; knowledge of Python is a plus
- Experience using Visual Studio for development
- Experience in the wireless domain is a plus
- Strong understanding of software testing principles and methodologies
- Proficiency in the C# programming language
- Experience with test automation tools and best practices
- Familiarity with CI/CD pipelines and version control systems (e.g., Perforce)
- Excellent problem-solving skills and attention to detail
- Strong communication and teamwork skills
Applicants: Qualcomm is an equal opportunity employer. If you are an individual with a disability and need an accommodation during the application/hiring process, rest assured that Qualcomm is committed to providing an accessible process. You may e-mail disability-accomodations@qualcomm.com or call Qualcomm's toll-free number found here. Upon request, Qualcomm will provide reasonable accommodations to support individuals with disabilities to be able to participate in the hiring process. Qualcomm is also committed to making our workplace accessible for individuals with disabilities. (Keep in mind that this email address is used to provide reasonable accommodations for individuals with disabilities. We will not respond here to requests for updates on applications or resume inquiries.)
Qualcomm expects its employees to abide by all applicable policies and procedures, including but not limited to security and other requirements regarding protection of Company confidential information and other confidential and/or proprietary information, to the extent those requirements are permissible under applicable law.
To all Staffing and Recruiting Agencies: Our Careers Site is only for individuals seeking a job at Qualcomm. Staffing and recruiting agencies, and individuals being represented by an agency, are not authorized to use this site or to submit profiles, applications, or resumes, and any such submissions will be considered unsolicited. Qualcomm does not accept unsolicited resumes or applications from agencies. Please do not forward resumes to our jobs alias, Qualcomm employees, or any other company location. Qualcomm is not responsible for any fees related to unsolicited resumes/applications.
If you would like more information about this role, please contact Qualcomm Careers.
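The Senior Machine Learning & Data Engineer role above mentions serving models behind Python frameworks such as FastAPI. As a generic, hedged sketch (not Qualcomm's code), a minimal prediction endpoint could be shaped like this; the model call is stubbed and every name is hypothetical.

```python
# Illustrative only: a minimal FastAPI service wrapping a stubbed model call.
# Endpoint, schema, and model names are hypothetical.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="example-scoring-service")


class ScoreRequest(BaseModel):
    text: str


class ScoreResponse(BaseModel):
    label: str
    confidence: float


def fake_model(text: str) -> tuple[str, float]:
    # Stand-in for a real model that would be loaded at startup.
    return ("positive" if "good" in text.lower() else "negative", 0.5)


@app.post("/score", response_model=ScoreResponse)
def score(req: ScoreRequest) -> ScoreResponse:
    label, confidence = fake_model(req.text)
    return ScoreResponse(label=label, confidence=confidence)
```

Such a service would typically be run with an ASGI server, for example `uvicorn module_name:app`.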
Posted 1 day ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description Job title: Data Engineer ( Python + PySpark + SQL ) Candidate Specification: Minimum 6 to 8 years of experience in Data Engineer Job Description Data Engineer with strong expertise in Python, PySpark, and SQL. Design, develop, and maintain robust data pipelines using PySpark and Python. Strong understanding of SQL and relational databases (e.g., PostgreSQL, MySQL, SQL Server). Proficiency in Python for data engineering tasks and scripting. Hands-on experience with PySpark in distributed data processing environments. Strong command of SQL for data manipulation and querying large datasets. Skills Required RoleData Engineer ( Python + PySpark + SQL ) Industry TypeIT/ Computers - Software Functional AreaIT-Software Required Education Employment TypeFull Time, Permanent Key Skills DATA ENGINEER PYTHON PY SPARK SQL Other Information Job CodeGO/JC/689/2025 Recruiter NameSheena Rakesh
Posted 1 day ago
10.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Description: AWS Data Engineer (Databricks Focus)
Position Overview: We are seeking a dynamic and innovative AWS Data Engineer with strong Databricks expertise to join our growing team. The ideal candidate will have a balanced mix of data engineering proficiency and application development skills, and will be ready to own tasks, innovate solutions, and proactively drive technical advancements.
Key Responsibilities:
- Design, develop, and optimize data pipelines using Databricks (PySpark) to manage and transform data efficiently
- Implement APIs leveraging AWS AppSync and Lambda functions to interact effectively with AWS Neptune and AWS OpenSearch (a skeletal resolver sketch follows this posting)
- Collaborate closely with front-end developers (React on AWS CloudFront) and cross-functional teams to enhance the overall application architecture
- Ensure adherence to best practices around AWS security services, particularly IAM, ensuring secure and efficient application development
- Proactively research, recommend, and implement innovative solutions to enhance the performance, scalability, and reliability of data solutions
- Own assigned tasks and projects end to end, demonstrating autonomy and accountability
Qualifications:
- Proven 10 years of experience with Databricks and PySpark for building scalable data pipelines
- Experience with AWS Neptune and AWS OpenSearch
- Solution-level understanding of AWS AppSync
- Solid understanding of AWS ecosystem services, including Lambda, IAM, and CloudFront
- Strong capabilities in API development and integration within cloud architectures
- Experience or familiarity with React.js-based front-end applications is beneficial
- Excellent analytical, problem-solving, and debugging skills
- Ability to independently research and develop innovative solutions
- Strong communication skills with a proactive mindset and the ability to collaborate effectively across teams
Preferred Qualifications:
- AWS certification (Solutions Architect Associate, Developer Associate, or Data Analytics Specialty) preferred
- Experience with graph databases and search platforms in a production environment
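The role above pairs Databricks pipelines with AppSync and Lambda APIs. As a loose illustration (not the client's code), a Lambda function used as an AppSync resolver might have the skeleton below; the field and argument names are invented and the lookup is stubbed rather than a real Neptune or OpenSearch query.

```python
# Illustrative only: skeleton of a Lambda function acting as an AppSync resolver.
# Field/argument names are hypothetical; a real implementation would query
# AWS Neptune or OpenSearch where the stub is.
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)


def lookup_entity(entity_id: str) -> dict:
    # Placeholder for a Neptune/OpenSearch lookup.
    return {"id": entity_id, "name": "example", "relatedCount": 0}


def handler(event, context):
    # AppSync passes the GraphQL field name and its arguments in the event payload.
    logger.info("resolver event: %s", json.dumps(event))
    field = event.get("info", {}).get("fieldName")
    args = event.get("arguments", {})

    if field == "getEntity":
        return lookup_entity(args["id"])

    raise ValueError(f"Unhandled field: {field}")
```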
Posted 1 day ago
7.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Position: Senior Software Engineer - 7 to 10 years' experience (Python and Golang)
Work Mode: Remote
Years of Experience: 7-10 years (5+ years of experience in Python)
Office Location: SB Road, Pune; Remote (for other locations)
Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Responsibilities & Skills:
- Design, develop, and maintain high-quality software applications using Python and the Django framework
- Collaborate with cross-functional teams to define, design, and ship new features and enhancements
- Integrate third-party APIs (REST, SOAP, streaming services) into the existing product (a retry-aware client sketch follows this posting)
- Optimize application performance and ensure scalability and reliability
- Write clean, maintainable, and efficient code, following best practices and coding standards
- Participate in code reviews and provide constructive feedback to peers
- Troubleshoot and debug applications, identifying root causes of issues
- Stay current with industry trends, technologies, and best practices in software development
Required Skills (Python):
- Bachelor's or Master's degree in Computer Science or a related field from IIT, NIT, or another reputed institute
- 3-10 years of experience in software development, with at least 4 years of background in Python and Django
- Working knowledge of Golang (mandatory)
- Experience integrating third-party APIs (REST, SOAP, streaming services) into applications
- Familiarity with database technologies, particularly MySQL (must have) and HBase (nice to have)
- Experience with message brokers like Kafka (must have), RabbitMQ, and Redis
- Experience with version control systems such as GitHub
- Familiarity with RESTful APIs and integration of third-party APIs
- Strong understanding of software development methodologies, particularly Agile
- Demonstrable experience with writing unit and functional tests
- Excellent problem-solving skills and the ability to work collaboratively in a team environment
- Experience with database systems such as PostgreSQL, MySQL, or MongoDB
Good to Have:
- Experience with cloud infrastructure such as AWS/GCP or another cloud service provider
- Knowledge of the IEEE 2030.5 standard (protocol)
- Knowledge of serverless architecture, preferably AWS Lambda
- Experience with the PySpark, Pandas, SciPy, and NumPy libraries is a plus
- Experience in microservices architecture
- Solid CI/CD experience
- You are a Git guru and revel in collaborative workflows
- You work on the command line confidently and are familiar with all the goodies that the Linux toolkit can provide
- Knowledge of modern authorization mechanisms, such as JSON Web Tokens
- Front-end technologies such as ReactJS and NodeJS
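Among the responsibilities above is integrating third-party REST APIs. A generic, hedged sketch of the retry-aware client pattern often used for that follows; the URL and response fields are placeholders, not from the posting.

```python
# Illustrative only: a small REST client with retries and a timeout, a common
# shape for third-party API integrations. URL and fields are placeholders.
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry


def build_session() -> requests.Session:
    # Retry transient failures (rate limits and 5xx) with exponential backoff.
    retry = Retry(
        total=3,
        backoff_factor=0.5,
        status_forcelist=(429, 500, 502, 503, 504),
        allowed_methods=("GET",),
    )
    session = requests.Session()
    session.mount("https://", HTTPAdapter(max_retries=retry))
    return session


def fetch_orders(session: requests.Session, since: str) -> list[dict]:
    resp = session.get(
        "https://api.example.com/v1/orders",
        params={"since": since},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["orders"]
```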
Posted 1 day ago
0 years
0 Lacs
Delhi, India
On-site
Job Title: Azure Data Engineer
Location: Noida, Sector 132
Job Description:
1. Strong experience in Azure: Azure Data Factory, Azure Data Lake, Azure Databricks
2. Good at Cosmos DB and Azure SQL Data Warehouse/Synapse
3. Excellent in data ingestion (batch and real-time processing)
4. Good understanding of Synapse workspace and Synapse Analytics
5. Good hands-on experience with PySpark or Scala Spark
6. Good hands-on experience with Delta Lake and Spark streaming (a minimal streaming sketch follows this posting)
7. Good understanding of Azure DevOps and Azure infrastructure concepts
8. At least one end-to-end, hands-on project implementation as an architect
9. Expert and persuasive communication skills (verbal and written)
10. Expert in presentations and skilled at managing multiple clients
11. Good at Python/shell scripting
12. Good to have: Azure or other cloud certifications
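Point 6 above asks for hands-on Delta Lake and Spark streaming. As a rough illustration only, a Structured Streaming job that tails a JSON landing folder and appends to a Delta table might look like this; it assumes a Databricks cluster (where Delta is available), and the paths and schema are invented.

```python
# Illustrative sketch: Structured Streaming from a JSON landing folder into a
# Delta table with checkpointing. Paths and schema are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_type", StringType()),
    StructField("event_ts", TimestampType()),
])

# Incrementally pick up new JSON files as they land.
stream = (spark.readStream
          .schema(schema)
          .json("/mnt/landing/events/"))

# Append to a Delta table; the checkpoint makes the stream restartable.
query = (stream.writeStream
         .format("delta")
         .option("checkpointLocation", "/mnt/checkpoints/events/")
         .outputMode("append")
         .start("/mnt/delta/events/"))

query.awaitTermination()
```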
Posted 1 day ago
6.0 years
0 Lacs
Jaipur, Rajasthan, India
On-site
JOB DESCRIPTION: DATA ENGINEER (Databricks & AWS)
Overview: As a Data Engineer, you will work with multiple teams to deliver solutions on the AWS Cloud using core cloud data engineering tools such as Databricks on AWS, AWS Glue, Amazon Redshift, Athena, and other Big Data-related technologies. This role focuses on building the next generation of application-level data platforms and improving recent implementations. Hands-on experience with Apache Spark (PySpark, SparkSQL), Delta Lake, Iceberg, and Databricks is essential.
Locations: Jaipur, Pune, Hyderabad, Bangalore, Noida.
Responsibilities:
- Define, design, develop, and test software components/applications using AWS-native data services: Databricks on AWS, AWS Glue, Amazon S3, Amazon Redshift, Athena, AWS Lambda, Secrets Manager (a small Athena example follows this posting)
- Build and maintain ETL/ELT pipelines for both batch and streaming data
- Work with structured and unstructured datasets at scale
- Apply data modeling principles and advanced SQL techniques
- Implement and manage pipelines using Apache Spark (PySpark, SparkSQL) and Delta Lake/Iceberg formats
- Collaborate with product teams to understand requirements and deliver optimized data solutions
- Utilize CI/CD pipelines with DBX and AWS for continuous delivery and deployment of Databricks code
- Work independently with minimal supervision and strong ownership of deliverables
Must Have:
- 6+ years of experience in Data Engineering on the AWS Cloud
- Hands-on expertise in Apache Spark (PySpark, SparkSQL), Delta Lake/Iceberg formats, Databricks on AWS, AWS Glue, Amazon Athena, and Amazon Redshift
- Strong SQL skills and performance-tuning experience on large datasets
- Good understanding of CI/CD pipelines, especially using DBX and AWS tools
- Experience with environment setup, cluster management, user roles, and authentication in Databricks
- Certified as a Databricks Certified Data Engineer - Professional (mandatory)
Good To Have:
- Experience migrating ETL pipelines from on-premise or other clouds to AWS Databricks
- Experience with Databricks ML or Spark 3.x upgrades
- Familiarity with Airflow, Step Functions, or other orchestration tools
- Experience integrating Databricks with AWS services in a secured, production-ready environment
- Experience with monitoring and cost optimization in AWS
Key Skills:
- Languages: Python, SQL, PySpark
- Big Data tools: Apache Spark, Delta Lake, Iceberg
- Databricks on AWS
- AWS services: AWS Glue, Athena, Redshift, Lambda, S3, Secrets Manager
- Version control & CI/CD: Git, DBX, AWS CodePipeline/CodeBuild
- Other: data modeling, ETL methodology, performance optimization
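Athena appears in both the responsibilities and the must-have list above. Purely as a hedged sketch (database, table, and bucket names are placeholders, not from the posting), kicking off an Athena query from Python with boto3 looks roughly like this:

```python
# Illustrative only: start an Athena query and poll for completion with boto3.
# Database, table, bucket, and region names are placeholders.
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

resp = athena.start_query_execution(
    QueryString="SELECT order_date, count(*) AS orders FROM sales GROUP BY order_date",
    QueryExecutionContext={"Database": "example_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = resp["QueryExecutionId"]

# Poll until the query finishes (a real pipeline would also add a timeout).
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

print(query_id, state)
```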
Posted 1 day ago
0 years
0 Lacs
India
Remote
Company Description
ThreatXIntel is a startup cyber security company specializing in protecting businesses and organizations from cyber threats. Our tailored services include cloud security, web and mobile security testing, cloud security assessment, and DevSecOps. We prioritize delivering affordable solutions that cater to the specific needs of our clients, regardless of their size. Our proactive approach to security involves continuous monitoring and testing to identify vulnerabilities before they can be exploited.
Role Description
We are seeking an experienced GCP Data Engineer for a contract engagement focused on building, optimizing, and maintaining high-scale data processing pipelines using Google Cloud Platform services. You'll work on designing robust ETL/ELT solutions, transforming large data sets, and enabling analytics for critical business functions. This role is ideal for a hands-on engineer with strong expertise in BigQuery, Cloud Composer (Airflow), Python, and Cloud SQL/PostgreSQL, with experience in distributed data environments and orchestration tools.
Key Responsibilities:
- Design, develop, and maintain scalable data pipelines and ETL/ELT workflows using GCP Composer (Apache Airflow) - a minimal DAG sketch follows this posting
- Work with BigQuery, Cloud SQL, and PostgreSQL to manage and optimize data storage and retrieval
- Build automation scripts and data transformations using Python (PySpark knowledge is a strong plus)
- Optimize queries for large-scale, distributed data processing systems
- Collaborate with cross-functional teams to translate business and analytics requirements into scalable technical solutions
- Support data ingestion from multiple structured and semi-structured sources, including Hive, MySQL, and NoSQL databases
- Apply HDFS and distributed file system experience where necessary
- Ensure data quality, reliability, and consistency across platforms
- Provide ongoing maintenance and support for deployed pipelines and services
Required Qualifications:
- Strong hands-on experience with GCP services, particularly BigQuery, Cloud Composer (Apache Airflow), and Cloud SQL/PostgreSQL
- Proficiency in Python for scripting and data pipeline development
- Experience in designing and optimizing high-volume data processing workflows
- Good understanding of distributed systems, HDFS, and parallel processing frameworks
- Strong analytical and problem-solving skills
- Ability to work independently and collaborate across remote teams
- Excellent communication skills for technical and non-technical audiences
Preferred Skills:
- Knowledge of PySpark for big data processing
- Familiarity with Hive, MySQL, and NoSQL databases
- Experience with Java in a data engineering context
- Exposure to data governance, access control, and cost optimization on GCP
- Prior experience in a contract or freelance capacity with enterprise clients
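The role above centres on Cloud Composer (Airflow) orchestration. As an illustration only, a minimal DAG with a single Python task might look like this; the DAG id, schedule, and task body are hypothetical rather than anything from the engagement.

```python
# Illustrative only: a minimal Airflow DAG of the kind that runs on Cloud Composer.
# DAG id, schedule, and task logic are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_and_load(**context):
    # Placeholder for real extract/transform/load logic
    # (e.g. pulling from Cloud SQL and loading into BigQuery).
    print("running for", context["ds"])


with DAG(
    dag_id="example_daily_elt",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    tags=["example"],
) as dag:
    PythonOperator(
        task_id="extract_and_load",
        python_callable=extract_and_load,
    )
```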
Posted 1 day ago
5.0 - 10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Position Overview
Job Title: Spark/Python/Pentaho Developer
Location: Pune, India
Role Description: Spark/Python/Pentaho Developer working on a data integration project, mostly batch-oriented, using Python/PySpark/Pentaho.
What We'll Offer You: As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
- Best-in-class leave policy
- Gender-neutral parental leave
- 100% reimbursement under the childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Employee Assistance Program for you and your family members
- Comprehensive hospitalization insurance for you and your dependents
- Accident and term life insurance
- Complimentary health screening for those aged 35 and above
Your Key Responsibilities:
- Hands-on Spark/Python/Pentaho programming
- Participate in agile development projects for batch data ingestion
- Learn fast in order to understand the current data landscape and existing Python/Spark/Pentaho programs and make enhancements
- Stakeholder communication
- Contribute to all stages of the software development lifecycle
- Analyze user requirements to define business objectives
- Define application objectives and functionality
- Develop and test software
- Identify and resolve any technical issues that arise
- Create detailed design documentation
- Conduct software analysis, programming, testing, and debugging
- Perform software upgrades and maintenance
- Migrate out-of-support application software
Your Skills and Experience:
- Experience: minimum 5-10 years
- Spark
- Python programming
- Pentaho
- Good at writing Hive HQLs/SQLs
- Oracle database
- Java/Scala experience is a plus
- Expertise in unit testing
- Know-how with cloud-based infrastructure
How We'll Support You:
- Training and development to help you excel in your career
- Coaching and support from experts in your team
- A culture of continuous learning to aid progression
- A range of flexible benefits that you can tailor to suit your needs
About Us and Our Teams: Please visit our company website for further information: https://www.db.com/company/company.htm
We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative, and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair, and inclusive work environment.
Posted 1 day ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
About Company
Papigen is a fast-growing global technology services company, delivering innovative digital solutions through deep industry experience and cutting-edge expertise. We specialize in technology transformation, enterprise modernization, and dynamic areas like Cloud, Big Data, Java, React, DevOps, and more. Our client-centric approach combines consulting, engineering, and data science to help businesses evolve and scale efficiently.
About The Role
We are seeking an experienced Senior Data QA Analyst to support data integration, transformation, and reporting validation for enterprise-scale systems. This role involves close collaboration with data engineers, business analysts, and stakeholders to ensure the quality, accuracy, and reliability of data workflows, especially in Azure Databricks and ETL pipelines.
Key Responsibilities
Test Planning and Execution:
- Collaborate with business analysts and data engineers to understand requirements and translate them into test scenarios and test cases
- Develop and execute comprehensive test plans and test scripts for data validation
- Log and manage defects using tools like Azure DevOps
- Support UAT and post-go-live smoke testing
Data Integration Validation:
- Understand data architecture and workflows, including ETL processes and data movement
- Write and execute complex SQL queries to validate data accuracy, completeness, and consistency (two example checks follow this posting)
- Ensure the correctness of data transformations and mappings based on business logic
Report Testing:
- Validate the structure, metrics, and content of BI reports
- Perform cross-checks of report outputs against source systems
- Ensure reports reflect accurate calculations and align with business requirements
Required Skills & Experience:
- Bachelor's degree in IT, Computer Science, MIS, or a related field
- 8+ years of experience in QA, especially in data validation or data warehouse testing
- Strong hands-on experience with SQL and data analysis
- Proven experience working with Azure Databricks, Python, and PySpark (preferred)
- Familiarity with data models like data marts, EDW, and operational data stores
- Excellent understanding of data transformation, mapping logic, and BI validation
- Experience with test case documentation, defect tracking, and Agile methodologies
- Strong verbal and written communication skills, with the ability to work in a cross-functional environment
Benefits and Perks:
- Opportunity to work with leading global clients
- Exposure to modern technology stacks and tools
- Supportive and collaborative team environment
- Continuous learning and career development opportunities
Skills: ETL, Agile methodologies, test case design, Databricks, data integration, operational data stores, Azure Databricks, test planning, SQL, testing, EDW, defect tracking, data validation, Python, ETL testing, PySpark, data analysis, data marts, test case documentation, data warehousing
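The QA responsibilities above revolve around SQL and PySpark checks of transformed data. Below is a hedged sketch of two common checks (row-count reconciliation and duplicate-key detection); the table and column names are hypothetical, not from the posting.

```python
# Illustrative only: two common data-validation checks written with PySpark.
# Table and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

source = spark.table("staging.orders_raw")
target = spark.table("mart.orders")

# 1. Row counts should reconcile after the transformation.
src_count, tgt_count = source.count(), target.count()
assert src_count == tgt_count, f"count mismatch: {src_count} vs {tgt_count}"

# 2. The business key should be unique in the target.
dupes = (target.groupBy("order_id")
               .count()
               .filter(F.col("count") > 1))
assert dupes.count() == 0, "duplicate order_id values found in mart.orders"
```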
Posted 1 day ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
As a Data Engineer at Lifesight, you will play a crucial role in the Data and Business Intelligence organization, focusing on deep data engineering projects. Joining the data platform team in Bengaluru, you will have the opportunity to contribute to defining the technical strategy and data engineering team culture in India. Your responsibilities will include designing and constructing data platforms and services, as well as managing data infrastructure in cloud environments to support strategic business decisions across Lifesight products.
You will be expected to build highly scalable distributed data processing systems, data solutions, and data pipelines that optimize data quality and are resilient to poor-quality data sources. Additionally, you will own data mapping, business logic, transformations, and data quality, while participating in architecture discussions, influencing the product roadmap, and taking ownership of new projects.
The ideal candidate should possess proficiency in Python and PySpark, a deep understanding of Apache Spark, experience with big data technologies such as HDFS, YARN, MapReduce, Hive, Kafka, Spark, Airflow, and Presto, and familiarity with distributed database systems. Experience working with file formats like Parquet and Avro, NoSQL databases, and AWS and GCP is preferred. A minimum of 5 years of professional experience as a data or software engineer is required for this full-time position.
If you are a self-starter who is passionate about data engineering, ready to work with big data technologies, and eager to collaborate with a team of engineers while mentoring others, we encourage you to apply for this exciting opportunity at Lifesight.
Posted 1 day ago
3.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Title: Data Engineer
Location: Noida
Experience: 3+ years
Job Description: We are seeking a skilled and experienced Data Engineer to join our dynamic team. The ideal candidate will have a strong background in data engineering, with a focus on PySpark, Python, and SQL. Experience with Azure Databricks is a plus.
Key Responsibilities:
- Design, develop, and maintain scalable data pipelines and systems
- Work closely with data scientists and analysts to ensure data quality and availability
- Implement data integration and transformation processes using PySpark and Python
- Optimize and maintain SQL databases and queries
- Collaborate with cross-functional teams to understand data requirements and deliver solutions
- Monitor and troubleshoot data pipeline issues to ensure data integrity and performance
Required Skills and Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field
- 3+ years of experience in data engineering
- Proficiency in PySpark, Python, and SQL
- Experience with Azure Databricks is a plus
- Strong problem-solving skills and attention to detail
- Excellent communication and teamwork abilities
Preferred Qualifications:
- Experience with cloud platforms such as Azure, AWS, or Google Cloud
- Knowledge of data warehousing concepts and technologies
- Familiarity with ETL tools and processes
How to Apply: In addition to Easy Apply on LinkedIn, please also complete this form: https://forms.office.com/r/N0nYycJ36P
Posted 1 day ago
8.0 years
0 Lacs
India
Remote
Job Title: Quant Engineer
Location: Remote
Quant Engineer Job Description:
- Strong Python developer with up-to-date skills, including web development, cloud (ideally Azure), Docker, testing, and DevOps (ideally Terraform + GitHub Actions). Data engineering (PySpark, lakehouses, Kafka) is a plus.
- Good understanding of maths and finance, as the role interacts with quant developers, analysts, and traders; familiarity with concepts such as PnL, greeks, volatility, partial derivatives, and the normal distribution (a short worked refresher follows this posting).
- Financial and/or trading exposure is nice to have, particularly in energy commodities.
- Productionise quant models into software applications, ensuring robust day-to-day operation, monitoring, and back-testing are in place.
- Translate trader or quant analyst needs into software product requirements.
- Prototype and implement data pipelines.
- Coordinate closely with analysts and quants during the development of models, acting as technical support and coach.
- Produce accurate, performant, scalable, secure software, and support best practices following defined IT standards.
- Transform proofs of concept into a larger deployable product, in Shell and outside.
- Work in a highly collaborative, friendly Agile environment; participate in ceremonies and continuous improvement activities.
- Ensure that documentation and explanations of the results of analysis or modelling are fit for purpose for both technical and non-technical audiences.
- Mentor and coach other teammates who are upskilling in quant engineering.
Professional Qualifications & Skills
Educational Qualification:
- Graduation/post-graduation/PhD with 8+ years' work experience as a software developer or data scientist.
- Degree-level education in STEM: computer science, engineering, mathematics, or a relevant field of applied mathematics.
- Good understanding of trading terminology and concepts (including financial derivatives), gained from experience working in a trading or finance environment.
Required Skills:
- Expert in core Python with the Python scientific stack/ecosystem (including pandas, NumPy, SciPy, stats), and a second strongly typed language (e.g., C#, C++, Rust, or Java).
- Expert in application design, security, release, testing, and packaging.
- Mastery of SQL/NoSQL databases and data pipeline orchestration tools.
- Mastery of concurrent/distributed programming and performance optimisation methods.
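The role expects familiarity with PnL, greeks, volatility, and the normal distribution. As a standalone refresher rather than anything from the posting, Black-Scholes delta and vega for a European call can be computed from the standard textbook formulas like this:

```python
# Illustrative refresher: Black-Scholes delta and vega for a European call.
# Standard textbook formulas; inputs below are arbitrary example values.
from math import log, sqrt
from scipy.stats import norm


def call_delta_vega(spot: float, strike: float, rate: float,
                    vol: float, t_years: float) -> tuple[float, float]:
    d1 = (log(spot / strike) + (rate + 0.5 * vol ** 2) * t_years) / (vol * sqrt(t_years))
    delta = norm.cdf(d1)                        # dV/dS
    vega = spot * norm.pdf(d1) * sqrt(t_years)  # dV/dsigma (per 1.0 of volatility)
    return delta, vega


if __name__ == "__main__":
    print(call_delta_vega(spot=100.0, strike=105.0, rate=0.03, vol=0.25, t_years=0.5))
```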
Posted 1 day ago
0.0 - 8.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Location: Bangalore - Karnataka, India - EOIZ Industrial Area
Job Family: Artificial Intelligence & Machine Learning
Worker Type Reference: Regular - Permanent
Pay Rate Type: Salary
Career Level: T3(B)
Job ID: R-46427-2025
Description & Requirements
Introduction: A Career at HARMAN - Harman Tech Solutions (HTS)
We’re a global, multi-disciplinary team that’s putting the innovative power of technology to work and transforming tomorrow. At HARMAN HTS, you solve challenges by creating innovative solutions. Combine the physical and digital, making technology a more dynamic force to solve challenges and serve humanity’s needs. Empower the company to create new digital business models, enter new markets, and improve customer experiences.
Role Purpose:
We are undertaking a data transformation at Lebara, and as data becomes fundamental to what we do, the ability to have the right data in the right place at the right time, accurate and consistent, is essential. The purpose of this role is to have a Senior Data Engineer who can participate in day-to-day activities such as testing sprint tickets and providing updates on assigned tickets, while building knowledge of the relevant domain. You need to be passionate enough to think and perform out of the box, bringing innovation to your daily tasks, and to work FAST (Frequently discussed, Ambitious, Specific & Transparent). You will be part of the Lebara Data Lake team and need clarity on the domain to understand the platform principles and achieve operational excellence with every change.
Role: Data Engineer
Responsibilities:
Design and implement data pipelines to collect, clean, and transform data from various sources.
Build and maintain data storage and processing systems, such as databases, data warehouses, and data lakes.
Ensure data is properly secured and protected.
Develop and implement data governance policies and procedures.
Collaborate with data analysts, data scientists, and other stakeholders to understand their data needs and ensure they have access to the data they need.
Share knowledge with the wider business, working with BAs and technology teams to make sure processes and ways of working are documented.
Collaborate with Big Data Solution Architects to design, prototype, implement, and optimize data ingestion pipelines so that data is shared effectively across various business systems.
Ensure the design, code, and procedural aspects of the solution are production-ready in terms of operational, security, and compliance standards.
Participate in day-to-day project and agile meetings and provide technical support for faster resolution of issues.
Clearly and concisely communicate the status of items and blockers to the business.
Have end-to-end knowledge of the data landscape within Lebara.
Skills & Experience:
8+ years of design and development experience with big data technologies.
Proficient in Python, PySpark, Azure Databricks, Kubernetes, and Terraform.
2+ years of development experience in cloud technologies such as Azure, AWS, or GCP.
Proficient in querying and manipulating data from various databases (relational and big data).
Experience writing effective and maintainable unit and integration tests for ingestion pipelines.
Experience using static analysis and code quality tools and building CI/CD pipelines.
Excellent communication, problem-solving, and leadership skills, and the ability to work well in a fast-paced, dynamic environment.
Experience working on high-traffic, large-scale software products.
Behavioural Fit:
Technical with a keen eye for detail.
Self-driven, self-motivated, and results-oriented.
Confident, with an ability to challenge if necessary.
Process-driven, organised, and well-structured.
Ability to work in a cross-functional, multi-cultural team and in a collaborative environment with minimal supervision.
Ability to multi-task and to plan, organize, and prioritize multiple projects.
Role Key Performance Indicators:
Data pipeline reliability and resiliency.
Data processing efficiency and release quality.
Defect analysis and data quality maintained with the development done.
Time to market and resource utilisation.
Business and stakeholder satisfaction.
Educational Qualification
Experience working in cross-functional teams and collaborating effectively with different stakeholders.
Strong problem-solving and analytical skills.
Excellent communication skills to document and present technical concepts clearly.
Bachelor’s or master’s degree in computer science, data engineering, or a related field.
5-8 years of relevant, proven experience.
You Belong Here
HARMAN is committed to making every employee feel welcomed, valued, and empowered. No matter what role you play, we encourage you to share your ideas, voice your distinct perspective, and bring your whole self with you – all within a support-minded culture that celebrates what makes each of us unique. We also recognize that learning is a lifelong pursuit and want you to flourish. We proudly offer added opportunities for training, development, and continuing education, further empowering you to live the career you want.
About HARMAN: Where Innovation Unleashes Next-Level Technology
Ever since the 1920s, we’ve been amplifying the sense of sound. Today, that legacy endures, with integrated technology platforms that make the world smarter, safer, and more connected. Across automotive, lifestyle, and digital transformation solutions, we create innovative technologies that turn ordinary moments into extraordinary experiences. Our renowned automotive and lifestyle solutions can be found everywhere, from the music we play in our cars and homes to venues that feature today’s most sought-after performers, while our digital transformation solutions serve humanity by addressing the world’s ever-evolving needs and demands. Marketing our award-winning portfolio under 16 iconic brands, such as JBL, Mark Levinson, and Revel, we set ourselves apart by exceeding the highest engineering and design standards for our customers, our partners and each other. If you’re ready to innovate and do work that makes a lasting impact, join our talent community today!
Important Notice: Recruitment Scams
Please be aware that HARMAN recruiters will always communicate with you from an '@harman.com' email address. We will never ask for payments, banking, credit card, personal financial information or access to your LinkedIn/email account during the screening, interview, or recruitment process. If you are asked for such information or receive communication from an email address not ending in '@harman.com' about a job with HARMAN, please cease communication immediately and report the incident to us through: harmancareers@harman.com.
HARMAN is proud to be an Equal Opportunity employer. HARMAN strives to hire the best qualified candidates and is committed to building a workforce representative of the diverse marketplaces and communities of our global colleagues and customers. All qualified applicants will receive consideration for employment without regard to race, religion, color, national origin, gender (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics. HARMAN attracts, hires, and develops employees based on merit, qualifications and job-related performance. (www.harman.com)
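As a purely illustrative aside on the "unit and integration tests for ingestion pipelines" skill listed in this posting: a minimal pytest sketch for a hypothetical PySpark deduplication helper. The helper name dedupe_latest and all data are invented, not taken from the role.

# Hypothetical ingestion-pipeline unit test; dedupe_latest is an assumed helper.
import pytest
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

def dedupe_latest(df, key, ts):
    """Keep only the most recent record per key."""
    w = Window.partitionBy(key).orderBy(F.col(ts).desc())
    return df.withColumn("_rn", F.row_number().over(w)).filter("_rn = 1").drop("_rn")

@pytest.fixture(scope="module")
def spark():
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()

def test_dedupe_keeps_latest(spark):
    df = spark.createDataFrame(
        [("a", "2024-01-01"), ("a", "2024-02-01"), ("b", "2024-01-15")],
        ["id", "updated_at"],
    )
    out = dedupe_latest(df, "id", "updated_at").collect()
    assert {(r.id, r.updated_at) for r in out} == {("a", "2024-02-01"), ("b", "2024-01-15")}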
Posted 1 day ago
3.0 - 9.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Category: Testing/Quality Assurance
Main location: India, Karnataka, Bangalore
Position ID: J0725-1442
Employment Type: Full Time
Position Description:
Company Profile: Founded in 1976, CGI is among the largest independent IT and business consulting services firms in the world. With 94,000 consultants and professionals across the globe, CGI delivers an end-to-end portfolio of capabilities, from strategic IT and business consulting to systems integration, managed IT and business process services and intellectual property solutions. CGI works with clients through a local relationship model complemented by a global delivery network that helps clients digitally transform their organizations and accelerate results. CGI Fiscal 2024 reported revenue is CA$14.68 billion and CGI shares are listed on the TSX (GIB.A) and the NYSE (GIB). Learn more at cgi.com.
Job Title: ETL Testing Engineer
Position: Senior Test Engineer
Experience: 3-9 Years
Category: Quality Assurance/Software Testing
Shift: 1-10 pm/UK Shift
Main location: Chennai/Bangalore
Position ID: J0725-1442
Employment Type: Full Time
Position Description: We are looking for an experienced DataStage tester to join our team. The ideal candidate should be passionate about coding and testing scalable, high-performance applications.
Your future duties and responsibilities:
Develop and execute ETL test cases to validate data extraction, transformation, and loading processes.
Write complex SQL queries to verify data integrity, consistency, and correctness across source and target systems.
Automate ETL testing workflows using Python, PyTest, or other testing frameworks.
Perform data reconciliation, schema validation, and data quality checks.
Identify and report data anomalies, performance bottlenecks, and defects.
Work closely with Data Engineers, Analysts, and Business Teams to understand data requirements.
Design and maintain test data sets for validation.
Implement CI/CD pipelines for automated ETL testing (Jenkins, GitLab CI, etc.).
Document test results, defects, and validation reports.
Required qualifications to be successful in this role:
ETL Testing: Strong experience in testing Informatica, Talend, SSIS, Databricks, or similar ETL tools.
SQL: Advanced SQL skills (joins, aggregations, subqueries, stored procedures).
Python: Proficiency in Python for test automation (Pandas, PySpark, PyTest).
Databases: Hands-on experience with RDBMS (Oracle, SQL Server, PostgreSQL) and NoSQL (MongoDB, Cassandra).
Big Data Testing (Good to Have): Hadoop, Hive, Spark, Kafka.
Testing Tools: Knowledge of Selenium, Airflow, Great Expectations, or similar frameworks.
Version Control: Git, GitHub/GitLab.
CI/CD: Jenkins, Azure DevOps, or similar.
Soft Skills: Strong analytical and problem-solving skills; ability to work in Agile/Scrum environments; good communication skills for cross-functional collaboration.
Preferred Qualifications:
Experience with cloud platforms (AWS, Azure).
Knowledge of Data Warehousing concepts (Star Schema, Snowflake Schema).
Certification in ETL Testing, SQL, or Python is a plus.
Skills: Data Warehousing, MS SQL Server, Python
What you can expect from us:
Together, as owners, let’s turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect and belonging. Here, you’ll reach your full potential because…
You are invited to be an owner from day 1 as we work together to bring our Dream to life. That’s why we call ourselves CGI Partners rather than employees.
We benefit from our collective success and actively shape our company’s strategy and direction. Your work creates value. You’ll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You’ll shape your career by joining a company built to grow and last. You’ll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team—one of the largest IT and business consulting services firms in the world.
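An illustrative, non-authoritative sketch of the data-reconciliation duty described in this posting: comparing a few aggregate checks between a source and a target database. The connection strings, schema, table, and column names are placeholders, not details from the job.

# Hypothetical source-vs-target reconciliation; all connection details are placeholders.
import pandas as pd
from sqlalchemy import create_engine

src = create_engine("oracle+oracledb://user:pwd@src-host/ORCL")      # assumed source DB
tgt = create_engine("postgresql+psycopg2://user:pwd@tgt-host/dwh")   # assumed target DB

checks = {
    "row_count": "SELECT COUNT(*) AS v FROM sales.orders",
    "amount_sum": "SELECT ROUND(SUM(amount), 2) AS v FROM sales.orders",
    "max_order_ts": "SELECT MAX(order_ts) AS v FROM sales.orders",
}

for name, query in checks.items():
    src_val = pd.read_sql(query, src).iloc[0, 0]
    tgt_val = pd.read_sql(query, tgt).iloc[0, 0]
    status = "OK" if src_val == tgt_val else "MISMATCH"
    print(f"{name}: source={src_val} target={tgt_val} -> {status}")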
Posted 1 day ago
4.0 - 8.0 years
0 Lacs
Pune, Maharashtra
On-site
Department/Practice: Data
Base Office Location: Pune
Posted On: 31 Jul 2025
End Date: 31 Dec 2025
Required Experience: 4 - 8 Years
Role: Senior Software Engineer
Employment Type: Full Time Employee
Category: Organisational Group
Company: NewVision (New Vision Softcom & Consultancy Pvt. Ltd)
Function: Business Units (BU)
Department/Practice: Data Engineering
Region: APAC
Country: India
Working Model: Hybrid
Weekly Off: Pune Office Standard
State: Maharashtra
Skill: AZURE DATABRICKS
Highest Education: Graduation/Equivalent Course
Certification: DP-201: Designing an Azure Data Solution; DP-203T00: Data Engineering on Microsoft Azure
Working Language: English
Job Description
Position Summary: We are seeking a talented Databricks Data Engineer with a strong background in data engineering to join our team. You will play a key role in designing, building, and maintaining data pipelines using a variety of technologies, with a focus on the Microsoft Azure cloud platform.
Responsibilities:
Design, develop, and implement data pipelines using Azure Data Factory (ADF) or other orchestration tools.
Write efficient SQL queries to extract, transform, and load (ETL) data from various sources into Azure Synapse Analytics.
Utilize PySpark and Python for complex data processing tasks on large datasets within Azure Databricks.
Collaborate with data analysts to understand data requirements and ensure data quality.
Hands-on experience in designing and developing data lakes and warehouses.
Implement data governance practices to ensure data security and compliance.
Monitor and maintain data pipelines for optimal performance and troubleshoot any issues.
Develop and maintain unit tests for data pipeline code.
Work collaboratively with other engineers and data professionals in an Agile development environment.
Preferred Skills & Experience:
Good knowledge of PySpark and working knowledge of Python.
Full-stack Azure data engineering skills (Azure Data Factory, Databricks, and Synapse Analytics).
Experience with large dataset handling.
Hands-on experience in designing and developing data lakes and warehouses.
New Vision is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
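For illustration only: a minimal sketch of the data lake-to-warehouse upsert pattern this role works with on Azure Databricks, using a Delta Lake merge. The storage path and table names are invented for the example, not taken from the posting.

# Hypothetical Delta Lake upsert (merge) on Databricks; names and paths are invented.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # on Databricks, `spark` is already provided

# New or changed records landed in the data lake (placeholder ADLS path).
updates = spark.read.parquet("abfss://landing@examplelake.dfs.core.windows.net/customers/")

# Curated Delta table acting as the warehouse-side target (placeholder name).
target = DeltaTable.forName(spark, "silver.customers")

(target.alias("t")
 .merge(updates.alias("s"), "t.customer_id = s.customer_id")
 .whenMatchedUpdateAll()      # update existing customers
 .whenNotMatchedInsertAll()   # insert new customers
 .execute())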
Posted 1 day ago