4.0 years
15 - 30 Lacs
Nagpur, Maharashtra, India
Remote
Experience: 4.00+ years
Salary: INR 1,500,000-3,000,000 / year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by NuStudio.AI)
(*Note: This is a requirement for one of Uplers' clients, an AI-first, API-powered Data Platform.)

What do you need for this opportunity?
Must-have skills: Databricks, dbt, Delta Lake, Spark, Unity Catalog, Airflow, Databricks Workflows, ETL/ELT, GCP (BigQuery, Pub/Sub, Dataflow, Cloud Functions, Cloud Storage), PySpark, AWS, Hadoop, AI

The AI-first, API-powered Data Platform is looking for:
We're scaling our platform and seeking Data Engineers who are passionate about building high-performance data pipelines, data products, and analytical pipelines in the cloud to power real-time AI systems.

As a Data Engineer, you'll:
- Build scalable ETL/ELT and streaming data pipelines using GCP (BigQuery, Pub/Sub, Dataflow, Cloud Storage, Cloud Functions) and PySpark
- Orchestrate data workflows with Airflow, Cloud Functions, or Databricks Workflows
- Work across batch and real-time architectures that feed LLMs and AI/ML systems
- Own feature engineering pipelines that power production models and intelligent agents
- Collaborate with platform and ML teams to design observable, lineage-aware, cost-conscious, and performant solutions
- Bonus: experience with AWS, Databricks, or Hadoop (Delta Lake, Spark, dbt, Unity Catalog), or interest in building on them

Why us?
- You build production-grade data and AI solutions
- Your pipelines directly impact mission-critical and client-facing interactions
- Lean team, no red tape — build, own, ship
- Remote-first, with an async culture that respects your time
- Competitive compensation and benefits

Our stack: Python, SQL, GCP/Azure/AWS, Spark, Kafka, Airflow, Databricks, dbt, Kubernetes, LangChain, LLMs

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talent find and apply for relevant contractual onsite opportunities and progress in their careers, and we will support you with any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
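For a concrete sense of the streaming pipelines this listing describes, the sketch below shows a minimal Apache Beam job (the SDK behind Dataflow) that reads events from Pub/Sub and appends them to BigQuery. It is illustrative only; the project, subscription, table, and schema are assumed placeholders, not details from the posting.

```python
# Minimal Pub/Sub -> BigQuery streaming pipeline with Apache Beam (runnable on Dataflow).
# All resource names below are placeholders, not values from the job posting.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def run():
    # Pass --runner=DataflowRunner, --project, --region, etc. on the command line.
    options = PipelineOptions(streaming=True)
    with beam.Pipeline(options=options) as p:
        (
            p
            | "ReadEvents" >> beam.io.ReadFromPubSub(
                subscription="projects/my-project/subscriptions/events-sub")
            | "ParseJson" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
            | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                "my-project:analytics.events",
                schema="event_id:STRING,user_id:STRING,ts:TIMESTAMP",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
                create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED)
        )


if __name__ == "__main__":
    run()
```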
Posted 2 days ago
4.0 years
15 - 30 Lacs
Kanpur, Uttar Pradesh, India
Remote
Posted 2 days ago
4.0 years
15 - 30 Lacs
Kochi, Kerala, India
Remote
Posted 2 days ago
4.0 years
15 - 30 Lacs
Greater Bhopal Area
Remote
Posted 2 days ago
4.0 years
15 - 30 Lacs
Indore, Madhya Pradesh, India
Remote
Posted 2 days ago
4.0 years
15 - 30 Lacs
Visakhapatnam, Andhra Pradesh, India
Remote
Posted 2 days ago
4.0 years
15 - 30 Lacs
Chandigarh, India
Remote
Posted 2 days ago
4.0 years
15 - 30 Lacs
Dehradun, Uttarakhand, India
Remote
Posted 2 days ago
4.0 years
15 - 30 Lacs
Mysore, Karnataka, India
Remote
Posted 2 days ago
4.0 years
15 - 30 Lacs
Thiruvananthapuram, Kerala, India
Remote
Posted 2 days ago
4.0 years
15 - 30 Lacs
Vijayawada, Andhra Pradesh, India
Remote
Posted 2 days ago
4.0 years
15 - 30 Lacs
Patna, Bihar, India
Remote
Posted 2 days ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Specialty Development Senior 34263
Location: Chennai
Employment Type: Full-Time (Hybrid)

Job Overview
We are looking for an experienced GCP Data Engineer to join a global data engineering team responsible for building a sophisticated data warehouse and analytics platform on Google Cloud Platform (GCP). This role is ideal for professionals with a strong background in data engineering, cloud migration, and large-scale data transformation, particularly within cloud-native environments.

Key Responsibilities
- Design, build, and optimize data pipelines on GCP to support large-scale data transformations and analytics.
- Lead the migration and modernization of legacy systems to cloud-based architecture.
- Collaborate with cross-functional global teams to support data-driven applications and enterprise analytics solutions.
- Work with large datasets to enable platform capabilities and business insights using GCP tools.
- Ensure data quality, integrity, and performance across the end-to-end data lifecycle.
- Apply agile development principles to rapidly deliver and iterate on data solutions.
- Promote engineering best practices in CI/CD, DevSecOps, and cloud deployment strategies.

Must-Have Skills
- GCP services: BigQuery, Dataflow, Dataproc, Data Fusion, Cloud Composer, Cloud Functions, Cloud SQL, Cloud Spanner, Cloud Storage, Bigtable, Pub/Sub, App Engine, Compute Engine, Airflow
- Programming and data engineering: 5+ years in data engineering and SQL development; experience in building data warehouses and ETL processes
- Cloud experience: minimum 3 years in cloud environments (preferably GCP), implementing production-scale data solutions
- Strong understanding of data processing architectures (batch/real-time) and tools such as Terraform, Cloud Build, and Airflow
- Experience with containerized microservices architecture
- Excellent problem-solving skills and ability to optimize complex data pipelines
- Strong interpersonal and communication skills with the ability to work effectively in a globally distributed team
- Proven ability to work independently in high-ambiguity scenarios and drive solutions proactively

Preferred Skills
- GCP certification (e.g., Professional Data Engineer)
- Experience in regulated or financial domains
- Migration experience from Teradata to GCP
- Programming experience with Python, Java, Apache Beam
- Familiarity with data governance, security, and compliance in cloud environments
- Experience coaching and mentoring junior data engineers
- Knowledge of software architecture, CI/CD, source control (Git), and secure coding standards
- Exposure to Java full-stack development (Spring Boot, Microservices, React)
- Agile development experience including pair programming, TDD, and DevSecOps
- Proficiency in test automation tools like Selenium, Cucumber, REST Assured
- Familiarity with other cloud platforms like AWS or Azure is a plus

Education
Bachelor's degree in Computer Science, Information Technology, or a related field (mandatory)

Skills: Python, GCP certification, microservices architecture, Terraform, Airflow, data processing architectures, test automation tools, SQL development, cloud environments, agile development, CI/CD, GCP services (BigQuery, Dataflow, Dataproc, Data Fusion, Cloud Composer, Cloud Functions, Cloud SQL, Cloud Spanner, Cloud Storage, Bigtable, Pub/Sub, App Engine, Compute Engine, Airflow), Apache Beam, Git, communication, problem-solving, data engineering, analytics, data, data governance, ETL processes, GCP, Cloud Build, Java
Posted 2 days ago
4.0 years
0 Lacs
India
Remote
Greetings!

Role: Senior Data Modelling Engineer with GCP, SQL, Cognos, ETL
Experience: 4+ years
Location: Remote
Duration: 4-month contract

Required Skills & Experience:
- Extensive experience with SQL, including writing complex queries and optimizing database performance (must have)
- Demonstrated expertise in data modeling techniques, including dimensional modeling, 3NF structures, and denormalized views (must have)
- Hands-on experience with Google Cloud Platform (GCP) services, particularly those related to data storage, processing, and analytics (must have)
- Experience with BigQuery, Cloud Dataflow, Dataplex, Dataform, and Cloud Pub/Sub (must have)
- Basic knowledge of Cognos and DataStage
- dbt knowledge would be an add-on
- Strong background in building and maintaining data warehouses and data lakes
- Experience with ETL/ELT processes and big data technologies

If you are interested, please share your resume to prachi@iitjobs.com
Posted 2 days ago
6.0 years
0 Lacs
Udaipur, Rajasthan, India
On-site
Role: Senior Data Engineer
Experience: 4-6 years
Location: Udaipur, Jaipur

Job Description:
We are looking for a highly skilled and experienced Data Engineer with 4-6 years of hands-on experience in designing and implementing robust, scalable data pipelines and infrastructure. The ideal candidate will be proficient in SQL and Python and have a strong understanding of modern data engineering practices. You will play a key role in building and optimizing data systems, enabling data accessibility and analytics across the organization, and collaborating closely with cross-functional teams including Data Science, Product, and Engineering.

Key Responsibilities:
- Design, develop, and maintain scalable ETL/ELT data pipelines using SQL and Python
- Collaborate with data analysts, data scientists, and product teams to understand data needs
- Optimize queries and data models for performance and reliability
- Integrate data from various sources, including APIs, internal databases, and third-party systems
- Monitor and troubleshoot data pipelines to ensure data quality and integrity
- Document processes, data flows, and system architecture
- Participate in code reviews and contribute to a culture of continuous improvement

Required Skills:
- 4-6 years of experience in data engineering, data architecture, or backend development with a focus on data
- Strong command of SQL for data transformation and performance tuning
- Experience with Python (e.g., pandas, Spark, ADF)
- Solid understanding of ETL/ELT processes and data pipeline orchestration
- Proficiency with RDBMS (e.g., PostgreSQL, MySQL, SQL Server)
- Experience with data warehousing solutions (e.g., Snowflake, Redshift, BigQuery)
- Familiarity with version control (Git), CI/CD workflows, and containerized environments (Docker, Kubernetes)
- Basic programming skills
- Excellent problem-solving skills and a passion for clean, efficient data systems

Preferred Skills:
- Experience with cloud platforms (AWS, Azure, GCP) and services like S3, Glue, Dataflow, etc.
- Exposure to enterprise solutions (e.g., Databricks, Synapse)
- Knowledge of big data technologies (e.g., Spark, Kafka, Hadoop)
- Background in real-time data streaming and event-driven architectures
- Understanding of data governance, security, and compliance best practices
- Prior experience working in an agile development environment

Educational Qualifications:
Bachelor's degree in Computer Science, Information Technology, or a related field.

Visit us:
https://kadellabs.com/
https://in.linkedin.com/company/kadel-labs
https://www.glassdoor.co.in/Overview/Working-at-Kadel-Labs-EI_IE4991279.11,21.htm
Posted 2 days ago
5.0 - 9.0 years
0 Lacs
Karnataka, India
On-site
You will be responsible for leading the delivery of complex solutions by coding larger features from start to finish. Actively participating in planning and performing code and architecture reviews of your team's product will be a crucial aspect of your role. You will help ensure the quality and integrity of the Software Development Life Cycle (SDLC) for your team by identifying opportunities for improvement in how the team works, through the usage of recommended tools and practices. Additionally, you will lead the triage of complex production issues across systems and demonstrate creativity and initiative in solving complex problems. As a high performer, you will consistently deliver a high volume of story points relative to your team. Being aware of the technology landscape, you will plan the delivery of coarse-grained business needs spanning multiple applications. You will also influence technical peers outside your team and set a consistent example of agile development practices. Coaching other engineers to work as a team with Product and UX will be part of your responsibilities. Furthermore, you will create and enhance internal libraries and tools, provide technical leadership on the product, and determine the technical approach. Proactively communicating status and issues to your manager, collaborating with other teams to find creative solutions to customer issues, and showing a commitment to delivery deadlines, especially seasonal and vendor partner deadlines that are critical to Best Buy's continued success, will be essential.

Basic Qualifications:
- 5+ years of relevant technical professional experience with a bachelor's degree OR equivalent professional experience.
- 2+ years of experience with Google Cloud services including Dataflow, BigQuery, Looker.
- 1+ years of experience with Adobe Analytics, Content Square, or similar technologies.
- Hands-on experience with data engineering and visualization tools like SQL, Airflow, dbt, Power BI, Tableau, and Looker.
- Strong understanding of real-time data processing and issue detection.
- Expertise in data architecture, database design, data quality standards/implementation, and data modeling.

Preferred Qualifications:
- Experience working in an omni-channel retail environment.
- Experience connecting technical issues with business performance metrics.
- Experience with Forsta or similar customer feedback systems.
- Certification in Google Cloud Platform services.
- Good understanding of data governance, data privacy laws and regulations, and best practices.

About Best Buy:
BBY India is a service provider to Best Buy, and as part of the team working on Best Buy projects and initiatives, you will help fulfill Best Buy's purpose to enrich lives through technology. Every day, you will humanize and personalize tech solutions for every stage of life in Best Buy stores, online, and in Best Buy customers' homes. Best Buy is a place where techies can make technology more meaningful in the lives of millions of people, enabling the purpose of enriching lives through technology. The unique culture at Best Buy unleashes the power of its people and provides fast-moving, collaborative, and inclusive experiences that empower employees of all backgrounds to make a difference, learn, and grow every day. Best Buy's culture is built on deeply supporting and valuing its amazing employees and other team members. Best Buy is committed to being a great place to work, where you can unlock unique career possibilities. Above all, Best Buy aims to provide a place where people can bring their full, authentic selves to work now and into the future. Tomorrow works here.
Posted 2 days ago
5.0 - 9.0 years
0 Lacs
Pune, Maharashtra
On-site
As a Technology Service Specialist, AVP at our Pune location, you will be an integral part of the Technology, Data, and Innovation (TDI) Private Bank team. In this role, you will be responsible for providing 2nd-level application support for business applications used in branches, by mobile sales, or via the internet. Your expertise in Incident Management and Problem Management will be crucial in ensuring the stability of these applications.

Partnerdata, the central client reference data system in Germany, is a core banking system that integrates many banking processes and applications through numerous interfaces. With the recent migration to Google Cloud (GCP), you will be involved in operating and further developing applications and functionalities on the cloud platform. Your focus will also extend to regulatory topics surrounding partner/client relationships. We are seeking individuals who can contribute to this contemporary and emerging cloud application area.

Key Responsibilities:
- Ensure optimum service level to supported business lines
- Oversee resolution of incidents and problems within the team
- Assist in managing business stakeholder relationships
- Define and manage OLAs with relevant stakeholders
- Monitor team performance, adherence to processes, and alignment with business SLAs
- Manage escalations and work with relevant functions to resolve issues quickly
- Identify areas for improvement and implement best practices in your area of expertise
- Mentor and coach Production Management Analysts within the team
- Fulfill service requests, communicate with the Service Desk function, and participate in major incident calls
- Document tasks, incidents, problems, changes, and knowledge bases
- Improve monitoring of applications and implement automation of tasks

Skills and Experience:
- Service Operations Specialist experience in a global operations context
- Extensive experience supporting complex application and infrastructure domains
- Ability to manage and mentor Service Operations teams
- Strong ITIL/best-practice service context knowledge
- Proficiency in interface technologies, communication protocols, and ITSM tools
- Bachelor's degree in IT or a Computer Science-related discipline
- ITIL certification and experience with the ITSM tool ServiceNow preferred
- Knowledge of the banking domain and regulatory topics
- Experience with databases like BigQuery and understanding of Big Data and GCP technologies
- Proficiency in tools like GitHub, Terraform, Cloud SQL, Cloud Storage, Dataproc, Dataflow
- Architectural skills for big data solutions and interface architecture

Area-Specific Tasks/Responsibilities:
- Handle Incident/Problem Management and Service Request Fulfilment
- Analyze and resolve incidents escalated from 1st-level support
- Support the resolution of high-impact incidents and escalate when necessary
- Provide solutions for open problems and support service transition for new projects/applications

Joining our team, you will receive training, development opportunities, coaching from experts, and a culture of continuous learning to support your career progression. We value diversity and promote a positive, fair, and inclusive work environment at Deutsche Bank Group. Visit our company website for more information.
Posted 2 days ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum experience required: 3 years
Educational Qualification: 15 years of full-time education

Summary:
As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with team members to understand project needs, developing innovative solutions, and ensuring that applications are aligned with business objectives. You will engage in problem-solving discussions, contribute to the overall project strategy, and continuously refine your skills to enhance application performance and user experience.

Roles & Responsibilities:
The Offshore Data Engineer plays a critical role in designing, building, and maintaining scalable data pipelines and infrastructure to support business intelligence, analytics, and machine learning initiatives. Working closely with onshore data architects and analysts, this role ensures high data quality, performance, and reliability across distributed systems. The engineer is expected to demonstrate technical proficiency, proactive problem-solving, and strong collaboration in a remote environment.
- Design and develop robust ETL/ELT pipelines to ingest, transform, and load data from diverse sources.
- Collaborate with onshore teams to understand business requirements and translate them into scalable data solutions.
- Optimize data workflows through automation, parallel processing, and performance tuning.
- Maintain and enhance data infrastructure including data lakes, data warehouses, and cloud platforms (AWS, Azure, GCP).
- Ensure data integrity and consistency through validation, monitoring, and exception handling.
- Contribute to data modeling efforts for both transactional and analytical use cases.
- Deliver clean, well-documented datasets for reporting, analytics, and machine learning.
- Proactively identify opportunities for cost optimization, governance, and process automation.

Professional & Technical Skills:
- Programming & scripting: proficiency in Databricks with SQL and Python for data manipulation and pipeline development.
- Big data technologies: experience with Spark, Hadoop, or similar distributed processing frameworks.
- Workflow orchestration: hands-on experience with Airflow or equivalent scheduling tools.
- Cloud platforms: strong working knowledge of cloud-native services (AWS Glue, Azure Data Factory, GCP Dataflow).
- Data modeling: ability to design normalized and denormalized schemas for various use cases.
- ETL/ELT development: proven experience in building scalable and maintainable data pipelines.
- Monitoring & validation: familiarity with data quality frameworks and exception handling mechanisms.

Good-to-have skills:
- DevOps & CI/CD: exposure to containerization (Docker), version control (Git), and deployment pipelines.
- Data governance: understanding of metadata management, lineage tracking, and compliance standards.
- Visualization tools: basic knowledge of BI tools like Power BI, Tableau, or Looker.
- Machine learning support: experience preparing datasets for ML models and feature engineering.

Additional Information:
- The candidate should have a minimum of 3 years of experience in the Databricks Unified Data Analytics Platform.
- This position is based at our Chennai office.
- 15 years of full-time education is required.
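For a concrete flavour of the Databricks pipeline work this posting describes, here is a minimal, hypothetical sketch: a PySpark step that ingests a raw CSV drop, standardises a column, and merges it into a Delta table. Paths and table names are placeholders and the code assumes a Databricks/Delta Lake runtime.

```python
# Hypothetical Databricks ETL step: read a daily CSV drop, clean one column,
# and upsert the rows into an existing Delta table. Names are placeholders.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Incoming batch for one day, with a lightly standardised email column.
incoming = (
    spark.read.option("header", True).csv("/mnt/raw/customers/2024-01-01/")
    .withColumn("email", F.lower(F.trim(F.col("email"))))
)

# Merge (upsert) into the curated Delta table keyed on customer_id.
target = DeltaTable.forName(spark, "silver.customers")
(
    target.alias("t")
    .merge(incoming.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```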
Posted 2 days ago
2.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
- 2-4 years of experience using Microsoft SQL Server (version 2008 or later).
- Ability to create and maintain complex T-SQL queries, views, and stored procedures.
- 0-1+ years of experience performing advanced ETL development, including various dataflow transformation tasks.
- Ability to monitor performance and improve it by optimizing code and creating indexes.
- Proficient with Microsoft Access and Microsoft Excel.
- Knowledge of descriptive statistical modeling methodologies and techniques such as classification, regression, and association activities to support statistical analysis of various healthcare data.
- Strong knowledge of data warehousing concepts.
- Strong written, verbal, and customer service skills.
- Proficiency in compiling data, creating reports, and presenting information, including expertise with query tools, MS Excel, and/or other products like SSRS, Tableau, Power BI, etc.
- Proficiency with various data forms, including but not limited to star and snowflake schemas.
- Ability to translate business needs into practical applications.
- Desire to work within a fast-paced environment.
- Ability to work in a team environment and be flexible in taking on various projects.
Posted 2 days ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Software Engineer Consultant/Expert – GCP Data Engineer 34350
Location: Chennai (Onsite)
Employment Type: Contract
Budget: Up to ₹18 LPA
Assessment: Google Cloud Platform Engineer (HackerRank or equivalent)
Notice Period: Immediate joiners preferred

Role Summary
We are seeking a highly skilled GCP Data Engineer to support the modernization of enterprise data platforms. The ideal candidate will be responsible for designing and implementing scalable, high-performance data pipelines and solutions on Google Cloud Platform (GCP). You will work with large-scale datasets, integrating legacy and modern systems to enable advanced analytics and AI/ML capabilities. The role requires a deep understanding of GCP services, strong data engineering skills, and the ability to collaborate across teams to deliver robust data solutions.

Key Responsibilities
- Design and develop production-grade data engineering solutions using GCP services such as BigQuery, Dataflow, Dataform, Dataproc, Cloud Composer, Cloud SQL, Airflow, Compute Engine, Cloud Functions, Cloud Run, Cloud Build, Pub/Sub, and App Engine.
- Develop batch and real-time streaming pipelines for data ingestion, transformation, and processing.
- Integrate data from multiple sources including legacy and cloud-based systems.
- Collaborate with stakeholders and product teams to gather data requirements and align technical solutions to business needs.
- Conduct in-depth data analysis and impact assessments for data migrations and transformations.
- Implement CI/CD pipelines using tools like Tekton, Terraform, and GitHub.
- Optimize data workflows for performance, scalability, and cost-effectiveness.
- Lead and mentor junior engineers; contribute to knowledge sharing and documentation.
- Champion data governance, data quality, security, and compliance best practices.
- Utilize monitoring/logging tools to proactively address system issues.
- Deliver high-quality code using Agile methodologies including TDD and pair programming.

Required Skills & Experience
- GCP Data Engineer certification.
- Minimum 5+ years of experience designing and implementing complex data pipelines.
- 3+ years of hands-on experience with GCP.
- Strong expertise in SQL, Python, Java, or Apache Beam, and in Airflow, Dataflow, Dataproc, Dataform, Data Fusion, BigQuery, Cloud SQL, and Pub/Sub.
- Infrastructure-as-Code tools such as Terraform.
- DevOps tools: GitHub, Tekton, Docker.
- Solid understanding of microservice architecture, CI/CD integration, and container orchestration.
- Experience with data security, governance, and compliance in cloud environments.

Preferred Qualifications
- Experience with real-time data streaming using Apache Kafka or Pub/Sub.
- Exposure to AI/ML tools or integration with AI/ML pipelines.
- Working knowledge of data science principles applied to large datasets.
- Experience in a regulated domain (e.g., financial services or insurance).
- Experience with project management and agile tools (e.g., JIRA, Confluence).
- Strong analytical and problem-solving mindset.
- Effective communication skills and ability to collaborate with cross-functional teams.

Education
Required: Bachelor's degree in Computer Science, Engineering, or a related technical field.
Preferred: Master's degree or certifications in relevant domains.

Skills: GitHub, BigQuery, Airflow, ML, Pub/Sub, Terraform, Python, Apache Beam, Dataflow, GCP, GCP Data Engineer certification, Tekton, Java, Dataform, Docker, Data Fusion, SQL, Dataproc, Cloud SQL, Cloud
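Much of the work described above centres on orchestrating BigQuery jobs with Cloud Composer/Airflow. The sketch below is a minimal, illustrative DAG rather than anything from the posting: it schedules one daily BigQuery transformation through the official Python client, and the project, dataset, and table names are placeholders.

```python
# Illustrative Cloud Composer / Airflow DAG with a single daily BigQuery task.
# Project, dataset, and table names are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
from google.cloud import bigquery

SQL = """
CREATE OR REPLACE TABLE `my-project.analytics.daily_orders` AS
SELECT order_date, COUNT(*) AS orders, SUM(amount) AS revenue
FROM `my-project.raw.orders`
GROUP BY order_date
"""


def build_daily_orders():
    client = bigquery.Client()   # uses the environment's default credentials
    client.query(SQL).result()   # block until the BigQuery job finishes


with DAG(
    dag_id="daily_orders_build",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="build_daily_orders", python_callable=build_daily_orders)
```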
Posted 2 days ago
7.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Software Engineer – Senior (Full Stack Backend – Java) 34347
Location: Chennai (Onsite)
Employment Type: Contract
Budget: Up to ₹22 LPA
Assessment: Full Stack Backend – Java (via HackerRank or equivalent platform)
Notice Period: Immediate joiners preferred

Role Overview
We are seeking a highly skilled Senior Software Engineer with expertise in backend development, microservices architecture, and cloud-native technologies. The selected candidate will be part of a collaborative product team responsible for developing and deploying REST APIs and microservices for digital platforms. The role involves working in a fast-paced agile environment, contributing to both engineering excellence and product innovation.

Key Responsibilities
- Design, develop, test, and deploy high-quality, scalable backend systems and APIs.
- Collaborate with cross-functional teams including product managers, designers, and QA engineers to deliver customer-centric solutions.
- Write clean, maintainable, and well-documented code following industry best practices.
- Participate in pair programming, code reviews, and test-driven development.
- Contribute to defining architecture and service-level objectives.
- Conduct proofs of concept for new capabilities and features.
- Drive continuous improvement in code quality, testing, and deployment processes.

Required Skills
- 7+ years of hands-on experience in software engineering with a focus on backend development or full-stack engineering.
- Strong expertise in Java and microservices architecture.
- Solid working knowledge of Google Cloud Platform (GCP) services including BigQuery, Dataflow, Dataproc, Data Fusion, Cloud SQL, and Airflow.
- Infrastructure as Code (IaC) tools like Terraform.
- CI/CD tools such as Tekton.
- Databases: PostgreSQL, Cloud SQL.
- Programming/scripting: Python, PySpark.
- Building and consuming RESTful APIs.

Preferred Qualifications
- Experience with containerization and orchestration tools.
- Familiarity with monitoring tools and service-level indicators (SLIs/SLAs).
- Exposure to agile frameworks like Extreme Programming (XP), Scrum, or Kanban.

Education
Required: Bachelor's degree in Computer Science, Engineering, or a related technical discipline.

Skills: RESTful APIs, PySpark, Tekton, Data Fusion, BigQuery, Cloud SQL, microservices architecture, microservices, software, Terraform, PostgreSQL, Dataflow, code, cloud, Dataproc, Google Cloud Platform (GCP), CI/CD, Airflow, Python, Java
Posted 2 days ago
3.0 years
6 - 7 Lacs
Delhi
On-site
Job Title: Nursing Assistant (Male/Female)
Location: Oman
Joining: Immediate (within 15 days)
Salary: Up to OMR 275/month
Key Responsibilities:
Provide direct patient care under the supervision of Registered Nurses.
Assist patients with daily living activities including hygiene, feeding, and mobility.
Take and record vital signs, monitor patient conditions, and report changes.
Ensure patient comfort and safety at all times.
Support clinical staff in carrying out medical procedures and routine tasks.
Maintain accurate documentation and patient records.
Follow infection control and hygiene protocols strictly.
Requirements:
Qualification: GNM (General Nursing and Midwifery) or BSc in Nursing.
Experience: Minimum 3 years of relevant hospital/clinical experience.
Mandatory: Positive DataFlow report.
Gender: Male candidates only.
Readiness to join within 15 days.
Benefits:
Competitive salary up to OMR 275/-
Accommodation and transportation as per company norms.
Medical insurance and other statutory benefits provided.
Other benefits:
Free joining ticket (reimbursed after the 3-month probation period)
30 days of paid annual leave after 1 year of service completion
Yearly up-and-down air ticket
Medical insurance
Life insurance
Accommodation (chargeable up to OMR 20/-)
Note: This is an urgent requirement. Only candidates who can join immediately or within 15 days and have a positive DataFlow report will be considered.
Job Types: Full-time, Permanent
Pay: ₹50,000.00 - ₹60,000.00 per month
Benefits:
Cell phone reimbursement
Health insurance
Internet reimbursement
Leave encashment
Life insurance
Paid sick time
Paid time off
Provident Fund
Schedule:
Monday to Friday
Rotational shift
Supplemental Pay:
Joining bonus
Overtime pay
Performance bonus
Yearly bonus
Experience:
Nursing: 3 years (Required)
Positive DataFlow report: 2 years (Required)
Work Location: In person
Posted 2 days ago
0 years
0 Lacs
Andhra Pradesh, India
On-site
Design and develop robust ETL pipelines using Python, PySpark, and GCP services.
Build and optimize data models and queries in BigQuery for analytics and reporting.
Ingest, transform, and load structured and semi-structured data from various sources.
Collaborate with data analysts, scientists, and business teams to understand data requirements.
Ensure data quality, integrity, and security across cloud-based data platforms.
Monitor and troubleshoot data workflows and performance issues.
Automate data validation and transformation processes using scripting and orchestration tools.
Required Skills & Qualifications
Hands-on experience with Google Cloud Platform (GCP), especially BigQuery.
Strong programming skills in Python and/or PySpark.
Experience in designing and implementing ETL workflows and data pipelines.
Proficiency in SQL and data modeling for analytics.
Familiarity with GCP services such as Cloud Storage, Dataflow, Pub/Sub, and Composer.
Understanding of data governance, security, and compliance in cloud environments.
Experience with version control (Git) and agile development practices.
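As a rough illustration of the PySpark-on-GCP ETL work outlined above, here is a minimal sketch that reads raw CSV data from Cloud Storage, applies a simple transformation, and writes the result to BigQuery through the spark-bigquery connector. The bucket, dataset, and column names are hypothetical, and the sketch assumes the connector jar and a temporary GCS bucket are configured on the cluster.

```python
# Minimal PySpark ETL sketch: GCS CSV -> transform -> BigQuery.
# Bucket, dataset, and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_etl_sketch").getOrCreate()

# Ingest structured data from Cloud Storage.
orders = (
    spark.read.option("header", True)
    .option("inferSchema", True)
    .csv("gs://example-bucket/raw/orders/*.csv")
)

# Basic cleansing and a simple aggregation for the analytics model.
daily_revenue = (
    orders.filter(F.col("status") == "COMPLETED")
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date", "region")
    .agg(F.sum("amount").alias("revenue"), F.count("*").alias("orders"))
)

# Load into BigQuery; assumes the spark-bigquery connector is on the classpath
# and a temporary GCS bucket is available for the indirect write path.
(
    daily_revenue.write.format("bigquery")
    .option("table", "analytics.daily_revenue")
    .option("temporaryGcsBucket", "example-temp-bucket")
    .mode("overwrite")
    .save()
)

spark.stop()
```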
Posted 2 days ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
What You’ll Do
The EIIC functional excellence organization is aligned with the CTO's strategy to drive "One Eaton Engineering Functional Excellence". The charter of this organization is to simplify and create better work experiences for our engineers by transforming existing engineering work processes. The EIIC functional excellence organization will work with global Engineering Functional Excellence leaders in the CTO's office and in the Electrical and Industrial Sector businesses. These organizations will be responsible for developing and deploying One Eaton processes across all sectors and businesses across the globe.
As a Senior Data Analyst and Automation Engineer, you will be responsible for understanding critical problem statements and finding unique end-to-end solutions using Big Data Analytics and Automation expertise. You will also be responsible for establishing and deploying standard practices and processes for Process Automation, Big Data Analytics, Dashboards, and Reporting, and for driving continuous improvement on these processes.
Primary Responsibilities
Work with various internal and external customers; gather and prioritize customer needs and translate them into actionable requirements.
Communicate insights to stakeholders, enabling data-driven decision-making across the organization.
Develop apps in Workshop, perform ETL processes in Palantir, and derive meaningful insights from the data.
Select the appropriate programming languages, tools, and frameworks considering factors like scalability, performance, and security.
Establish coding standards and best practices to ensure the code is maintainable and efficient.
Organize and assemble information from diverse data sources so that the data aggregation is easily replicable and maintainable.
Identify and apply the appropriate data analytics algorithm and come up with recommendations based on the insights generated.
Report results through dashboards covering measurement against targets, historical data trends, and data snapshots that support end customers' data requirements.
Strategize new uses for data and its interaction with data design.
Manage multiple projects and deliver results on time and with the requisite quality.
Strive to be recognized internally and externally in this area by continuously learning and developing project management standard works and dashboard reporting.
Knowledge of Engineering and Program Management data sets, including SAP or Oracle datasets, is recommended; knowledge of SCM would be an added advantage.
Qualifications
Required: Bachelor's degree in Computer/Electrical Engineering with 2-5 years of experience.
Strong understanding of organizational processes.
Skills
Professional experience in database management, data solution development, data transformation, and data quality assurance.
Proficiency in using Palantir tools, including Code Repository, Ontology Manager, Object View, Workshop (dashboards, action forms), and Data Connection.
Knowledge of Power BI, ETL processes, RLS, and Dataflow would be an added advantage.
Strong hands-on experience with Python and PySpark, demonstrating the ability to write, debug, and optimize code for data analysis and transformation.
Competence in analyzing data and efficiently troubleshooting issues using PySpark and SQL.
Familiarity with data ingestion, including data loading expertise with Oracle databases, SharePoint, and API calls.
Comfortable working in Agile development methodologies, adapting to changing project requirements and priorities.
Effective verbal and written communication skills to collaborate with team members and stakeholders.
Ability to adhere to development best practices, including maintaining code standards, unit testing, integration testing, and quality assurance processes.
Primary Skills
Palantir tools
Python/PySpark
Database Management
Secondary Skills
Excellent verbal and written communication and interpersonal skills
Ability to work independently and within a team environment
Process Management: Good at figuring out the processes necessary to get things done; knows how to organize people and activities; knows what to measure and how to measure it; can simplify complex processes; gets more out of fewer resources.
Problem Solving: Uses rigorous logic and methods to solve difficult problems with effective solutions; probes all fruitful sources for answers; can see hidden problems; is excellent at honest analysis; looks beyond the obvious and doesn't stop at the first answers.
Decision Quality: Makes good decisions based upon a mixture of analysis, wisdom, experience, and judgment.
Drive for Results: Can be counted on to exceed goals successfully.
Critical Thinking: The ability to analyze a situation and make a decision based on the information you have. As an automation engineer, you may be required to make decisions about how to best implement automation processes, and strong critical thinking skills help you make the best decision for your company.
Communication: An essential skill for automation engineers, who often work with engineers and professionals in other departments. Effective communication helps you collaborate with others, share ideas, and explain technical concepts.
Interpersonal Savvy: Relates well to all kinds of people; builds appropriate rapport.
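As a loose illustration of the Palantir Foundry ETL work referenced above, here is a minimal Python transform sketch in the style of Foundry's transforms API, deriving a cleaned dataset from a raw one. The dataset paths and column names are hypothetical, and the exact decorator and API details should be checked against the Foundry environment in use.

```python
# Minimal sketch of a Foundry-style PySpark transform with basic cleansing.
# Dataset paths and column names are hypothetical placeholders; verify the
# transforms API details against the target Foundry environment.
from pyspark.sql import functions as F
from transforms.api import Input, Output, transform_df


@transform_df(
    Output("/Example/clean/program_milestones"),   # hypothetical output path
    raw=Input("/Example/raw/program_milestones"),  # hypothetical input path
)
def compute(raw):
    # Deduplicate, drop rows without a key, normalize the date, and flag overdue items.
    cleaned = (
        raw.dropDuplicates(["milestone_id"])
        .filter(F.col("milestone_id").isNotNull())
        .withColumn("due_date", F.to_date("due_date"))
        .withColumn("is_overdue", F.col("due_date") < F.current_date())
    )
    return cleaned
```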
Posted 2 days ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
We are seeking experienced and talented engineers to join our team. Your main responsibilities will include designing, building, and maintaining the software that drives the global logistics industry.
WiseTech Global is a leading provider of software for the logistics sector, facilitating connectivity for major companies like DHL and FedEx within their supply chains. Our organization is product- and engineer-focused, with a strong commitment to enhancing the functionality and quality of our software through continuous innovation. Our primary Research and Development center in Bangalore plays a pivotal role in our growth strategies and product development roadmap.
As a Lead Software Engineer, you will serve as a mentor, a leader, and an expert in your field. You should be adept at communicating effectively with senior management while remaining hands-on with the code to deliver effective solutions.
The technical environment you will work in includes C#, Java, C++, Python, Scala, Spring, Spring Boot, Apache Spark, Hadoop, Hive, Delta Lake, Kafka, Debezium, GKE (Kubernetes Engine), Composer (Airflow), DataProc, DataStreams, DataFlow, MySQL RDBMS, MongoDB NoSQL (Atlas), UiPath, Helm, Flyway, Sterling, EDI, Redis, Elasticsearch, Grafana Dashboard, and Docker.
Before applying, please note that WiseTech Global may engage external service providers to assess applications. By submitting your application and personal information, you agree to WiseTech Global sharing this data with external service providers, who will handle it confidentially in compliance with privacy and data protection laws.
Posted 3 days ago