
5747 Airflow Jobs - Page 9

JobPe aggregates listings so they are easy to find, but you apply directly on the original job portal.

4.0 years

15 - 30 Lacs

Kochi, Kerala, India

Remote

Experience: 4+ years
Salary: INR 15,00,000-30,00,000 per year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance managed by NuStudio.AI)
(Note: This is a requirement for one of Uplers' clients, an AI-first, API-powered data platform.)

What do you need for this opportunity?
Must-have skills: Databricks, dbt, Delta Lake, Spark, Unity Catalog, AI, Airflow, Databricks Workflows, ETL/ELT, GCP (BigQuery, Pub/Sub, Dataflow, Cloud Storage, Cloud Functions), PySpark, AWS, Hadoop

The client is looking for: We're scaling our platform and seeking Data Engineers who are passionate about building high-performance data pipelines, products, and analytical pipelines in the cloud to power real-time AI systems.

As a Data Engineer, you'll:
- Build scalable ETL/ELT and streaming data pipelines using GCP (BigQuery, Pub/Sub, PySpark, Dataflow, Cloud Storage, Cloud Functions)
- Orchestrate data workflows with Airflow, Cloud Functions, or Databricks Workflows
- Work across batch and real-time architectures that feed LLMs and AI/ML systems
- Own feature engineering pipelines that power production models and intelligent agents
- Collaborate with platform and ML teams to design observability, lineage, and cost-aware performant solutions

Bonus: Experience with AWS, Databricks, or Hadoop (Delta Lake, Spark, dbt, Unity Catalog), or interest in building on them.

Why us?
- Production-grade data and AI solutions: your pipelines directly impact mission-critical, client-facing interactions
- Lean team, no red tape: build, own, ship
- Remote-first with an async culture that respects your time
- Competitive compensation and benefits

Our stack: Python, SQL, GCP/Azure/AWS, Spark, Kafka, Airflow, Databricks, dbt, Kubernetes, LangChain, LLMs

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview.

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help talent find and apply for relevant opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for those as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
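Since every listing on this page centers on Airflow orchestration, here is a minimal sketch of the kind of daily ETL DAG such roles describe. All names (DAG ID, task logic, the data sources in the comments) are hypothetical placeholders, not details from any listing.

```python
# Minimal sketch of a daily ETL DAG; all names are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder: pull the day's raw events (e.g., from Cloud Storage).
    print(f"extracting partition {context['ds']}")


def transform(**context):
    # Placeholder: clean and aggregate (PySpark/Dataflow in a real pipeline).
    print("transforming")


def load(**context):
    # Placeholder: load results into a warehouse table (e.g., BigQuery).
    print("loading")


with DAG(
    dag_id="example_events_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # `schedule_interval` on Airflow versions before 2.4
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> transform_task >> load_task
```

The `>>` operator declares task dependencies, so Airflow runs extract, transform, and load in order and retries or backfills each task independently.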

Posted 3 days ago

Apply

4.0 years

15 - 30 Lacs

Greater Bhopal Area

Remote

Same remote Data Engineer role via Uplers (client: NuStudio.AI) as the first listing on this page; the full description is identical.

Posted 3 days ago

Apply

4.0 years

15 - 30 Lacs

Indore, Madhya Pradesh, India

Remote

Same remote Data Engineer role via Uplers (client: NuStudio.AI) as the first listing on this page; the full description is identical.

Posted 3 days ago

Apply

4.0 years

15 - 30 Lacs

Visakhapatnam, Andhra Pradesh, India

Remote

Same remote Data Engineer role via Uplers (client: NuStudio.AI) as the first listing on this page; the full description is identical.

Posted 3 days ago

Apply

4.0 years

15 - 30 Lacs

Chandigarh, India

Remote

Same remote Data Engineer role via Uplers (client: NuStudio.AI) as the first listing on this page; the full description is identical.

Posted 3 days ago

Apply

4.0 years

15 - 30 Lacs

Dehradun, Uttarakhand, India

Remote

Same remote Data Engineer role via Uplers (client: NuStudio.AI) as the first listing on this page; the full description is identical.

Posted 3 days ago

Apply

4.0 years

15 - 30 Lacs

Mysore, Karnataka, India

Remote

Same remote Data Engineer role via Uplers (client: NuStudio.AI) as the first listing on this page; the full description is identical.

Posted 3 days ago

Apply

4.0 years

15 - 30 Lacs

Thiruvananthapuram, Kerala, India

Remote

Same remote Data Engineer role via Uplers (client: NuStudio.AI) as the first listing on this page; the full description is identical.

Posted 3 days ago

Apply

4.0 years

15 - 30 Lacs

Vijayawada, Andhra Pradesh, India

Remote

Same remote Data Engineer role via Uplers (client: NuStudio.AI) as the first listing on this page; the full description is identical.

Posted 3 days ago

Apply

4.0 years

15 - 30 Lacs

Patna, Bihar, India

Remote

Same remote Data Engineer role via Uplers (client: NuStudio.AI) as the first listing on this page; the full description is identical.

Posted 3 days ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Title: Specialty Development Senior 34263
Location: Chennai
Employment Type: Full-Time (Hybrid)

Job Overview:
We are looking for an experienced GCP Data Engineer to join a global data engineering team responsible for building a sophisticated data warehouse and analytics platform on Google Cloud Platform (GCP). This role is ideal for professionals with a strong background in data engineering, cloud migration, and large-scale data transformation, particularly within cloud-native environments.

Key Responsibilities:
- Design, build, and optimize data pipelines on GCP to support large-scale data transformations and analytics.
- Lead the migration and modernization of legacy systems to cloud-based architecture.
- Collaborate with cross-functional global teams to support data-driven applications and enterprise analytics solutions.
- Work with large datasets to enable platform capabilities and business insights using GCP tools.
- Ensure data quality, integrity, and performance across the end-to-end data lifecycle.
- Apply agile development principles to rapidly deliver and iterate on data solutions.
- Promote engineering best practices in CI/CD, DevSecOps, and cloud deployment strategies.

Must-Have Skills:
- GCP services: BigQuery, Dataflow, Dataproc, Data Fusion, Cloud Composer, Cloud Functions, Cloud SQL, Cloud Spanner, Cloud Storage, Bigtable, Pub/Sub, App Engine, Compute Engine, Airflow
- Programming and data engineering: 5+ years in data engineering and SQL development; experience building data warehouses and ETL processes
- Cloud experience: minimum 3 years in cloud environments (preferably GCP), implementing production-scale data solutions
- Strong understanding of batch and real-time data processing architectures and tools such as Terraform, Cloud Build, and Airflow
- Experience with containerized microservices architecture
- Excellent problem-solving skills and ability to optimize complex data pipelines
- Strong interpersonal and communication skills, with the ability to work effectively in a globally distributed team
- Proven ability to work independently in high-ambiguity scenarios and drive solutions proactively

Preferred Skills:
- GCP certification (e.g., Professional Data Engineer)
- Experience in regulated or financial domains
- Migration experience from Teradata to GCP
- Programming experience with Python, Java, Apache Beam
- Familiarity with data governance, security, and compliance in cloud environments
- Experience coaching and mentoring junior data engineers
- Knowledge of software architecture, CI/CD, source control (Git), and secure coding standards
- Exposure to Java full-stack development (Spring Boot, microservices, React)
- Agile development experience, including pair programming, TDD, and DevSecOps
- Proficiency in test automation tools such as Selenium, Cucumber, REST Assured
- Familiarity with other cloud platforms such as AWS or Azure is a plus

Education: Bachelor's degree in Computer Science, Information Technology, or a related field (mandatory)

Skills: Python, GCP certification, microservices architecture, Terraform, Airflow, data processing architectures, test automation tools, SQL development, cloud environments, agile development, CI/CD, GCP services (BigQuery, Dataflow, Dataproc, Data Fusion, Cloud Composer, Cloud Functions, Cloud SQL, Cloud Spanner, Cloud Storage, Bigtable, Pub/Sub, App Engine, Compute Engine, Airflow), Apache Beam, Git, communication, problem-solving, data engineering, analytics, data governance, ETL processes, GCP, Cloud Build, Java
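As a concrete taste of the BigQuery work this listing centers on, here is a minimal sketch using the google-cloud-bigquery Python client. The project, dataset, and table names are hypothetical, and credentials are assumed to come from the default environment.

```python
# Minimal sketch: run an aggregation in BigQuery and read back the results.
# Project/dataset/table names are hypothetical placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")  # uses default credentials

query = """
    SELECT DATE(event_ts) AS day, COUNT(*) AS events
    FROM `example-project.analytics.raw_events`
    GROUP BY day
    ORDER BY day DESC
    LIMIT 7
"""

for row in client.query(query).result():  # blocks until the query job finishes
    print(row.day, row.events)
```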

Posted 3 days ago

Apply

4.0 - 8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job ID: Pyt-ETP-Ban-1095
Location: Pune

Company Overview:
Bridgenext is a global consulting company that provides technology-empowered business solutions for world-class organizations. Our global workforce of over 800 consultants provides best-in-class services to our clients to realize their digital transformation journey. Our clients span the emerging, mid-market, and enterprise space. With multiple offices worldwide, we are uniquely positioned to deliver digital solutions to our clients leveraging Microsoft, Java, and open source, with a focus on mobility, cloud, data engineering, and intelligent automation. Our singular mission is to create "Clients for Life": long-term relationships that deliver rapid, meaningful, and lasting business value. At Bridgenext, we have a unique blend of corporate and entrepreneurial cultures. This is where you would have an opportunity to drive business value for clients while you innovate and continue to grow, and have fun while doing it. You would work with team members who are vibrant, smart, and passionate, and who bring their passion to all that they do, whether it's learning, giving back to our communities, or always going the extra mile for our clients.

Position Description:
We are looking for members with hands-on data engineering experience who will work on internal and customer-based projects for Bridgenext. We are looking for someone who cares about code quality and is passionate about providing the best solution to meet the client's needs, anticipating their future needs based on an understanding of the market. Someone who has worked on Hadoop projects, including processing and data representation using various AWS services.

Must-Have Skills:
- 4-8 years of overall experience
- Strong programming experience with Python and the ability to write modular code following Python best practices, backed by unit tests with a high degree of coverage
- Knowledge of source control (Git/GitLab)
- Understanding of deployment patterns, along with knowledge of CI/CD and build tools
- Knowledge of Kubernetes concepts and commands is a must
- Knowledge of monitoring and alerting tools like Grafana and OpenTelemetry is a must
- Knowledge of Astro/Airflow is a plus
- Knowledge of data governance is a plus
- Experience with cloud providers, preferably AWS
- Experience with PySpark, Snowflake, and dbt is good to have

Professional Skills:
- Solid written, verbal, and presentation communication skills
- Strong team and individual player
- Maintains composure during all types of situations and is collaborative by nature
- High standards of professionalism, consistently producing high-quality results
- Self-sufficient and independent, requiring very little supervision or intervention
- Demonstrates flexibility and openness to bring creative solutions to address issues
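Given the listing's emphasis on modular Python backed by unit tests with high coverage, here is a small hedged sketch of that working style. The transform function and its tests are invented for illustration; the tests run with pytest.

```python
# Hypothetical example of a small, testable unit of pipeline logic.
def normalize_amounts(records, rate):
    """Convert each record's amount using a fixed rate, skipping bad rows."""
    out = []
    for rec in records:
        amount = rec.get("amount")
        if isinstance(amount, (int, float)):
            out.append({**rec, "amount": round(amount * rate, 2)})
    return out


# Unit tests kept alongside the module; run with `pytest`.
def test_converts_valid_amounts():
    rows = [{"id": 1, "amount": 10.0}]
    assert normalize_amounts(rows, rate=2.0) == [{"id": 1, "amount": 20.0}]


def test_skips_rows_without_numeric_amount():
    rows = [{"id": 2, "amount": None}, {"id": 3}]
    assert normalize_amounts(rows, rate=2.0) == []
```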

Posted 3 days ago

Apply

10.0 - 14.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

What Gramener offers you:
Gramener will offer you an inviting workplace, talented colleagues from diverse backgrounds, a career path, and steady growth prospects with great scope to innovate. Our goal is to create an ecosystem of easily configurable data applications focused on storytelling for public and private use.

Cloud Lead - Analytics & Data Products
We're looking for a Cloud Architect/Lead to design, build, and manage scalable AWS infrastructure that powers our analytics and data product initiatives. This role focuses on automating infrastructure provisioning, application/API hosting, and enabling data and GenAI workloads through a modern, secure cloud environment.

Roles and Responsibilities:
- Design and provision AWS infrastructure using Terraform or AWS CloudFormation to support evolving data product needs.
- Develop and manage CI/CD pipelines using Jenkins, AWS CodePipeline, CodeBuild, or GitHub Actions.
- Deploy and host internal tools, APIs, and applications using ECS, EKS, Lambda, API Gateway, and ELB.
- Provision and support analytics and data platforms using S3, Glue, Redshift, Athena, Lake Formation, and orchestration tools like Step Functions or Apache Airflow (MWAA).
- Implement cloud security, networking, and compliance using IAM, VPC, KMS, CloudWatch, CloudTrail, and AWS Config.
- Collaborate with data engineers, ML engineers, and analytics teams to align infrastructure with application and data product requirements.
- Support GenAI infrastructure, including Amazon Bedrock, SageMaker, or integrations with APIs like OpenAI.

Skills and Qualifications:
- 10-14 years of experience in cloud engineering, DevOps, or cloud architecture roles.
- Hands-on expertise with the AWS ecosystem and tools listed above.
- Proficiency in scripting (e.g., Python, Bash) and infrastructure automation.
- Experience deploying containerized workloads using Docker, ECS, EKS, or Fargate.
- Familiarity with data engineering and GenAI workflows is a plus.
- AWS certifications (e.g., Solutions Architect, DevOps Engineer) are preferred.
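The role mixes declarative IaC (Terraform/CloudFormation) with scripted automation. As a small hedged sketch of the latter, here is a boto3 snippet that provisions a versioned, encrypted S3 bucket for a data platform. The bucket name and region are hypothetical, and in this role the same resources would normally be codified in Terraform rather than created ad hoc.

```python
# Hypothetical sketch: provision a versioned, encrypted S3 bucket with boto3.
# Production provisioning would usually live in Terraform/CloudFormation instead.
import boto3

s3 = boto3.client("s3", region_name="ap-south-1")

bucket = "example-analytics-raw"  # hypothetical name; must be globally unique
s3.create_bucket(
    Bucket=bucket,
    CreateBucketConfiguration={"LocationConstraint": "ap-south-1"},
)
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}]
    },
)
```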

Posted 3 days ago

Apply

7.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Additional Locations: India-Haryana, Gurgaon

Diversity - Innovation - Caring - Global Collaboration - Winning Spirit - High Performance

At Boston Scientific, we'll give you the opportunity to harness all that's within you by working in teams of diverse and high-performing employees, tackling some of the most important health industry challenges. With access to the latest tools, information, and training, we'll help you advance your skills and career. Here, you'll be supported in progressing, whatever your ambitions.

Software Engineer - MLOps
We are seeking an enthusiastic and detail-oriented MLOps Engineer to support the development, deployment, and monitoring of machine learning models in production environments. This is a hands-on role, ideal for candidates looking to grow their skills at the intersection of data science, software engineering, and DevOps. You will work closely with senior MLOps engineers, data scientists, and software developers to build scalable, reliable, and automated ML workflows across cloud platforms like AWS and Azure.

Key Responsibilities Include:
- Assist in building and maintaining ML pipelines for data preparation, training, testing, and deployment.
- Support the automation of model lifecycle tasks, including versioning, packaging, and monitoring.
- Build and manage ML workloads on AWS (SageMaker Unified Studio, Bedrock, EKS, Lambda, S3, Athena) and Azure (Azure ML Foundry, AKS, ADF, Blob Storage).
- Assist with containerizing ML models using Docker and deploying them using Kubernetes or cloud-native orchestrators.
- Manage infrastructure using IaC tools such as Terraform, Bicep, or CloudFormation.
- Participate in implementing CI/CD pipelines for ML workflows using GitHub Actions, Azure DevOps, or Jenkins.
- Contribute to testing frameworks for ML models and data validation (e.g., pytest, Great Expectations).
- Ensure robust CI/CD pipelines and infrastructure as code (IaC) using tools like Terraform or CloudFormation.
- Participate in diagnosing issues related to model accuracy, latency, or infrastructure bottlenecks.
- Continuously improve knowledge of MLOps tools, ML frameworks, and cloud practices.

Required Qualifications:
- Bachelor's/Master's in Computer Science, Engineering, or a related discipline.
- 7 years in DevOps, with 2+ years in MLOps.
- Good understanding of MLflow, Airflow, FastAPI, Docker, Kubernetes, and Git.
- Proficient in Python and familiar with Bash scripting.
- Exposure to MLOps platforms or tools such as SageMaker Studio, Azure ML, or GCP Vertex AI.

Requisition ID: 610751

As a leader in medical science for more than 40 years, we are committed to solving the challenges that matter most, united by a deep caring for human life. Our mission to advance science for life is about transforming lives through innovative medical solutions that improve patient lives, create value for our customers, and support our employees and the communities in which we operate. Now more than ever, we have a responsibility to apply those values to everything we do, as a global business and as a global corporate citizen. So, choosing a career with Boston Scientific (NYSE: BSX) isn't just business, it's personal. And if you're a natural problem-solver with the imagination, determination, and spirit to make a meaningful difference to people worldwide, we encourage you to apply and look forward to connecting with you!
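For readers new to the toolchain this listing names (MLflow, Airflow, Docker, Kubernetes), here is a minimal hedged sketch of MLflow experiment tracking, the model-lifecycle layer such roles automate. The experiment name, parameters, and metric value are invented for illustration.

```python
# Minimal sketch of MLflow experiment tracking (names/values are hypothetical).
import mlflow

mlflow.set_experiment("example-churn-model")

with mlflow.start_run(run_name="baseline"):
    # Record the configuration that produced this run.
    mlflow.log_param("model_type", "logistic_regression")
    mlflow.log_param("C", 1.0)

    # ... model training would happen here ...

    # Record an evaluation result so runs can be compared in the MLflow UI.
    mlflow.log_metric("val_auc", 0.87)

    # Model artifacts can also be logged for later packaging and deployment,
    # e.g. mlflow.sklearn.log_model(model, "model")
```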

Posted 3 days ago

Apply

10.0 years

0 Lacs

Delhi, India

On-site

Years of experience: 10 to 15
Skills required: Java, Python, HDFS, YARN, MapReduce, Hive, Kafka, Spark, Airflow, Presto, HLD, LLD, SQL, NoSQL, MongoDB, etc.
Preference: Tier 1 colleges/universities

Role & Responsibilities:
- Lead and mentor a team of data engineers, ensuring high performance and career growth.
- Architect and optimize scalable data infrastructure, ensuring high availability and reliability.
- Drive the development and implementation of data governance frameworks and best practices.
- Work closely with cross-functional teams to define and execute a data roadmap.
- Optimize data processing workflows for performance and cost efficiency.
- Ensure data security, compliance, and quality across all data platforms.
- Foster a culture of innovation and technical excellence within the data team.

Ideal Candidate:
- Candidates from Tier 1 colleges/universities preferred.
- MUST have experience in product startups, having implemented data engineering systems from an early stage in the company.
- MUST have 10+ years of experience in software/data engineering, with at least 3+ years in a leadership role.
- MUST have expertise in backend development with programming languages such as Java, PHP, Python, Node.js, GoLang, JavaScript, HTML, and CSS.
- MUST be proficient in SQL, Python, and Scala for data processing and analytics.
- Strong understanding of cloud platforms (AWS, GCP, or Azure) and their data services.
- MUST have a strong foundation and expertise in HLD and LLD, as well as design patterns, preferably using Spring Boot or Google Guice.
- MUST have experience in big data technologies such as Spark, Hadoop, Kafka, and distributed computing frameworks.
- Hands-on experience with data warehousing solutions such as Snowflake, Redshift, or BigQuery.
- Deep knowledge of data governance, security, and compliance (GDPR, SOC 2, etc.).
- Experience with NoSQL databases like Redis, Cassandra, MongoDB, and TiDB.
- Familiarity with automation and DevOps tools like Jenkins, Ansible, Docker, Kubernetes, Chef, Grafana, and ELK.
- Proven ability to drive technical strategy and align it with business objectives.
- Strong leadership, communication, and stakeholder management skills.

Preferred Qualifications:
- Experience in machine learning infrastructure or MLOps is a plus.
- Exposure to real-time data processing and analytics.
- Interest in data structures, algorithm analysis and design, multicore programming, and scalable architecture.
- Prior experience in a SaaS or high-growth tech company.
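To illustrate the Spark/Hive-style batch processing this stack implies, here is a minimal hedged PySpark job. The HDFS paths and column names are hypothetical placeholders.

```python
# Minimal PySpark batch job sketch (paths and column names are hypothetical).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-daily-agg").getOrCreate()

# Read one day's raw partition from HDFS.
events = spark.read.parquet("hdfs:///data/raw/events/dt=2024-01-01")

# Filter bad rows, then aggregate per user.
daily = (
    events.filter(F.col("status") == "ok")
    .groupBy("user_id")
    .agg(
        F.count("*").alias("events"),
        F.sum("amount").alias("total_amount"),
    )
)

# Write the curated partition back out for downstream consumers.
daily.write.mode("overwrite").parquet(
    "hdfs:///data/curated/daily_user_agg/dt=2024-01-01"
)
spark.stop()
```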

Posted 3 days ago

Apply

8.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Engineering at Innovaccer
With every line of code, we accelerate our customers' success, turning complex challenges into innovative solutions. Collaboratively, we transform each data point we gather into valuable insights for our customers. Join us and be part of a team that's turning dreams of better healthcare into reality, one line of code at a time. Together, we're shaping the future and making a meaningful impact on the world.

About the Role
We're on a mission to completely change the way healthcare works by building the most powerful Healthcare Intelligence Platform (Gravity) ever made. Using an AI-first approach, our goal is to turn complicated health data into real-time insights that help hospitals, clinics, pharmaceutical companies, and researchers make faster, smarter decisions.

We're building a unified platform from the ground up, specifically for healthcare. This platform will bring together everything from:
- Collecting data from different systems (data acquisition)
- Combining and cleaning it (integration, data quality)
- Managing patient records and provider info (master data management)
- Tagging and organizing it (data classification and governance)
- Running analytics and building AI models (analytics, AI Studio)
- Creating custom healthcare apps (app marketplace)
- Using AI as a built-in assistant (AI as BI, agent-first approach)

This platform will let healthcare teams build solutions quickly, without starting from scratch each time. For example, they'll be able to:
- Track and manage kidney disease patients across different hospitals
- Speed up clinical trials by analyzing real-world patient data
- Help pharmacies manage their stock better with predictive supply chain tools
- Detect early signs of diseases like diabetes or cancer with machine learning
- Ensure regulatory compliance automatically through built-in checks

This is a huge, complex, and high-impact challenge, and we're looking for a Staff Engineer to help lead the way. In this role, you'll:
- Design and build scalable, secure, and reliable systems
- Create core features like data quality checks, metadata management, data lineage tracking, and privacy/compliance layers
- Work closely with other engineers, product managers, and healthcare experts to bring the platform to life

If you're passionate about using technology to make a real difference in the world, and enjoy solving big engineering problems, we'd love to connect.

A Day in the Life
- Architect, design, and build scalable data tools and frameworks.
- Collaborate with cross-functional teams to ensure data compliance, security, and usability.
- Lead initiatives around metadata management, data lineage, and data cataloging.
- Define and evangelize standards and best practices across data engineering teams.
- Own the end-to-end lifecycle of tooling, from prototyping to production deployment.
- Mentor and guide junior engineers and contribute to technical leadership across the organization.
- Drive innovation in privacy-by-design, regulatory compliance (e.g., HIPAA), and data observability solutions.

What You Need
- 8+ years of experience in software engineering, with strong experience building distributed systems.
- Proficiency in backend development (Python, Java, Scala, or Go) and familiarity with RESTful API design.
- Expertise in modern data stacks: Kafka, Spark, Airflow, Snowflake, etc.
- Experience with open-source data governance frameworks like Apache Atlas, Amundsen, or DataHub is a big plus.
- Familiarity with cloud platforms (AWS, Azure, GCP) and their native data governance offerings.
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.

Here's What We Offer
- Generous leaves: enjoy generous leave benefits of up to 40 days.
- Parental leave: leverage one of the industry's best parental leave policies to spend time with your new addition.
- Sabbatical: want to focus on skill development, pursue an academic career, or just take a break? We've got you covered.
- Health insurance: comprehensive health insurance to support you and your family, covering medical expenses related to illness, disease, or injury, and extending support to the family members who matter most.
- Care program: whether it's a celebration or a time of need, we've got you covered with care vouchers to mark major life events. Through our Care Vouchers program, employees receive thoughtful gestures for significant personal milestones and moments of need.
- Financial assistance: life happens, and when it does, we're here to help. Our financial assistance policy offers support through salary advances and personal loans for genuine personal needs, ensuring help is there when you need it most.

Innovaccer is an equal-opportunity employer. We celebrate diversity, and we are committed to fostering an inclusive and diverse workplace where all employees, regardless of race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, marital status, or veteran status, feel valued and empowered.

Disclaimer: Innovaccer does not charge fees or require payment from individuals or agencies for securing employment with us. We do not guarantee job spots or engage in any financial transactions related to employment. If you encounter any posts or requests asking for payment or personal information, we strongly advise you to report them immediately to our HR department at px@innovaccer.com. Additionally, please exercise caution and verify the authenticity of any requests before disclosing personal and confidential information, including bank account details.

About Innovaccer
Innovaccer activates the flow of healthcare data, empowering providers, payers, and government organizations to deliver intelligent and connected experiences that advance health outcomes. The Healthcare Intelligence Cloud equips every stakeholder in the patient journey to turn fragmented data into proactive, coordinated actions that elevate the quality of care and drive operational performance. Leading healthcare organizations like CommonSpirit Health, Atlantic Health, and Banner Health trust Innovaccer to integrate a system of intelligence into their existing infrastructure, extending the human touch in healthcare. For more information, visit www.innovaccer.com. Check us out on YouTube, Glassdoor, LinkedIn, Instagram, and the web.
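Since the Staff Engineer role above centers on building data quality checks as platform features, here is a small hedged sketch of the kind of rule such features generalize: a null-rate check over a column. The column rules, threshold, and sample data are invented for illustration.

```python
# Hypothetical sketch of a rule-based data-quality check using pandas.
import pandas as pd


def null_rate_check(df: pd.DataFrame, column: str, max_null_rate: float) -> dict:
    """Flag a column whose share of nulls exceeds a threshold."""
    rate = df[column].isna().mean()
    return {
        "column": column,
        "null_rate": round(float(rate), 4),
        "passed": bool(rate <= max_null_rate),
    }


patients = pd.DataFrame(
    {"patient_id": [1, 2, 3, 4], "dob": ["1990-01-01", None, None, "1985-06-12"]}
)

result = null_rate_check(patients, "dob", max_null_rate=0.25)
print(result)  # {'column': 'dob', 'null_rate': 0.5, 'passed': False}
```

A platform-grade version would register many such rules per dataset, run them in the pipeline, and feed failures into lineage and observability tooling.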

Posted 3 days ago

Apply

7.0 - 10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

We're looking for a Cloud Architect/Lead to design, build, and manage scalable AWS infrastructure that powers our analytics and data product initiatives. This role focuses on automating infrastructure provisioning, application/API hosting, and enabling data and GenAI workloads through a modern, secure cloud environment.

Key Responsibilities
- Design and provision AWS infrastructure using Terraform or AWS CloudFormation to support evolving data product needs.
- Develop and manage CI/CD pipelines using Jenkins, AWS CodePipeline, CodeBuild, or GitHub Actions.
- Deploy and host internal tools, APIs, and applications using ECS, EKS, Lambda, API Gateway, and ELB.
- Provision and support analytics and data platforms using S3, Glue, Redshift, Athena, Lake Formation, and orchestration tools like Step Functions or Apache Airflow (MWAA).
- Implement cloud security, networking, and compliance using IAM, VPC, KMS, CloudWatch, CloudTrail, and AWS Config.
- Collaborate with data engineers, ML engineers, and analytics teams to align infrastructure with application and data product requirements.
- Support GenAI infrastructure, including Amazon Bedrock, SageMaker, or integrations with APIs like OpenAI.

Requirements
- 7-10 years of experience in cloud engineering, DevOps, or cloud architecture roles.
- Strong hands-on expertise with the AWS ecosystem and tools listed above.
- Proficiency in scripting (e.g., Python, Bash) and infrastructure automation.
- Experience deploying containerized workloads using Docker, ECS, EKS, or Fargate.
- Familiarity with data engineering and GenAI workflows is a plus.
- AWS certifications (e.g., Solutions Architect, DevOps Engineer) are preferred.

Posted 3 days ago

Apply

7.0 - 13.0 years

10 - 17 Lacs

Gurgaon, Haryana, India

On-site

About us: Coders Brain is a global leader in services, digital, and business solutions, partnering with its clients to simplify, strengthen, and transform their businesses. We ensure the highest levels of certainty and satisfaction through a deep-set commitment to our clients, comprehensive industry expertise, and a global network of innovation and delivery centers. We achieved our success because of how successfully we integrate with our clients.
- Quick implementation: we offer quick implementation for new onboarding clients.
- Experienced team: we've built an elite and diverse team that brings its unique blend of talent, expertise, and experience to make you more successful, ensuring our services are uniquely customized to your specific needs.
- One-stop solution: Coders Brain provides end-to-end solutions for businesses at an affordable price, with uninterrupted and effortless services.
- Ease of use: all of our products are user friendly and scalable across multiple platforms. Our dedicated team at Coders Brain implements them keeping the interests of enterprises and users in mind.
- Secure: we understand and treat your security with utmost importance, blending security and scalability in our implementations while considering the long-term impact on business benefit.

Experience: 7+ years
Role: Data Engineer
Location: Gurgaon
Employment: Permanent, with Codersbrain Technology Pvt. Ltd.
Client: EMB Global

Job Description
Must have:
- Excellent communication skills, with the ability to interact directly with customers
- Azure/AWS Databricks
- Python / Scala / Spark / PySpark
- Strong SQL and RDBMS expertise
- Hive / HBase / Impala / Parquet
- Sqoop, Kafka, Flume
- Airflow
- Jenkins or Bamboo
- GitHub or Bitbucket
- Nexus

Good to have:
- Relevant accredited certifications for Azure, AWS, cloud engineering, and/or Databricks
- Knowledge of Delta Live Tables (DLT)

If you're interested, please share the following details:
- Current CTC
- Expected CTC
- Current company
- Notice period
- Current location
- Preferred location
- Total experience
- Relevant experience
- Highest qualification
- DOJ (if an offer from another company is in hand)

Posted 3 days ago

Apply

6.0 years

0 Lacs

Jaipur, Rajasthan, India

On-site

JOB DESCRIPTION: DATA ENGINEER (Databricks & AWS)

Overview: As a Data Engineer, you will work with multiple teams to deliver solutions on the AWS Cloud using core cloud data engineering tools such as Databricks on AWS, AWS Glue, Amazon Redshift, Athena, and other Big Data-related technologies. This role focuses on building the next generation of application-level data platforms and improving recent implementations. Hands-on experience with Apache Spark (PySpark, SparkSQL), Delta Lake, Iceberg, and Databricks is essential.

Locations: Jaipur, Pune, Hyderabad, Bangalore, Noida.

Responsibilities:
- Define, design, develop, and test software components/applications using AWS-native data services: Databricks on AWS, AWS Glue, Amazon S3, Amazon Redshift, Athena, AWS Lambda, Secrets Manager.
- Build and maintain ETL/ELT pipelines for both batch and streaming data.
- Work with structured and unstructured datasets at scale.
- Apply data modeling principles and advanced SQL techniques.
- Implement and manage pipelines using Apache Spark (PySpark, SparkSQL) and Delta Lake/Iceberg formats.
- Collaborate with product teams to understand requirements and deliver optimized data solutions.
- Utilize CI/CD pipelines with DBX and AWS for continuous delivery and deployment of Databricks code.
- Work independently with minimal supervision and strong ownership of deliverables.

Must Have:
- 6+ years of experience in data engineering on AWS Cloud.
- Hands-on expertise in Apache Spark (PySpark, SparkSQL), Delta Lake/Iceberg formats, Databricks on AWS, AWS Glue, Amazon Athena, and Amazon Redshift.
- Strong SQL skills and performance-tuning experience on large datasets.
- Good understanding of CI/CD pipelines, especially using DBX and AWS tools.
- Experience with environment setup, cluster management, user roles, and authentication in Databricks.
- Databricks Certified Data Engineer – Professional certification (mandatory).

Good To Have:
- Experience migrating ETL pipelines from on-premises or other clouds to AWS Databricks.
- Experience with Databricks ML or Spark 3.x upgrades.
- Familiarity with Airflow, Step Functions, or other orchestration tools.
- Experience integrating Databricks with AWS services in a secured, production-ready environment.
- Experience with monitoring and cost optimization in AWS.

Key Skills:
- Languages: Python, SQL, PySpark
- Big data tools: Apache Spark, Delta Lake, Iceberg
- Databricks on AWS
- AWS services: AWS Glue, Athena, Redshift, Lambda, S3, Secrets Manager
- Version control and CI/CD: Git, DBX, AWS CodePipeline/CodeBuild
- Other: data modeling, ETL methodology, performance optimization
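As a small illustration of the Delta Lake work this listing asks for, here is a hedged PySpark sketch that writes and reads a Delta table. It assumes a runtime with Delta support (Databricks, or open-source Spark with the delta-spark package configured); the table path, columns, and values are hypothetical.

```python
# Hypothetical Delta Lake sketch; assumes a Delta-enabled Spark runtime
# (e.g., Databricks, or Spark with the delta-spark package configured).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-example").getOrCreate()

orders = spark.createDataFrame(
    [(1, "open", 120.0), (2, "shipped", 50.5)],
    ["order_id", "status", "amount"],
)

# Write as a Delta table; `overwrite` replaces the current snapshot, but
# Delta retains previous versions, enabling time travel and audits.
orders.write.format("delta").mode("overwrite").save("/tmp/delta/orders")

# Read the table back; an earlier snapshot can be pinned for time travel
# with .option("versionAsOf", 0) before .load().
latest = spark.read.format("delta").load("/tmp/delta/orders")
latest.show()
```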

Posted 3 days ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka, India

On-site

You will be responsible for leading the delivery of complex solutions by coding larger features from start to finish. You will actively participate in planning and perform code and architecture reviews of your team's product, and you will help ensure the quality and integrity of the Software Development Life Cycle (SDLC) for your team by identifying opportunities to improve how the team works through recommended tools and practices. You will also lead the triage of complex production issues across systems and demonstrate creativity and initiative in solving complex problems. As a high performer, you will consistently deliver a high volume of story points relative to your team. Staying aware of the technology landscape, you will plan the delivery of coarse-grained business needs spanning multiple applications, influence technical peers outside your team, and set a consistent example of agile development practices. Coaching other engineers to work as a team with Product and UX will be part of your responsibilities. Furthermore, you will create and enhance internal libraries and tools, provide technical leadership on the product, and determine the technical approach. Proactively communicating status and issues to your manager, collaborating with other teams to find creative solutions to customer issues, and showing commitment to delivery deadlines, especially the seasonal and vendor partner deadlines critical to Best Buy's continued success, will be essential.

Basic Qualifications:
- 5+ years of relevant technical professional experience with a bachelor's degree, OR equivalent professional experience.
- 2+ years of experience with Google Cloud services including Dataflow, BigQuery, and Looker.
- 1+ years of experience with Adobe Analytics, Content Square, or similar technologies.
- Hands-on experience with data engineering and visualization tools such as SQL, Airflow, dbt, Power BI, Tableau, and Looker.
- Strong understanding of real-time data processing and issue detection (see the sketch below).
- Expertise in data architecture, database design, data quality standards and implementation, and data modeling.

Preferred Qualifications:
- Experience working in an omni-channel retail environment.
- Experience connecting technical issues with business performance metrics.
- Experience with Forsta or similar customer feedback systems.
- Certification in Google Cloud Platform services.
- Good understanding of data governance, data privacy laws and regulations, and best practices.

About Best Buy:
BBY India is a service provider to Best Buy, and as part of the team working on Best Buy projects and initiatives, you will help fulfill Best Buy's purpose to enrich lives through technology. Every day, you will humanize and personalize tech solutions for every stage of life in Best Buy stores, online, and in Best Buy customers' homes. Best Buy is a place where techies can make technology more meaningful in the lives of millions of people. The unique culture at Best Buy unleashes the power of its people and provides fast-moving, collaborative, and inclusive experiences that empower employees of all backgrounds to make a difference, learn, and grow every day. Best Buy's culture is built on deeply supporting and valuing its amazing employees and other team members, and the company is committed to being a great place to work where you can unlock unique career possibilities. Above all, Best Buy aims to provide a place where people can bring their full, authentic selves to work now and into the future. Tomorrow works here.
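For a flavor of the "real-time data processing and issue detection" work this role names, here is a minimal, hypothetical sketch in Python of a volume-anomaly check against BigQuery. The project, table, and event_ts column are illustrative assumptions, not Best Buy's actual schema.

from google.cloud import bigquery

PROJECT = "my-gcp-project"        # hypothetical project id
TABLE = "analytics.daily_orders"  # hypothetical dataset.table

client = bigquery.Client(project=PROJECT)

# Compare today's row count against the trailing 7-day average and
# flag a potential ingestion issue if volume drops sharply.
sql = f"""
SELECT
  COUNTIF(DATE(event_ts) = CURRENT_DATE()) AS today_rows,
  COUNTIF(DATE(event_ts) BETWEEN DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY)
          AND DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY)) / 7 AS avg_rows
FROM `{PROJECT}.{TABLE}`
"""
row = list(client.query(sql).result())[0]
if row.avg_rows and row.today_rows < 0.5 * row.avg_rows:
    print("ALERT: today's volume is under half the 7-day average")

A check like this would typically run on a schedule (for example, from an Airflow DAG) and page the team rather than print.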

Posted 3 days ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

As a Data Engineer at Lifesight, you will play a crucial role in the Data and Business Intelligence organization by focusing on deep data engineering projects. Joining the data platform team in Bengaluru, you will have the opportunity to contribute to defining the technical strategy and data engineering team culture in India. Your responsibilities will include designing and constructing data platforms and services, as well as managing data infrastructure in cloud environments to support strategic business decisions across Lifesight products.

You will be expected to build highly scalable distributed data processing systems, data solutions, and data pipelines that optimize data quality and are resilient to poor-quality data sources. Additionally, you will own data mapping, business logic, transformations, and data quality, while participating in architecture discussions, influencing the product roadmap, and taking ownership of new projects.

The ideal candidate for this role should possess proficiency in Python and PySpark, a deep understanding of Apache Spark, experience with big data technologies such as HDFS, YARN, MapReduce, Hive, Kafka, Spark, Airflow, and Presto, and familiarity with distributed database systems. Experience working with file formats like Parquet and Avro, NoSQL databases, and AWS and GCP is preferred. A minimum of 5 years of professional experience as a data or software engineer is required for this full-time position.

If you are a self-starter who is passionate about data engineering, ready to work with big data technologies, and eager to collaborate with a team of engineers while mentoring others, we encourage you to apply for this exciting opportunity at Lifesight.
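As an illustration of the pipeline work described above, here is a minimal PySpark batch sketch. The bucket paths, the event_type and event_ts columns, and the aggregation are placeholders chosen for the example, not Lifesight's actual data model.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events_daily_agg").getOrCreate()

# Read raw Parquet events (hypothetical source path).
events = spark.read.parquet("s3a://raw-bucket/events/")

daily = (
    events
    .filter(F.col("event_type").isNotNull())       # tolerate poor-quality rows
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "event_type")
    .agg(F.count("*").alias("event_count"))
)

# Partition by date so downstream readers can prune efficiently.
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3a://curated-bucket/events_daily/"
)

In production, a job like this would usually be scheduled by Airflow and pass data-quality checks before the output is published.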

Posted 3 days ago

Apply

3.0 - 9.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Category: Testing/Quality Assurance
Main location: India, Karnataka, Bangalore (Chennai also available)
Position ID: J0725-1442
Employment Type: Full Time

Company Profile: Founded in 1976, CGI is among the largest independent IT and business consulting services firms in the world. With 94,000 consultants and professionals across the globe, CGI delivers an end-to-end portfolio of capabilities, from strategic IT and business consulting to systems integration, managed IT and business process services, and intellectual property solutions. CGI works with clients through a local relationship model complemented by a global delivery network that helps clients digitally transform their organizations and accelerate results. CGI's Fiscal 2024 reported revenue is CA$14.68 billion, and CGI shares are listed on the TSX (GIB.A) and the NYSE (GIB). Learn more at cgi.com.

Job Title: ETL Testing Engineer
Position: Senior Test Engineer
Experience: 3-9 years
Shift: 1-10 pm (UK shift)

Position Description: We are looking for an experienced DataStage tester to join our team. The ideal candidate should be passionate about coding and about testing scalable, high-performance applications.

Your future duties and responsibilities:
- Develop and execute ETL test cases to validate data extraction, transformation, and loading processes.
- Write complex SQL queries to verify data integrity, consistency, and correctness across source and target systems.
- Automate ETL testing workflows using Python, PyTest, or other testing frameworks (see the sketch after this posting).
- Perform data reconciliation, schema validation, and data quality checks.
- Identify and report data anomalies, performance bottlenecks, and defects.
- Work closely with Data Engineers, Analysts, and Business Teams to understand data requirements.
- Design and maintain test data sets for validation.
- Implement CI/CD pipelines for automated ETL testing (Jenkins, GitLab CI, etc.).
- Document test results, defects, and validation reports.

Required qualifications to be successful in this role:
- ETL Testing: Strong experience in testing Informatica, Talend, SSIS, Databricks, or similar ETL tools.
- SQL: Advanced SQL skills (joins, aggregations, subqueries, stored procedures).
- Python: Proficiency in Python for test automation (Pandas, PySpark, PyTest).
- Databases: Hands-on experience with RDBMS (Oracle, SQL Server, PostgreSQL) and NoSQL (MongoDB, Cassandra).
- Big Data Testing (good to have): Hadoop, Hive, Spark, Kafka.
- Testing Tools: Knowledge of Selenium, Airflow, Great Expectations, or similar frameworks.
- Version Control: Git, GitHub/GitLab.
- CI/CD: Jenkins, Azure DevOps, or similar.
- Soft Skills: Strong analytical and problem-solving skills, the ability to work in Agile/Scrum environments, and good communication skills for cross-functional collaboration.

Preferred Qualifications:
- Experience with cloud platforms (AWS, Azure).
- Knowledge of Data Warehousing concepts (Star Schema, Snowflake Schema).
- Certification in ETL Testing, SQL, or Python is a plus.

Skills: Data Warehousing, MS SQL Server, Python

What you can expect from us: Together, as owners, let's turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect, and belonging. Here, you'll reach your full potential because you are invited to be an owner from day 1 as we work together to bring our Dream to life. That's why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company's strategy and direction. Your work creates value: you'll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You'll shape your career by joining a company built to grow and last, supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team, one of the largest IT and business consulting services firms in the world.
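As a concrete (and purely hypothetical) sketch of the automated checks these duties describe, the PyTest module below reconciles a source system against a target warehouse with SQL. The connection URLs, table names, and the order_id key are assumptions for illustration only.

import pytest
import sqlalchemy

SOURCE_URL = "postgresql://user:pass@source-host/src_db"  # assumed source
TARGET_URL = "postgresql://user:pass@target-host/dwh_db"  # assumed warehouse

@pytest.fixture(scope="module")
def engines():
    return (sqlalchemy.create_engine(SOURCE_URL),
            sqlalchemy.create_engine(TARGET_URL))

def test_row_counts_match(engines):
    # The loaded row count should equal the extracted row count.
    src, tgt = engines
    with src.connect() as s, tgt.connect() as t:
        src_n = s.execute(sqlalchemy.text("SELECT COUNT(*) FROM orders")).scalar()
        tgt_n = t.execute(sqlalchemy.text("SELECT COUNT(*) FROM dwh.orders")).scalar()
    assert src_n == tgt_n, f"source={src_n} target={tgt_n}"

def test_no_null_business_keys(engines):
    # Business keys must survive the transformation intact.
    _, tgt = engines
    with tgt.connect() as t:
        nulls = t.execute(sqlalchemy.text(
            "SELECT COUNT(*) FROM dwh.orders WHERE order_id IS NULL")).scalar()
    assert nulls == 0

Tests like these slot naturally into the CI/CD pipelines mentioned above, failing the build when a load breaks reconciliation.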

Posted 3 days ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

You will be responsible for working on Google Cloud Platform (GCP) projects, with a focus on the following skills:
- Strong proficiency in BigQuery
- Experience building ETL pipelines into Google Cloud Platform
- Proficiency in SQL writing
- Knowledge of Python programming
- Familiarity with PostgreSQL and MongoDB databases
- Experience with dbt (Data Build Tool)
- Working knowledge of Airflow (see the example after this listing)
- Familiarity with Google Cloud Composer and Google Cloud Pub/Sub
- Experience in data extraction and transformation

Ideal candidates should have a notice period of 0 to 45 days and possess a bachelor's degree in a related field. This position is based in Bangalore. If you are interested in this opportunity, please send your resume to career@krazymantra.com.
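To make the Airflow/Composer/BigQuery combination concrete, here is a minimal DAG sketch using the Google provider's BigQueryInsertJobOperator. The schedule, dataset, and query are illustrative assumptions, not an actual client pipeline.

from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import (
    BigQueryInsertJobOperator,
)

with DAG(
    dag_id="daily_bq_transform",       # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",              # daily at 02:00 (Airflow 2.4+ syntax)
    catchup=False,
) as dag:
    aggregate_orders = BigQueryInsertJobOperator(
        task_id="aggregate_orders",
        configuration={
            "query": {
                "query": (
                    "CREATE OR REPLACE TABLE mart.daily_orders AS "
                    "SELECT order_date, COUNT(*) AS orders "
                    "FROM staging.orders GROUP BY order_date"
                ),
                "useLegacySql": False,
            }
        },
    )

On Cloud Composer, a DAG file like this is simply dropped into the environment's DAGs bucket to be scheduled.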

Posted 3 days ago

Apply

3.0 - 5.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Expedia Group brands power global travel for everyone, everywhere. We design cutting-edge tech to make travel smoother and more memorable, and we create groundbreaking solutions for our partners. Our diverse, vibrant, and welcoming community is essential in driving our success.

Why Join Us? To shape the future of travel, people must come first. Guided by our Values and Leadership Agreements, we foster an open culture where everyone belongs, differences are celebrated, and we know that when one of us wins, we all win. We provide a full benefits package, including exciting travel perks, generous time off, parental leave, a flexible work model (with some pretty cool offices), and career development resources, all to fuel our employees' passion for travel and ensure a rewarding career journey. We're building a more open world. Join us.

As an Infrastructure Engineer, you will be responsible for the technical design, planning, implementation, and optimization of performance tuning and recovery procedures for critical enterprise systems and applications. You will serve as the technical authority in system administration for complex SaaS, local, and cloud-based environments. Your role is critical in ensuring the high availability, reliability, and scalability of our infrastructure components. You will also be involved in designing the philosophies, tools, and processes that enable the rapid delivery of evolving products.

In this role you will:
- Design, configure, and document cloud-based infrastructures using AWS Virtual Private Cloud (VPC) and EC2 instances.
- Secure and monitor hosted production SaaS environments provided by third-party partners.
- Define, document, and manage network configurations within AWS VPCs and between VPCs and data center networks, including firewall, DNS, and ACL configurations.
- Lead the design and review of developer work on DevOps tools and practices.
- Ensure high availability and reliability of infrastructure components through monitoring and performance tuning.
- Implement and maintain security measures to protect infrastructure from threats.
- Collaborate with cross-functional teams to design and deploy scalable solutions.
- Automate repetitive tasks and improve processes using scripting languages such as Python, PowerShell, or Bash (a brief Python sketch follows this posting).
- Support Airflow DAGs in the Data Lake, utilizing the Spark framework and Big Data technologies.
- Provide support for infrastructure-related issues and conduct root cause analysis.
- Develop and maintain documentation for infrastructure configurations and procedures.
- Administer databases, handle data backups, monitor databases, and manage data rotation.
- Work with RDBMS and NoSQL systems, leading stateful data migration between different data systems.

Experience & Qualifications:
- Bachelor's or master's degree in Information Science, Computer Science, Business, or equivalent work experience.
- 3-5 years of experience with Amazon Web Services, particularly VPC, S3, EC2, and EMR.
- Experience setting up new VPCs and integrating them with existing networks is highly desirable.
- Experience maintaining infrastructure for Data Lake/Big Data systems built on the Spark framework and Hadoop technologies.
- Experience with Active Directory and LDAP setup, maintenance, and policies.
- Workday certification is preferred but not required.
- Exposure to Workday integrations and configuration is preferred.
- Strong knowledge of networking concepts and technologies.
- Experience with infrastructure automation tools (e.g., Terraform, Ansible, Chef).
- Familiarity with containerization technologies like Docker and Kubernetes.
- Excellent problem-solving skills and attention to detail.
- Strong verbal and written communication skills.
- Understanding of Agile project methodologies, including Scrum and Kanban, is required.

Accommodation requests: If you need assistance with any part of the application or recruiting process due to a disability or other physical or mental health conditions, please reach out to our Recruiting Accommodations Team through the Accommodation Request. We are proud to be named a Best Place to Work on Glassdoor in 2024 and to be recognized for our award-winning culture by organizations like Forbes, TIME, Disability:IN, and others.

Expedia Group's family of brands includes: Brand Expedia®, Hotels.com®, Expedia® Partner Solutions, Vrbo®, trivago®, Orbitz®, Travelocity®, Hotwire®, Wotif®, ebookers®, CheapTickets®, Expedia Group™ Media Solutions, Expedia Local Expert®, CarRentals.com™, and Expedia Cruises™. © 2024 Expedia, Inc. All rights reserved. Trademarks and logos are the property of their respective owners. CST: 2029030-50

Employment opportunities and job offers at Expedia Group will always come from Expedia Group's Talent Acquisition and hiring teams. Never provide sensitive, personal information to someone unless you're confident who the recipient is. Expedia Group does not extend job offers via email or any other messaging tools to individuals with whom we have not made prior contact. Our email domain is @expediagroup.com. The official website to find and apply for job openings at Expedia Group is careers.expediagroup.com/jobs. Expedia is committed to creating an inclusive work environment with a diverse workforce. All qualified applicants will receive consideration for employment without regard to race, religion, gender, sexual orientation, national origin, disability, or age.
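As one small, hypothetical example of the Python automation this posting mentions, the snippet below inventories running EC2 instances per VPC with boto3. The region is an assumption, and a real version would feed alerting rather than print.

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")  # assumed region

# List every VPC and count the running instances inside it, to spot
# unexpected or untagged capacity during routine checks.
# (Pagination omitted for brevity.)
for vpc in ec2.describe_vpcs()["Vpcs"]:
    vpc_id = vpc["VpcId"]
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "vpc-id", "Values": [vpc_id]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]
    running = sum(len(r["Instances"]) for r in reservations)
    print(f"{vpc_id}: {running} running instance(s)")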

Posted 3 days ago

Apply

6.0 - 10.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Senior Data Engineer at our Pune location, you will play a critical role in designing, developing, and maintaining scalable data pipelines and architectures using Databricks on Azure or AWS cloud platforms. With 6 to 9 years of experience in the field, you will collaborate with stakeholders to integrate large datasets, optimize performance, implement ETL/ELT processes, ensure data governance, and work closely with cross-functional teams to deliver accurate solutions.

Your responsibilities will include building, maintaining, and optimizing data workflows, integrating datasets from various sources, tuning pipelines for performance and scalability, implementing ETL/ELT processes using Spark and Databricks, ensuring data governance, collaborating with different teams, documenting data pipelines, and developing automated processes for continuous integration and deployment of data solutions.

To excel in this role, you should have 6 to 9 years of hands-on experience as a Data Engineer; expertise in Apache Spark, Delta Lake, and Azure/AWS Databricks; proficiency in Python, Scala, or Java; advanced SQL skills; and experience with cloud data platforms, data warehousing solutions, data modeling, ETL tools, version control systems, and automation tools. Soft skills such as problem-solving, attention to detail, and the ability to work in a fast-paced environment are also essential. Nice-to-have skills include experience with Databricks SQL and Databricks Delta, knowledge of machine learning concepts, and experience with CI/CD pipelines for data engineering solutions.

Joining our team offers challenging work with international clients, growth opportunities, a collaborative culture, and global project involvement. We provide competitive salaries, flexible work schedules, health insurance, performance-based bonuses, and other standard benefits. If you are passionate about data engineering, possess the required skills and qualifications, and thrive in a dynamic and innovative environment, we welcome you to apply for this exciting opportunity.
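For a sense of the Spark and Delta Lake work involved, here is a minimal upsert sketch using the Delta Lake Python API. The table paths and the customer_id key are placeholders, not this employer's actual data model.

from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Incremental batch staged as Parquet (hypothetical path).
updates = spark.read.parquet("/mnt/staging/customers_delta/")

# Existing curated Delta table (hypothetical path).
target = DeltaTable.forPath(spark, "/mnt/curated/customers")

# MERGE keeps the load idempotent: re-running the batch updates
# existing keys instead of duplicating rows.
(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)

The same pattern underpins many ETL/ELT workloads on Databricks, where idempotent merges simplify both reprocessing and CI/CD for pipelines.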

Posted 3 days ago
