
37875 Reliability Jobs - Page 46

Set up a job alert
JobPe aggregates listings for easy access; you apply directly on the original job portal.

3.0 years

25 Lacs

Kochi, Kerala, India

Remote

Experience: 3+ years
Salary: INR 25,00,000 per year (25 Lacs, based on experience)
Expected Notice Period: 7 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance managed by Texplorers)
(Note: This is a requirement for one of Uplers' clients.)

Must-have skills: Databricks, PySpark, Apache Spark, AWS, MySQL

About the Job
Please apply only if you are well experienced with Azure Synapse/Databricks and Apache Spark, and can join immediately (within a week).

Job Description
We are seeking a skilled Data Engineer proficient in Spark and SQL to join our team. You will be responsible for designing, implementing, and optimizing data pipelines on our data platform, working closely with data architects and other stakeholders to ensure data accessibility, reliability, and performance.

Key Responsibilities
- Data Pipeline Development: Design, develop, and maintain scalable data pipelines using Azure Synapse, Databricks, and Apache Spark (PySpark).
- Data Integration: Integrate data from various sources, ensuring data quality and consistency.
- Performance Optimization: Optimize data processing workflows for performance and cost efficiency.
- Collaboration: Work with data architects, analysts, and product owners to understand data requirements and deliver solutions.
- Monitoring and Troubleshooting: Monitor data pipelines and troubleshoot issues to ensure data integrity.
- Documentation: Document data workflows, processes, and best practices.

Technical Skills
- Proficiency in Azure Synapse/Databricks and Apache Spark.
- Strong PySpark and SQL skills for data manipulation and querying.
- Familiarity with Delta Live Tables and Databricks workflows.
- Experience with ETL tools and processes.
- Knowledge of cloud platforms (AWS, Azure, GCP).

Soft Skills
- Excellent problem-solving abilities.
- Strong communication and collaboration skills.
- Ability to work in a fast-paced environment and manage multiple priorities.

How to Apply
Step 1: Click "Apply" and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of being shortlisted and meet the client for the interview.

About Uplers
Our goal is to make hiring reliable, simple, and fast. We help our talent find and apply for relevant contractual onsite opportunities and progress in their careers, and we support any grievances or challenges you may face during the engagement. There are many more opportunities on the portal; depending on the assessments you clear, you can apply for those as well. If you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, apply today.
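The pipeline work this role describes, ingesting raw records, enforcing data-quality rules, and querying the clean table with SQL, can be sketched in miniature. This is an illustration only, in plain Python with SQLite standing in for the warehouse (the role itself uses PySpark against Synapse/Databricks); the table, columns, and sample rows are all hypothetical:

```python
import sqlite3

# Hypothetical raw feed; in production this would arrive via Spark
# from source systems, not an in-memory list.
raw_rows = [
    {"id": 1, "device": "pump-a", "temp_c": 71.2},
    {"id": 2, "device": "pump-a", "temp_c": None},  # fails the quality rule
    {"id": 3, "device": "pump-b", "temp_c": 64.8},
]

def run_pipeline(rows, conn):
    """Toy ETL step: validate rows, load the clean ones, return a summary."""
    clean = [r for r in rows if r["temp_c"] is not None]  # data-quality rule
    conn.execute("CREATE TABLE readings (id INT, device TEXT, temp_c REAL)")
    conn.executemany(
        "INSERT INTO readings VALUES (:id, :device, :temp_c)", clean
    )
    # A SQL aggregation of the kind the "strong SQL skills" bullet targets
    return conn.execute(
        "SELECT device, AVG(temp_c) FROM readings GROUP BY device ORDER BY device"
    ).fetchall()

summary = run_pipeline(raw_rows, sqlite3.connect(":memory:"))
print(summary)  # [('pump-a', 71.2), ('pump-b', 64.8)]
```

In a real Spark pipeline the same three stages (filter on a quality predicate, write, aggregate) map onto DataFrame transformations rather than SQLite calls, but the shape of the work is the same.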

Posted 6 days ago

Apply

3.0 years

25 Lacs

Indore, Madhya Pradesh, India

Remote


Posted 6 days ago

Apply

3.0 years

25 Lacs

Greater Bhopal Area

Remote


Posted 6 days ago

Apply

3.0 years

25 Lacs

Visakhapatnam, Andhra Pradesh, India

Remote


Posted 6 days ago

Apply

3.0 years

25 Lacs

Chandigarh, India

Remote


Posted 6 days ago

Apply

3.0 years

25 Lacs

Dehradun, Uttarakhand, India

Remote


Posted 6 days ago

Apply

3.0 years

25 Lacs

Mysore, Karnataka, India

Remote


Posted 6 days ago

Apply

3.0 years

25 Lacs

Vijayawada, Andhra Pradesh, India

Remote


Posted 6 days ago

Apply

3.0 years

25 Lacs

Patna, Bihar, India

Remote


Posted 6 days ago

Apply

3.0 years

25 Lacs

Thiruvananthapuram, Kerala, India

Remote


Posted 6 days ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Overview
We are seeking a skilled SAP S/4HANA Process Release Management Engineer to manage release processes, deployment strategies, and change management within SAP S/4HANA environments. The ideal candidate has expertise in SAP Solution Manager (SolMan), ChaRM (Change Request Management), and ITIL-based release management processes, ensuring smooth and controlled software releases and system updates.

Key Responsibilities
1. Release & Deployment Management
- Develop and maintain release schedules for SAP S/4HANA updates, enhancements, and patches.
- Implement and coordinate Change Request Management (ChaRM) to control SAP landscape changes effectively.
- Coordinate cross-functional teams to ensure smooth deployment of releases with minimal business disruption.
- Enforce governance and compliance in SAP release processes.
2. Process & Configuration Management
- Ensure best practices for SAP S/4HANA system configuration and custom development changes.
- Coordinate and manage transport and deployment strategies using the SAP Transport Management System (TMS).
- Oversee regression testing and impact analysis before releases go live.
3. Automation and Tool Management
- Collaborate with development teams to ensure effective use of automation tools (e.g., Solution Manager) for software development and deployment.
4. Issue Resolution and Support
- Act as an escalation point for release-related queries and provide L2/L3 support for urgent issues; maintain and update release reports and address issues to prevent recurrence.
5. Stakeholder Coordination & Support
- Collaborate with functional, technical, and business teams to align release strategies with business goals.
- Provide support during go-live and post-implementation phases, addressing any release-related issues.
- Conduct training and knowledge-sharing sessions on SAP release and change management processes.
6. Continuous Improvement & Compliance
- Identify areas for automation and process improvement in release management.
- Ensure compliance with ITIL, SOX, and other regulatory requirements for SAP change and release processes.
- Monitor and track SAP system performance to enhance reliability and stability.

Required Qualifications
Education & Experience
- Bachelor's or master's degree in Computer Science, Information Technology, Business Administration, or a related STEM field.
- 5+ years of experience in SAP release, deployment, and change management within an SAP S/4HANA environment.
Technical Skills
- Expertise in SAP Solution Manager (SolMan), ChaRM, and Transport Management System (TMS).
- Strong knowledge of SAP S/4HANA architecture, cloud/hybrid landscapes, and upgrade methodologies.
- Experience with ABAP transports, CTS+, and Retrofit processes.
- Familiarity with DevOps, Agile, CI/CD tools, and automation for SAP landscape management is a plus.
Soft Skills
- Strong problem-solving and analytical skills.
- Excellent communication and collaboration abilities to work with cross-functional teams.
- Detail-oriented with a focus on risk management and compliance.

Posted 6 days ago

Apply

3.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

About the Role: We are looking for a Data Scientist with 2–3 years of experience to join our growing Industrial AI team. This role is ideal for someone who has worked on data science projects in the manufacturing, oil & gas, or utilities sector—particularly around asset reliability, predictive maintenance, and failure prediction. Exposure to Gen AI applications (e.g., document intelligence, knowledge retrieval) is highly desirable. You will be responsible for developing and deploying ML/AI solutions that drive operational excellence, reduce unplanned downtime, and improve decision-making using both structured and unstructured data. Key Responsibilities: Develop predictive models to forecast equipment failures, optimize maintenance cycles, and improve asset reliability. Work on time-series sensor data and apply statistical and machine learning techniques for anomaly detection and condition monitoring. Build and deploy Generative AI solutions using open-source and proprietary LLMs to automate insights generation from industrial documents, manuals, and maintenance logs. Collaborate with domain experts, data engineers, and MLOps teams to ensure successful deployment and scaling of AI solutions. Evaluate and fine-tune ML models using real-world production data. Interpret and communicate model results clearly to non-technical stakeholders. Preferred Qualifications: Bachelor’s or Master’s degree in Computer Science, Engineering, Data Science, or related field. 2–3 years of hands-on experience in industrial data science or reliability engineering analytics. Strong programming skills in Python and familiarity with ML libraries like scikit-learn, XGBoost, PyTorch/TensorFlow. Experience with time-series analysis, anomaly detection, and survival models. Familiarity with Generative AI concepts and experience using models like GPT, Llama, or Qwen for industrial use cases. Experience with cloud platforms (GCP, AWS, Azure) is a plus. 
Knowledge of manufacturing/O&G plant data like sensor logs, DCS/SCADA systems, CMMS, etc. is a strong plus. Nice to Have: Experience working with industrial IoT data pipelines. Exposure to visualization tools like Power BI or Plotly Dash for operational dashboards. Prior involvement in AI POCs or production deployments for industrial clients.
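The anomaly-detection responsibility described above can be sketched with a simple rolling z-score over a sensor series. This is a minimal stdlib illustration, not the role's actual toolchain: the window size, threshold, and signal values are illustrative assumptions, and production work would typically use libraries such as scikit-learn on real DCS/SCADA data.

```python
from statistics import mean, stdev

def rolling_zscore_anomalies(series, window=5, threshold=3.0):
    """Flag indices where a reading deviates from the trailing-window
    mean by more than `threshold` standard deviations."""
    anomalies = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mu, sigma = mean(hist), stdev(hist)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# A steady (hypothetical) vibration signal with one injected spike at index 8.
signal = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1, 9.0, 1.0]
print(rolling_zscore_anomalies(signal))
```

A trailing window (rather than a centered one) keeps the detector usable in streaming condition-monitoring settings, since it never looks at future samples.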

Posted 1 week ago

Apply

3.0 - 12.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Job description: Job Summary: We are looking for Automation Engineers/Senior Automation Engineers in the Global Automation Remote Support Hub in Chakan, India, who will actively provide 24/7 automation remote support services to our customers globally. The successful candidate should be experienced, with excellent A&E (automation & electrical) skills. In this role you will solve automation/electrical issues remotely through direct interaction with customers and shift engineers. You will have the chance to work with the newest technology in remote support and test new ways of working while providing these services. You will be responsible for contributing to customer satisfaction by delivering high-quality and fast issue resolution. A good understanding of production solutions in the liquid/powder/prepared-food industry is necessary.

What you will do: Ensure fast issue resolution adhering to high quality standards, using the PSM methodology and IR escalation process. Troubleshoot and resolve issues related to automated systems and customer queries. Complete documentation associated with each case in a timely manner. Regularly communicate issue status and next steps to stakeholders. Use global standards, processes, and tools. Seek feedback from customers and account teams. Implement action plans to enhance customer satisfaction. Work with other teams in the organisation to ensure service delivery meets the expectations of both the account team and customer stakeholders. Assist in training new support staff and continuously improve working practices.

Profile description: We believe you have 3-12 years of relevant experience in automation and strong technical expertise in automation PLC/HMI/SCADA/PI. A good understanding of Tetra Pak PlantMaster® PC & PI is an added benefit. A good understanding of electrical and control panels to support remote troubleshooting. Focus on delivering high-quality solutions. Fluency in English, both written and spoken. High-level knowledge of problem-solving methodology. Good understanding of maintenance and reliability concepts. Ability to collaborate effectively with teams. Ability to work in rotational shifts to provide 24/7 support. Ability to communicate with customers and manage their expectations. Experience working with colleagues and customers across multiple countries would be an advantage.

We offer you:
• A variety of exciting challenges with ample opportunities for development and training in a truly global landscape
• A culture that pioneers a spirit of innovation where our engineering genius drives visible results
• Equal opportunity employment experience that values difference and diversity
• Market-competitive compensation and benefits with flexible working arrangements

Apply Now: If you are excited for a new adventure at Tetra Pak, please submit your resume in English through myLink. This job posting expires on 30/08/2025. Diversity, equity, and inclusion are an everyday part of how we work. We give people a place to belong and support to thrive, an environment where everyone can be comfortable being themselves and has equal opportunities to grow and succeed. We embrace difference, celebrate people for who they are, and for the diversity they bring that helps us better understand and connect with our customers and communities worldwide. Ankur Shrivastava

Posted 1 week ago

Apply

6.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Bachelor’s degree in Engineering, Environmental Science, or a related field
4–6 years of experience in compliance and reliability engineering
Basic to intermediate knowledge of RoHS, REACH, and CP65 regulations
Experience with compliance documentation and supplier communication
Strong analytical and problem-solving skills
Excellent written and verbal communication skills
Ability to work independently and collaboratively in a fast-paced environment

Posted 1 week ago

Apply

3.0 years

25 Lacs

Agra, Uttar Pradesh, India

Remote

Experience: 3.00+ years. Salary: INR 2500000.00 / year (based on experience). Expected Notice Period: 7 days. Shift: (GMT+05:30) Asia/Kolkata (IST). Opportunity Type: Remote. Placement Type: Full-time permanent position (payroll and compliance to be managed by Texplorers). (*Note: This is a requirement for one of Uplers' clients - Uplers)

What do you need for this opportunity? Must-have skills: Databricks, PySpark, Apache Spark, AWS, MySQL.

About The Job: Please make sure you are well experienced with Azure Synapse/Databricks and Apache Spark; without these skills, please do not apply for this position. We are also looking for someone who can join immediately (within a week).

Job Description: We are seeking a skilled Data Engineer with proficient knowledge of Spark and SQL to join our dynamic team. The ideal candidate will be responsible for designing, implementing, and optimizing data pipelines on our data platform. You will work closely with data architects and other stakeholders to ensure data accessibility, reliability, and performance.

Key Responsibilities: Data Pipeline Development: Design, develop, and maintain scalable data pipelines using Azure Synapse, Databricks, and Apache Spark (PySpark). Data Integration: Integrate data from various sources, ensuring data quality and consistency. Performance Optimization: Optimize data processing workflows for performance and cost-efficiency. Collaboration: Work with data architects, analysts, and product owners to understand data requirements and deliver solutions. Monitoring and Troubleshooting: Monitor data pipelines and troubleshoot issues to ensure data integrity. Documentation: Document data workflows, processes, and best practices.

Skills: Technical Skills: Proficiency in Azure Synapse/Databricks and Apache Spark. Strong PySpark and SQL skills for data manipulation and querying. Familiarity with Delta Live Tables and Databricks workflows. Experience with ETL tools and processes. Knowledge of cloud platforms (AWS, Azure, GCP).

Soft Skills: Excellent problem-solving abilities. Strong communication and collaboration skills. Ability to work in a fast-paced environment and manage multiple priorities.

How to apply for this opportunity? Step 1: Click on Apply and register or log in on our portal. Step 2: Complete the screening form and upload an updated resume. Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: there are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
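The data-quality and aggregation duties listed in the posting above can be sketched in plain Python. In a real Azure Synapse/Databricks pipeline the same steps would be PySpark DataFrame transformations (e.g. dropDuplicates, na.drop, groupBy); the stdlib version below is only an illustration of the logic, and all record and field names are hypothetical.

```python
from collections import defaultdict

# Hypothetical raw records from two source systems (field names invented).
raw = [
    {"order_id": 1, "region": "south", "amount": 120.0},
    {"order_id": 1, "region": "south", "amount": 120.0},  # exact duplicate row
    {"order_id": 2, "region": "north", "amount": None},   # missing amount
    {"order_id": 3, "region": "south", "amount": 80.0},
]

def clean(records):
    """Data-quality pass: drop exact duplicates and rows with missing amounts."""
    seen, out = set(), []
    for r in records:
        key = (r["order_id"], r["region"], r["amount"])
        if r["amount"] is None or key in seen:
            continue
        seen.add(key)
        out.append(r)
    return out

def total_by_region(records):
    """Aggregate cleaned records, akin to GROUP BY region / SUM(amount)."""
    totals = defaultdict(float)
    for r in records:
        totals[r["region"]] += r["amount"]
    return dict(totals)

print(total_by_region(clean(raw)))  # {'south': 200.0}
```

Separating the cleaning pass from the aggregation mirrors how such pipelines are usually staged, so each step can be monitored and validated independently.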

Posted 1 week ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Summary: Responsibilities for an Akamai Engineer:
Akamai Configuration and Management: Configure and manage Akamai services, including CDN, Web Application Firewall (WAF), and other security features. Monitor and optimize Akamai settings to ensure optimal performance and reliability.
Performance Optimization: Analyze website performance metrics and implement strategies to improve load times and user experience. Use Akamai tools to identify bottlenecks and recommend solutions to enhance content delivery.
Security Implementation: Implement security measures using Akamai's WAF to protect against DDoS attacks, SQL injection, and other threats.

Posted 1 week ago

Apply

3.0 years

25 Lacs

Ghaziabad, Uttar Pradesh, India

Remote

Experience: 3.00+ years. Salary: INR 2500000.00 / year (based on experience). Expected Notice Period: 7 days. Shift: (GMT+05:30) Asia/Kolkata (IST). Opportunity Type: Remote. Placement Type: Full-time permanent position (payroll and compliance to be managed by Texplorers). (*Note: This is a requirement for one of Uplers' clients - Uplers)

What do you need for this opportunity? Must-have skills: Databricks, PySpark, Apache Spark, AWS, MySQL.

About The Job: Please make sure you are well experienced with Azure Synapse/Databricks and Apache Spark; without these skills, please do not apply for this position. We are also looking for someone who can join immediately (within a week).

Job Description: We are seeking a skilled Data Engineer with proficient knowledge of Spark and SQL to join our dynamic team. The ideal candidate will be responsible for designing, implementing, and optimizing data pipelines on our data platform. You will work closely with data architects and other stakeholders to ensure data accessibility, reliability, and performance.

Key Responsibilities: Data Pipeline Development: Design, develop, and maintain scalable data pipelines using Azure Synapse, Databricks, and Apache Spark (PySpark). Data Integration: Integrate data from various sources, ensuring data quality and consistency. Performance Optimization: Optimize data processing workflows for performance and cost-efficiency. Collaboration: Work with data architects, analysts, and product owners to understand data requirements and deliver solutions. Monitoring and Troubleshooting: Monitor data pipelines and troubleshoot issues to ensure data integrity. Documentation: Document data workflows, processes, and best practices.

Skills: Technical Skills: Proficiency in Azure Synapse/Databricks and Apache Spark. Strong PySpark and SQL skills for data manipulation and querying. Familiarity with Delta Live Tables and Databricks workflows. Experience with ETL tools and processes. Knowledge of cloud platforms (AWS, Azure, GCP).

Soft Skills: Excellent problem-solving abilities. Strong communication and collaboration skills. Ability to work in a fast-paced environment and manage multiple priorities.

How to apply for this opportunity? Step 1: Click on Apply and register or log in on our portal. Step 2: Complete the screening form and upload an updated resume. Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: there are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 1 week ago

Apply

0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Job Description. About BIMAWALA Group: BIMAWALA Group is a trusted insurance broking firm established in 1956, specializing in health, life, fire, motor, marine, and other insurance solutions. With a strong focus on selecting the best products and providing exceptional claim support, we have built a reputation for reliability and customer-centric service. As part of our expansion, we are looking for a Sales Executive to strengthen our client outreach, drive business growth, and enhance customer relationships.

Role Overview: The Sales Executive will be responsible for identifying potential clients, understanding their insurance needs, and offering tailored insurance solutions. This role requires a proactive approach to sales, strong relationship management skills, and a commitment to customer satisfaction.

Key Responsibilities: Lead Generation & Prospecting: Identify and approach potential clients, including individuals, SMEs, and corporates. Build a strong pipeline through networking, referrals, and market research. Client Meetings & Consultations: Meet clients in person to understand their insurance requirements. Educate customers on various insurance policies and recommend suitable options. Address client queries, objections, and concerns professionally. Sales & Business Development: Achieve and exceed sales targets through proactive outreach and client engagement. Develop and maintain long-term relationships with customers. Cross-sell and upsell various insurance products. Follow-ups & Relationship Management: Ensure timely follow-ups with prospects and existing clients. Provide post-sales support, including policy servicing and renewal reminders. Act as a trusted advisor to clients for their insurance needs. Market & Competitor Analysis: Stay updated on industry trends, competitor offerings, and regulatory changes. Provide feedback to the management team for product and strategy improvement. Coordination & Reporting: Collaborate with internal teams for policy issuance and claim support. Maintain records of leads, sales, and client interactions. Submit regular sales reports to management.

Key Requirements: 0-12 months of sales experience, preferably in insurance, banking, or financial services. Strong communication and negotiation skills. Self-motivated with a results-driven approach. Ability to build rapport and establish long-term client relationships. Willingness to travel extensively within the assigned territory. Basic knowledge of insurance products is a plus (training will be provided).

What We Offer: An attractive performance-based incentive structure. Training and development opportunities to enhance your insurance knowledge and sales skills. A supportive work environment with career growth prospects. The opportunity to work with a well-established brand in the insurance industry.

If you are a dynamic, self-driven professional looking for an exciting opportunity in insurance sales, we would love to hear from you. Interested candidates can send their CV to hr@bimawalagroup.com.

Posted 1 week ago

Apply

3.0 years

25 Lacs

Kolkata, West Bengal, India

Remote

Experience: 3.00+ years. Salary: INR 2500000.00 / year (based on experience). Expected Notice Period: 7 days. Shift: (GMT+05:30) Asia/Kolkata (IST). Opportunity Type: Remote. Placement Type: Full-time permanent position (payroll and compliance to be managed by Texplorers). (*Note: This is a requirement for one of Uplers' clients - Uplers)

What do you need for this opportunity? Must-have skills: Databricks, PySpark, Apache Spark, AWS, MySQL.

About The Job: Please make sure you are well experienced with Azure Synapse/Databricks and Apache Spark; without these skills, please do not apply for this position. We are also looking for someone who can join immediately (within a week).

Job Description: We are seeking a skilled Data Engineer with proficient knowledge of Spark and SQL to join our dynamic team. The ideal candidate will be responsible for designing, implementing, and optimizing data pipelines on our data platform. You will work closely with data architects and other stakeholders to ensure data accessibility, reliability, and performance.

Key Responsibilities: Data Pipeline Development: Design, develop, and maintain scalable data pipelines using Azure Synapse, Databricks, and Apache Spark (PySpark). Data Integration: Integrate data from various sources, ensuring data quality and consistency. Performance Optimization: Optimize data processing workflows for performance and cost-efficiency. Collaboration: Work with data architects, analysts, and product owners to understand data requirements and deliver solutions. Monitoring and Troubleshooting: Monitor data pipelines and troubleshoot issues to ensure data integrity. Documentation: Document data workflows, processes, and best practices.

Skills: Technical Skills: Proficiency in Azure Synapse/Databricks and Apache Spark. Strong PySpark and SQL skills for data manipulation and querying. Familiarity with Delta Live Tables and Databricks workflows. Experience with ETL tools and processes. Knowledge of cloud platforms (AWS, Azure, GCP).

Soft Skills: Excellent problem-solving abilities. Strong communication and collaboration skills. Ability to work in a fast-paced environment and manage multiple priorities.

How to apply for this opportunity? Step 1: Click on Apply and register or log in on our portal. Step 2: Complete the screening form and upload an updated resume. Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: there are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 1 week ago

Apply

10.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Introduction: In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology. A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio, including Software and Red Hat. Curiosity and a constant quest for knowledge serve as the foundation of success in IBM Consulting. In your role, you'll be encouraged to challenge the norm, investigate ideas outside of your role, and come up with creative solutions resulting in groundbreaking impact for a wide network of clients. Our culture of evolution and empathy centers on long-term career growth and development opportunities in an environment that embraces your unique skills and experience.

Your Role And Responsibilities: As a Workday Core HCM Consultant, you will play a crucial role in configuring, implementing, and maintaining Workday Human Capital Management modules for our clients. You will collaborate closely with client, HR stakeholders, business partners, and technical teams to ensure that Workday functionalities align with business requirements and enhance HR processes and operations. You will work on developing solutions that excel at user experience, style, performance, reliability, and scalability to reduce costs and improve profit and shareholder value.

Your Primary Responsibilities Include: Configure and maintain Workday Core HCM modules, including but not limited to Human Resources, Core Compensation, Reporting, and Security. Manage and lead a large/mid-size team and contribute to successful project delivery. Optimize and maintain the Workday system, ensuring data integrity, system security, and compliance with regulatory standards. Collaborate with technical teams to design, develop, and test integrations between Workday and other HR systems or third-party applications. Stay updated with Workday releases, new features, and industry trends, evaluating their impact and proposing relevant system enhancements.

Preferred Education: Master's Degree

Required Technical And Professional Expertise: 10+ years of overall industry experience handling both implementation and AMS projects. Experience in complex project delivery and transformation for at least two global implementations. 7+ years of experience in implementing/rolling out Workday HCM for customers of all sizes. Experience leading a large/mid-size team and contributing positively to its success. Experience conducting design workshops and gathering requirements, and in the design, prototyping, and testing of Workday HCM solutions according to customer requirements.

Preferred Technical And Professional Experience: Certified Workday HCM professional with significant hands-on experience in configuring and supporting Workday Core HCM modules. Certification or experience in any other advanced module is preferred. Knowledge of core HCM localization and legislative requirements in various countries in APAC, Europe, and North America.

Posted 1 week ago

Apply

3.0 years

25 Lacs

Surat, Gujarat, India

Remote

Experience: 3.00+ years. Salary: INR 2500000.00 / year (based on experience). Expected Notice Period: 7 days. Shift: (GMT+05:30) Asia/Kolkata (IST). Opportunity Type: Remote. Placement Type: Full-time permanent position (payroll and compliance to be managed by Texplorers). (*Note: This is a requirement for one of Uplers' clients - Uplers)

What do you need for this opportunity? Must-have skills: Databricks, PySpark, Apache Spark, AWS, MySQL.

About The Job: Please make sure you are well experienced with Azure Synapse/Databricks and Apache Spark; without these skills, please do not apply for this position. We are also looking for someone who can join immediately (within a week).

Job Description: We are seeking a skilled Data Engineer with proficient knowledge of Spark and SQL to join our dynamic team. The ideal candidate will be responsible for designing, implementing, and optimizing data pipelines on our data platform. You will work closely with data architects and other stakeholders to ensure data accessibility, reliability, and performance.

Key Responsibilities: Data Pipeline Development: Design, develop, and maintain scalable data pipelines using Azure Synapse, Databricks, and Apache Spark (PySpark). Data Integration: Integrate data from various sources, ensuring data quality and consistency. Performance Optimization: Optimize data processing workflows for performance and cost-efficiency. Collaboration: Work with data architects, analysts, and product owners to understand data requirements and deliver solutions. Monitoring and Troubleshooting: Monitor data pipelines and troubleshoot issues to ensure data integrity. Documentation: Document data workflows, processes, and best practices.

Skills: Technical Skills: Proficiency in Azure Synapse/Databricks and Apache Spark. Strong PySpark and SQL skills for data manipulation and querying. Familiarity with Delta Live Tables and Databricks workflows. Experience with ETL tools and processes. Knowledge of cloud platforms (AWS, Azure, GCP).

Soft Skills: Excellent problem-solving abilities. Strong communication and collaboration skills. Ability to work in a fast-paced environment and manage multiple priorities.

How to apply for this opportunity? Step 1: Click on Apply and register or log in on our portal. Step 2: Complete the screening form and upload an updated resume. Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: there are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 1 week ago

Apply

3.0 years

25 Lacs

Cuttack, Odisha, India

Remote

Experience: 3.00+ years. Salary: INR 2500000.00 / year (based on experience). Expected Notice Period: 7 days. Shift: (GMT+05:30) Asia/Kolkata (IST). Opportunity Type: Remote. Placement Type: Full-time permanent position (payroll and compliance to be managed by Texplorers). (*Note: This is a requirement for one of Uplers' clients - Uplers)

What do you need for this opportunity? Must-have skills: Databricks, PySpark, Apache Spark, AWS, MySQL.

About The Job: Please make sure you are well experienced with Azure Synapse/Databricks and Apache Spark; without these skills, please do not apply for this position. We are also looking for someone who can join immediately (within a week).

Job Description: We are seeking a skilled Data Engineer with proficient knowledge of Spark and SQL to join our dynamic team. The ideal candidate will be responsible for designing, implementing, and optimizing data pipelines on our data platform. You will work closely with data architects and other stakeholders to ensure data accessibility, reliability, and performance.

Key Responsibilities: Data Pipeline Development: Design, develop, and maintain scalable data pipelines using Azure Synapse, Databricks, and Apache Spark (PySpark). Data Integration: Integrate data from various sources, ensuring data quality and consistency. Performance Optimization: Optimize data processing workflows for performance and cost-efficiency. Collaboration: Work with data architects, analysts, and product owners to understand data requirements and deliver solutions. Monitoring and Troubleshooting: Monitor data pipelines and troubleshoot issues to ensure data integrity. Documentation: Document data workflows, processes, and best practices.

Skills: Technical Skills: Proficiency in Azure Synapse/Databricks and Apache Spark. Strong PySpark and SQL skills for data manipulation and querying. Familiarity with Delta Live Tables and Databricks workflows. Experience with ETL tools and processes. Knowledge of cloud platforms (AWS, Azure, GCP).

Soft Skills: Excellent problem-solving abilities. Strong communication and collaboration skills. Ability to work in a fast-paced environment and manage multiple priorities.

How to apply for this opportunity? Step 1: Click on Apply and register or log in on our portal. Step 2: Complete the screening form and upload an updated resume. Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: there are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 1 week ago

Apply

3.0 years

25 Lacs

Bhubaneswar, Odisha, India

Remote

Experience: 3.00+ years. Salary: INR 2500000.00 / year (based on experience). Expected Notice Period: 7 days. Shift: (GMT+05:30) Asia/Kolkata (IST). Opportunity Type: Remote. Placement Type: Full-time permanent position (payroll and compliance to be managed by Texplorers). (*Note: This is a requirement for one of Uplers' clients - Uplers)

What do you need for this opportunity? Must-have skills: Databricks, PySpark, Apache Spark, AWS, MySQL.

About The Job: Please make sure you are well experienced with Azure Synapse/Databricks and Apache Spark; without these skills, please do not apply for this position. We are also looking for someone who can join immediately (within a week).

Job Description: We are seeking a skilled Data Engineer with proficient knowledge of Spark and SQL to join our dynamic team. The ideal candidate will be responsible for designing, implementing, and optimizing data pipelines on our data platform. You will work closely with data architects and other stakeholders to ensure data accessibility, reliability, and performance.

Key Responsibilities: Data Pipeline Development: Design, develop, and maintain scalable data pipelines using Azure Synapse, Databricks, and Apache Spark (PySpark). Data Integration: Integrate data from various sources, ensuring data quality and consistency. Performance Optimization: Optimize data processing workflows for performance and cost-efficiency. Collaboration: Work with data architects, analysts, and product owners to understand data requirements and deliver solutions. Monitoring and Troubleshooting: Monitor data pipelines and troubleshoot issues to ensure data integrity. Documentation: Document data workflows, processes, and best practices.

Skills: Technical Skills: Proficiency in Azure Synapse/Databricks and Apache Spark. Strong PySpark and SQL skills for data manipulation and querying. Familiarity with Delta Live Tables and Databricks workflows. Experience with ETL tools and processes. Knowledge of cloud platforms (AWS, Azure, GCP).

Soft Skills: Excellent problem-solving abilities. Strong communication and collaboration skills. Ability to work in a fast-paced environment and manage multiple priorities.

How to apply for this opportunity? Step 1: Click on Apply and register or log in on our portal. Step 2: Complete the screening form and upload an updated resume. Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: there are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 1 week ago

Apply

3.0 years

25 Lacs

Guwahati, Assam, India

Remote

(Same role and description as the listing above.)

Posted 1 week ago

Apply

3.0 years

25 Lacs

Raipur, Chhattisgarh, India

Remote

(Same role and description as the listing above.)

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies