
671 Distributed Computing Jobs - Page 22

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

3.0 - 8.0 years

5 - 9 Lacs

Chennai

Work from Office

Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Apache Spark
Good to have skills: NA
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years full time education
Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with team members to understand project needs, developing application features, and ensuring that the applications function seamlessly within the existing infrastructure. You will also engage in testing...

Posted 3 months ago

Apply

3.0 - 5.0 years

22 - 30 Lacs

Noida

Hybrid

When visionary companies need to know how their world-changing ideas will perform, they close the gap between design and reality with Ansys simulation. For more than 50 years, Ansys software has enabled innovators across industries to push boundaries by using the predictive power of simulation. From sustainable transportation to advanced semiconductors, from satellite systems to life-saving medical devices, the next great leaps in human advancement will be powered by Ansys. Innovate With Ansys, Power Your Career.
Summary / Role Purpose: The Senior R&D Engineer is responsible for the development of software products for semiconductor analysis. In this role, the Senior R&D Engineer will use adv...

Posted 3 months ago

Apply

5.0 - 10.0 years

8 - 14 Lacs

Bengaluru

Work from Office

Key Responsibilities:
- Design and develop scalable PySpark pipelines to ingest, parse, and process XML datasets with extreme hierarchical complexity.
- Implement efficient XPath expressions, recursive parsing techniques, and custom schema definitions to extract data from nested XML structures.
- Optimize Spark jobs through partitioning, caching, and parallel processing to handle terabytes of XML data efficiently.
- Transform raw hierarchical XML data into structured DataFrames for analytics, machine learning, and reporting use cases.
- Collaborate with data architects and analysts to define data models for nested XML schemas.
- Troubleshoot performance bottlenecks and ensure reliability in...
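Illustrative only (not part of the posting): a minimal PySpark sketch of the nested-XML pattern these responsibilities describe, assuming the spark-xml package (com.databricks:spark-xml) is available on the cluster; the order/item tags, schema, and paths are hypothetical placeholders.

from pyspark.sql import SparkSession
from pyspark.sql.functions import explode, col
from pyspark.sql.types import StructType, StructField, StringType, ArrayType

spark = SparkSession.builder.appName("nested-xml-ingest").getOrCreate()

# Explicit schema for a hypothetical <order><items><item>...</item></items></order> layout
# (real schemas would be agreed with the data architects).
item = StructType([
    StructField("sku", StringType()),
    StructField("qty", StringType()),
])
schema = StructType([
    StructField("orderId", StringType()),
    StructField("items", StructType([StructField("item", ArrayType(item))])),
])

raw = (spark.read.format("xml")              # requires the spark-xml package
       .option("rowTag", "order")            # one DataFrame row per <order> element
       .schema(schema)
       .load("s3://bucket/orders/*.xml"))    # placeholder path

# Flatten the nested array into one row per item, then repartition and cache
# before downstream jobs touch the data repeatedly.
flat = (raw.select("orderId", explode(col("items.item")).alias("it"))
        .select("orderId", col("it.sku").alias("sku"), col("it.qty").alias("qty"))
        .repartition(200, "orderId")
        .cache())

flat.write.mode("overwrite").parquet("s3://bucket/curated/order_items/")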

Posted 3 months ago

Apply

5.0 - 10.0 years

8 - 14 Lacs

Gurugram

Work from Office

Key Responsibilities:
- Design and develop scalable PySpark pipelines to ingest, parse, and process XML datasets with extreme hierarchical complexity.
- Implement efficient XPath expressions, recursive parsing techniques, and custom schema definitions to extract data from nested XML structures.
- Optimize Spark jobs through partitioning, caching, and parallel processing to handle terabytes of XML data efficiently.
- Transform raw hierarchical XML data into structured DataFrames for analytics, machine learning, and reporting use cases.
- Collaborate with data architects and analysts to define data models for nested XML schemas.
- Troubleshoot performance bottlenecks and ensure reliability in...

Posted 3 months ago

Apply

5.0 - 10.0 years

8 - 14 Lacs

Hyderabad

Work from Office

Key Responsibilities:
- Design and develop scalable PySpark pipelines to ingest, parse, and process XML datasets with extreme hierarchical complexity.
- Implement efficient XPath expressions, recursive parsing techniques, and custom schema definitions to extract data from nested XML structures.
- Optimize Spark jobs through partitioning, caching, and parallel processing to handle terabytes of XML data efficiently.
- Transform raw hierarchical XML data into structured DataFrames for analytics, machine learning, and reporting use cases.
- Collaborate with data architects and analysts to define data models for nested XML schemas.
- Troubleshoot performance bottlenecks and ensure reliability in...

Posted 3 months ago

Apply

5.0 - 10.0 years

8 - 14 Lacs

Mumbai

Remote

Key Responsibilities:
- Design and develop scalable PySpark pipelines to ingest, parse, and process XML datasets with extreme hierarchical complexity.
- Implement efficient XPath expressions, recursive parsing techniques, and custom schema definitions to extract data from nested XML structures.
- Optimize Spark jobs through partitioning, caching, and parallel processing to handle terabytes of XML data efficiently.
- Transform raw hierarchical XML data into structured DataFrames for analytics, machine learning, and reporting use cases.
- Collaborate with data architects and analysts to define data models for nested XML schemas.
- Troubleshoot performance bottlenecks and ensure reliability in...

Posted 3 months ago

Apply

5.0 - 10.0 years

8 - 14 Lacs

Jaipur

Remote

Key Responsibilities:
- Design and develop scalable PySpark pipelines to ingest, parse, and process XML datasets with extreme hierarchical complexity.
- Implement efficient XPath expressions, recursive parsing techniques, and custom schema definitions to extract data from nested XML structures.
- Optimize Spark jobs through partitioning, caching, and parallel processing to handle terabytes of XML data efficiently.
- Transform raw hierarchical XML data into structured DataFrames for analytics, machine learning, and reporting use cases.
- Collaborate with data architects and analysts to define data models for nested XML schemas.
- Troubleshoot performance bottlenecks and ensure reliability in...

Posted 3 months ago

Apply

5.0 - 10.0 years

8 - 14 Lacs

Chennai

Work from Office

Key Responsibilities:
- Design and develop scalable PySpark pipelines to ingest, parse, and process XML datasets with extreme hierarchical complexity.
- Implement efficient XPath expressions, recursive parsing techniques, and custom schema definitions to extract data from nested XML structures.
- Optimize Spark jobs through partitioning, caching, and parallel processing to handle terabytes of XML data efficiently.
- Transform raw hierarchical XML data into structured DataFrames for analytics, machine learning, and reporting use cases.
- Collaborate with data architects and analysts to define data models for nested XML schemas.
- Troubleshoot performance bottlenecks and ensure reliability in...

Posted 3 months ago

Apply

5.0 - 10.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: PySpark
Good to have skills: NA
Minimum 5 year(s) of experience is required.
Educational Qualification: 15 years full time education
Summary: As an Application Developer, you will be involved in designing, building, and configuring applications to meet business process and application requirements. Your typical day will revolve around creating innovative solutions to address various business needs and ensuring seamless application functionality.
Roles & Responsibilities:
- Expected to be an SME
- Collaborate and manag...

Posted 3 months ago

Apply

2.0 - 7.0 years

10 - 14 Lacs

Bengaluru

Work from Office

Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: Apache Spark
Good to have skills: NA
Minimum 7.5 year(s) of experience is required.
Educational Qualification: 15 years full time education
Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. You will be responsible for overseeing the entire application development process and ensuring its successful implementation.
Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/...

Posted 3 months ago

Apply

5.0 - 10.0 years

8 - 14 Lacs

Kolkata

Work from Office

Key Responsibilities:
- Design and develop scalable PySpark pipelines to ingest, parse, and process XML datasets with extreme hierarchical complexity.
- Implement efficient XPath expressions, recursive parsing techniques, and custom schema definitions to extract data from nested XML structures.
- Optimize Spark jobs through partitioning, caching, and parallel processing to handle terabytes of XML data efficiently.
- Transform raw hierarchical XML data into structured DataFrames for analytics, machine learning, and reporting use cases.
- Collaborate with data architects and analysts to define data models for nested XML schemas.
- Troubleshoot performance bottlenecks and ensure reliability in...

Posted 3 months ago

Apply

5.0 - 10.0 years

8 - 14 Lacs

Noida

Work from Office

Key Responsibilities:
- Design and develop scalable PySpark pipelines to ingest, parse, and process XML datasets with extreme hierarchical complexity.
- Implement efficient XPath expressions, recursive parsing techniques, and custom schema definitions to extract data from nested XML structures.
- Optimize Spark jobs through partitioning, caching, and parallel processing to handle terabytes of XML data efficiently.
- Transform raw hierarchical XML data into structured DataFrames for analytics, machine learning, and reporting use cases.
- Collaborate with data architects and analysts to define data models for nested XML schemas.
- Troubleshoot performance bottlenecks and ensure reliability in...

Posted 3 months ago

Apply

5.0 - 10.0 years

8 - 14 Lacs

Ahmedabad

Work from Office

Key Responsibilities:
- Design and develop scalable PySpark pipelines to ingest, parse, and process XML datasets with extreme hierarchical complexity.
- Implement efficient XPath expressions, recursive parsing techniques, and custom schema definitions to extract data from nested XML structures.
- Optimize Spark jobs through partitioning, caching, and parallel processing to handle terabytes of XML data efficiently.
- Transform raw hierarchical XML data into structured DataFrames for analytics, machine learning, and reporting use cases.
- Collaborate with data architects and analysts to define data models for nested XML schemas.
- Troubleshoot performance bottlenecks and ensure reliability in...

Posted 3 months ago

Apply

5.0 - 10.0 years

8 - 14 Lacs

Pune

Work from Office

Key Responsibilities:
- Design and develop scalable PySpark pipelines to ingest, parse, and process XML datasets with extreme hierarchical complexity.
- Implement efficient XPath expressions, recursive parsing techniques, and custom schema definitions to extract data from nested XML structures.
- Optimize Spark jobs through partitioning, caching, and parallel processing to handle terabytes of XML data efficiently.
- Transform raw hierarchical XML data into structured DataFrames for analytics, machine learning, and reporting use cases.
- Collaborate with data architects and analysts to define data models for nested XML schemas.
- Troubleshoot performance bottlenecks and ensure reliability in...

Posted 3 months ago

Apply

1.0 - 5.0 years

9 - 13 Lacs

Bengaluru

Work from Office

We are looking for a skilled and experienced PySpark Tech Lead to join our dynamic engineering team. In this role, you will lead the development and execution of high-performance big data solutions using PySpark. You will work closely with data scientists, engineers, and architects to design and implement scalable data pipelines and analytics solutions. As a Tech Lead, you will mentor and guide a team of engineers, ensuring the adoption of best practices for building robust and efficient systems while driving innovation in the use of data technologies.
Key Responsibilities:
- Lead and Develop: Design and implement scalable, high-performance data pipelines and ETL processes using PySpark on distribu...
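Illustrative only: a small tuning sketch of the kind of high-performance join work such a lead role involves, broadcasting a small dimension table to avoid shuffling a large fact table; the table paths and shuffle-partition figure are placeholders.

from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = (SparkSession.builder
         .appName("join-tuning")
         .config("spark.sql.shuffle.partitions", "400")   # illustrative tuning value
         .getOrCreate())

facts = spark.read.parquet("/data/warehouse/transactions/")   # large fact table (placeholder)
dims = spark.read.parquet("/data/warehouse/merchants/")       # small dimension table (placeholder)

# Broadcasting the small side avoids shuffling the large fact table across the cluster.
enriched = facts.join(broadcast(dims), on="merchant_id", how="left")

enriched.write.mode("overwrite").parquet("/data/warehouse/transactions_enriched/")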

Posted 3 months ago

Apply

1.0 - 5.0 years

6 - 10 Lacs

Bengaluru

Work from Office

Job Title: Data Engineer
Experience: 5-8 Years
Location: Delhi, Pune, Bangalore (Hyderabad & Chennai also acceptable)
Time Zone: Aligned with UK Time Zone
Notice Period: Immediate Joiners Only
Role Overview: We are seeking experienced Data Engineers to design, develop, and optimize large-scale data processing systems. You will play a key role in building scalable, efficient, and reliable data pipelines in a cloud-native environment, leveraging your expertise in GCP, BigQuery, Dataflow, Dataproc, and more.
Key Responsibilities:
- Design, build, and manage scalable and reliable data pipelines for real-time and batch processing.
- Implement robust data processing solutions using GCP services and open-source t...
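Illustrative only: a rough sketch of a batch pipeline on GCP using the spark-bigquery connector from a Dataproc/PySpark job; the project, dataset, and staging-bucket names are placeholders.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("gcp-batch-pipeline").getOrCreate()

# Read a BigQuery table via the spark-bigquery connector (bundled on Dataproc).
events = (spark.read.format("bigquery")
          .option("table", "my-project.analytics.raw_events")   # placeholder table
          .load())

# Simple daily aggregation as a stand-in for real transformation logic.
daily = (events
         .withColumn("event_date", F.to_date("event_timestamp"))
         .groupBy("event_date", "event_type")
         .count())

# Write results back to BigQuery, staging through a GCS bucket.
(daily.write.format("bigquery")
 .option("table", "my-project.analytics.daily_event_counts")
 .option("temporaryGcsBucket", "my-staging-bucket")             # placeholder bucket
 .mode("overwrite")
 .save())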

Posted 3 months ago

Apply

3.0 - 7.0 years

11 - 15 Lacs

Gurugram

Work from Office

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.
Primary Responsibilities:
- Design, develop, and maintain scalable data/code pipelines using Azure Databricks, Apache ...

Posted 3 months ago

Apply

2.0 - 4.0 years

4 - 6 Lacs

Mumbai, Hyderabad

Work from Office

Job Responsibilities:
- Collaborate with data scientists, software engineers, and business stakeholders to understand data requirements and design efficient data models.
- Develop, implement, and maintain robust and scalable data pipelines, ETL processes, and data integration solutions.
- Extract, transform, and load data from various sources, ensuring data quality, integrity, and consistency.
- Optimize data processing and storage systems to handle large volumes of structured and unstructured data efficiently.
- Perform data cleaning, normalization, and enrichment tasks to prepare datasets for analysis and modelling.
- Monitor data flows and processes, identify and resolve data-related issues and ...
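Illustrative only: a compact PySpark sketch of the cleaning and normalization step described above, assuming a generic customer dataset; all column names and paths are hypothetical.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("cleaning-example").getOrCreate()

df = spark.read.option("header", True).csv("/data/raw/customers.csv")  # placeholder path

clean = (df
         .dropDuplicates(["customer_id"])                  # de-duplicate on the key
         .na.fill({"country": "unknown"})                  # fill missing categoricals
         .withColumn("email", F.lower(F.trim("email")))    # normalize free-text fields
         .withColumn("signup_date", F.to_date("signup_date", "yyyy-MM-dd"))
         .filter(F.col("customer_id").isNotNull()))        # drop rows without a key

clean.write.mode("overwrite").parquet("/data/curated/customers/")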

Posted 3 months ago

Apply

2.0 - 6.0 years

4 - 8 Lacs

Chennai

Work from Office

Key Responsibilities:
- Develop, deploy, and maintain scalable web applications using Python (Flask/Django).
- Design and implement RESTful APIs with strong security and authentication mechanisms.
- Work with MongoDB and other database management systems to store and query data efficiently.
- Support and productize Machine Learning models, including feature engineering, training, tuning, and scoring.
- Understand and apply distributed computing concepts to build high-performance systems.
- Handle web hosting and deployment of applications, ensuring uptime and performance.
- Collaborate with stakeholders to translate business requirements into technical solutions.
- Communicate effectively w...
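Illustrative only: a minimal sketch of the Flask-plus-MongoDB stack this role mentions, exposing one REST endpoint backed by pymongo; the connection string, collection, and token check are placeholders, not a production authentication scheme.

from flask import Flask, jsonify, request, abort
from pymongo import MongoClient

app = Flask(__name__)
client = MongoClient("mongodb://localhost:27017")    # placeholder connection string
db = client["appdb"]                                 # hypothetical database name

API_TOKEN = "change-me"                              # illustrative only; use real auth in practice

@app.route("/api/items/<item_id>", methods=["GET"])
def get_item(item_id):
    # Minimal token check standing in for a real authentication mechanism.
    if request.headers.get("X-API-Token") != API_TOKEN:
        abort(401)
    doc = db.items.find_one({"_id": item_id}, {"_id": 0})
    if doc is None:
        abort(404)
    return jsonify(doc)

if __name__ == "__main__":
    app.run(port=5000)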

Posted 3 months ago

Apply

8.0 - 12.0 years

10 - 14 Lacs

Mumbai

Work from Office

AryaXAI stands at the forefront of AI innovation, revolutionizing AI for mission-critical businesses by building explainable, safe, and aligned systems that scale responsibly. Our mission is to create AI tools that empower researchers, engineers, and organizations to unlock AI's full potential while maintaining transparency and safety. Our team thrives on a shared passion for cutting-edge innovation, collaboration, and a relentless drive for excellence. At AryaXAI, everyone contributes hands-on to our mission in a flat organizational structure that values curiosity, initiative, and exceptional performance.
Qualifications: This is a full-time remote role for a Principal Engineering Manager at Ar...

Posted 3 months ago

Apply

9.0 - 14.0 years

50 - 85 Lacs

Noida

Work from Office

About the Role: We are looking for a Staff Engineer to lead the design and development of a scalable, secure, and robust data platform. You will play a key role in building data platform capabilities for data quality, metadata management, lineage tracking, and compliance across all data layers. If you're passionate about building foundational data infrastructure that accelerates innovation in healthcare, we'd love to talk.
A Day in the Life:
- Architect, design, and build scalable data governance tools and frameworks.
- Collaborate with cross-functional teams to ensure data compliance, security, and usability.
- Lead initiatives around metadata management, data lineage, and data cataloging.
- Define and...

Posted 3 months ago

Apply

10 - 15 years

25 - 40 Lacs

Pune

Hybrid

Description:
- BS/MS degree in Computer Science or equivalent
- 10-15 years of experience building products on distributed systems, preferably in the Data Security domain
- Working knowledge of the security domain: ransomware protection, anomaly detection, data classification, and compliance of unstructured data
- Strong knowledge of cloud platforms, APIs, containers, Kubernetes, and Snowflake
- Knowledge of building micro-service-based applications
- Hands-on development in either Golang or Python
- Strong development experience on Linux/Unix OS platforms

Posted 4 months ago

Apply

5 - 10 years

35 - 50 Lacs

Bengaluru

Work from Office

Position summary: We are seeking a Senior Software Development Engineer – Data Engineering with 5-8 years of experience to design, develop, and optimize data pipelines and analytics workflows using Snowflake, Databricks, and Apache Spark. The ideal candidate will have a strong background in big data processing, cloud data platforms, and performance optimization to enable scalable data-driven solutions.
Key Roles & Responsibilities:
- Design, develop, and optimize ETL/ELT pipelines using Apache Spark, PySpark, Databricks, and Snowflake.
- Implement real-time and batch data processing workflows in cloud environments (AWS, Azure, GCP).
- Develop high-performance, scalable data pipelines for structure...
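Illustrative only: a compressed PySpark sketch of the batch side of such an ETL/ELT flow, assuming a Databricks-style environment with Delta tables; paths and table names are placeholders (a Snowflake target would use the Snowflake Spark connector instead).

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("elt-batch").getOrCreate()

# Extract: load raw landing-zone files.
orders = spark.read.parquet("/mnt/landing/orders/")          # placeholder path

# Transform: derive revenue and keep only completed orders.
curated = (orders
           .filter(F.col("status") == "COMPLETED")
           .withColumn("revenue", F.col("quantity") * F.col("unit_price"))
           .withColumn("order_date", F.to_date("order_ts")))

# Load: write a partitioned Delta table for downstream analytics.
(curated.write.format("delta")
 .mode("overwrite")
 .partitionBy("order_date")
 .saveAsTable("analytics.fact_orders"))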

Posted 4 months ago

Apply

4 - 6 years

6 - 8 Lacs

Bengaluru

Work from Office

Why Join Us? Do you like to challenge yourself and find innovative solutions to problems, translating them into efficient code? Do you always enjoy learning new things? Are you ready to work on the next-generation synthesis solution technology? If the answer is yes, then come work with us! We don't need superheroes, just super minds.
Key Responsibilities: The key responsibilities include owning the design, development, and optimization of sophisticated systems in C/C++ for image and signal processing. The role involves developing and implementing algorithms with a strong focus on performance, scalability, and efficiency. Additionally, the position requires leveraging machine learning techniqu...

Posted 4 months ago

Apply

2 - 6 years

12 - 16 Lacs

Bengaluru

Work from Office

Siemens EDA is a global technology leader in Electronic Design Automation software. Our software tools enable companies around the world to develop highly innovative electronic products faster and more efficiently. Our customers use our tools to push the boundaries of technology and physics to deliver better products in the increasingly sophisticated world of chip, board, and system design.
Key Responsibilities: The key responsibilities include leading the design, development, and optimization of complex systems in C/C++ for image and signal processing. The role involves developing and implementing algorithms with a strong focus on performance, scalability, and efficiency. Additionally, the ...

Posted 4 months ago

Apply