
280 Apache Spark Jobs - Page 6

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

2.0 - 6.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

As an Individual Contributor at Adobe RTCDP, you will play a crucial role in developing features of medium to large complexity. You will leverage your in-depth knowledge to transform requirements into feature specifications. Collaborating with product management and engineering leads, you will contribute significantly to the analysis, design, prototyping, and implementation of new features, as well as enhancements to existing ones. As a proactive self-starter and fast learner, you will be responsible for developing methods, techniques, and evaluation criteria to achieve the desired results, and for ensuring high-quality code and related documentation.

To succeed in this role, you should hold a B.Tech/M.Tech from a reputable institute and possess 2 to 5 years of hands-on design and development experience, preferably within a product development organization. Proficiency in Java/Scala, hands-on experience with REST APIs, and a proven understanding of frameworks such as Spring Boot, Apache Spark, and Kafka are essential. Knowledge of software fundamentals, including algorithm design and analysis, data structure design and implementation, documentation, and unit testing, is crucial, as is a solid understanding of object-oriented design, product life cycles, and associated issues. Excellent computer science fundamentals, architectural understanding, design skills, and performance knowledge will be advantageous, and the ability to work independently, proactively, and collaboratively, with strong written and oral communication skills, is highly valued.

At Adobe, we value creativity, curiosity, and continuous learning as integral parts of your career growth journey. We encourage you to update your Resume/CV and Workday profile, highlighting your unique Adobe experiences and volunteer work, to explore internal mobility opportunities on Inside Adobe, and to prepare for interviews by following the provided tips. Upon applying for a role via Workday, our Talent Team will contact you within 2 weeks. If you progress to the official interview stage, please inform your manager so they can support your career advancement.

Joining Adobe means immersing yourself in a globally recognized, exceptional work environment, working alongside colleagues committed to mutual growth through our Check-In approach of ongoing feedback. If you seek to make a meaningful impact, Adobe is the ideal place for you. Discover firsthand employee experiences on the Adobe Life blog and explore our comprehensive benefits package.

Adobe is dedicated to ensuring accessibility for all users on Adobe.com. If you require accommodations due to a disability or special needs during website navigation or the application process, please contact accommodations@adobe.com or call (408) 536-3015.

Posted 2 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Software Engineer (A2), your main responsibilities will revolve around designing and developing AI-driven data ingestion frameworks and real-time processing solutions that enhance data analysis and machine learning capabilities across the full technology stack.

Responsibilities:
- Deploy, maintain, and support application code and machine learning models in production environments, ensuring seamless integration with front-end and back-end systems.
- Create and improve AI solutions that facilitate the smooth flow of data across the data ecosystem, enabling advanced analytics and insights for end users.
- Conduct business analysis to gather requirements and develop ETL processes, scripts, and machine learning pipelines that meet technical specifications and business needs, using both server-side and client-side technologies.
- Develop real-time data ingestion and stream-analytics solutions leveraging technologies such as Kafka, Apache Spark, Python, and cloud platforms to support AI applications (see the sketch below).
- Build prototypes for AI models using multiple languages and tools such as Python, Spark, Hive, Presto, Java, and JavaScript frameworks, and assess their effectiveness and feasibility.
- Develop application systems adhering to standard software development methodologies to deliver high-performance AI solutions across the full stack.
- Collaborate with other engineers to provide system support, resolve issues, and enhance system performance for both front-end and back-end components.
- Operationalize open-source AI and data-analytics tools for enterprise-scale applications, ensuring alignment with organizational needs and user interfaces.
- Comply with data governance policies by implementing and validating data lineage, quality checks, and data classification in AI projects.
- Understand and follow the company's software development lifecycle to develop, deploy, and deliver AI solutions.

Technical skills: strong proficiency in Python, Java, and C++; familiarity with machine learning frameworks such as TensorFlow and PyTorch; a deep understanding of ML, deep learning, and NLP algorithms; proficiency in building backend services using frameworks like FastAPI, Flask, and Django; and full-stack development skills with JavaScript frameworks such as React and Angular for integrating user interfaces with AI models and data solutions.

Preferred technical skills: expertise in big data processing technologies like Azure Databricks and Apache Spark for handling, analyzing, and processing large datasets for machine learning and AI applications. Certifications such as Microsoft Certified: Azure Data Engineer Associate or Azure AI Engineer are considered advantageous.

Behavioral attributes: strong oral and written communication skills for conveying technical and non-technical concepts to peers and stakeholders, openness to collaborative learning, the ability to manage project components beyond individual tasks, and a good understanding of the business objectives driving data needs.

This role is suitable for individuals holding a Bachelor's or Master's degree in Computer Science with 2 to 4 years of software engineering experience.
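As an illustration of the real-time ingestion responsibility above, here is a minimal PySpark Structured Streaming sketch that reads JSON events from Kafka and lands them as Parquet. The broker address, topic, schema, and paths are hypothetical placeholders, not details from the posting, and the job assumes the Spark-Kafka connector package is available at submit time.

```python
# Minimal sketch: Kafka -> Spark Structured Streaming -> Parquet.
# Broker, topic, schema, and paths are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import (DoubleType, StringType, StructField,
                               StructType, TimestampType)

spark = SparkSession.builder.appName("event-ingest").getOrCreate()

event_schema = StructType([
    StructField("user_id", StringType()),
    StructField("event_type", StringType()),
    StructField("value", DoubleType()),
    StructField("ts", TimestampType()),
])

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "events")                     # placeholder topic
    .load()
    # Kafka delivers bytes; parse the value column into typed fields.
    .select(from_json(col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

query = (
    events.writeStream
    .format("parquet")
    .option("path", "/data/events")                    # placeholder sink
    .option("checkpointLocation", "/chk/events")       # required for recovery
    .outputMode("append")
    .start()
)
query.awaitTermination()
```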

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

The Applications Development Senior Programmer Analyst position is an intermediate-level role in which you will participate in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. Your main objective will be to contribute to applications systems analysis and programming activities.

Your responsibilities will include conducting feasibility studies, time and cost estimates, IT planning, risk technology, applications development, and model development, and establishing and implementing new or revised applications systems and programs to meet specific business needs or user areas. You will monitor and control all phases of the development process, provide user and operational support on applications to business users, and recommend and develop security measures post-implementation to ensure successful system design and functionality.

Furthermore, you will utilize in-depth specialty knowledge of applications development to analyze complex problems and issues, evaluate business processes, system processes, and industry standards, and make evaluative judgments. You will consult with users/clients and other technology groups on issues, recommend advanced programming solutions, and ensure essential procedures are followed while defining operating standards and processes. You will also serve as an advisor or coach to new or lower-level analysts, operate with a limited level of direct supervision, exercise independence of judgment and autonomy, act as a subject matter expert to senior stakeholders and other team members, and appropriately assess risk when making business decisions.

Qualifications:

Must have:
- 8+ years of application/software development and maintenance experience
- 5+ years of experience with Big Data technologies such as Apache Spark, Hive, and Hadoop
- Knowledge of the Python, Java, or Scala programming language
- Experience with Java, web services, XML, JavaScript, microservices, etc.
- Strong technical knowledge of Apache Spark, Hive, SQL, and the Hadoop ecosystem
- Experience developing frameworks, utility services, and code quality tools
- Ability to work independently, multi-task, and take ownership of various analyses
- Strong analytical and communication skills
- Banking domain experience is a must

Good to have:
- Work experience on Citi or regulatory reporting applications
- Hands-on experience with cloud technologies, AI/ML integration, and creation of data pipelines
- Experience with vendor products such as Tableau, Arcadia, Paxata, and KNIME
- Experience with API development and data formats

Education: Bachelor's degree/University degree or equivalent experience

This job description provides a high-level overview of the work performed. Other job-related duties may be assigned as required.

Posted 2 weeks ago

Apply

3.0 - 8.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

As an individual contributor at the P50 level, you will work with our engineering team on developing the Adobe Experience Platform, which offers innovative data management and analytics solutions. Our focus is on building a reliable, resilient system at large scale, utilizing Big Data and open-source technologies for Adobe's services. You will be responsible for managing disparate data sources and ingestion mechanisms across geographies, ensuring that the data is easily accessible at very low latency to support various scenarios and use cases. We are looking for candidates with deep expertise in building low-latency services at high scale to lead us in accomplishing our vision.

To succeed in this role, you should have at least 8 years of experience designing and developing data-driven large distributed systems, including 3+ years as an architect building large-scale data-intensive distributed systems and services. Experience building application layers on Apache Spark, strong proficiency in Hive SQL and Presto DB, and familiarity with technologies like Apache Kafka, Apache Spark, and Kubernetes are essential. Experience with big data technologies on public clouds such as Azure, AWS, or Google Cloud Platform, as well as with in-memory distributed caches like Redis and Memcached, is also required, along with strong coding and design skills, proficiency in data structures and algorithms, and excellent verbal and written communication skills. A B.Tech/M.Tech/MS in Computer Science is preferred.

In this role, you will lead the technical design and implementation strategy for major systems and components of the Adobe Experience Platform. You will evaluate and drive architecture and technology choices; design, build, and deploy products with outstanding quality; and innovate on the current system to improve robustness, ease, and convenience. Your responsibilities will also include articulating design and code choices to cross-functional teams; mentoring and guiding a high-performing team; reviewing and providing feedback on features, technology, architecture, design, time and budget estimates, and test strategies; engaging in creative problem-solving; and developing and evolving engineering best practices to improve team efficiency. Collaboration with other teams across Adobe to achieve common goals will be a key aspect of this role.

At Adobe, we celebrate creativity, curiosity, and constant learning as essential components of your career growth journey. We encourage you to update your Resume/CV and Workday profile, including your unique Adobe experiences and volunteer work. Internal opportunities for career growth are available, and we provide resources to help you prepare for interviews and navigate the internal mobility process. If you apply for a role via Workday, the Talent Team will reach out to you within 2 weeks. We strive to create an exceptional work environment where ongoing feedback flows freely and colleagues are committed to helping each other grow. If you are looking to make an impact, Adobe is the place for you. For any accessibility accommodations or assistance during the application process, please contact accommodations@adobe.com or call (408) 536-3015.

Posted 2 weeks ago

Apply

7.0 - 11.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Senior Backend Engineer with 7 to 10 years of experience, you will be responsible for developing backend systems and APIs using Python. You should have a strong understanding of cloud platforms such as AWS or GCP and be proficient in CI/CD, Docker, and Linux. Your expertise in microservices architecture will be crucial for designing scalable and efficient systems. Familiarity with data pipeline tools such as Airflow or Netflix Conductor would be advantageous, and experience with Apache Spark/Beam and Kafka will be considered a plus.

Key skills:
- Python programming
- Backend/API development
- Cloud experience with AWS or GCP
- CI/CD, Docker, Linux
- Microservices architecture

Nice to have:
- Experience with data pipelines (Airflow, Netflix Conductor)
- Knowledge of Apache Spark/Beam, Kafka

In this role, you will play a vital part in building and maintaining robust backend systems that power our applications. Your contributions will directly impact the scalability and performance of our services, making this an exciting opportunity for someone passionate about backend development and cloud technologies.
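As a rough illustration of the backend/API work this posting describes, here is a minimal Python sketch using FastAPI (one common choice; the posting does not name a framework). The service name, routes, and in-memory store are hypothetical.

```python
# Minimal sketch of a Python backend API in the microservices style the
# posting describes. FastAPI is one common choice; the posting does not
# name a framework. Service name, routes, and store are placeholders.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="orders-service")  # hypothetical service name

class Order(BaseModel):
    order_id: str
    amount: float

ORDERS: dict[str, Order] = {}  # in-memory stand-in for a real datastore

@app.post("/orders")
def create_order(order: Order) -> Order:
    ORDERS[order.order_id] = order
    return order

@app.get("/orders/{order_id}")
def get_order(order_id: str) -> Order:
    if order_id not in ORDERS:
        raise HTTPException(status_code=404, detail="order not found")
    return ORDERS[order_id]
```

Saved as app.py, this can be served locally with `uvicorn app:app --reload`.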

Posted 2 weeks ago

Apply

12.0 - 16.0 years

0 Lacs

Maharashtra

On-site

At PwC, the data and analytics team focuses on leveraging data to drive insights and informed business decisions, utilizing advanced analytics techniques to help clients optimize their operations and achieve strategic goals. As a Data Analyst at PwC, you will use advanced analytical techniques to extract insights from large datasets and facilitate data-driven decision-making, with responsibilities spanning data manipulation, visualization, and statistical modeling to support clients in solving complex business problems.

With a minimum of 12 years of hands-on experience, you will design and implement scalable, secure, and high-performance architectures for Generative AI applications. This involves integrating Generative AI models into existing platforms, fine-tuning pre-trained generative models for domain-specific use cases, and developing strategies for data collection, sanitization, and preparation. You will also evaluate, select, and deploy appropriate Generative AI frameworks such as PyTorch, TensorFlow, Crew AI, Autogen, LangGraph, Agentic code, and Agentflow.

Staying abreast of the latest advancements in Generative AI and recommending innovative applications to solve complex business problems is crucial. You will define and execute the AI strategy roadmap, identifying key opportunities for AI transformation. Collaboration with cross-functional teams, mentoring team members on AI/ML best practices, and making architectural decisions are essential aspects of the role, as are monitoring the performance of deployed AI models and systems, optimizing computational costs, and ensuring compliance with ethical AI practices and data privacy regulations. Safeguards must be implemented to mitigate bias, misuse, and unintended consequences of Generative AI.

Required skills for this role include advanced programming skills in Python; proficiency in data processing frameworks like Apache Spark; knowledge of LLM foundation models; experience with event-driven architectures; familiarity with Azure DevOps and other LLMOps tools; expertise in Azure OpenAI Service and vector databases; containerization technologies like Kubernetes and Docker; an understanding of data lakes and data management strategies; proficiency in cloud computing platforms like Azure or AWS; exceptional leadership, problem-solving, and analytical abilities; superior communication and collaboration skills; and the ability to operate effectively in a dynamic environment.

Nice-to-have skills include experience with technologies like Datadog and Splunk, relevant solution architecture certificates, and continuous professional development in data engineering and GenAI. An educational background of any graduate degree / BE / B.Tech / MCA / M.Sc / M.E / M.Tech / Master's degree / MBA is required for this role.

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana

On-site

The role of a software engineer in Corporate Planning and Management (CPM) involves providing engineering solutions that facilitate budget planning, financial forecasting, expense allocation, spend management, and third-party risk assessment, and that aid corporate decision-making aligned with strategic goals. As a member of the CPM Engineering team, you will play a pivotal role in creating and enhancing financial and spend management workflows, as well as developing intelligent reporting mechanisms to drive commercial benefits for the organization. In this dynamic role, you will combine your software engineering expertise with an understanding of finance, contributing to the enhancement of corporate planning and management processes. You will work in agile, collaborative teams to explore innovative solutions that cater to fast-paced market requirements.

Responsibilities:
- Demonstrate self-motivation and establish enduring relationships with clients and peers
- Approach problem-solving with an open mindset within a team environment
- Utilize exceptional analytical skills to devise creative and commercially viable solutions
- Exhibit a strong willingness to learn and actively contribute to team initiatives
- Thrive in fast-paced, ambiguous work settings, managing multiple tasks efficiently
- Deliver advanced financial products to clients through digital platforms
- Engage with a globally distributed, cross-functional team to develop customer-centric products
- Evaluate existing software systems for enhancement opportunities and provide estimates for new features
- Maintain comprehensive documentation for team processes, best practices, and software guidelines

Basic qualifications:
- Minimum of 5 years of relevant professional experience
- Bachelor's degree or higher in Computer Science or an equivalent field
- 3+ years of experience writing Java APIs
- Expertise in React JS, HTML5, and Java
- Excellent written and verbal communication abilities
- Capability to establish strong relationships with product leaders and senior stakeholders
- Experience building transactional systems and a solid understanding of software architecture
- Familiarity with integrating RESTful web services
- Comfortable working in agile operating models

Preferred qualifications:
- Previous experience with microservice architecture
- Proficiency in React JS
- Familiarity with Apache Spark, Hadoop, Hive, and Spring Boot technologies

Join us in this exciting role that offers the perfect blend of software engineering and financial acumen, contributing to the strategic objectives of the organization.

Posted 2 weeks ago

Apply

4.0 - 9.0 years

6 - 11 Lacs

Noida

Work from Office

Role: Senior Databricks Engineer

As a Senior Databricks Engineer, you will play a pivotal role in designing, implementing, and optimizing data processing pipelines and analytics solutions on the Databricks platform. You will collaborate closely with cross-functional teams to understand business requirements, architect scalable solutions, and ensure the reliability and performance of our data infrastructure. This role requires deep expertise in Databricks, strong programming skills, and a passion for solving complex engineering challenges.

What you'll do:
- Design and develop data processing pipelines and analytics solutions using Databricks (see the sketch below).
- Architect scalable and efficient data models and storage solutions on the Databricks platform.
- Collaborate with architects and other teams to migrate the current solution to Databricks.
- Optimize the performance and reliability of Databricks clusters and jobs to meet SLAs and business requirements.
- Use best practices for data governance, security, and compliance on the Databricks platform.
- Mentor junior engineers and provide technical guidance.
- Stay current with emerging technologies and trends in data engineering and analytics to drive continuous improvement.

You'll be expected to have:
- Bachelor's or master's degree in Computer Science, Engineering, or a related field.
- 5 to 8 years of overall experience, with 2+ years designing and implementing data solutions on the Databricks platform.
- Proficiency in programming languages such as Python, Scala, or SQL.
- Strong understanding of distributed computing principles and experience with big data technologies such as Apache Spark.
- Experience with cloud platforms such as AWS, Azure, or GCP and their associated data services.
- Proven track record of delivering scalable and reliable data solutions in a fast-paced environment.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills, with the ability to work effectively in cross-functional teams.
- Good to have: experience with containerization technologies such as Docker and Kubernetes.
- Knowledge of DevOps practices for automated deployment and monitoring of data pipelines.
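For a flavor of the pipeline work described above, here is a minimal PySpark sketch of a Databricks-style job that aggregates raw events into a partitioned Delta table. The paths, columns, and table name are hypothetical, and the snippet assumes a Databricks runtime (or any Spark session with Delta Lake available).

```python
# Minimal sketch of a Databricks pipeline step: raw JSON -> daily Delta table.
# Paths, columns, and table name are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()  # pre-created on Databricks

raw = spark.read.json("/mnt/raw/orders")    # placeholder source path

daily_revenue = (
    raw.withColumn("order_date", F.to_date("ts"))
       .groupBy("order_date")
       .agg(F.sum("amount").alias("revenue"))
)

(daily_revenue.write
    .format("delta")                          # assumes Delta Lake is available
    .mode("overwrite")
    .partitionBy("order_date")                # partition pruning for readers
    .saveAsTable("analytics.daily_revenue"))  # placeholder table name
```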

Posted 2 weeks ago

Apply

5.0 - 9.0 years

1 - 3 Lacs

Kolkata, Chennai, Bengaluru

Hybrid

Locations: Pune, Mumbai, Nagpur, Goa, Noida, Gurgaon, Ahmedabad, Jaipur, Indore, Kolkata, Kochi, Hyderabad, Bangalore, Chennai
Experience: 5-7 years
Notice period: 0-15 days
Open positions: 6

Job description:
- Proven experience with DataStage for ETL development.
- Strong understanding of data warehousing concepts and best practices.
- Hands-on experience with Apache Airflow for workflow management (see the sketch below).
- Proficiency in SQL and Python for data manipulation and scripting.
- Solid knowledge of Unix/Linux shell scripting.
- Experience with Apache Spark and Databricks for big data processing.
- Expertise in Snowflake for cloud data warehousing.
- Familiarity with version control systems (e.g., Git) and CI/CD pipelines.
- Excellent problem-solving and communication skills.
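As a small illustration of the Airflow experience the JD asks for, here is a minimal DAG sketch chaining an extract step and a load step. It uses Airflow 2.4+ syntax; the DAG id, schedule, and task bodies are hypothetical.

```python
# Minimal sketch of an Airflow DAG: a daily extract -> load workflow.
# DAG id, schedule, and task logic are illustrative placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull data from the source system")  # stand-in for real extract logic

def load():
    print("write data to the warehouse")       # stand-in for real load logic

with DAG(
    dag_id="example_etl",             # placeholder DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                # Airflow 2.4+ keyword
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_load               # load runs only after extract succeeds
```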

Posted 2 weeks ago

Apply

5.0 - 15.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

HCLTech is seeking a Data and AI Principal / Senior Manager (Generative AI) for its Noida location. As a global technology company with a workforce of over 218,000 employees in 59 countries, HCLTech specializes in digital, engineering, cloud, and AI solutions. The company collaborates with clients across industries such as Financial Services, Manufacturing, Life Sciences, Healthcare, Technology, Telecom, Retail, and Public Services, offering innovative technology services and products. With consolidated revenues of $13.7 billion for the 12 months ending September 2024, HCLTech aims to drive progress and transformation for its clients globally.

Key responsibilities:
- Provide hands-on technical leadership and oversight, including leading the design of AI and GenAI solutions, machine learning pipelines, and data architectures.
- Actively contribute to coding, solution design, and troubleshooting of critical components, collaborating with Account Teams, Client Partners, and Domain SMEs to ensure technical solutions align with business needs.
- Mentor and guide engineers across various functions to foster a collaborative, high-performance team environment.
- Design and implement system and API architectures, integrating microservices, RESTful APIs, cloud-based services, and machine learning models seamlessly into GenAI and data platforms.
- Lead the integration of AI, GenAI, and agentic applications, NLP models, and large language models into scalable production systems.
- Architect ETL pipelines, data lakes, and data warehouses using tools like Apache Spark, Airflow, and Google BigQuery, and drive deployment on cloud platforms such as AWS, Azure, and GCP.
- Lead the design and deployment of machine learning models using frameworks like PyTorch, TensorFlow, and scikit-learn, ensuring accurate and reliable outputs.
- Develop prompt engineering techniques for GenAI models and implement best practices for ML model performance monitoring and continuous training.
- Apply expertise in CI/CD pipelines, Infrastructure-as-Code, cloud management, stakeholder communication, agile development, performance optimization, and scalability strategies.

Required qualifications:
- 15+ years of hands-on technical experience in software engineering, with at least 5 years in a leadership role managing cross-functional teams in AI, GenAI, machine learning, data engineering, and cloud infrastructure.
- Proficiency in Python and experience with Flask, Django, or FastAPI for API development.
- Extensive experience building and deploying ML models using TensorFlow, PyTorch, scikit-learn, and spaCy, and integrating them into AI frameworks.
- Familiarity with ETL pipelines, data lakes, data warehouses, and data processing tools like Apache Spark, Airflow, and Kafka.
- Strong expertise in CI/CD pipelines, containerization, Infrastructure-as-Code, and API security for high-traffic systems.

If you are interested in this position, please share your profile, including your overall experience, skills, current and preferred location, current and expected CTC, and notice period, to paridhnya_dhawankar@hcltech.com.

Posted 2 weeks ago

Apply

10.0 - 14.0 years

0 Lacs

Telangana

On-site

As the Vice President of Engineering at Teradata in India, you will lead the software development organization for the AI Platform Group, overseeing execution of the product roadmap for key technologies such as Vector Store, the Agent platform, Apps, user experience, and AI/ML-driven use cases. Your success will be measured by your ability to build a world-class engineering culture, attract and retain technical talent, accelerate product delivery, and drive innovation that brings tangible value to customers.

In this role, you will lead a team of over 150 engineers focused on helping customers achieve outcomes with Data and AI. Collaboration with key functions such as Product Management, Product Operations, Security, Customer Success, and Executive Leadership will be essential to your success. You will also lead a regional team of up to 500 people spanning software development, cloud engineering, DevOps, engineering operations, and architecture, collaborating with various stakeholders at both regional and global levels.

To be considered a qualified candidate, you should have at least 10 years of senior leadership experience in product development or engineering within enterprise software product companies, including a minimum of 3 years in a VP Product or equivalent role managing large-scale technical teams in a growth market. You must have a proven track record of leading agentic AI development and scaling AI in a hybrid cloud environment, as well as experience with Agile and DevSecOps methodologies. Your background should include expertise in cloud platforms, data harmonization, data analytics for AI, Kubernetes, containerization, and microservices-based architectures, along with experience delivering SaaS-based data and analytics platforms, modern data stack technologies, AI/ML infrastructure, enterprise security, and performance engineering. A passion for open-source collaboration, building high-performing engineering cultures, and inclusive leadership is highly valued. Ideally, you should hold a Master's degree in Engineering or Computer Science, or an MBA.

At Teradata, we prioritize a people-first culture, offer a flexible work model, focus on well-being, and are committed to Diversity, Equity, and Inclusion. Join us in our mission to empower our customers and drive innovation in the world of AI and data analytics.

Posted 2 weeks ago

Apply

10.0 - 14.0 years

0 Lacs

Karnataka

On-site

As an Expert Software Engineer (Java) at SAP, you will play a critical role in leading strategic initiatives within the App2App Integration team in SAP Business Data Cloud. Your primary responsibility will be to accelerate the development and adoption of seamless, low-latency integration patterns across SAP applications and the BDC data fabric. Your expertise in Java, ETL, distributed data processing, Kafka, cloud-native development, and DevOps will be essential in driving architectural direction, overseeing key integration frameworks, and providing hands-on leadership to build real-time, event-driven, and secure communication solutions across a distributed enterprise landscape. You will collaborate closely with stakeholders across SAP's data platform initiatives, guiding the evolution of reusable integration patterns, automation practices, and platform consistency while mentoring teams, conducting code reviews, and contributing to team-level architectural decisions.

Your responsibilities will include:
- Leading and designing App2App integration components and services using Java, RESTful APIs, and messaging frameworks such as Apache Kafka (a minimal messaging sketch appears below).
- Architecting and building scalable ETL and data transformation pipelines for both real-time and batch processing needs, integrating data workflows with platforms like Databricks, Apache Spark, or other modern data engineering tools.
- Driving the evolution of reusable integration patterns, automation practices, and platform consistency across services.
- Architecting and building distributed data processing pipelines that support large-scale data ingestion, transformation, and routing.
- Guiding the DevOps strategy to define and improve CI/CD pipelines, monitoring, and deployment strategies using modern GitOps practices.
- Guiding cloud-native, secure deployment of services on SAP BTP and major hyperscalers (AWS, Azure, GCP).
- Collaborating with SAP's broader data platform efforts, mentoring junior developers, and contributing to team-level architectural and technical decisions.

To be successful in this role, you should hold a Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field, with 10+ years of hands-on experience in backend development using Java, strong object-oriented design skills, and expertise in integration patterns. Proven experience designing and building ETL pipelines and large-scale data processing frameworks, along with familiarity with platforms like Databricks or Spark, is highly desirable. Proficiency with SAP Business Technology Platform (BTP), SAP Datasphere, SAP Analytics Cloud, or HANA; experience designing CI/CD pipelines; knowledge of containerization, Kubernetes, DevOps best practices, and hyperscaler environments (AWS, Azure, GCP); and a record of driving engineering excellence within complex enterprise systems are key qualifications you should bring to this role.

Join us at SAP, where our culture of inclusion, focus on health and well-being, and flexible working models ensure that everyone, regardless of background, feels included and can perform at their best. We believe in unleashing all talent, investing in our employees, and creating a better, more equitable world. SAP is an equal opportunity workplace and an affirmative action employer, committed to Equal Employment Opportunity and to providing accessibility accommodations to applicants with physical and/or mental disabilities. If you are ready to bring out your best and contribute to SAP's mission of helping the world run better, we encourage you to apply for this exciting opportunity.
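To make the event-driven messaging concrete, here is a minimal Kafka produce/consume sketch. It is written in Python (via the kafka-python client) purely for brevity, although the role itself is Java-centric; the broker address and topic name are hypothetical.

```python
# Minimal sketch of event-driven App2App messaging over Kafka, using the
# kafka-python client. Broker and topic are illustrative placeholders.
import json

from kafka import KafkaConsumer, KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="broker:9092",                       # placeholder
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("app2app.events", {"event": "order.created", "order_id": "42"})
producer.flush()  # block until the message is actually delivered

consumer = KafkaConsumer(
    "app2app.events",
    bootstrap_servers="broker:9092",                       # placeholder
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
)
for message in consumer:
    print(message.value)  # a downstream application reacts to the event
```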

Posted 2 weeks ago

Apply

10.0 - 14.0 years

0 Lacs

Karnataka

On-site

The Lead Data Modeler role involves developing high-performance, scalable enterprise data models on a cloud platform. You must possess strong SQL skills, excellent data modeling expertise, and a solid grounding in the Kimball methodology. Your responsibilities include participating in activities throughout the systems development lifecycle, providing support, engaging in POCs, and presenting outcomes effectively. You will analyze, architect, design, program, and debug both existing and new products, as well as mentor team members. It is crucial to take ownership and demonstrate high professional and technical ethics, with a consistent focus on emerging technologies beneficial to the organization.

You should have over 10 years of work experience in data modeling or engineering. Your duties will involve defining, designing, and implementing enterprise data models; building Kimball-compliant data models in the analytic layer of the data warehouse (a minimal sketch appears below); and constructing third-normal-form-compliant data models in the hub layer of the data warehouse. You must translate tactical and strategic requirements into effective solutions that align with business needs. The role also requires participation in complex initiatives, seeking help when necessary, reviewing specifications, coaching team members, and researching improvements to coding standards.

Technical skills include hands-on experience with SQL, query optimization, RDBMS, data warehousing (ER and dimensional modeling), modeling data into star schemas using the Kimball methodology, Agile methodology, CI/CD frameworks, DevOps practices, and working in an onsite-offshore model. Soft skills such as leadership, analytical thinking, problem-solving, communication, and presentation are essential: you should be able to work with a diverse team, make decisions, guide team members through complex problems, and communicate effectively with leadership and business teams.

A Bachelor's degree in Computer Science, Information Systems, or a related technical area is required, preferably a B.E. in Computer Science or Information Technology. Nice-to-have skills include experience with Apache Spark and Python; graph databases; data identification, ingestion, transformation, and consumption; data visualization; familiarity with SAP Enterprise S/4HANA; programming skills (Python, NodeJs, Unix scripting); and experience with the GCP cloud ecosystem. Experience in software engineering across all deliverables, including defining, architecting, building, testing, and deploying, is preferred. This role does not offer relocation assistance and does not specify a particular work shift.
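As an illustration of the Kimball-style modeling work above, here is a minimal star-schema query expressed through Spark SQL: a fact table joined to two conformed dimensions. The table and column names are hypothetical, and the snippet assumes those tables are already registered in the metastore.

```python
# Minimal sketch of a Kimball star-schema query via Spark SQL.
# Table and column names are illustrative; assumes the tables exist.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

monthly_sales = spark.sql("""
    SELECT d.year,
           d.month,
           p.category,
           SUM(f.sales_amount) AS total_sales
    FROM   fact_sales   f                         -- one row per sale
    JOIN   dim_date     d ON f.date_key    = d.date_key
    JOIN   dim_product  p ON f.product_key = p.product_key
    GROUP  BY d.year, d.month, p.category
""")
monthly_sales.show()
```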

Posted 2 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

Karnataka

On-site

As the Product Director at Rakuten Symphony, you will lead a team of dedicated engineers in creating cutting-edge software solutions that cater to the unique requirements of our clients. Leveraging your expertise in Apache Spark, NoSQL databases, relational databases, the Spring Framework, and cloud technologies, you will play a pivotal role in driving innovation and tackling intricate technical obstacles. This position calls for exceptional leadership capabilities, a strategic outlook, and an unwavering commitment to delivering top-notch results.

You will oversee the entire product lifecycle, collaborating closely with sales, marketing, engineering teams, and senior management. Interacting with customers to grasp product use cases, customer journeys, business relevance, and the impact of each use case will be a key aspect of your responsibilities. You will also need a deep understanding of Rakuten Symphony's technology platforms and features, enabling you to work with solution architects and engineering leads to craft comprehensive product requirement documents and deliverables.

Your role will entail making informed, decisive product decisions that drive business growth and establish a competitive edge against industry rivals. By synthesizing customer feedback and internal innovation initiatives, you will formulate product roadmaps that align with market demands. You will also lead product team processes through Agile/Scrum methodologies, manage stakeholder expectations, and monitor roadmap progress regularly.

You should hold a Bachelor's degree in engineering, computer science, or a related field, coupled with a minimum of 15 years of experience in product management roles, including at least 6-7 years dedicated to enterprise product management. Thriving in a dynamic, high-energy work environment and possessing a keen intuition for exceptional customer experiences are prerequisites for this position, as are strong oral and written communication skills for conveying product strategies and plans to diverse stakeholders and teams.

In alignment with the Rakuten Shugi Principles of Success, you are expected to embody the core behaviors that define Rakuten's global identity:
- Embrace continuous improvement and unceasing progress (Kaizen).
- Exhibit a fervent dedication to professionalism and excellence in your endeavors.
- Embrace the Rakuten cycle of hypothesizing, practicing, and validating to navigate uncharted territories successfully.
- Prioritize customer satisfaction by delivering innovative and user-friendly products.
- Uphold a sense of urgency and efficiency in all endeavors, emphasizing the importance of time management and goal-setting.

Posted 2 weeks ago

Apply

4.0 - 9.0 years

15 - 20 Lacs

Bengaluru, Mumbai (All Areas)

Work from Office

We’re hiring a Scala Developer with 4+ years of experience building scalable, high-performance backend systems. The ideal candidate is strong in functional programming, backend services, distributed systems, and cloud environments.

Posted 2 weeks ago

Apply

13.0 - 20.0 years

30 - 45 Lacs

Pune

Hybrid

Hi, wishes from GSN! Pleasure connecting with you!

We have been in corporate search services, identifying and bringing in stellar, talented professionals for our reputed IT and non-IT clients in India, and have been successfully meeting our clients' needs for the last 20 years. At present, GSN is hiring a Data Engineering - Solution Architect for one of our leading MNC clients. Please find the details below:

1. Work location: Pune
2. Job role: Data Engineering - Solution Architect
3. Experience: 13+ years
4. CTC range: Rs. 35 LPA to Rs. 50 LPA
5. Work type: WFO Hybrid

****** Looking for SHORT JOINERS ******

Job description:

Architectural vision & strategy: define and articulate the technical vision, strategy, and roadmap for Big Data, data streaming, and NoSQL solutions, aligning with the overall enterprise architecture and business goals.

Required skills:
- 13+ years of progressive experience in software development, data engineering, and solution architecture roles, with a strong focus on large-scale distributed systems.
- Expertise in Big Data technologies. Apache Spark: deep expertise in Spark architecture, Spark SQL, Spark Streaming, performance tuning, and optimization techniques, with experience in both batch and real-time data processing paradigms. Hadoop ecosystem: strong understanding of HDFS, YARN, Hive, and other related Hadoop components.
- Real-time data streaming. Apache Kafka: expert-level knowledge of Kafka architecture, topics, partitions, producers, consumers, Kafka Streams, KSQL, and best practices for high-throughput, low-latency data pipelines.
- NoSQL databases: in-depth experience with Couchbase (or MongoDB or Cassandra), including data modeling, indexing, querying (N1QL), replication, scaling, and operational best practices.
- API design & development: extensive experience designing and implementing robust, scalable, and secure APIs (RESTful, GraphQL) for data access and integration.
- Programming & code review: hands-on coding proficiency in at least one relevant language (Python, Scala, Java), with a preference for Python and/or Scala for data engineering tasks, and proven experience leading and performing code reviews to ensure code quality, performance, and adherence to architectural guidelines.
- Cloud platforms: extensive experience designing and implementing solutions on at least one major cloud platform (AWS, Azure, GCP), leveraging its Big Data, streaming, and compute services.
- Database fundamentals: solid understanding of relational database concepts, SQL, and data warehousing principles.
- System design & architecture patterns: deep knowledge of architectural patterns (e.g., microservices, event-driven architecture, Lambda/Kappa architecture, data mesh) and their application in data solutions.
- DevOps & CI/CD: familiarity with DevOps principles, CI/CD pipelines, infrastructure as code (IaC), and automated deployment strategies for data platforms.

****** Looking for SHORT JOINERS ******

If interested, don't hesitate to call NAK @ 9840035825 / 9244912300 for an immediate response.

Best,
ANANTH | GSN | Google review: https://g.co/kgs/UAsF9W

Posted 2 weeks ago

Apply

4.0 - 9.0 years

8 - 13 Lacs

Pune, Anywhere in India / Multiple Locations

Work from Office

Role: Senior Databricks Engineer

As a Senior Databricks Engineer, you will play a pivotal role in designing, implementing, and optimizing data processing pipelines and analytics solutions on the Databricks platform. You will collaborate closely with cross-functional teams to understand business requirements, architect scalable solutions, and ensure the reliability and performance of our data infrastructure. This role requires deep expertise in Databricks, strong programming skills, and a passion for solving complex engineering challenges.

What you'll do:
- Design and develop data processing pipelines and analytics solutions using Databricks.
- Architect scalable and efficient data models and storage solutions on the Databricks platform.
- Collaborate with architects and other teams to migrate the current solution to Databricks.
- Optimize the performance and reliability of Databricks clusters and jobs to meet SLAs and business requirements.
- Use best practices for data governance, security, and compliance on the Databricks platform.
- Mentor junior engineers and provide technical guidance.
- Stay current with emerging technologies and trends in data engineering and analytics to drive continuous improvement.

You'll be expected to have:
- Bachelor's or master's degree in Computer Science, Engineering, or a related field.
- 5 to 8 years of overall experience, with 2+ years designing and implementing data solutions on the Databricks platform.
- Proficiency in programming languages such as Python, Scala, or SQL.
- Strong understanding of distributed computing principles and experience with big data technologies such as Apache Spark.
- Experience with cloud platforms such as AWS, Azure, or GCP and their associated data services.
- Proven track record of delivering scalable and reliable data solutions in a fast-paced environment.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills, with the ability to work effectively in cross-functional teams.
- Good to have: experience with containerization technologies such as Docker and Kubernetes.
- Knowledge of DevOps practices for automated deployment and monitoring of data pipelines.

Posted 2 weeks ago

Apply

4.0 - 9.0 years

6 - 11 Lacs

Ahmedabad

Work from Office

Role: Senior Databricks Engineer

As a Senior Databricks Engineer, you will play a pivotal role in designing, implementing, and optimizing data processing pipelines and analytics solutions on the Databricks platform. You will collaborate closely with cross-functional teams to understand business requirements, architect scalable solutions, and ensure the reliability and performance of our data infrastructure. This role requires deep expertise in Databricks, strong programming skills, and a passion for solving complex engineering challenges.

What you'll do:
- Design and develop data processing pipelines and analytics solutions using Databricks.
- Architect scalable and efficient data models and storage solutions on the Databricks platform.
- Collaborate with architects and other teams to migrate the current solution to Databricks.
- Optimize the performance and reliability of Databricks clusters and jobs to meet SLAs and business requirements.
- Use best practices for data governance, security, and compliance on the Databricks platform.
- Mentor junior engineers and provide technical guidance.
- Stay current with emerging technologies and trends in data engineering and analytics to drive continuous improvement.

You'll be expected to have:
- Bachelor's or master's degree in Computer Science, Engineering, or a related field.
- 5 to 8 years of overall experience, with 2+ years designing and implementing data solutions on the Databricks platform.
- Proficiency in programming languages such as Python, Scala, or SQL.
- Strong understanding of distributed computing principles and experience with big data technologies such as Apache Spark.
- Experience with cloud platforms such as AWS, Azure, or GCP and their associated data services.
- Proven track record of delivering scalable and reliable data solutions in a fast-paced environment.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills, with the ability to work effectively in cross-functional teams.
- Good to have: experience with containerization technologies such as Docker and Kubernetes.
- Knowledge of DevOps practices for automated deployment and monitoring of data pipelines.

Posted 2 weeks ago

Apply

3.0 - 6.0 years

7 - 17 Lacs

Noida, Greater Noida

Work from Office

About CloudKeeper: CloudKeeper is a cloud cost optimization partner that combines the power of group buying and commitments management, expert cloud consulting and support, and an enhanced visibility and analytics platform to reduce cloud cost and help businesses maximize the value from AWS, Microsoft Azure, and Google Cloud. A certified AWS Premier Partner, Azure Technology Consulting Partner, Google Cloud Partner, and FinOps Foundation Premier Member, CloudKeeper has helped 400+ global companies save an average of 20% on their cloud bills, modernize their cloud set-up, and maximize value, all while maintaining flexibility and avoiding long-term commitments or cost. CloudKeeper hived off from TO THE NEW, a digital technology services company with 2,500+ employees and an 8-time GPTW winner.

Position overview: We are looking for an experienced and driven Data Engineer to join our team. The ideal candidate will have a strong foundation in big data technologies, particularly Spark, and a basic understanding of Scala to design and implement efficient data pipelines. As a Data Engineer at CloudKeeper, you will be responsible for building and maintaining robust data infrastructure, integrating large datasets, and ensuring seamless data flow for analytical and operational purposes.

Key responsibilities:
- Design, develop, and maintain scalable data pipelines and ETL processes to collect, process, and store data from various sources.
- Work with Apache Spark to process large datasets in a distributed environment, ensuring optimal performance and scalability.
- Develop and optimize Spark jobs and data transformations using Scala for large-scale data processing.
- Collaborate with data analysts and other stakeholders to ensure data pipelines meet business and technical requirements.
- Integrate data from different sources (databases, APIs, cloud storage, etc.) into a unified data platform.
- Ensure data quality, consistency, and accuracy by building robust data validation and cleansing mechanisms (see the sketch below).
- Use cloud platforms (AWS, Azure, or GCP) to deploy and manage data processing and storage solutions.
- Automate data workflows and tasks using appropriate tools and frameworks.
- Monitor and troubleshoot data pipeline performance, optimizing for efficiency and cost-effectiveness.
- Implement data security best practices, ensuring data privacy and compliance with industry standards.
- Stay updated with new data engineering tools and technologies to continuously improve the data infrastructure.

Required qualifications:
- 4-6 years of experience as a Data Engineer or in an equivalent role.
- Strong experience working with Apache Spark and Scala for distributed data processing and big data handling.
- Basic knowledge of Python and its application in Spark for writing efficient data transformations and processing jobs.
- Proficiency in SQL for querying and manipulating large datasets.
- Experience with cloud data platforms, preferably AWS (e.g., S3, EC2, EMR, Redshift) or other cloud-based solutions.
- Strong knowledge of data modeling, ETL processes, and data pipeline orchestration.
- Familiarity with containerization (Docker) and cloud-native tools for deploying data solutions.
- Knowledge of data warehousing concepts and experience with tools like AWS Redshift, Google BigQuery, or Snowflake is a plus.
- Experience with version control systems such as Git.
- Strong problem-solving abilities and a proactive approach to resolving technical challenges.
- Excellent communication skills and the ability to work collaboratively within cross-functional teams.

Preferred qualifications:
- Experience with additional programming languages such as Python or Java for data engineering tasks.
- Familiarity with orchestration tools like Apache Airflow, Luigi, or similar frameworks.
- Basic understanding of data governance, security practices, and compliance regulations.
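As one concrete illustration of the validation-and-cleansing responsibility above, here is a minimal PySpark sketch that splits a staging dataset into valid and rejected rows before loading. The paths, columns, and rules are hypothetical.

```python
# Minimal sketch of a data-validation/cleansing step: split a staging
# dataset into valid and rejected rows before loading. Paths, columns,
# and rules are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("quality-checks").getOrCreate()

df = spark.read.parquet("/data/staging/customers")   # placeholder source

checks = (
    F.col("customer_id").isNotNull()
    & F.col("email").rlike(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")
    & F.col("age").between(0, 120)
)

valid = df.filter(checks)
rejected = df.filter(~checks)     # kept aside for investigation

valid.write.mode("overwrite").parquet("/data/clean/customers")
rejected.write.mode("overwrite").parquet("/data/rejects/customers")
```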

Posted 2 weeks ago

Apply

10.0 - 14.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

The Applications Development Group Manager is a senior management-level position responsible for accomplishing results through the management of a team or department, in an effort to establish and implement new or revised application systems and programs in coordination with the Technology team. The overall objective of this role is to drive applications systems analysis and programming activities.

Responsibilities:
- Manage multiple teams of professionals to accomplish established goals and conduct personnel duties for the team (e.g., performance evaluations, hiring, and disciplinary actions)
- Provide strategic influence and exercise control over resources, budget management, and planning while monitoring end results
- Utilize in-depth knowledge of concepts and procedures within your own area, and basic knowledge of other areas, to resolve issues
- Ensure essential procedures are followed and contribute to defining standards
- Integrate in-depth knowledge of applications development with the overall technology function to achieve established goals
- Provide evaluative judgment based on analysis of facts in complicated, unique, and dynamic situations, drawing from internal and external sources
- Influence and negotiate with senior leaders across functions, and communicate with external parties as necessary
- Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients, and assets, by driving compliance with applicable laws, rules, and regulations; adhering to Policy; applying sound ethical judgment regarding personal behavior, conduct, and business practices; and escalating, managing, and reporting control issues with transparency, as well as effectively supervising the activity of others and creating accountability with those who fail to maintain these standards

Qualifications:
- 10+ years of relevant experience
- Experience in applications development
- Experience in management, including managing global technology teams
- Working knowledge of industry practices and standards
- Consistently demonstrates clear and concise written and verbal communication

Education:
- Bachelor's degree/University degree or equivalent experience
- Master's degree preferred

Required skills (essential):
- Programming skills, including concurrent, parallel, and distributed systems programming
- Expert-level knowledge of Java
- Expert-level experience with HTTP, RESTful web services, and API design
- Messaging technologies (Kafka)
- Experience with Big Data technologies: Hadoop, Apache Spark, Python, PySpark
- Experience with Reactive Streams

Desirable skills:
- Messaging technologies
- Familiarity with Hadoop SQL interfaces such as Hive, Spark SQL, etc.
- Experience with Kubernetes
- Good understanding of the Linux OS
- Experience with Gradle and Maven would be beneficial

If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.

Posted 2 weeks ago

Apply

5.0 - 15.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

HCLTech is looking for a Data and AI Principal / Senior Manager (Generative AI) to join their team in Noida. As a global technology company with a strong presence in 59 countries and over 218,000 employees, HCLTech is a leader in digital, engineering, cloud, and AI services. The company collaborates with clients in industries such as Financial Services, Manufacturing, Life Sciences, Healthcare, Technology, Telecom, Media, Retail, and Public Services. With consolidated revenues of $13.7 billion, HCLTech aims to provide industry-leading capabilities that drive progress for its clients.

In this role, you will provide hands-on technical leadership and oversight. This includes leading the design of AI and GenAI solutions, machine learning pipelines, and data architectures to ensure performance, scalability, and resilience. You will actively contribute to coding, code reviews, and solution design, while working closely with Account Teams, Client Partners, and Domain SMEs to align technical solutions with business needs. Mentoring and guiding engineers across various functions will be an essential aspect of the role, fostering a collaborative, high-performance team environment.

Your role will also involve designing and implementing system and API architectures; integrating AI, GenAI, and agentic applications into production systems; and architecting ETL pipelines, data lakes, and data warehouses using industry-leading tools. You will drive the deployment and scaling of solutions using cloud platforms like AWS, Azure, and GCP, while leading the integration of machine learning models into end-to-end production workflows. Additionally, you will lead CI/CD pipeline efforts and infrastructure automation, and ensure robust integration with cloud platforms. Stakeholder communication, promoting Agile methodologies, and optimizing the performance and scalability of applications are also key responsibilities.

The ideal candidate will have at least 15 years of hands-on technical experience in software engineering, with a focus on AI, GenAI, machine learning, data engineering, and cloud infrastructure. If you meet the qualifications and are passionate about driving innovation in AI and data technologies, please share your profile, including your overall experience, skills, current and preferred location, current and expected CTC, and notice period, with paridhnya_dhawankar@hcltech.com. We look forward to hearing from you and exploring the opportunity to work together at HCLTech.

Posted 2 weeks ago

Apply

5.0 - 23.0 years

0 Lacs

pune, maharashtra

On-site

The ideal candidate for the role of People Analytics professional should have a strong background in transforming data into actionable insights to drive evidence-based HR decision-making. You will be responsible for designing, developing, and managing advanced dashboards and data visualizations using tools such as Tableau, Power BI, and other modern BI platforms. Building strong partnerships with key stakeholders across HR and the business is essential to deeply understand their challenges and translate their needs into actionable data solutions.

In this role, you will need to develop and implement statistical models and machine learning solutions for HR analytics, while managing end-to-end data workflows including extraction, transformation, and loading (ETL). You will be required to design and deliver regular and ad-hoc reports on key HR metrics, ensuring data accuracy through thorough testing and quality checks.

The successful candidate should have a Bachelor's degree in a related field, with a minimum of 5 years of experience in analytics, including specialization in people analytics and HR data analysis. Strong proficiency in RStudio/Python, SQL, data visualization tools such as Power BI or Tableau, machine learning, statistical analysis, and cloud platforms is required. Hands-on experience working with Oracle Cloud HCM data structures and reporting tools is highly desirable.

You should bring strong problem-solving skills, effective communication abilities to convey data insights through compelling storytelling, and experience managing multiple projects independently in fast-paced, deadline-driven environments. An entrepreneurial mindset and leadership experience are key to successfully leading high-visibility analytics projects and driving collaboration across teams and departments.

As a member of the Global People Analytics team, you will collaborate with key stakeholders within Talent Management, Talent Acquisition, Total Rewards, HR Services, and HR Information Systems to drive data-driven decision-making across the organization. This role offers an exciting opportunity to shape the future of people analytics, leverage advanced technologies, and contribute to high-impact, strategic HR initiatives.
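To make the "statistical models and machine learning solutions for HR analytics" requirement concrete, here is a minimal attrition-model sketch in Python. The hr.csv file, its column names, and the logistic-regression choice are all illustrative assumptions rather than anything specified by the listing.

```python
# A minimal HR-attrition modeling sketch.
# Assumes a hypothetical hr.csv with numeric feature columns and a
# binary "left_company" label; all names are illustrative.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("hr.csv")

# Assumed numeric features (e.g. salary_band already encoded as a number).
features = ["tenure_years", "salary_band", "engagement_score"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["left_company"], test_size=0.2, random_state=42
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# AUC summarizes how well the model separates leavers from stayers.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Test AUC: {auc:.3f}")
```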

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

hyderabad, telangana

On-site

The role of a software engineer in Corporate Planning and Management (CPM) involves providing engineering solutions to facilitate budget planning, financial forecasting, expense allocation, spend management, third-party risk assessment, and supporting corporate decision-making aligned with strategic objectives. As a software engineer in CPM Engineering, you will have the opportunity to contribute to the development and transformation of financial and spend management workflows, as well as the creation of intelligent reporting systems, to drive commercial benefits for the firm. Working in small, agile teams, you will be at the forefront of impacting various aspects of corporate planning and management in a fast-paced environment.

To excel in this role, you should possess the following qualities:
- Demonstrate energy, self-direction, and motivation, while fostering long-term relationships with clients and colleagues
- Approach problem-solving collaboratively within a team setting
- Showcase exceptional analytical skills to deliver creative and commercially viable solutions through informed decision-making
- Exhibit a strong willingness to learn and actively contribute innovative ideas to the team
- Thrive in dynamic work environments, displaying independence and adaptability
- Efficiently manage multiple tasks, demonstrating sound judgment in prioritization
- Offer advanced financial products digitally to clients
- Engage with a diverse, globally distributed cross-functional team to develop customer-centric products
- Evaluate existing software systems for enhancement opportunities and provide estimates for new feature implementations
- Maintain and update documentation related to team processes, best practices, and software runbooks

Basic Qualifications:
- Minimum of 5 years of relevant professional experience
- Bachelor's degree or higher in Computer Science or an equivalent field
- 3+ years of experience in Java API development
- Proficiency in React JS, HTML5, and Java
- Strong written and verbal communication skills
- Ability to establish trusted partnerships with product leaders and executive stakeholders
- Hands-on experience in building transactional systems and a solid understanding of software architecture
- Familiarity with integrating RESTful web services
- Comfortable working in agile operating environments

Preferred Qualifications:
- Knowledge of microservices architecture
- Proficiency in React JS
- Experience with Apache Spark, Hadoop, Hive, and Spring Boot frameworks

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

bhubaneswar

On-site

The Software Development Lead plays a crucial role in developing and configuring software systems, whether for the entire product lifecycle or for specific stages. As a Software Development Lead, your main responsibilities include collaborating with different teams to ensure that the software meets client requirements, applying your expertise in technologies and methodologies to effectively support projects, and overseeing the implementation of solutions that improve operational efficiency and product quality.

You are expected to act as a subject matter expert (SME) and manage the team to deliver high-quality results. Your role involves making team decisions, engaging with multiple teams to contribute to key decisions, providing solutions to problems for your team and others, and facilitating knowledge-sharing sessions to enhance team capabilities. Additionally, you will monitor project progress to ensure alignment with strategic goals.

In terms of professional and technical skills, proficiency in AWS BigData is a must. You should have a strong understanding of data processing frameworks like Apache Hadoop and Apache Spark, experience in cloud services and architecture (especially in AWS environments), familiarity with data warehousing solutions and ETL processes, and the ability to implement data security and compliance measures.

Candidates applying for this role should have a minimum of 5 years of experience in AWS BigData. The position is based at our Bhubaneswar office, and 15 years of full-time education is required to be eligible for this role.
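As a hedged illustration of the ETL and AWS data-processing skills this listing names, the sketch below shows a small PySpark batch job. The bucket names, file layout, and columns are placeholders; the same pattern would run on AWS Glue or EMR, where the S3 connector is preconfigured (a standalone cluster would additionally need the hadoop-aws package).

```python
# A minimal PySpark ETL sketch over S3. All paths and columns are
# hypothetical placeholders, not details from the listing.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, to_date

spark = SparkSession.builder.appName("s3-etl-sketch").getOrCreate()

# Extract: read raw CSV files landed in S3.
raw = spark.read.option("header", "true").csv("s3://example-bucket/raw/orders/")

# Transform: type the columns and drop malformed rows.
orders = (
    raw.withColumn("amount", col("amount").cast("double"))
       .withColumn("order_date", to_date(col("order_date")))
       .dropna(subset=["amount", "order_date"])
)

# Load: write partitioned Parquet back to a curated zone.
orders.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/orders/"
)
```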

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

pune, maharashtra

On-site

You are an experienced Python + Databricks Developer who will be a valuable addition to our data engineering team. Your expertise in Python programming, data processing, and hands-on experience with Databricks will be instrumental in building and optimizing data pipelines.

Your key responsibilities will include designing, developing, and maintaining scalable data pipelines using Databricks and Apache Spark. You will be expected to write efficient Python code for data transformation, cleansing, and analytics. Collaboration with data scientists, analysts, and engineers is essential to understand data needs and deliver high-performance solutions. Optimizing and tuning data pipelines for performance and cost efficiency, implementing data validation, quality checks, and monitoring, as well as working with cloud platforms (preferably Azure or AWS) to manage data workflows are crucial aspects of the role. Ensuring best practices in code quality, version control, and documentation will also be part of your responsibilities.

To be successful in this role, you should have 5+ years of professional experience in Python development and at least 3 years of hands-on experience with Databricks, including notebooks, clusters, Delta Lake, and job orchestration. Strong experience with Spark, especially PySpark, is required. Proficiency in working with large-scale data processing and ETL/ELT pipelines, a solid understanding of data warehousing concepts and SQL, and experience with Azure Data Factory, AWS Glue, or other data orchestration tools will be beneficial. Familiarity with version control tools like Git and excellent problem-solving and communication skills are also essential.

If you are looking to leverage your Python and Databricks expertise to contribute to building robust data pipelines and optimizing data workflows, this role is a great fit for you.
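To give a concrete feel for the Databricks and Delta Lake work described above, here is a minimal cleansing-and-write sketch. The mount paths and column names are hypothetical, and the Delta format assumes a Databricks cluster (or the delta-spark package) is available.

```python
# A minimal Databricks-style PySpark sketch: cleanse raw JSON events
# and append them to a Delta table. Paths and columns are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, trim

spark = SparkSession.builder.appName("delta-pipeline-sketch").getOrCreate()

# Cleanse: normalize identifiers and filter out empty ones.
events = (
    spark.read.json("/mnt/raw/events/")
    .withColumn("user_id", trim(col("user_id")))
    .filter(col("user_id") != "")
)

# Delta provides ACID writes, so reruns of the job stay consistent.
events.write.format("delta").mode("append").save("/mnt/curated/events/")
```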

Posted 2 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies