
18226 Spark Jobs - Page 38

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

As a Lead Techno Functional Architect for Blue Yonder Warehouse Management Systems (SaaS / On-Premise) implementation and optimization, your core responsibility will be to design best-in-class solutions using a suite of products in the Warehouse Management space. You will drive the design, optimization, and implementation of Blue Yonder Warehouse Management software solutions for customers across all verticals, acting as the techno-functional liaison across key stakeholders, including key customer contacts, Project Managers, Technical Consultants, and 3rd-party Consultants, to ensure the timely and successful delivery of the final solution.

The current technical environment includes software such as Java, Spring Boot, Gradle, Git, Hibernate, REST APIs, and OAuth, with an application architecture that is a scalable, resilient, event-driven, secure multi-tenant microservices architecture. Cloud architecture uses MS Azure with frameworks such as Kubernetes, Kafka, Elasticsearch, Spark, NoSQL, RDBMS, Spring Boot, Gradle, Git, and Ignite.

Your responsibilities will include leading solution design throughout all phases of the project life cycle, integrating systems with other platforms using API solutions, conducting system integration and user acceptance testing, and ensuring a seamless transition to steady-state associates. You will also assess business needs, identify revenue opportunities, communicate customer-specific needs to the Product team, and provide necessary data visibility based on customer requirements.

The ideal candidate has a minimum of 5+ years of experience implementing Warehouse Management Systems at Tier 1 warehouses and possesses exceptional communication, organization, and analytical skills. You should demonstrate a strong ability to problem-solve, develop innovative solutions, work in a fast-paced environment, and adapt to changing priorities. Strong business analysis skills, the ability to prepare and deliver presentations, and excellent interpersonal skills are also required. An advanced degree in Industrial Engineering, Supply Chain, Operations, Logistics, or a related field is preferred, although a Bachelor's degree with relevant industry experience may also be considered.

If you align with our values and want to contribute to a diverse and inclusive environment, we welcome you to explore this opportunity to join our team as a Lead Techno Functional Architect for Blue Yonder Warehouse Management Systems.

Posted 1 week ago

Apply

6.0 years

0 Lacs

Greater Kolkata Area

On-site

Role Overview
We are seeking a highly analytical and detail-oriented Data Analysis Engineer to lead the development of our Recommendation Engine. This role is central to building intelligent, personalized suggestions that help users navigate lifestyle and health decisions. You will work at the intersection of data science, software engineering, and product development, leveraging multi-source data (metabolic, behavioral, and contextual) to craft meaningful and evidence-based user experiences.

Key Responsibilities
- Architect and develop a robust, scalable recommendation system using structured and unstructured data.
- Design data pipelines and models to collect, process, and analyse behavioural, demographic, and physiological inputs.
- Apply machine learning and statistical techniques to predict user preferences and outcomes.
- Work closely with data engineers, product managers, and medical advisors to refine personalisation logic.
- Continuously evaluate and tune algorithms for accuracy, diversity, and user satisfaction.
- Design and monitor performance metrics (CTR, precision/recall, RMSE, etc.) to assess and improve model quality.
- Translate business questions into data queries and actionable insights.
- Document model assumptions, architecture, and validation rigorously to meet audit and compliance standards.

Required Qualifications
- Bachelor's or Master's in Computer Science, Statistics, Data Science, or a related field.
- 3–6 years of hands-on experience building data products, ideally involving recommendation engines or personalisation systems.
- Strong programming skills in Python or Scala, with proficiency in libraries like Pandas, NumPy, Scikit-learn, PyTorch, or TensorFlow.
- Experience with SQL and NoSQL databases (e.g., PostgreSQL, MongoDB).
- Proven ability to build and deploy models using ML frameworks and MLOps tools.
- Deep understanding of algorithms such as collaborative filtering, matrix factorization, embeddings, and content-based filtering.
- Exposure to real-time data ingestion systems like Kafka, Spark Streaming, or Flink is a plus.

Preferred Skills
- Prior experience in healthtech, wellness, or behavioral analytics domains.
- Knowledge of A/B testing, causal inference, or reinforcement learning.
- Experience with AWS, GCP, or Azure cloud platforms.
- Comfort working with multi-disciplinary teams and translating complex data into user-friendly insights.

Why Join Us?
- Be part of a mission-driven company transforming preventive healthcare through intelligent personalization.
- Work with real-world data to improve lives, not just metrics.
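The posting names collaborative filtering and matrix factorization among its core algorithms. As a minimal, hedged sketch of that idea (not this employer's actual system), the following uses Spark ML's ALS to factorize a toy user-item ratings matrix; all column names, ratings, and hyperparameters are illustrative assumptions:

```python
# Minimal collaborative-filtering sketch with ALS (matrix factorization).
# Data, columns, and hyperparameters are hypothetical.
from pyspark.sql import SparkSession
from pyspark.ml.recommendation import ALS

spark = SparkSession.builder.appName("reco-sketch").getOrCreate()

# Toy explicit-feedback ratings: (user_id, item_id, rating)
ratings = spark.createDataFrame(
    [(0, 10, 4.0), (0, 11, 2.0), (1, 10, 5.0), (1, 12, 3.0)],
    ["user_id", "item_id", "rating"],
)

als = ALS(
    userCol="user_id",
    itemCol="item_id",
    ratingCol="rating",
    rank=16,                    # latent-factor dimension
    regParam=0.1,               # L2 regularization
    coldStartStrategy="drop",   # skip users/items unseen during training
)
model = als.fit(ratings)

# Top-3 personalized recommendations per user
model.recommendForAllUsers(3).show(truncate=False)
```

In production, a model like this would be retrained on fresh interaction data and evaluated against the metrics the posting lists (precision/recall, RMSE, CTR).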

Posted 1 week ago

Apply

0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Company Description
SPARK & Associates Chartered Accountants LLP is a reputed and progressive chartered accountancy firm in India, established in 1990. With a presence in 11 states and 13 cities, the firm assists clients in achieving their business and financial goals. Key locations include Mumbai, Delhi, Kota, Bhopal, Jamshedpur, Pune, Raipur, Bengaluru, Patna, Sangrur, Noida, and Bioara. The firm employs over 200 staff members, including Chartered Accountants, MBAs, and personnel with additional qualifications such as DISA, DITL, and DCL.

Role Description
This is a full-time on-site role for an Article Trainee at SPARK & Associates Chartered Accountants LLP, located in Mumbai. The Article Trainee will engage in daily tasks such as assisting with audits, preparing financial statements, conducting research, and ensuring compliance with accounting standards. The trainee will work closely with senior staff and participate in various accounting projects and assignments.

Qualifications
- A strong foundation in Accounting and Auditing
- Proficiency in Financial Statement preparation and Analysis
- Research skills and attention to detail
- Excellent communication and interpersonal skills
- Ability to work effectively in a team and independently
- Currently pursuing or completed CA Intermediate
- Basic knowledge of relevant accounting software and MS Office

Posted 1 week ago

Apply

6.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Description
- Minimum 6 years of experience in data engineering and analytics
- Strong hands-on experience in Oracle Analytics Cloud (OAC) and OCI Big Data Platform
- Proficiency in Spark, PySpark, Hive, and SQL
- Deep understanding of data integration, ETL pipelines, and data modeling techniques
- Experience working with large-scale data systems and cloud-based architectures
- Familiarity with data security, access control, and compliance best practices
- Strong analytical and problem-solving skills
- Excellent communication and team collaboration abilities

Roles & Responsibilities
- Create robust database structures, schemas, and data models using Oracle Database technologies to support business intelligence and analytics initiatives.
- Develop, schedule, and monitor ETL workflows using Oracle Data Integrator (ODI) or PL/SQL scripts to extract, transform, and load data from various sources.
- Perform tuning of Oracle queries, stored procedures, and indexes to enhance the speed and efficiency of data processing and storage.
- Implement validation rules, data governance standards, and security protocols to maintain high-quality, compliant, and secure datasets.
- Work closely with data analysts, architects, and business units to understand requirements and deliver scalable, reliable Oracle data solutions.
- Integrate on-premise Oracle databases with cloud platforms like Oracle Cloud Infrastructure (OCI), AWS, or Azure for hybrid data solutions.
- Participate in Agile ceremonies and work closely with DevOps teams for CI/CD pipeline integration of data workflows.
- Ensure high availability and performance of Oracle databases through tuning and monitoring.
- Utilize tools like Oracle GoldenGate or Oracle Streams to enable real-time data replication and change tracking across systems.
- Design and implement long-term data archiving strategies using Oracle ILM (Information Lifecycle Management).
- Develop solutions for moving data between Oracle and other platforms (e.g., SQL Server, Snowflake, PostgreSQL) via DB links or APIs.

Job Location: Pune (ref:hirist.tech)
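The posting centers on extracting data from Oracle and landing it on cloud analytics platforms with Spark. A minimal PySpark sketch of that pattern follows; the JDBC URL, credentials, table, and output path are placeholders, and the Oracle JDBC driver is assumed to be on the cluster classpath:

```python
# Hedged sketch: extract an Oracle table over JDBC with PySpark and land it
# as partitioned Parquet for downstream analytics. All names are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("oracle-etl-sketch").getOrCreate()

orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:oracle:thin:@//db-host:1521/ORCLPDB1")  # hypothetical
    .option("dbtable", "SALES.ORDERS")                           # hypothetical
    .option("user", "etl_user")
    .option("password", "***")
    .option("fetchsize", 10000)  # larger fetch size reduces round trips
    .load()
)

# Simple transform, then write partitioned Parquet to object storage
# (the oci:// path assumes the OCI HDFS connector is configured)
(orders.filter("STATUS = 'SHIPPED'")
       .write.mode("overwrite")
       .partitionBy("ORDER_DATE")
       .parquet("oci://bucket@namespace/curated/orders/"))  # hypothetical path
```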

Posted 1 week ago

Apply

15.0 - 19.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As the Head of Data & Analytics at OneMagnify India, you will be responsible for leading and scaling the data function, serving as the strategic anchor for both onsite client engagements and offshore delivery. Your role will be pivotal in developing a Center of Excellence (CoE) for Data & Analytics, where you will mentor teams, build capabilities, and drive business growth through valuable analytics solutions. Additionally, you will collaborate with global and India leadership to shape the company's growth journey and identify cross-selling opportunities with key accounts.

You bring a wealth of expertise across the entire data lifecycle and analytics field, with a focus on translating insights into tangible outcomes. Leading with influence, you excel at building high-performing, scalable delivery teams. As a strategic operator, you will mentor talent, shape practice direction, and support go-to-market activities. You thrive in cross-functional environments and strongly believe in India's potential as a capability and innovation hub. Your command of AI/ML technologies will be instrumental in integrating advanced data science into marketing and business operations.

In this role, you will lead the Data & Analytics practice in India, overseeing both onsite engagements and global in-house delivery. You will guide a team of over 50 onsite and offshore data professionals, promoting technical excellence, collaboration, and continuous improvement. Building, scaling, and optimizing a Center of Excellence for data engineering, reporting, business intelligence, and AI will be key responsibilities. You will act as the primary subject matter expert and escalation point for data delivery in India, providing thought leadership and overseeing delivery operations. Collaborating with global teams, you will identify and enable cross-sell and up-sell opportunities and support pre-sales activities, client pitches, and proposal development for data projects and analytics initiatives. Defining and standardizing best practices for data modeling, quality, automation, and agile delivery processes will be essential. You will also drive internal capability building in advanced analytics, visualization tools, and platforms, represent OneMagnify India's data capabilities to clients, prospects, and global leadership, and stay current on emerging AI technologies, data science techniques, and industry innovations to drive differentiation in client delivery.

To be successful in this role, you should have 15+ years of experience in Data & Analytics, with a proven track record of leading data practices or delivery teams. Your expertise should span BI, reporting, data engineering, and cloud platforms, together with a strong understanding of the data lifecycle, analytics frameworks, and agile development. Experience working with onsite-offshore models in enterprise environments is essential, as is the ability to lead cross-functional teams, mentor talent, and influence executive stakeholders. An entrepreneurial mindset, excellent communication skills, and experience leading data and analytics teams focused on marketing performance, customer insights, or personalization at scale are also required. Your track record of translating complex analytical findings into actionable business strategies and measurable ROI improvements will be key to your success in this role.

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

Pune, Maharashtra

On-site

The Applications Development Intermediate Programmer Analyst position is an intermediate-level role in which you will participate in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. Your primary goal will be to contribute to applications systems analysis and programming activities.

You should have hands-on experience in ETL and Big Data testing, delivering high-quality solutions. Proficiency in database and UI testing using automation tools is essential, and knowledge of performance, volume, and stress testing is required. You must have a strong understanding of SDLC/STLC processes and different types of manual testing, and be well-versed in Agile methodology.

Your responsibilities will include designing and executing test cases, authoring user stories, tracking defects, and aligning with business requirements. You should be open to learning and bringing new innovations to automation processes as project needs evolve. Managing complex tasks and teams while fostering a collaborative, growth-oriented environment through strong technical and analytical skills is a key aspect of this role. You will utilize your knowledge of applications development procedures and concepts, and basic knowledge of other technical areas, to identify and define necessary system enhancements, including using script tools and analyzing/interpreting code.

Familiarity with the test management tool JIRA and automation tools such as Python, PySpark, Java, Spark, MySQL, Selenium, and Tosca is required. Experience with Hadoop / Ab Initio is considered a plus. Testing focus areas are ETL, Big Data, database, and UI. Domain experience in Banking and Finance is preferred. You will consult with users, clients, and other technology groups on issues, recommend programming solutions, and install and support customer exposure systems.

Qualifications:
- 4-8 years of relevant experience in the Financial Service industry
- Intermediate-level experience in an Applications Development role
- Clear and concise written and verbal communication skills
- Problem-solving and decision-making abilities
- Ability to work under pressure and manage deadlines or unexpected changes in expectations or requirements

Education:
- Bachelor's degree/University degree or equivalent experience

This job description provides a high-level overview of the work performed. Other job-related duties may be assigned as required.
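Since the role combines ETL/Big Data testing with automation in Python and PySpark, a hedged sketch of a PySpark transformation unit test is shown below; the transform and its business rule are invented purely for illustration:

```python
# Hedged sketch: pytest-style unit test for a PySpark transform.
# add_status and its business rule are hypothetical examples.
import pytest
from pyspark.sql import SparkSession
from pyspark.sql import functions as F


def add_status(df):
    # Hypothetical rule: flag accounts with negative balances
    return df.withColumn(
        "status",
        F.when(F.col("balance") < 0, "OVERDRAWN").otherwise("OK"),
    )


@pytest.fixture(scope="session")
def spark():
    return SparkSession.builder.master("local[2]").appName("etl-tests").getOrCreate()


def test_add_status_flags_negative_balances(spark):
    src = spark.createDataFrame([(1, -50.0), (2, 100.0)], ["account_id", "balance"])
    out = {r["account_id"]: r["status"] for r in add_status(src).collect()}
    assert out == {1: "OVERDRAWN", 2: "OK"}
```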

Posted 1 week ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Tech Lead - Data Bricks

Date: Aug 2, 2025
Job Requisition Id: 59586
Location: Hyderabad, TG, IN

YASH Technologies is a leading technology integrator specializing in helping clients reimagine operating models, enhance competitiveness, optimize costs, foster exceptional stakeholder experiences, and drive business transformation. At YASH, we're a cluster of the brightest stars working with cutting-edge technologies. Our purpose is anchored in a single truth: bringing real positive changes in an increasingly virtual world, and it drives us beyond generational gaps and disruptions of the future.

We are looking forward to hiring Data Bricks Professionals in the following areas:

Experience
8+ Years

Job Description
- Overall 8+ years of experience, with a minimum of 3+ years in Azure; should have worked as a lead for at least 3 years
- Should come from a DWH background and have strong ETL experience
- Strong hands-on experience in Azure Databricks/PySpark
- Strong hands-on experience in Azure Data Factory and DevOps
- Strong knowledge of the Big Data stack
- Strong knowledge of Azure Event Hubs, the pub-sub model, and security
- Strong communication and analytical skills
- Highly proficient in SQL development
- Experience working in an Agile environment
- Work as a team lead to develop cloud data and analytics solutions
- Mentor junior developers and testers
- Able to build strong relationships with the client technical team
- Participate in the development of cloud data warehouses, data as a service, and business intelligence solutions
- Data wrangling of heterogeneous data
- Coding complex Spark (Scala or Python)

Required Behavioral Competencies
- Accountability: Takes responsibility for and ensures accuracy of own work, as well as the work and deadlines of the team.
- Collaboration: Shares information within the team, participates in team activities, asks questions to understand other points of view.
- Agility: Demonstrates readiness for change, asking questions and determining how changes could impact own work.
- Customer Focus: Identifies trends and patterns emerging from customer preferences and works towards customizing/refining existing services to exceed customer needs and expectations.
- Communication: Targets communications for the appropriate audience, clearly articulating and presenting his/her position or decision.
- Drives Results: Sets realistic stretch goals for self and others to achieve and exceed defined goals/targets.
- Resolves Conflict: Displays sensitivity in interactions and strives to understand others' views and concerns.

At YASH, you are empowered to create a career that will take you to where you want to go while working in an inclusive team environment. We leverage career-oriented skilling models and optimize our collective intelligence aided with technology for continuous learning, unlearning, and relearning at a rapid pace and scale. Our Hyperlearning workplace is grounded upon four principles: flexible work arrangements, free spirit, and emotional positivity; agile self-determination, trust, transparency, and open collaboration; all support needed for the realization of business goals; stable employment with a great atmosphere and ethical corporate culture.
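The posting asks for Azure Event Hubs with a pub-sub model alongside Databricks and PySpark. One common pattern, shown here only as a hedged sketch, is to read Event Hubs through its Kafka-compatible endpoint with Structured Streaming and land the raw payload in a bronze Delta table; the namespace, hub name, and paths are placeholders:

```python
# Hedged sketch: stream from Azure Event Hubs (Kafka-compatible endpoint)
# into a bronze Delta table. Namespace, hub, and paths are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("eventhubs-bronze").getOrCreate()

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "my-namespace.servicebus.windows.net:9093")
    .option("subscribe", "telemetry")  # the Event Hub name acts as the topic
    .option("kafka.security.protocol", "SASL_SSL")
    .option("kafka.sasl.mechanism", "PLAIN")
    # On Databricks the Kafka client is shaded, hence the kafkashaded prefix
    .option(
        "kafka.sasl.jaas.config",
        'kafkashaded.org.apache.kafka.common.security.plain.PlainLoginModule '
        'required username="$ConnectionString" password="<connection-string>";',
    )
    .load()
)

# Land the payload unparsed; schema enforcement belongs in the silver layer
(raw.selectExpr("CAST(value AS STRING) AS body", "timestamp")
    .writeStream.format("delta")
    .option("checkpointLocation", "/mnt/bronze/_checkpoints/telemetry")
    .start("/mnt/bronze/telemetry"))
```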

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

YASH Technologies is a leading technology integrator specializing in helping clients reimagine operating models, enhance competitiveness, optimize costs, foster exceptional stakeholder experiences, and drive business transformation. We are a cluster of the brightest stars working with cutting-edge technologies. Our purpose is anchored in a single truth: bringing real positive changes in an increasingly virtual world, and it drives us beyond generational gaps and disruptions of the future.

We are looking forward to hiring Hadoop Professionals in the following areas:

**Position Title:** Data Engineer

**SCOPE OF RESPONSIBILITY:**
As part of a global, growing team of data engineers, you will collaborate in a DevOps model to enable the client's Life Science business with cutting-edge technology, leveraging data as an asset to support better decision-making. You will design, develop, test, and support automated end-to-end data pipelines and applications within the Life Sciences data management and analytics platform (Palantir Foundry, Hadoop, and other components). This position requires proficiency in data engineering, distributed computation, and DevOps methodologies, utilizing AWS infrastructure and on-premises data centers to support multiple technology stacks.

**PURPOSE OF THE POSITION:**
The purpose of this role is to build and maintain data pipelines, develop applications on various platforms, and support data-driven decision-making processes across the client's Life Science business. You will work closely with cross-functional teams, including business users, data scientists, and data analysts, while ensuring the best balance between technical feasibility and business requirements.

**RESPONSIBILITIES:**
- Develop data pipelines by ingesting various structured and unstructured data sources into Palantir Foundry.
- Participate in end-to-end project lifecycles, from requirements analysis to deployment and operations.
- Act as a business analyst for developing requirements related to Foundry pipelines.
- Review code developed by other data engineers, ensuring adherence to platform standards and functional specifications.
- Document technical work professionally and create high-quality technical documentation.
- Balance technical feasibility with strict business requirements.
- Deploy applications on Foundry platform infrastructure with clearly defined checks.
- Implement changes and bug fixes following the client's change management framework.
- Work in DevOps project setups following Agile principles (e.g., Scrum).
- Act as third-level support for critical applications, resolving complex incidents and debugging problems across the full stack.
- Work closely with business users, data scientists, and analysts to design physical data models.
- Provide support in designing ETL/ELT processes with databases and Hadoop platforms.

**EDUCATION:**
Bachelor's degree or higher in Computer Science, Engineering, Mathematics, Physical Sciences, or related fields.

**EXPERIENCE:**
5+ years of experience in system engineering or software development. 3+ years of experience in engineering with a focus on ETL work involving databases and Hadoop platforms.

**TECHNICAL SKILLS:**
- Hadoop General: Deep knowledge of distributed file system concepts, map-reduce principles, and distributed computing. Familiarity with Spark and its differences from MapReduce.
- Data Management: Proficient in technical data management tasks such as reading, transforming, and storing data, including experience with XML/JSON and REST APIs.
- Spark: Experience in launching Spark jobs in both client and cluster modes, with an understanding of property settings that impact performance (a configuration sketch follows this posting).
- Application Development: Familiarity with HTML, CSS, JavaScript, and basic visual design competencies.
- SCC/Git: Experienced in using source code control systems like Git.
- ETL/ELT: Experience developing ETL/ELT processes, including loading data from enterprise-level RDBMS systems (e.g., Oracle, DB2, MySQL).
- Authorization: Basic understanding of user authorization, preferably with Apache Ranger.
- Programming: Proficient in Python, with expertise in at least one high-level language (e.g., Java, C, Scala). Must have experience using REST APIs.
- SQL: Expertise in SQL for manipulating database data, including views, functions, stored procedures, and exception handling.
- AWS: General knowledge of the AWS stack (EC2, S3, EBS, etc.).
- IT Process Compliance: Experience with SDLC processes, change control, and ITIL (incident, problem, and change management).

**REQUIRED SKILLS:**
- Strong problem-solving skills with an analytical mindset.
- Excellent communication skills to collaborate with both technical and non-technical teams.
- Experience working in Agile/DevOps teams, utilizing Scrum principles.
- Ability to thrive in a fast-paced, dynamic environment while managing multiple tasks.
- Strong organizational skills with attention to detail.

At YASH, you are empowered to create a career that will take you to where you want to go while working in an inclusive team environment. We leverage career-oriented skilling models and optimize our collective intelligence aided with technology for continuous learning, unlearning, and relearning at a rapid pace and scale. Our Hyperlearning workplace is grounded upon four principles: flexible work arrangements, free spirit, and emotional positivity; agile self-determination, trust, transparency, and open collaboration; all support needed for the realization of business goals; stable employment with a great atmosphere and ethical corporate culture.
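On the Spark bullet above: deploy mode is chosen at submit time (e.g., spark-submit --deploy-mode cluster, which places the driver on the cluster rather than on the launching host), while performance-relevant properties can be set when the session is built. The values below are illustrative assumptions, not tuned recommendations:

```python
# Hedged sketch of performance-relevant Spark properties. Values are
# illustrative assumptions; real settings depend on cluster size and data.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("tuning-sketch")
    # Shuffle parallelism sized to the cluster instead of the default 200
    .config("spark.sql.shuffle.partitions", "400")
    # Per-executor resources (effective when this session launches the app)
    .config("spark.executor.memory", "8g")
    .config("spark.executor.memoryOverhead", "1g")
    .config("spark.executor.cores", "4")
    # Adaptive Query Execution re-plans joins and partition counts at runtime
    .config("spark.sql.adaptive.enabled", "true")
    .getOrCreate()
)
```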

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Lead Software Engineer in the Loyalty Rewards and Segments Organization within Mastercard, you will play a crucial role in designing, developing, testing, and delivering software frameworks for use in large-scale distributed systems. In this position, you will lead the technical direction, architecture, design, and engineering practices to create cutting-edge solutions in event-driven architecture and zero trust.

Your responsibilities will include prototyping new technologies, designing and developing software frameworks using best practices, writing efficient code, debugging and troubleshooting to improve performance, and collaborating with cross-functional teams to deliver high-quality services. You will balance competing interests with judgment and experience, identify synergies across teams, and drive process improvements and efficiency gains.

To excel in this role, you must have deep hands-on experience in software engineering, particularly in the architecture, design, and implementation of large-scale distributed systems. Expertise in event-driven architecture and knowledge of zero trust architecture are essential. Proficiency in Java, Scala, SQL, and building pipelines is required, along with experience in the Hadoop ecosystem, including tools like Hive, Pig, and Spark, and cloud platforms.

Your technical skills should also include expertise in web applications, web services, and tools such as Spring Boot, Angular, REST, OAuth, Sonar, Splunk, and Dynatrace. Familiarity with XP, TDD, BDD, secure coding standards, and vulnerability management is important. You should demonstrate strong problem-solving skills, experience in Agile environments, and excellent verbal and written communication.

As a Lead Software Engineer, you will have the opportunity to mentor junior team members, demo features to product owners, and take development work from inception to implementation. Your passion for technology, continuous learning, and proactive approach to challenges will drive the team towards success. You will also be responsible for upholding Mastercard's security policies, ensuring information security, and reporting any violations or breaches.

If you are a motivated, intellectually curious individual with a strong background in software design and development, this role offers a platform to work on innovative technologies and deliver solutions that meet the needs of Mastercard's customers. Join us in shaping the future of loyalty management solutions for banks, merchants, and Fintechs.

Posted 1 week ago

Apply

10.0 - 14.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

As an individual contributor at Adobe, you will play a pivotal role in driving innovation and shaping the future of digital experiences. Adobe is dedicated to fostering a work environment where every employee is valued, respected, and provided with equal opportunities. Your insights and ideas are highly valued as we strive to build exceptional products and services that cater to a global audience.

The opportunity at hand involves working on a cutting-edge Component Content Management System that powers structured content for large enterprises. In this role, you will collaborate with cross-functional teams to identify key project interfaces and implement complex Machine Learning (ML) solutions using technologies such as Natural Language Processing (NLP), Generative AI, and Transformer architectures. Your responsibilities will include developing, evaluating, and deploying scalable ML models, monitoring their performance, and continuously enhancing Adobe's products and services.

To excel in this role, you should possess a master's degree or equivalent experience in Machine Learning, with a minimum of 10 years of combined industry experience in ML, software engineering, and data engineering. Proficiency in ML frameworks like PyTorch and TensorFlow, as well as strong programming skills in languages such as Python and Java, is essential. Additionally, you should have a solid understanding of ML Ops and of data analysis and mining using frameworks like Hadoop and Spark.

Your ability to drive innovation, collaborate effectively, and adapt to an ever-evolving technological landscape will be crucial in this role. Strong problem-solving skills, excellent communication abilities, and a passion for staying abreast of the latest technological advancements are highly valued.

In this dynamic and fast-paced environment, you will have the opportunity to contribute to the growth and success of Adobe while working alongside a diverse and inclusive team. If you are seeking a challenging yet rewarding opportunity to make a significant impact, Adobe is the place for you. Join us in creating exceptional digital experiences and pushing the boundaries of innovation. Explore the possibilities of a career at Adobe by updating your profile and applying for roles that align with your expertise and aspirations. Your journey towards professional growth and accomplishment starts here.

Posted 1 week ago

Apply

1.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

This job is with Amazon, an inclusive employer and a member of myGwork – the largest global platform for the LGBTQ+ business community. Please do not contact the recruiter directly.

Description
Want to participate in building the next generation of an online payment system that supports multiple countries and payment methods? Amazon Payment Services (APS) is a leading payment service provider in the MENA region, with operations spanning 8 countries, offering online payment services to thousands of merchants. The APS team is building a robust payment solution to drive the best payment experience on and off Amazon. Over 100 million customers send tens of billions of dollars moving at light speed through our systems annually. We build systems that process payments at an unprecedented scale with accuracy, speed, and mission-critical availability. We innovate to improve customer experience, with support for currency of choice, in-store payments, pay on delivery, credit and debit card payments, seller disbursements, and gift cards. Many new exciting and challenging ideas are in the works.

Key job responsibilities
Data Engineers focus on managing data requests, maintaining operational excellence, and enhancing core infrastructure. You will collaborate closely with both technical and non-technical teams to design and execute roadmaps.

Basic Qualifications
- 1+ years of data engineering experience
- Experience with SQL
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala)
- Experience with one or more scripting languages (e.g., Python, KornShell)

Preferred Qualifications
- Experience with big data technologies such as Hadoop, Hive, Spark, and EMR
- Experience with an ETL tool such as Informatica, ODI, SSIS, BODI, or Datastage

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
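The qualifications above name SparkSQL among the expected query languages. Purely as a hedged illustration (the table, columns, and S3 paths are invented), an ETL aggregation step expressed in Spark SQL might look like this:

```python
# Hedged sketch: a Spark SQL ETL step. Paths, table, and columns are
# hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sparksql-etl").getOrCreate()

spark.read.parquet("s3://bucket/raw/payments/").createOrReplaceTempView("payments")

daily = spark.sql("""
    SELECT merchant_id,
           DATE(created_at) AS pay_date,
           COUNT(*)         AS txn_count,
           SUM(amount)      AS gross_amount
    FROM payments
    WHERE status = 'CAPTURED'
    GROUP BY merchant_id, DATE(created_at)
""")

(daily.write.mode("overwrite")
      .partitionBy("pay_date")
      .parquet("s3://bucket/curated/daily_payments/"))
```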

Posted 1 week ago

Apply

1.0 - 5.0 years

0 Lacs

Karnataka

On-site

At Iron Mountain, we believe that work, when done well, can have a positive impact on our customers, employees, and the planet. That's why we are looking for smart and committed individuals to join our team. Whether you are starting your career or seeking a change, we invite you to explore how you can enhance the impact of your work at Iron Mountain.

We offer expert and sustainable solutions in records and information management, digital transformation services, data centers, asset lifecycle management, and fine art storage, handling, and logistics. Collaborating with over 225,000 customers worldwide, we aim to preserve valuable artifacts, optimize inventory, and safeguard data privacy through innovative and socially responsible practices. If you are interested in being part of our growth journey and expanding your skills in a culture that values diverse contributions, let's have a conversation.

As Iron Mountain progresses with its digital transformation, we are expanding our Enterprise Data Platform Team, which plays a crucial role in supporting data integration solutions, reporting, and analytics. The team focuses on maintaining and enhancing the data platform components essential for delivering our data solutions.

As a Data Platform Engineer at Iron Mountain, you will leverage your advanced knowledge of cloud big data technologies, software development expertise, and strong SQL skills. The ideal candidate will have a background in software development and big data engineering, with experience working in a remote environment and supporting both on-shore and off-shore engineering teams.

Key Responsibilities:
- Building and operationalizing cloud-based platform components
- Developing production-quality ingestion pipelines with automated quality checks to centralize access to all data sets
- Assessing current system architecture and recommending solutions for improvement
- Building automation using Python modules to support product development and data analytics initiatives
- Ensuring maximum uptime of the platform by utilizing cloud technologies such as Kubernetes, Terraform, Docker, etc.
- Resolving technical issues promptly and providing guidance to development teams
- Researching current and emerging technologies and proposing necessary changes
- Assessing the business impact of technical decisions and participating in collaborative environments to foster new ideas
- Maintaining comprehensive documentation on processes and decision-making

Your Qualifications:
- Experience with DevOps/automation tools to minimize operational overhead
- Ability to contribute to self-organizing teams within the Agile/Scrum project methodology
- Bachelor's Degree in Computer Science or a related field
- 3+ years of related IT experience
- 1+ years of experience building complex ETL pipelines with dependency management (see the sketch after this posting)
- 2+ years of experience in Big Data technologies such as Spark, Hive, Hadoop, etc.
- Industry-recognized certifications
- Strong familiarity with PaaS services, containers, and orchestration
- Excellent verbal and written communication skills

What's in it for you?
- Be part of a global organization focused on transformation and innovation
- A supportive environment where you can voice your opinions and be your authentic self
- Global connectivity to learn from teammates across 52 countries
- Embrace diversity, inclusion, and differences within a winning team
- Competitive Total Reward offerings to support your career, family, wellness, and retirement

Iron Mountain is a global leader in storage and information management services, trusted by organizations worldwide. We safeguard critical business information, sensitive data, and cultural artifacts. Our services help lower costs, mitigate risks, comply with regulations, and enable digital solutions. If you require accommodations due to a disability, please reach out to us.

Category: Information Technology
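The qualifications mention building ETL pipelines with dependency management but do not name a tool; purely as an assumption for illustration, here is how task dependencies are expressed in Apache Airflow, one widely used orchestrator:

```python
# Hedged sketch of an ETL DAG with explicit task dependencies in Apache
# Airflow (the tool choice is an assumption; task bodies are placeholders).
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull data from sources")


def transform():
    print("clean and conform")


def load():
    print("publish to the warehouse")


with DAG(
    dag_id="etl_sketch",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Dependency management: transform waits on extract, load on transform
    t_extract >> t_transform >> t_load
```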

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

Karnataka

On-site

You will work in a hybrid mode across multiple locations, including Bangalore, Chennai, Gurgaon, Pune, and Kolkata. With at least 6 years of experience in IT, you must possess a Bachelor's and/or Master's degree in computer science or an equivalent field.

Your expertise should lie in Snowflake security, Snowflake SQL, and the design and implementation of various Snowflake objects. Practical experience with Snowflake utilities such as SnowSQL, Snowpipe, Snowsight, and Snowflake connectors is essential. You should have a deep understanding of Star and Snowflake dimensional modeling and a strong knowledge of Data Management principles. Additionally, familiarity with the Databricks Data & AI platform and Databricks Delta Lake Architecture is required.

Hands-on experience in SQL and Spark (PySpark), as well as building ETL/data warehouse transformation processes, will be a significant part of your role. Strong verbal and written communication skills are essential, along with analytical and problem-solving abilities. Attention to detail is paramount in your work.

The mandatory skills for this position are (Snowflake + ADF + SQL) or (Snowflake + SQL).
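Since the role centers on Snowflake SQL and the Snowflake connectors, a minimal hedged sketch with the official Python connector follows; the account, credentials, and star-schema objects are placeholders:

```python
# Hedged sketch: query Snowflake with the official Python connector.
# Account, credentials, and table names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="myorg-myaccount",
    user="etl_user",
    password="***",
    warehouse="ANALYTICS_WH",
    database="SALES",
    schema="PUBLIC",
)
try:
    cur = conn.cursor()
    # Star-schema rollup: fact table joined to a date dimension
    cur.execute("""
        SELECT d.year, SUM(f.amount) AS revenue
        FROM fact_orders f
        JOIN dim_date d ON f.date_key = d.date_key
        GROUP BY d.year
        ORDER BY d.year
    """)
    for year, revenue in cur:
        print(year, revenue)
finally:
    conn.close()
```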

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Senior Software Engineer at Dun & Bradstreet, you will play a crucial role in developing and maintaining systems that support our core services across legacy datacenters, AWS, and GCP. Your responsibilities will include software development for our Big Data Platform, ensuring the creation of unit tests for your code, and actively participating in daily pull request reviews. The ideal candidate for this role will exhibit a passion for development and curiosity about Big Data platforms, coupled with a strong development and problem-solving mindset. Collaboration with Development, SRE, and DevOps teams will be essential to translate business requirements and functional specifications into innovative solutions, implementing performant, scalable program designs, code modules, and stable systems.

Your key responsibilities will include:
- Developing scalable, distributed software systems and engaging in projects that demand research, interactivity, and the ability to pose pertinent questions.
- Designing, developing, debugging, supporting, maintaining, and testing software applications.
- Working closely with diverse teams to contribute to a wide range of applications and solutions.
- Assisting Subject Matter Experts in offering consultation to ensure that new and existing software solutions adhere to industry best practices, strategies, and architectures, while actively pursuing professional growth.
- Enhancing code quality through activities like writing unit tests, automation, and conducting code reviews.
- Identifying opportunities for continuous improvement in technology solutions, be it optimizing system performance or addressing technical debt.

Key requirements for this role include:
- 6+ years of experience developing commercial software within an agile SDLC environment.
- Proficiency in triaging data and performance issues, with strong analytical and problem-solving skills to explore, analyze, and interpret large datasets.
- Familiarity with cloud-based technologies such as AWS/GCP; certifications are a plus.
- Expertise in distributed processing systems like Spark/Hadoop, programming skills in Python and Scala, experience with ETL tools, and data pipeline orchestration frameworks such as Airflow.
- A strong understanding of SQL and NoSQL databases, along with familiarity with data warehousing solutions like BigQuery.
- An ownership mindset, problem-solving skills, curiosity, and proactive behavior to drive success through collaboration and connection with team members.

Fluency in English and relevant working-market languages is advantageous where applicable. This position is internally titled Senior Software Engineer at Dun & Bradstreet.

For more information on Dun & Bradstreet job postings, please visit:
- https://www.dnb.com/about-us/careers-and-people/joblistings.html
- https://jobs.lever.co/dnb

Kindly note that all official communication from Dun & Bradstreet will originate from an email address ending in @dnb.com.

Posted 1 week ago

Apply

5.0 - 10.0 years

0 Lacs

Karnataka

On-site

We are seeking a talented Knowledge Graph Neo4j Developer to join our team as a key member of the data engineering team. Your primary responsibility will be the design, implementation, and optimization of graph databases, aimed at efficiently storing and retrieving high-dimensional data. You will have the opportunity to work with cutting-edge technologies in locations such as Hyderabad, Pune, Gurugram, and Bangalore. The major skills required for this role are expertise in Neo4j, Cypher, Python, and big data tools such as Hadoop, Hive, and Spark.

As a Neo4j Developer, your responsibilities will include designing, building, and enhancing the client's online platform. You will leverage Neo4j to create and manage knowledge graphs, ensuring optimal performance and scalability. Additionally, you will research, propose, and implement new technology solutions following best practices and standards. Your role will involve developing and maintaining knowledge graphs using Neo4j, integrating graph databases with existing infrastructure, and providing support for query optimization and data modeling.

To excel in this position, you should have a minimum of 5-10 years of experience in data engineering, proficiency in query languages like Cypher or Gremlin, and a strong foundation in graph theory. Experience with big data tools is essential, along with excellent written and verbal communication skills, superior analytical and problem-solving abilities, and a preference for working in dual-shore engagement setups.

If you are interested in this exciting opportunity, please share your updated resume with us at francis@lorventech.com.
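Since the role is built around Neo4j and Cypher from Python, here is a minimal hedged sketch using the official Neo4j Python driver; the URI, credentials, and graph model are placeholders:

```python
# Hedged sketch: idempotent graph writes and a Cypher read with the official
# Neo4j Python driver. URI, credentials, and the model are placeholders.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("neo4j://localhost:7687", auth=("neo4j", "***"))


def upsert_knows(tx, a, b):
    # MERGE is idempotent: it creates nodes/relationships only if absent
    tx.run(
        "MERGE (p:Person {name: $a}) "
        "MERGE (q:Person {name: $b}) "
        "MERGE (p)-[:KNOWS]->(q)",
        a=a, b=b,
    )


with driver.session() as session:
    session.execute_write(upsert_knows, "Ada", "Grace")
    names = session.run(
        "MATCH (:Person {name: $a})-[:KNOWS]->(q) RETURN q.name", a="Ada"
    ).value()
    print(names)  # ['Grace']

driver.close()
```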

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

As an ETL Testing & Big Data professional, you will be responsible for designing and implementing ETL test strategies based on business requirements. Your role involves reviewing and analyzing ETL source code, as well as developing and executing test plans and test cases for ETL processes. Data validation and reconciliation using SQL queries will be a key aspect of your responsibilities (a reconciliation sketch follows this posting). Monitoring ETL jobs, resolving issues affecting data accuracy, and performance-testing ETL processes with a focus on optimization are crucial tasks in this role. Ensuring data quality and integrity across various data sources, along with coordinating with development teams to troubleshoot issues and suggest improvements, is essential for success.

You will be expected to utilize automation tools to improve the efficiency of testing processes and to conduct regression testing after ETL releases or updates. Documenting test results, issues, and proposals for resolution, as well as supporting business users with data-related queries, are integral parts of your responsibilities. Staying updated with the latest trends in ETL testing and big data technologies, working closely with data architects to ensure effective data modeling, and participating in technical discussions to contribute to knowledge sharing are key aspects of this role.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 3+ years of experience in ETL testing and big data environments.
- Strong proficiency in SQL and data modeling techniques.
- Hands-on experience with the Hadoop ecosystem and related tools.
- Familiarity with ETL tools such as Informatica, Talend, or similar.
- Experience with data quality frameworks and methodologies.
- Knowledge of big data technologies like Spark, Hive, or Pig.
- Excellent analytical and problem-solving skills.
- Proficient communication skills for effective collaboration.
- Ability to manage multiple tasks and meet deadlines efficiently.
- Experience in Java or scripting languages is a plus.
- Strong attention to detail and a commitment to delivering quality work.
- Certifications in data management or testing are a plus.
- Ability to work independently and as part of a team.
- Willingness to adapt to evolving technologies and methodologies.

Skills required: scripting languages, data modeling, data quality frameworks, Hive, Talend, analytical skills, SQL, performance testing, automation tools, Pig, the Hadoop ecosystem, ETL testing, Informatica, Hadoop, data quality, big data, Java, regression testing, and Spark.
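As a hedged sketch of the SQL-based validation and reconciliation described above (the table names are placeholders), a typical check compares row counts and a column checksum between the staging source and the warehouse target:

```python
# Hedged sketch: source-to-target reconciliation in Spark SQL.
# staging.orders and dw.fact_orders are placeholder tables.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("etl-reconciliation").getOrCreate()

recon = spark.sql("""
    SELECT s.cnt AS src_rows, t.cnt AS tgt_rows,
           s.amt AS src_amount, t.amt AS tgt_amount
    FROM (SELECT COUNT(*) AS cnt, SUM(amount) AS amt FROM staging.orders) s
    CROSS JOIN (SELECT COUNT(*) AS cnt, SUM(amount) AS amt FROM dw.fact_orders) t
""").first()

assert recon.src_rows == recon.tgt_rows, "row count mismatch"
assert recon.src_amount == recon.tgt_amount, "amount checksum mismatch"
print("reconciliation passed")
```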

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

As a Data Engineering Senior Specialist (Databricks) at Nasdaq Bangalore, you will be joining the Bangalore technology center in India, where innovation and effectiveness are the driving forces. Nasdaq is at the forefront of revolutionizing markets and constantly evolves by adopting new technologies to create innovative solutions, aiming to shape the future.

In this role, your primary responsibility will be to analyze defined business requirements, providing analytical insights, modeling, dimensional modeling, and testing to design solutions that meet customer needs effectively. You will focus on understanding business data needs and translating them into adaptable, extensible, and sustainable data structures.

As a Databricks Data Engineer, you will design, build, and maintain data pipelines within the Databricks Lakehouse Platform. Your expertise will be crucial in enabling efficient data processing, analysis, and reporting for data-driven initiatives. You will utilize the Databricks Lakehouse Platform for data engineering tasks, implement ETL tasks using Apache Spark SQL and Python, and develop ETL pipelines following the Medallion Architecture. Moreover, you will be responsible for adding new sources to the Lakehouse platform, reviewing technology platforms on the AWS cloud, supervising data extraction methods, resolving technical issues, and ensuring project delivery within the assigned timeline and budget. You will also lead administrative tasks, ensuring completeness and accuracy in administration processes.

To excel in this role, you are expected to have 8-10 years of overall experience, with at least 5-6 years of specific data engineering experience on Databricks. Proficiency in SQL and Python for data manipulation, knowledge of modern data technologies, cloud computing platforms like AWS, data modeling, architecture, and best practices, and familiarity with AI/ML Ops in Databricks are essential. A Bachelor's/Master's degree in a relevant field or an equivalent qualification is required. Knowledge of Terraform and certifications in relevant fields are advantageous.

Nasdaq offers a vibrant and entrepreneurial work environment where taking initiative, challenging the status quo, and embracing intelligent risks are encouraged. The company values diversity, inclusivity, and work-life balance in a hybrid-first environment. As an employee, you can benefit from various perks such as an annual monetary bonus, becoming a Nasdaq shareholder, health insurance, flexible working schedules, internal mentorship programs, and a wide selection of online learning resources.

If you believe you possess the required skills and experience for this role, we encourage you to submit your application in English as soon as possible. The selection process is ongoing, and we aim to get back to you within 2-3 weeks.

At Nasdaq, we are committed to providing reasonable accommodations to individuals with disabilities throughout the job application and interview process, ensuring equal access to employment opportunities. If you require any accommodations, please reach out to us to discuss your needs.
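The role involves implementing ETL with Spark SQL and Python following the Medallion Architecture. A hedged sketch of a single bronze-to-silver step is shown below; the paths, columns, and business key are placeholders:

```python
# Hedged sketch: bronze-to-silver step in a medallion pipeline on Delta Lake.
# Paths, columns, and the business key are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

bronze = spark.read.format("delta").load("/mnt/lake/bronze/trades")

silver = (
    bronze.dropDuplicates(["trade_id"])                  # de-dupe on business key
    .filter(F.col("trade_id").isNotNull())               # basic quality gate
    .withColumn("trade_ts", F.to_timestamp("trade_ts"))  # conform types
    .withColumn("ingest_date", F.current_date())
)

(silver.write.format("delta")
       .mode("overwrite")
       .save("/mnt/lake/silver/trades"))
```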

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

Karnataka

On-site

As a Data Engineer, your primary responsibility will be to build and manage systems that collect, store, clean, and deliver data for various teams within the company, such as analysts, data scientists, and business users. Working with large datasets, you will focus on improving data quality, optimizing performance, and ensuring others can access the data they need reliably and securely.

Your key responsibilities will include building and maintaining data pipelines that pull data from diverse sources, process it, and store it for easy access. You will assemble large, complex datasets that align with both business and technical requirements, automate and optimize processes to enhance efficiency, and create tools that help data teams uncover valuable insights such as customer behavior and performance metrics. Collaboration with different teams, including product, design, and executive teams, will be essential to address data-related challenges, while ensuring the security and protection of sensitive data. Supporting data scientists by developing tools and systems that streamline their work, and continuously seeking improvements in how data is used within the business, are also part of your role.

To excel in this role, you must possess expertise in data architecture and modeling to design scalable systems for easy analysis, proficiency in building and managing data pipelines, and the discipline to keep data clean and usable. Strong collaboration skills, the ability to document processes clearly, and basic knowledge of machine learning concepts are crucial. Proficiency in technical tools such as Python for writing clean, reusable code, Spark (PySpark and SparkSQL), and SQL for data querying and manipulation, as well as familiarity with Git, GitHub, and JIRA for version control and task tracking, is essential. Effective communication skills are necessary to convey data concepts to non-technical audiences.

Additionally, experience with Azure cloud tools like Databricks or Synapse Analytics, familiarity with Azure data management tools such as Azure SQL Data Warehouse or Cosmos DB, and hands-on experience deploying machine learning models in real-world applications are considered advantageous for this role.

Posted 1 week ago

Apply

0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Role Overview:
Capital Consultants are responsible for managing deal flow for transactions under ₹30 crore annual revenue, with selective deals ranging from ₹30 crore to ₹80 crore annual revenue. They act as intermediaries between companies and lenders, ensuring a smooth and efficient funding process by managing deal details, lender communications, and documentation. The CapC role emphasizes client relationship management, deal curation, deal structuring, process and product hygiene, and portfolio monitoring.

Key Responsibilities:

Deal Requirement Assessment:
- Understand customers' requirements, including funding quantum, structure, pricing, security, and use case.
- Collaborate with customers to gather relevant information and align deal structure to lender preferences.

Engagement and Documentation:
- Sign engagement letters (ELs) as per the deal.
- Vet and validate one-pagers prepared by Financial Analysts on Product to ensure accuracy and completeness.

Data and Deal Management:
- Collect necessary deal data and follow up on missing information as required.
- Allocate deals into appropriate funnels (Spark, Swift, Scale) based on size and complexity.

Lender Coordination and Communication on Deals:
- Pitch deals to activated lenders from the Product and DCM teams and actively follow up to ensure deal closure with lenders.
- Liaise with lenders on all documentation requirements; escalate to the Pod Owner or DCM if needed.

Portfolio Management and Repayments:
- Focus on collection activities, particularly on Early Warning Signals (EWS), to ensure low Days Past Due (DPD).
- Maintain strong relationships with portfolio companies to capitalize on re-deployment opportunities and enhance customer experience.

Platform and Process Management:
- Maintain accurate deal information in HubSpot to ensure deal hygiene.
- Resolve support-related queries in collaboration.
- Select appropriate lenders (with support from the Product team) and route deals effectively.
- Drive training and adoption of the platform.

Invoicing and Realisation:
- Oversee invoicing and realisation processes in partnership with Finance/Ops, acting as an escalation point.

Share your resume at bhumika.bisht@recur.club

Posted 1 week ago

Apply

6.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description
Senior Data Engineer – Databricks (Azure/AWS)

Role Overview:
We are looking for a hands-on Senior Data Engineer experienced in migrating and building large-scale data pipelines on Databricks using Azure or AWS platforms. The role focuses on implementing batch and streaming pipelines, applying the bronze-silver-gold data lakehouse model, and ensuring scalable and reliable data solutions.

Required Skills and Experience:
6+ years of hands-on data engineering experience, with 2+ years specifically working on Databricks in Azure or AWS.
Proficiency in building and optimizing Spark pipelines (batch and streaming).
Strong experience implementing bronze/silver/gold data models.
Working knowledge of cloud storage systems (ADLS, S3) and compute services.
Experience migrating data from RDBMS (Oracle, SQL Server) or Hadoop ecosystems.
Familiarity with Airflow, Azure Data Factory, or AWS Glue for orchestration.
Good scripting skills (Python, Scala, SQL) and version control (Git).

Preferred Qualifications:
Databricks Certified Data Engineer Associate or Professional.
Experience with Delta Live Tables (DLT) and Databricks SQL.
Understanding of cloud security best practices (IAM roles, encryption, ACLs).

Key Responsibilities:
Design, develop, and operationalize scalable data pipelines on Databricks following medallion architecture principles.
Migrate and transform large data volumes from traditional on-prem systems (Oracle, Hadoop, Exadata) into cloud data platforms.
Develop efficient Spark (PySpark/Scala) jobs for ingestion, transformation, aggregation, and publishing of data.
Implement data quality checks, error handling, retries, and data validation frameworks.
Build automation scripts and CI/CD pipelines for Databricks workflows and deployments.
Tune Spark jobs and optimize cost and performance in cloud environments.
Collaborate with data architects, product owners, and analytics teams.

Attributes for Success:
Strong analytical and problem-solving skills.
Attention to scalability, resilience, and cost efficiency.
Collaborative attitude and passion for clean, maintainable code.

Diversity and Inclusion:
An Oracle career can span industries, roles, countries, and cultures, allowing you to flourish in new roles and innovate while balancing work and life. Oracle has thrived through 40+ years of change by innovating and operating with integrity while delivering for the top companies in almost every industry. To nurture the talent that makes this happen, we are committed to an inclusive culture that celebrates and values diverse insights and perspectives, and a workforce that inspires thought leadership and innovation.

Oracle offers a highly competitive suite of employee benefits designed on the principles of parity, consistency, and affordability. The overall package includes core elements such as medical coverage, life insurance, access to retirement planning, and much more. We also encourage our employees to engage in the culture of giving back to the communities where we live and do business.

At Oracle, we believe that innovation starts with diversity and inclusion, and that to create the future we need talent from various backgrounds, perspectives, and abilities. We ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application and interview process and to perform crucial job functions. That’s why we’re committed to creating a workforce where all individuals can do their best work. It’s when everyone’s voice is heard and valued that we’re inspired to go beyond what’s been done before.

About Us:
As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry leaders in almost every sector and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all.

Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.

We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.

Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability, protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
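For readers unfamiliar with the bronze-silver-gold model this role centers on, below is a minimal PySpark sketch of a bronze-to-silver batch step on Delta Lake. The table and column names (bronze.raw_orders, silver.orders, order_id, amount, order_ts) are illustrative assumptions, not details from the posting.

```python
# Minimal bronze-to-silver batch step on Delta Lake (illustrative names).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze-to-silver").getOrCreate()

# Read the raw (bronze) table as ingested from the source system.
bronze = spark.read.table("bronze.raw_orders")

# Silver layer: deduplicate, enforce types, and drop rows failing basic checks.
silver = (
    bronze
    .dropDuplicates(["order_id"])
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .filter(F.col("order_id").isNotNull() & (F.col("amount") >= 0))
)

# Full overwrite for simplicity; an incremental MERGE would be typical in production.
silver.write.format("delta").mode("overwrite").saveAsTable("silver.orders")
```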

Posted 1 week ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description
Data Architect – Databricks (Azure/AWS)

Role Overview:
We are seeking an experienced Data Architect specializing in Databricks to lead the architecture, design, and migration of enterprise data workloads from on-premises systems (e.g., Oracle, Exadata, Hadoop) to Databricks on Azure or AWS. The role involves designing scalable, secure, and high-performing data platforms based on the medallion architecture (bronze, silver, and gold layers), supporting large-scale ingestion, transformation, and publishing of data.

Required Skills and Experience:
8+ years of experience in data architecture or engineering roles, with at least 3 years specializing in cloud-based big data solutions.
Hands-on expertise with Databricks on Azure or AWS.
Deep understanding of Delta Lake, medallion architecture (bronze/silver/gold zones), and data governance tools (e.g., Unity Catalog, Purview).
Strong experience migrating large datasets and batch/streaming pipelines from on-prem to Databricks.
Expertise with Spark (PySpark/Scala) at scale and optimizing Spark jobs.
Familiarity with ingestion from RDBMS (Oracle, SQL Server) and legacy Hadoop ecosystems.
Proficiency in orchestration tools (Databricks Workflows, Airflow, Azure Data Factory, AWS Glue Workflows).
Strong understanding of cloud-native services for storage, compute, security, and networking.

Preferred Qualifications:
Databricks Certified Data Engineer or Architect.
Azure/AWS cloud certifications.
Experience with real-time/streaming ingestion (Kafka, Event Hubs, Kinesis).
Familiarity with data quality frameworks (e.g., Deequ, Great Expectations).

Key Responsibilities:
Define and design cloud-native data architecture on Databricks using Delta Lake, Unity Catalog, and related services.
Develop migration strategies for moving on-premises data workloads (Oracle, Hadoop, Exadata, etc.) to Databricks on Azure/AWS.
Architect and oversee data pipelines supporting ingestion, curation, transformation, and analytics in a multi-layered (bronze/silver/gold) model.
Lead data modeling, schema design, performance optimization, and data governance best practices.
Collaborate with data engineering, platform, and security teams to build production-ready solutions.
Create standards for ingestion frameworks, job orchestration (e.g., Databricks Workflows, Airflow), and data quality validation.
Support cost optimization, scalability design, and operational monitoring frameworks.
Guide and mentor engineering teams during the build and migration phases.

Attributes for Success:
Ability to lead architecture discussions with technical and business stakeholders.
Passion for modern cloud data architectures and continuous learning.
Pragmatic and solution-driven approach to migrations.

Diversity and Inclusion:
An Oracle career can span industries, roles, countries, and cultures, allowing you to flourish in new roles and innovate while balancing work and life. Oracle has thrived through 40+ years of change by innovating and operating with integrity while delivering for the top companies in almost every industry. To nurture the talent that makes this happen, we are committed to an inclusive culture that celebrates and values diverse insights and perspectives, and a workforce that inspires thought leadership and innovation.

Oracle offers a highly competitive suite of employee benefits designed on the principles of parity, consistency, and affordability. The overall package includes core elements such as medical coverage, life insurance, access to retirement planning, and much more. We also encourage our employees to engage in the culture of giving back to the communities where we live and do business.

At Oracle, we believe that innovation starts with diversity and inclusion, and that to create the future we need talent from various backgrounds, perspectives, and abilities. We ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application and interview process and to perform crucial job functions. That’s why we’re committed to creating a workforce where all individuals can do their best work. It’s when everyone’s voice is heard and valued that we’re inspired to go beyond what’s been done before.

About Us:
As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry leaders in almost every sector and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all.

Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.

We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.

Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability, protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
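As a hedged illustration of the on-prem-to-Databricks migration work this posting describes, here is a sketch of parallel JDBC ingestion from an Oracle source into a bronze Delta table. All connection details, table names, and partition bounds are placeholders, and the Oracle JDBC driver is assumed to be installed on the cluster.

```python
# Parallel JDBC ingestion from an on-prem Oracle source into the bronze zone.
# URL, credentials, table, and partition bounds are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("rdbms-to-bronze").getOrCreate()

src = (
    spark.read.format("jdbc")
    .option("url", "jdbc:oracle:thin:@//onprem-host:1521/ORCLPDB")
    .option("dbtable", "SALES.ORDERS")
    .option("user", "etl_user")
    .option("password", "<fetch-from-secret-scope>")  # never hard-code in real jobs
    .option("fetchsize", 10000)             # larger fetches cut network round trips
    .option("partitionColumn", "ORDER_ID")  # numeric key for parallel range reads
    .option("lowerBound", 1)
    .option("upperBound", 10000000)
    .option("numPartitions", 8)
    .load()
)

# Land data unmodified in bronze; cleansing happens in the silver step.
src.write.format("delta").mode("append").saveAsTable("bronze.orders_raw")
```

The partition bounds here are a design choice worth noting: without a numeric partition column and sensible bounds, the JDBC read collapses to a single connection and becomes the bottleneck of the migration.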

Posted 1 week ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description
Data Architect – Databricks (Azure/AWS)

Role Overview:
We are seeking an experienced Data Architect specializing in Databricks to lead the architecture, design, and migration of enterprise data workloads from on-premises systems (e.g., Oracle, Exadata, Hadoop) to Databricks on Azure or AWS. The role involves designing scalable, secure, and high-performing data platforms based on the medallion architecture (bronze, silver, and gold layers), supporting large-scale ingestion, transformation, and publishing of data.

Required Skills and Experience:
8+ years of experience in data architecture or engineering roles, with at least 3 years specializing in cloud-based big data solutions.
Hands-on expertise with Databricks on Azure or AWS.
Deep understanding of Delta Lake, medallion architecture (bronze/silver/gold zones), and data governance tools (e.g., Unity Catalog, Purview).
Strong experience migrating large datasets and batch/streaming pipelines from on-prem to Databricks.
Expertise with Spark (PySpark/Scala) at scale and optimizing Spark jobs.
Familiarity with ingestion from RDBMS (Oracle, SQL Server) and legacy Hadoop ecosystems.
Proficiency in orchestration tools (Databricks Workflows, Airflow, Azure Data Factory, AWS Glue Workflows).
Strong understanding of cloud-native services for storage, compute, security, and networking.

Preferred Qualifications:
Databricks Certified Data Engineer or Architect.
Azure/AWS cloud certifications.
Experience with real-time/streaming ingestion (Kafka, Event Hubs, Kinesis).
Familiarity with data quality frameworks (e.g., Deequ, Great Expectations).

Key Responsibilities:
Define and design cloud-native data architecture on Databricks using Delta Lake, Unity Catalog, and related services.
Develop migration strategies for moving on-premises data workloads (Oracle, Hadoop, Exadata, etc.) to Databricks on Azure/AWS.
Architect and oversee data pipelines supporting ingestion, curation, transformation, and analytics in a multi-layered (bronze/silver/gold) model.
Lead data modeling, schema design, performance optimization, and data governance best practices.
Collaborate with data engineering, platform, and security teams to build production-ready solutions.
Create standards for ingestion frameworks, job orchestration (e.g., Databricks Workflows, Airflow), and data quality validation.
Support cost optimization, scalability design, and operational monitoring frameworks.
Guide and mentor engineering teams during the build and migration phases.

Attributes for Success:
Ability to lead architecture discussions with technical and business stakeholders.
Passion for modern cloud data architectures and continuous learning.
Pragmatic and solution-driven approach to migrations.

Diversity and Inclusion:
An Oracle career can span industries, roles, countries, and cultures, allowing you to flourish in new roles and innovate while balancing work and life. Oracle has thrived through 40+ years of change by innovating and operating with integrity while delivering for the top companies in almost every industry. To nurture the talent that makes this happen, we are committed to an inclusive culture that celebrates and values diverse insights and perspectives, and a workforce that inspires thought leadership and innovation.

Oracle offers a highly competitive suite of employee benefits designed on the principles of parity, consistency, and affordability. The overall package includes core elements such as medical coverage, life insurance, access to retirement planning, and much more. We also encourage our employees to engage in the culture of giving back to the communities where we live and do business.

At Oracle, we believe that innovation starts with diversity and inclusion, and that to create the future we need talent from various backgrounds, perspectives, and abilities. We ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application and interview process and to perform crucial job functions. That’s why we’re committed to creating a workforce where all individuals can do their best work. It’s when everyone’s voice is heard and valued that we’re inspired to go beyond what’s been done before.

About Us:
As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry leaders in almost every sector and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all.

Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.

We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.

Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability, protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
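To make the streaming-ingestion requirement in this posting concrete, below is an assumed Structured Streaming sketch that lands raw Kafka events in a bronze Delta table. The broker address, topic, and checkpoint path are placeholders, not details from the posting.

```python
# Structured Streaming from Kafka into a bronze Delta table (placeholder
# broker, topic, and checkpoint path).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka-to-bronze").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")
    .option("subscribe", "orders")
    .option("startingOffsets", "earliest")
    .load()
)

# Keep the raw payload plus Kafka metadata; parsing is deferred to silver.
bronze = events.select(
    F.col("key").cast("string").alias("key"),
    F.col("value").cast("string").alias("payload"),
    "topic", "partition", "offset", "timestamp",
)

(
    bronze.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/orders_bronze")
    .outputMode("append")
    .toTable("bronze.orders_events")
)
```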

Posted 1 week ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description
Senior Data Engineer – Databricks (Azure/AWS)

Role Overview:
We are looking for a hands-on Senior Data Engineer experienced in migrating and building large-scale data pipelines on Databricks using Azure or AWS platforms. The role focuses on implementing batch and streaming pipelines, applying the bronze-silver-gold data lakehouse model, and ensuring scalable and reliable data solutions.

Required Skills and Experience:
6+ years of hands-on data engineering experience, with 2+ years specifically working on Databricks in Azure or AWS.
Proficiency in building and optimizing Spark pipelines (batch and streaming).
Strong experience implementing bronze/silver/gold data models.
Working knowledge of cloud storage systems (ADLS, S3) and compute services.
Experience migrating data from RDBMS (Oracle, SQL Server) or Hadoop ecosystems.
Familiarity with Airflow, Azure Data Factory, or AWS Glue for orchestration.
Good scripting skills (Python, Scala, SQL) and version control (Git).

Preferred Qualifications:
Databricks Certified Data Engineer Associate or Professional.
Experience with Delta Live Tables (DLT) and Databricks SQL.
Understanding of cloud security best practices (IAM roles, encryption, ACLs).

Key Responsibilities:
Design, develop, and operationalize scalable data pipelines on Databricks following medallion architecture principles.
Migrate and transform large data volumes from traditional on-prem systems (Oracle, Hadoop, Exadata) into cloud data platforms.
Develop efficient Spark (PySpark/Scala) jobs for ingestion, transformation, aggregation, and publishing of data.
Implement data quality checks, error handling, retries, and data validation frameworks.
Build automation scripts and CI/CD pipelines for Databricks workflows and deployments.
Tune Spark jobs and optimize cost and performance in cloud environments.
Collaborate with data architects, product owners, and analytics teams.

Attributes for Success:
Strong analytical and problem-solving skills.
Attention to scalability, resilience, and cost efficiency.
Collaborative attitude and passion for clean, maintainable code.

Diversity and Inclusion:
An Oracle career can span industries, roles, countries, and cultures, allowing you to flourish in new roles and innovate while balancing work and life. Oracle has thrived through 40+ years of change by innovating and operating with integrity while delivering for the top companies in almost every industry. To nurture the talent that makes this happen, we are committed to an inclusive culture that celebrates and values diverse insights and perspectives, and a workforce that inspires thought leadership and innovation.

Oracle offers a highly competitive suite of employee benefits designed on the principles of parity, consistency, and affordability. The overall package includes core elements such as medical coverage, life insurance, access to retirement planning, and much more. We also encourage our employees to engage in the culture of giving back to the communities where we live and do business.

At Oracle, we believe that innovation starts with diversity and inclusion, and that to create the future we need talent from various backgrounds, perspectives, and abilities. We ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application and interview process and to perform crucial job functions. That’s why we’re committed to creating a workforce where all individuals can do their best work. It’s when everyone’s voice is heard and valued that we’re inspired to go beyond what’s been done before.

About Us:
As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry leaders in almost every sector and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all.

Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.

We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.

Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability, protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
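One way to read the "data quality checks and validation frameworks" responsibility in this posting is a simple rule-based gate that routes failing rows to a quarantine table. The sketch below uses illustrative rule and table names, not anything specified by the posting.

```python
# Rule-based data-quality gate: passing rows go to silver, failures to a
# quarantine table for inspection. Rule and table names are illustrative.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-gate").getOrCreate()

df = spark.read.table("bronze.raw_orders")

# Each rule is a boolean expression; a row passes only if all rules hold.
rules = {
    "order_id_present": F.col("order_id").isNotNull(),
    "amount_non_negative": F.col("amount") >= 0,
    "ts_parseable": F.to_timestamp("order_ts").isNotNull(),
}
passed = F.lit(True)
for cond in rules.values():
    passed = passed & cond
checked = df.withColumn("dq_passed", passed)

checked.filter("dq_passed").drop("dq_passed") \
    .write.format("delta").mode("append").saveAsTable("silver.orders")
checked.filter(~F.col("dq_passed")) \
    .write.format("delta").mode("append").saveAsTable("quarantine.orders_rejected")
```

In practice this hand-rolled gate is the pattern that libraries such as Great Expectations or Deequ generalize, which is why the posting lists them as preferred experience.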

Posted 1 week ago

Apply

6.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Job Description
Senior Data Engineer – Databricks (Azure/AWS)

Role Overview:
We are looking for a hands-on Senior Data Engineer experienced in migrating and building large-scale data pipelines on Databricks using Azure or AWS platforms. The role focuses on implementing batch and streaming pipelines, applying the bronze-silver-gold data lakehouse model, and ensuring scalable and reliable data solutions.

Required Skills and Experience:
6+ years of hands-on data engineering experience, with 2+ years specifically working on Databricks in Azure or AWS.
Proficiency in building and optimizing Spark pipelines (batch and streaming).
Strong experience implementing bronze/silver/gold data models.
Working knowledge of cloud storage systems (ADLS, S3) and compute services.
Experience migrating data from RDBMS (Oracle, SQL Server) or Hadoop ecosystems.
Familiarity with Airflow, Azure Data Factory, or AWS Glue for orchestration.
Good scripting skills (Python, Scala, SQL) and version control (Git).

Preferred Qualifications:
Databricks Certified Data Engineer Associate or Professional.
Experience with Delta Live Tables (DLT) and Databricks SQL.
Understanding of cloud security best practices (IAM roles, encryption, ACLs).

Key Responsibilities:
Design, develop, and operationalize scalable data pipelines on Databricks following medallion architecture principles.
Migrate and transform large data volumes from traditional on-prem systems (Oracle, Hadoop, Exadata) into cloud data platforms.
Develop efficient Spark (PySpark/Scala) jobs for ingestion, transformation, aggregation, and publishing of data.
Implement data quality checks, error handling, retries, and data validation frameworks.
Build automation scripts and CI/CD pipelines for Databricks workflows and deployments.
Tune Spark jobs and optimize cost and performance in cloud environments.
Collaborate with data architects, product owners, and analytics teams.

Attributes for Success:
Strong analytical and problem-solving skills.
Attention to scalability, resilience, and cost efficiency.
Collaborative attitude and passion for clean, maintainable code.

Diversity and Inclusion:
An Oracle career can span industries, roles, countries, and cultures, allowing you to flourish in new roles and innovate while balancing work and life. Oracle has thrived through 40+ years of change by innovating and operating with integrity while delivering for the top companies in almost every industry. To nurture the talent that makes this happen, we are committed to an inclusive culture that celebrates and values diverse insights and perspectives, and a workforce that inspires thought leadership and innovation.

Oracle offers a highly competitive suite of employee benefits designed on the principles of parity, consistency, and affordability. The overall package includes core elements such as medical coverage, life insurance, access to retirement planning, and much more. We also encourage our employees to engage in the culture of giving back to the communities where we live and do business.

At Oracle, we believe that innovation starts with diversity and inclusion, and that to create the future we need talent from various backgrounds, perspectives, and abilities. We ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application and interview process and to perform crucial job functions. That’s why we’re committed to creating a workforce where all individuals can do their best work. It’s when everyone’s voice is heard and valued that we’re inspired to go beyond what’s been done before.

About Us:
As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry leaders in almost every sector and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all.

Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.

We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.

Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability, protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
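The "tune Spark jobs and optimize cost and performance" responsibility often includes routine Delta table maintenance. The sketch below shows a common Databricks pattern (OPTIMIZE with Z-ORDER, then VACUUM); the table name and clustering column are assumptions for illustration.

```python
# Routine Delta maintenance that commonly drives cost/performance tuning on
# Databricks. Table name and Z-ORDER column are assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-maintenance").getOrCreate()

# Adaptive Query Execution lets Spark coalesce shuffle partitions at runtime.
spark.conf.set("spark.sql.adaptive.enabled", "true")

# Compact small files and co-locate rows on a commonly filtered column so
# downstream queries can skip irrelevant files.
spark.sql("OPTIMIZE silver.orders ZORDER BY (customer_id)")

# Drop data files no longer referenced by the table (default 7-day retention).
spark.sql("VACUUM silver.orders")
```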

Posted 1 week ago

Apply

8.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Job Description
Data Architect – Databricks (Azure/AWS)

Role Overview:
We are seeking an experienced Data Architect specializing in Databricks to lead the architecture, design, and migration of enterprise data workloads from on-premises systems (e.g., Oracle, Exadata, Hadoop) to Databricks on Azure or AWS. The role involves designing scalable, secure, and high-performing data platforms based on the medallion architecture (bronze, silver, and gold layers), supporting large-scale ingestion, transformation, and publishing of data.

Required Skills and Experience:
8+ years of experience in data architecture or engineering roles, with at least 3 years specializing in cloud-based big data solutions.
Hands-on expertise with Databricks on Azure or AWS.
Deep understanding of Delta Lake, medallion architecture (bronze/silver/gold zones), and data governance tools (e.g., Unity Catalog, Purview).
Strong experience migrating large datasets and batch/streaming pipelines from on-prem to Databricks.
Expertise with Spark (PySpark/Scala) at scale and optimizing Spark jobs.
Familiarity with ingestion from RDBMS (Oracle, SQL Server) and legacy Hadoop ecosystems.
Proficiency in orchestration tools (Databricks Workflows, Airflow, Azure Data Factory, AWS Glue Workflows).
Strong understanding of cloud-native services for storage, compute, security, and networking.

Preferred Qualifications:
Databricks Certified Data Engineer or Architect.
Azure/AWS cloud certifications.
Experience with real-time/streaming ingestion (Kafka, Event Hubs, Kinesis).
Familiarity with data quality frameworks (e.g., Deequ, Great Expectations).

Key Responsibilities:
Define and design cloud-native data architecture on Databricks using Delta Lake, Unity Catalog, and related services.
Develop migration strategies for moving on-premises data workloads (Oracle, Hadoop, Exadata, etc.) to Databricks on Azure/AWS.
Architect and oversee data pipelines supporting ingestion, curation, transformation, and analytics in a multi-layered (bronze/silver/gold) model.
Lead data modeling, schema design, performance optimization, and data governance best practices.
Collaborate with data engineering, platform, and security teams to build production-ready solutions.
Create standards for ingestion frameworks, job orchestration (e.g., Databricks Workflows, Airflow), and data quality validation.
Support cost optimization, scalability design, and operational monitoring frameworks.
Guide and mentor engineering teams during the build and migration phases.

Attributes for Success:
Ability to lead architecture discussions with technical and business stakeholders.
Passion for modern cloud data architectures and continuous learning.
Pragmatic and solution-driven approach to migrations.

Diversity and Inclusion:
An Oracle career can span industries, roles, countries, and cultures, allowing you to flourish in new roles and innovate while balancing work and life. Oracle has thrived through 40+ years of change by innovating and operating with integrity while delivering for the top companies in almost every industry. To nurture the talent that makes this happen, we are committed to an inclusive culture that celebrates and values diverse insights and perspectives, and a workforce that inspires thought leadership and innovation.

Oracle offers a highly competitive suite of employee benefits designed on the principles of parity, consistency, and affordability. The overall package includes core elements such as medical coverage, life insurance, access to retirement planning, and much more. We also encourage our employees to engage in the culture of giving back to the communities where we live and do business.

At Oracle, we believe that innovation starts with diversity and inclusion, and that to create the future we need talent from various backgrounds, perspectives, and abilities. We ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application and interview process and to perform crucial job functions. That’s why we’re committed to creating a workforce where all individuals can do their best work. It’s when everyone’s voice is heard and valued that we’re inspired to go beyond what’s been done before.

About Us:
As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry leaders in almost every sector and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all.

Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.

We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.

Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability, protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
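As a minimal sketch of the "publishing" end of the medallion model this role describes, here is an assumed silver-to-gold aggregation step; the table and column names are illustrative, not taken from the posting.

```python
# Silver-to-gold publishing step: a business-level daily aggregate ready for
# BI consumers. Table and column names are illustrative.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("silver-to-gold").getOrCreate()

orders = spark.read.table("silver.orders")

# Gold layer: curated, business-level aggregates rather than row-level data.
daily = (
    orders
    .groupBy(F.to_date("order_ts").alias("order_date"))
    .agg(
        F.sum("amount").alias("revenue"),
        F.countDistinct("order_id").alias("order_count"),
    )
)

daily.write.format("delta").mode("overwrite").saveAsTable("gold.daily_revenue")
```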

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
