7796 Spark Jobs - Page 17

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Greetings from TCS! TCS is hiring for Big Data (PySpark & Scala).

Location: Chennai/Pune/Mumbai
Desired Experience Range: 5+ years

Must-Have:
- PySpark
- Hive

Good-to-Have:
- Spark
- HBase
- DQ tools
- Agile/Scrum experience
- Exposure to data ingestion from disparate sources onto a Big Data platform

Thanks,
Anshika
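For context on what the PySpark-plus-Hive must-haves typically look like in practice, here is a minimal, illustrative sketch (not from the posting): read a Hive table, apply a simple data-quality filter, and write partitioned Parquet. All table, column, and path names are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("hive-ingest-example")
    .enableHiveSupport()          # required to read managed Hive tables
    .getOrCreate()
)

orders = spark.table("raw_db.orders")            # hypothetical Hive table

# Basic DQ rule: drop rows with missing keys or non-positive amounts
clean = orders.filter(
    F.col("order_id").isNotNull() & (F.col("amount") > 0)
)

(clean.write
    .mode("overwrite")
    .partitionBy("order_date")                   # hypothetical partition column
    .parquet("/data/curated/orders"))
```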

Posted 1 day ago

Apply

7.0 years

0 Lacs

Noida, Uttar Pradesh, India

Remote

Job Title: Senior Azure Engineer (Azure Platform Operations & Automation)
Experience: 5–7 years
Location: Onsite/Remote (Noida)
Reports To: Technical Manager / Architect
Budget: Max. 12 LPA

Responsibilities:
- Manage and troubleshoot ADF and Databricks workflows, ensuring triggers, linked services, parameters, and pipelines function correctly end-to-end.
- Investigate and resolve complex job failures; debug Spark jobs and analyze notebook execution graphs and logs.
- Lead performance optimization for ADF pipelines, partitioning strategies, and ADLS data formats (e.g., Parquet tuning).
- Execute and automate data pipeline deployment using Azure DevOps, ARM templates, PowerShell scripts, and Git repositories.
- Govern data lifecycle rules and partition retention, and enforce consistency across raw/curated zones in ADLS.
- Monitor resource consumption (clusters, storage, pipelines) and advise on cost-saving measures (auto-scaling, tiering, concurrency).
- Prepare RCAs for P1/P2 incidents and support change deployment validation, rollback strategy, and UAT coordination.
- Review Power BI refresh bottlenecks; support the L1 Power BI developer with dataset tuning and refresh scheduling improvements.
- Validate SOPs and support documentation prepared by L1s, and drive process improvement via automation or standardization.

Required Skills:
- Expert in Azure Data Factory, Databricks (PySpark), Azure Data Lake Storage, and Synapse.
- Proficient in Python, PySpark, SQL/Spark SQL, and JSON configurations.
- Familiar with Azure DevOps, Git for version control, and CI/CD automation.
- Hands-on with monitoring (Azure Monitor), diagnostics, and cost governance.
- Strong understanding of data security practices, IAM, RBAC, and audit trail enforcement.
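The Parquet-tuning responsibility above is the kind of task sketched below: a hedged PySpark example that right-sizes shuffle partitions and aligns file layout with the partition key before writing to ADLS. The storage account, container, and column names are placeholders, not from the posting.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("adls-parquet-tuning").getOrCreate()

# Fewer, larger output files usually read faster than many tiny ones
spark.conf.set("spark.sql.shuffle.partitions", "200")

df = spark.read.parquet("abfss://raw@mylake.dfs.core.windows.net/events/")

(df.repartition("event_date")        # align file layout with the partition key
   .write
   .mode("overwrite")
   .partitionBy("event_date")
   .parquet("abfss://curated@mylake.dfs.core.windows.net/events/"))
```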

Posted 1 day ago

Apply

7.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Title: Data Architect
Location: Noida, India

Data Architecture Design:
- Design, develop, and maintain the enterprise data architecture, including data models, database schemas, and data flow diagrams.
- Develop a data strategy and roadmap that aligns with business objectives and ensures the scalability of data systems.
- Architect both transactional (OLTP) and analytical (OLAP) databases, ensuring optimal performance and data consistency.

Data Integration & Management:
- Oversee the integration of disparate data sources into a unified data platform, leveraging ETL/ELT processes and data integration tools.
- Design and implement data warehousing solutions, data lakes, and/or data marts that enable efficient storage and retrieval of large datasets.
- Ensure proper data governance, including the definition of data ownership, security, and privacy controls in accordance with compliance standards (GDPR, HIPAA, etc.).

Collaboration with Stakeholders:
- Work closely with business stakeholders, including analysts, developers, and executives, to understand data requirements and ensure that the architecture supports analytics and reporting needs.
- Collaborate with DevOps and engineering teams to optimize database performance and support large-scale data processing pipelines.

Technology Leadership:
- Guide the selection of data technologies, including databases (SQL/NoSQL), data processing frameworks (Hadoop, Spark), cloud platforms (Azure is a must), and analytics tools.
- Stay updated on emerging data management technologies, trends, and best practices, and assess their potential application within the organization.

Data Quality & Security:
- Define data quality standards and implement processes to ensure the accuracy, completeness, and consistency of data across all systems.
- Establish protocols for data security, encryption, and backup/recovery to protect data assets and ensure business continuity.

Mentorship & Leadership:
- Lead and mentor data engineers, data modelers, and other technical staff in best practices for data architecture and management.
- Provide strategic guidance on data-related projects and initiatives, ensuring that all efforts are aligned with the enterprise data strategy.

Required Skills & Experience:
- Extensive data architecture expertise: over 7 years of experience in data architecture, data modeling, and database management.
- Proficiency in designing and implementing relational (SQL) and non-relational (NoSQL) database solutions.
- Strong experience with data integration tools (Azure tools are a must, plus any other third-party tools), ETL/ELT processes, and data pipelines.
- Advanced knowledge of data platforms: expertise in the Azure cloud data platform (Data Lake, Synapse) is a must; other platforms such as AWS (Redshift, S3) and/or Google Cloud Platform (BigQuery, Dataproc) are a bonus.
- Experience with big data technologies (Hadoop, Spark) and distributed systems for large-scale data processing.
- Hands-on experience with data warehousing solutions and BI tools (e.g., Power BI, Tableau, Looker).
- Data governance & compliance: strong understanding of data governance principles, data lineage, and data stewardship; knowledge of industry standards and compliance requirements (e.g., GDPR, HIPAA, SOX) and the ability to architect solutions that meet these standards.
- Technical leadership: proven ability to lead data-driven projects, manage stakeholders, and drive data strategies across the enterprise.
- Strong programming skills in languages such as Python, SQL, R, or Scala.
- Certification: Azure Solutions Architect, Data Engineer, or Data Scientist certifications are mandatory.

Pre-Sales Responsibilities:
- Stakeholder Engagement: Work with product stakeholders to analyze functional and non-functional requirements, ensuring alignment with business objectives.
- Solution Development: Develop end-to-end solutions involving multiple products, ensuring security and performance benchmarks are established, achieved, and maintained.
- Proof of Concepts (POCs): Develop POCs to demonstrate the feasibility and benefits of proposed solutions.
- Client Communication: Communicate system requirements and solution architecture to clients and stakeholders, providing technical assistance and guidance throughout the pre-sales process.
- Technical Presentations: Prepare and deliver technical presentations to prospective clients, demonstrating how proposed solutions meet their needs and requirements.

Additional Responsibilities:
- Stakeholder Collaboration: Engage with stakeholders to understand their requirements and translate them into effective technical solutions.
- Technology Leadership: Provide technical leadership and guidance to development teams, ensuring the use of best practices and innovative solutions.
- Integration Management: Oversee the integration of solutions with existing systems and third-party applications, ensuring seamless interoperability and data flow.
- Performance Optimization: Ensure solutions are optimized for performance, scalability, and security, addressing any technical challenges that arise.
- Quality Assurance: Establish and enforce quality assurance standards, conducting regular reviews and testing to ensure robustness and reliability.
- Documentation: Maintain comprehensive documentation of the architecture, design decisions, and technical specifications.
- Mentoring: Mentor fellow developers and team leads, fostering a collaborative and growth-oriented environment.

Qualifications:
- Education: Bachelor's or master's degree in Computer Science, Information Technology, or a related field.
- Experience: Minimum of 7 years of experience in data architecture, with a focus on developing scalable and high-performance solutions.
- Technical Expertise: Proficient in architectural frameworks, cloud computing, database management, and web technologies.
- Analytical Thinking: Strong problem-solving skills, with the ability to analyze complex requirements and design scalable solutions.
- Leadership Skills: Demonstrated ability to lead and mentor technical teams, with excellent project management skills.
- Communication: Excellent verbal and written communication skills, with the ability to convey technical concepts to both technical and non-technical stakeholders.

Posted 1 day ago

Apply

10.0 years

0 Lacs

India

Remote

About Lingaro:
Lingaro Group is the end-to-end data services partner to global brands and enterprises. We lead our clients through their data journey, from strategy through development to operations and adoption, helping them realize the full value of their data. Since 2008, Lingaro has been recognized by clients and global research and advisory firms for innovation, technology excellence, and the consistent delivery of highest-quality data services. Our commitment to data excellence has created an environment that attracts the brightest global data talent to our team.

Duties:
- Design and implement data processing systems using distributed frameworks like Hadoop, Spark, Snowflake, Airflow, or similar technologies. This involves writing efficient and scalable code to process, transform, and clean large volumes of structured and unstructured data.
- Build data pipelines to ingest data from various sources such as databases, APIs, or streaming platforms. Integrate and transform data to ensure its compatibility with the target data model or format.
- Design and optimize data storage architectures, including data lakes, data warehouses, or distributed file systems. Implement techniques like partitioning, compression, or indexing to optimize data storage and retrieval.
- Identify and resolve bottlenecks, tune queries, and implement caching strategies to enhance data retrieval speed and overall system efficiency.
- Design and implement data models that support efficient data storage, retrieval, and analysis. Collaborate with data scientists and analysts to understand their requirements and provide them with well-structured and optimized data for analysis and modeling purposes.
- Utilize frameworks like Hadoop or Spark to perform distributed computing tasks, such as parallel processing, distributed data processing, or machine learning algorithms.
- Implement security measures to protect sensitive data and ensure compliance with data privacy regulations. Establish data governance practices to maintain data integrity, quality, and consistency.
- Identify and resolve issues related to data processing, storage, or infrastructure. Monitor system performance, identify anomalies, and conduct root cause analysis to ensure smooth and uninterrupted data operations.
- Collaborate with cross-functional teams including data scientists, analysts, and business stakeholders to understand their requirements and provide technical solutions. Communicate complex technical concepts to non-technical stakeholders in a clear and concise manner.
- Work independently and take responsibility for delivering a solution. Work under Agile and Scrum development methodologies.
- Stay updated with emerging technologies, tools, and techniques in the field of big data engineering. Explore and recommend new technologies to enhance data processing, storage, and analysis capabilities.
- Train and mentor junior data engineers, providing guidance and knowledge transfer.

Requirements:
- A bachelor's or master's degree in Computer Science, Information Systems, or a related field is typically required. Additional certifications in cloud are advantageous.
- 10+ years of experience in data engineering or a related field.
- Strong technical skills in data engineering, including proficiency in programming languages such as Python, SQL, and PySpark.
- Familiarity with the Azure cloud platform (Azure Databricks, Data Factory, Data Lake, etc.) and experience implementing data solutions in a cloud environment.
- Expertise in working with various data tools and technologies, such as ETL frameworks, data pipelines, and data warehousing solutions.
- In-depth knowledge of data management principles and best practices, including data governance, data quality, and data integration.
- Excellent problem-solving and analytical skills, with the ability to identify and resolve complex data engineering issues.
- Knowledge of data security and privacy regulations, and the ability to ensure compliance within data engineering projects.
- Excellent communication and interpersonal skills, with the ability to effectively collaborate with cross-functional teams, stakeholders, and senior management.
- Continuous learning mindset, staying updated with the latest advancements and trends in data engineering and related technologies.
- Consulting exposure with an external-customer-focused mindset is preferred.

Why join us:
- Stable employment. On the market since 2008; 1,300+ talents currently on board in 7 global sites.
- 100% remote. Flexibility regarding working hours. Full-time position.
- Comprehensive online onboarding program with a "Buddy" from day 1.
- Cooperation with top-tier engineers and experts.
- Unlimited access to the Udemy learning platform from day 1. Certificate training programs. Lingarians earn 500+ technology certificates yearly.
- Upskilling support: capability development programs, Competency Centers, knowledge-sharing sessions, community webinars, 110+ training opportunities yearly.
- Grow as we grow as a company. 76% of our managers are internal promotions.
- A diverse, inclusive, and values-driven community. Autonomy to choose the way you work. We trust your ideas.
- Create our community together. Refer your friends to receive bonuses.
- Activities to support your well-being and health. Plenty of opportunities to donate to charities and support the environment.
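As a concrete illustration of the partitioning-and-compression duty listed above, here is a small, assumption-laden PySpark sketch: it writes snappy-compressed Parquet partitioned by columns a query can prune on. Source paths and column names are invented for the example.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("storage-layout-example").getOrCreate()

events = spark.read.json("/landing/events/")     # hypothetical source

(events.write
    .mode("append")
    .option("compression", "snappy")             # cheap to decode, a common default
    .partitionBy("country", "event_date")        # enables partition pruning
    .parquet("/warehouse/events/"))

# A query that filters on the partition columns only touches matching folders:
daily = spark.read.parquet("/warehouse/events/").where("event_date = '2024-01-01'")
```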

Posted 1 day ago

Apply

8.0 years

0 Lacs

India

Remote

Job Title: Senior Data Engineer (PySpark | GCP | DataProc)
Location: Remote (Work from Anywhere – India Preferred)
Experience: 5–8 Years
Apply at: 📧 nikhil.kumar@krtrimaiq.ai

About the Role:
We at KrtrimaIQ Cognitive Solutions are looking for a highly experienced and results-driven Senior Data Engineer to design and develop scalable, high-performance data pipelines and solutions in a cloud-native, big data environment. This is a fully remote role, ideal for professionals with deep hands-on experience in PySpark, Google Cloud Platform (GCP), and DataProc.

Key Responsibilities:
- Design, build, and maintain scalable ETL/ELT data pipelines using PySpark
- Develop and optimize data workflows leveraging GCP DataProc, BigQuery, Cloud Storage, and Cloud Composer
- Ingest, transform, and integrate structured and unstructured data from diverse sources
- Collaborate with Data Scientists, Analysts, and cross-functional teams to deliver reliable, real-time data solutions
- Ensure performance, scalability, and reliability of data platforms
- Implement best practices for data governance, security, and quality

Must-Have Skills:
- Strong hands-on experience in PySpark and the Apache Spark ecosystem
- Proficiency in working with GCP services, especially DataProc, BigQuery, Cloud Storage, and Cloud Composer
- Experience with distributed data processing, ETL design, and data warehouse architecture
- Strong SQL skills and familiarity with NoSQL data stores
- Knowledge of CI/CD pipelines, version control (Git), and code review processes
- Ability to work independently in a remote setup, with strong communication skills

Preferred Skills:
- Exposure to real-time data processing tools like Kafka or Pub/Sub
- Familiarity with Airflow, Terraform, or other orchestration/automation tools
- Experience with data quality frameworks and observability tools

Why Join Us?
- 100% Remote – Work from anywhere
- High-impact role in a fast-growing AI-driven company
- Opportunity to work on enterprise-grade, large-scale data systems
- Collaborative and flexible work culture

📩 Interested candidates, please send your resume to: nikhil.kumar@krtrimaiq.ai

#SeniorDataEngineer #RemoteJobs #PySpark #GCPJobs #DataProc #BigQuery #CloudDataEngineer #DataEngineeringJobs #ETLPipelines #ApacheSpark #BigDataJobs #GoogleCloudJobs #HiringNow #DataPipelineEngineer #WorkFromHome #KrtrimaIQ #AIDataEngineering #DataJobsIndia
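A hedged sketch of the kind of PySpark job this role describes, as it might run on DataProc: read CSVs from Cloud Storage, aggregate, and write to BigQuery through the spark-bigquery connector (assumed to be available on the cluster). Project, bucket, and table names are placeholders.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dataproc-etl-example").getOrCreate()

raw = spark.read.csv("gs://my-bucket/landing/sales/*.csv", header=True)

daily = (raw.withColumn("amount", F.col("amount").cast("double"))
            .groupBy("store_id", "sale_date")
            .agg(F.sum("amount").alias("daily_revenue")))

(daily.write.format("bigquery")
    .option("table", "my-project.analytics.daily_revenue")
    .option("temporaryGcsBucket", "my-bucket-tmp")   # staging bucket for the load
    .mode("overwrite")
    .save())
```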

Posted 1 day ago

Apply

5.0 years

0 Lacs

Jaipur, Rajasthan, India

On-site

Job Description:

Basic Responsibilities (Must-Haves):
- 5+ years of experience in dashboard story development, dashboard creation, and data engineering pipelines.
- Hands-on experience with log analytics, user engagement metrics, and product performance metrics.
- Ability to identify patterns, trends, and anomalies in log data to generate actionable insights for product enhancements and feature optimization.
- Collaborate with cross-functional teams to gather business requirements and translate them into functional and technical specifications.
- Manage and organize large volumes of application log data using Google BigQuery.
- Design and develop interactive dashboards to visualize key metrics and insights using tools such as Tableau, Power BI, or ThoughtSpot AI.
- Create intuitive, impactful visualizations to communicate findings to teams including customer success and leadership.
- Ensure data integrity, consistency, and accessibility for analytical purposes.
- Analyze application logs to extract metrics and statistics related to product performance, customer behaviour, and user sentiment.
- Work closely with product teams to understand log data generated by Python-based applications.
- Collaborate with stakeholders to define key performance indicators (KPIs) and success metrics.
- Optimize data pipelines and storage in BigQuery.
- Strong communication and teamwork skills.
- Ability to learn quickly and adapt to new technologies.
- Excellent problem-solving skills.

Preferred Responsibilities (Nice-to-Haves):
- Knowledge of Generative AI (GenAI) and LLM-based solutions.
- Experience in designing and developing dashboards using ThoughtSpot AI.
- Good exposure to Google Cloud Platform (GCP).
- Data engineering experience with modern data warehouse architectures.

Additional Responsibilities:
- Participate in the development of proof-of-concepts (POCs) and pilot projects.
- Ability to articulate ideas and points of view clearly to the team.
- Take ownership of data analytics and data engineering solutions.

Additional Nice-to-Haves:
- Experience working with large datasets and distributed data processing tools such as Apache Spark or Hadoop.
- Familiarity with Agile development methodologies and version control systems like Git.
- Familiarity with ETL tools such as Informatica or Azure Data Factory.
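To make the "manage application log data in BigQuery" requirement concrete, here is a minimal sketch using the google-cloud-bigquery Python client; the logs table, columns, and metric are hypothetical.

```python
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

sql = """
    SELECT DATE(timestamp) AS day,
           COUNT(DISTINCT user_id) AS daily_active_users
    FROM `my-project.app_logs.events`      -- hypothetical logs table
    WHERE timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
    GROUP BY day
    ORDER BY day
"""

# Daily active users over the last 30 days, one row per day
for row in client.query(sql).result():
    print(row.day, row.daily_active_users)
```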

Posted 1 day ago

Apply

2.0 years

0 Lacs

India

Remote

We are seeking a skilled Azure Data Engineer with hands-on experience in modern data engineering tools and platforms within the Azure ecosystem. The ideal candidate will have a strong foundation in data integration, transformation, and migration, along with a passion for working on complex data migration projects.

Job Title: Azure Data Engineer
Location: Remote
Work Timings: 2:00 PM – 11:00 PM IST

Please Note: This is a pure Azure-specific role. If your expertise is primarily in AWS or GCP, we kindly request that you do not apply.

Key Responsibilities:
- Design, develop, and maintain data pipelines using Azure Data Factory / Synapse Data Factory to orchestrate and automate data workflows.
- Build and manage data lakes using Azure Data Lake, enabling secure and scalable storage for structured and unstructured data.
- Lead and support data migration initiatives (on-prem to cloud, cloud-to-cloud), ensuring minimal disruption and high integrity of data.
- Perform advanced data transformations using Python, PySpark, and Azure Databricks or Synapse Spark Pools.
- Develop and optimize SQL / T-SQL queries for data extraction, manipulation, and reporting across Azure SQL services.
- Design and maintain ETL solutions using SSIS, where applicable.
- Collaborate with cross-functional teams to understand requirements and deliver data-driven solutions.
- Monitor, troubleshoot, and continuously improve data workflows to ensure performance, reliability, and scalability.
- Uphold best practices in data governance, security, and compliance.

Required Skills and Qualifications:
- 2+ years of experience as a Data Engineer, with a strong emphasis on Azure technologies.
- Proven expertise in: Azure Data Factory / Synapse Data Factory, Azure Data Lake, Azure Databricks / Synapse Spark, Python and PySpark, SQL / T-SQL, and SSIS.
- Demonstrated experience in data migration projects and eagerness to take on new migration challenges.
- Microsoft Certified: Azure Data Engineer Associate certification preferred.
- Strong analytical and problem-solving skills.
- Excellent communication and collaboration abilities.

If you believe you are qualified and are looking forward to setting your career on a fast track, apply by submitting a few paragraphs explaining why you believe you are the right person for this role. To know more about Techolution, visit our website: www.techolution.com

About Techolution:
Techolution is a next-gen AI consulting firm on track to become one of the most admired brands in the world for "AI done right". Our purpose is to harness our expertise in novel technologies to deliver more profits for our enterprise clients while helping them deliver a better human experience for the communities they serve. At Techolution, we build custom AI solutions that produce revolutionary outcomes for enterprises worldwide. Specializing in "AI Done Right," we leverage our expertise and proprietary IP to transform operations and help achieve business goals efficiently. We are honored to have recently received the prestigious Inc 500 Best In Business award, a testament to our commitment to excellence. We were also awarded AI Solution Provider of the Year by The AI Summit 2023, were a Platinum sponsor at the Advantage DoD 2024 Symposium, and a lot more exciting stuff!

While we are big enough to be trusted by some of the greatest brands in the world, we are small enough to care about delivering meaningful, ROI-generating innovation at a guaranteed price for each client that we serve. Our thought leader, Luv Tulsidas, wrote and published a book in collaboration with Forbes, "Failing Fast? Secrets to Succeed Fast with AI". Refer here for more details on the content: https://www.luvtulsidas.com/

Let's explore further! Uncover our unique AI accelerators with us:
1. Enterprise LLM Studio: Our no-code DIY AI studio for enterprises. Choose an LLM, connect it to your data, and create an expert-level agent in 20 minutes.
2. AppMod.AI: Modernizes ancient tech stacks quickly, achieving over 80% autonomy for major brands!
3. ComputerVision.AI: Offers customizable Computer Vision and Audio AI models, plus DIY tools and a Real-Time Co-Pilot for human-AI collaboration!
4. Robotics and Edge Device Fabrication: Provides comprehensive robotics, hardware fabrication, and AI-integrated edge design services.
5. RLEF AI Platform: Our proven Reinforcement Learning with Expert Feedback (RLEF) approach bridges Lab-Grade AI to Real-World AI.

Some videos you wanna watch!
- Computer Vision demo at The AI Summit New York 2023
- Life at Techolution
- GoogleNext 2023
- Ai4 - Artificial Intelligence Conferences 2023
- WaWa - Solving Food Wastage
- Saving lives - Brooklyn Hospital
- Innovation Done Right on Google Cloud
- Techolution featured on Worldwide Business with Kathy Ireland
- Techolution presented by ION World's Greatest

Visit us @ www.techolution.com to know more about our revolutionary core practices and how we enrich the human experience with technology.
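One common pattern behind the migration work described above is a watermark-based incremental load. The sketch below assumes a SQL Server source reachable over JDBC, a Delta target on ADLS, and invented table and column names; it is illustrative only, not Techolution's implementation.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("incremental-migration").getOrCreate()

last_watermark = "2024-01-01T00:00:00"   # normally read from a control table

source = (spark.read
    .format("jdbc")
    .option("url", "jdbc:sqlserver://onprem-host:1433;databaseName=sales")
    .option("dbtable", "dbo.orders")
    .option("user", "etl_user")
    .option("password", "<secret>")       # in practice, fetch from Key Vault
    .load())

# Pull only rows changed since the last successful load
changed = source.where(f"modified_at > '{last_watermark}'")

(changed.write
    .mode("append")
    .format("delta")                      # assumes a Databricks/Delta target
    .save("abfss://curated@mylake.dfs.core.windows.net/orders/"))
```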

Posted 1 day ago

Apply

5.0 years

0 Lacs

India

Remote

We're looking for a Technical Delivery Manager with a strong mix of data engineering expertise and delivery planning skills.

What You'll Do:
- Own backlog grooming, requirement translation, and delivery planning.
- Triage data platform tickets and work with L2/L3 support teams.
- Lead agile delivery teams and ensure technical alignment.
- Collaborate closely with stakeholders for smooth delivery.

Must-Haves:
- 5+ years in technical product/project management within data platforms.
- Experience handling agile teams and managing end-to-end data initiatives.
- Hands-on with Spark, Delta Lake, Databricks, and one cloud platform (Azure, AWS, or GCP).
- Strong communication and planning skills.
- Proficient in Jira and Confluence.
- 📌 Bachelor's degree in Computer Science or equivalent.
- 👥 Prior experience in team handling is essential.

Be part of shaping the future of data. Let's connect!

Posted 1 day ago

Apply

0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Job Title: Political Analyst – Manager
Location: Trivandrum (for projects across India)

About Varahe Analytics:
Varahe Analytics is one of India's premier integrated political consulting firms, specializing in building data-driven 360-degree election management. We help our clients with strategic advice and implementation, combining data-backed insights and in-depth ground intelligence into a holistic electoral campaign. We are passionate about our democracy and the politics that shape our world. We draw on some of the sharpest minds from distinguished institutions and diverse professional backgrounds to help us achieve our goal of building electoral strategies that spark conversations, effect change, and help shape electoral and legislative ecosystems in our country.

About the Team & Role:
We are seeking a Manager to join our Political Analysis Team for a project based in Trivandrum. This role requires deep regional knowledge, fluency in Malayalam, and a strong grasp of political dynamics in Kerala. The ideal candidate will bring a nuanced understanding of Kerala's political environment and a track record of managing research and field teams. As a manager, you will lead a team responsible for tracking political developments, conducting research, managing field operations, and delivering strategic insights to stakeholders. This role is critical in transforming field intelligence into strategic inputs for decision-makers.

What Would This Role Entail?
- Track and analyze key political, electoral, and socio-economic developments in Kerala
- Manage a team of analysts and field coordinators to conduct primary research (e.g., interviews, surveys, focus groups)
- Supervise preparation of constituency- and region-level reports with actionable insights
- Collaborate with data and strategy teams to align field intelligence with campaign goals
- Interface with senior stakeholders and client-side representatives, ensuring protocol adherence
- Maintain timelines, reporting quality, and confidentiality in all deliverables
- Drive field operations and ensure real-time intelligence reporting

Necessary Skills:
- Bachelor's degree or higher (Social Sciences, Political Science, Economics, or related fields preferred)
- Fluency in Malayalam and English (spoken and written)
- Strong understanding of Kerala's political landscape
- Excellent research, analytical, and communication skills
- Leadership experience managing cross-functional or field teams
- High attention to detail, ability to multitask, and deliver under tight deadlines
- Proficient in Microsoft Office and Google Suite tools
- Willingness to travel extensively as required by the project

Good-to-Have Skills:
- Prior experience in political consulting, grassroots organizing, journalism, or electoral field research
- Familiarity with electoral campaign structures and political party functioning
- Comfort working in fast-paced, high-stakes environments

How to Apply:
If you're a professional looking for a high-impact challenge, interested in joining a team of like-minded and motivated individuals who think strategically, act decisively, and get things done, drop an email at openings@varaheanalytics.com

Posted 1 day ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Company Overview:
With 80,000 customers across 150 countries, UKG is the largest U.S.-based private software company in the world. And we're only getting started. Ready to bring your bold ideas and collaborative mindset to an organization that still has so much more to build and achieve? Read on.

At UKG, you get more than just a job. You get to work with purpose. Our team of U Krewers are on a mission to inspire every organization to become a great place to work through our award-winning HR technology built for all. Here, we know that you're more than your work. That's why our benefits help you thrive personally and professionally, from wellness programs and tuition reimbursement to U Choose – a customizable expense reimbursement program that can be used for more than 200 needs that best suit you and your family, from student loan repayment, to childcare, to pet insurance. Our inclusive culture, active and engaged employee resource groups, and caring leaders value every voice and support you in doing the best work of your career. If you're passionate about our purpose – people – then we can't wait to support whatever gives you purpose. We're united by purpose, inspired by you.

Job Description:
- Design software products using modeling techniques and software design patterns.
- Develop cloud-native SaaS products/applications and work with state-of-the-art cloud technologies.
- Design and develop web-based business applications.
- Participate in design and coding of the application software.
- Use and contribute to the Continuous Integration and Continuous Delivery (CI/CD) process.
- Mentor the team on technology concepts and ensure team compliance with best practices for design.
- Mentor the team on the best techniques to debug and troubleshoot issues.
- Interpret informal requirements descriptions and detail them for technical teams' consumption.
- Participate in code and design reviews to ensure quality and conformance to product standards.

Academic Qualifications:
- Graduate/postgraduate in Computer Science with at least 60% throughout academics

Professional Experience:
- 2–5 years' IT experience, with more than 2 years of relevant experience
- Experience building cloud-native SaaS/cloud applications
- Strong hands-on experience with Java and XML
- Experience in object-oriented analysis, design and programming, database modeling, etc.
- Experience using one or more ORM frameworks such as Hibernate/JPA
- Experience using one or more application frameworks such as Spring
- Experience in JavaScript, AJAX and other Java presentation technologies; SOA and Web Services is an added advantage
- Core product development experience on SaaS/cloud/multitenant-based projects is good to have
- Good experience with unit testing processes and tools (JUnit, Mockito, PowerMock, etc.)
- Good experience with the Continuous Integration and Continuous Delivery (CI/CD) process
- Hands-on experience with design patterns
- Exposure to database techniques/tools such as data modeling, Oracle, SQL, etc.
- Experience in data analytics using Cassandra and Spark is an added advantage
- Experience using one or more application containers such as JBoss or Tomcat
- Conversant with platforms, tools and frameworks used in application development
- Exposure to Agile/Scrum methodology and TDD (Test-Driven Development)
- Excellent debugging/troubleshooting skills
- Good communication skills

Where we're going:
UKG is on the cusp of something truly special. Worldwide, we already hold the #1 market share position for workforce management and the #2 position for human capital management. Tens of millions of frontline workers start and end their days with our software, with billions of shifts managed annually through UKG solutions today. Yet it's our AI-powered product portfolio designed to support customers of all sizes, industries, and geographies that will propel us into an even brighter tomorrow!

UKG is proud to be an equal opportunity employer and is committed to promoting diversity and inclusion in the workplace, including the recruitment process.

Disability Accommodation in the Application and Interview Process:
For individuals with disabilities that need additional assistance at any point in the application and interview process, please email UKGCareers@ukg.com

Posted 1 day ago

Apply

6.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Title: Data Scientist
Experience: 6–10 years
Location: Noida
Contract duration: 6 months, extendable

Responsibilities:
- Model Development: Design and implement ML models to tackle complex business challenges.
- Data Preprocessing: Clean, preprocess, and analyze large datasets for meaningful insights and model features.
- Model Training: Train and fine-tune ML models using various techniques, including deep learning and ensemble methods.
- Evaluation and Optimization: Assess model performance; optimize for accuracy, efficiency, and scalability.
- Deployment: Deploy ML models in production and monitor performance for reliability.
- Collaboration: Work with data scientists, engineers, and stakeholders to integrate ML solutions.
- Research: Stay updated on ML/AI advancements and contribute to internal knowledge.
- Documentation: Maintain comprehensive documentation for all ML models and processes.

Qualification:
- Bachelor's or master's in Computer Science, Machine Learning, Data Science, or a related field, with 6–10 years of experience.

Desirable Skills (Must Have):
1. Experience in time series forecasting, regression models, and classification models
2. Python, R, data analysis
3. Large-scale data handling with Pandas, NumPy, and Matplotlib
4. Version control: Git or similar
5. ML frameworks: hands-on experience in TensorFlow, PyTorch, Scikit-learn, Keras
6. Good knowledge of a cloud platform (AWS/Azure/GCP), Docker, Kubernetes
7. Model selection, evaluation, deployment, data collection and preprocessing, feature engineering, estimation

Good to Have:
- Experience with Big Data and analytics using technologies like Hadoop, Spark, etc.
- Additional experience or knowledge in AI/ML technologies beyond the mentioned frameworks
- BFSI and banking domain experience
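As a small illustration of the "model selection, evaluation" skills listed above, here is a self-contained scikit-learn sketch on synthetic data: train a classifier, then score it on a held-out split. Nothing here is specific to this employer.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for real features
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

preds = model.predict(X_test)
print(f"hold-out accuracy: {accuracy_score(y_test, preds):.3f}")
```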

Posted 1 day ago

Apply

10.0 years

0 Lacs

Vadodara, Gujarat, India

On-site

Company Description:
Wiser Solutions is a suite of in-store and eCommerce intelligence and execution tools. We're on a mission to enable brands, retailers, and retail channel partners to gather intelligence and automate actions to optimize in-store and online pricing, marketing, and operations initiatives. Our Commerce Execution Suite is available globally.

Job Description:
When looking to buy a product, whether in a brick-and-mortar store or online, it can be hard enough to find one that not only has the characteristics you are looking for but is also at a price you are willing to pay. It can be especially frustrating when you finally find one, but it is out of stock. Likewise, brands and retailers can have a difficult time getting the visibility they need to ensure you have the most seamless experience possible in selecting their product. We at Wiser believe that shoppers should have this seamless experience, and we want to do that by providing the brands and retailers the visibility they need to make that belief a reality.

Our goal is to solve a messy problem elegantly and cost-effectively. Our job is to collect, categorize, and analyze lots of structured and semi-structured data from lots of different places every day (whether it's 20 million+ products from 500+ websites or data collected from over 300,000 brick-and-mortar stores across the country). We help our customers be more competitive by discovering interesting patterns in this data they can use to their advantage, while being uniquely positioned to do this across both online and in-store.

We are looking for a lead-level software engineer to lead the charge on a team of like-minded individuals responsible for developing the data architecture that powers our data collection process and analytics platform. If you have a passion for optimization, scaling, and integration challenges, this may be the role for you.

What You Will Do:
- Think like our customers – work with product and engineering leaders to define data solutions that support customers' business practices.
- Design, develop, and extend our data pipeline services and architecture to implement your solutions – you will be collaborating on some of the most important and complex parts of our system that form the foundation for the business value our organization provides.
- Foster team growth – provide mentorship to junior team members and evangelize expertise to others.
- Improve the quality of our solutions – help build enduring trust within our organization and amongst our customers by ensuring high quality standards of the data we manage.
- Own your work – take responsibility to shepherd your projects from idea through delivery into production.
- Bring new ideas to the table – some of our best innovations originate within the team.

Technologies We Use:
- Languages: SQL, Python
- Infrastructure: AWS, Docker, Kubernetes, Apache Airflow, Apache Spark, Apache Kafka, Terraform
- Databases: Snowflake, Trino/Starburst, Redshift, MongoDB, Postgres, MySQL
- Others: Tableau (as a business intelligence solution)

Qualifications:
- Bachelor's/master's degree in Computer Science or relevant technical degree
- 10+ years of professional software engineering experience
- Strong proficiency with data languages such as Python and SQL
- Strong proficiency with data processing technologies such as Spark, Flink, and Airflow
- Strong proficiency with RDBMS/NoSQL/Big Data solutions (Postgres, MongoDB, Snowflake, etc.)
- Solid understanding of streaming solutions such as Kafka, Pulsar, Kinesis/Firehose, etc.
- Hands-on experience with Docker, Kubernetes, infrastructure as code using Terraform, and Kubernetes package management with Helm charts
- Solid understanding of ETL/ELT and OLTP/OLAP concepts
- Solid understanding of columnar/row-oriented data structures (e.g., Parquet, ORC, Avro)
- Solid understanding of Apache Iceberg or other open table formats
- Proven ability to transform raw unstructured/semi-structured data into structured data in accordance with business requirements
- Solid understanding of AWS, Linux, and infrastructure concepts
- Proven ability to diagnose and address data abnormalities in systems
- Proven ability to learn quickly, make pragmatic decisions, and adapt to changing business needs
- Experience building data warehouses using conformed dimensional models
- Experience building data lakes and/or leveraging data lake solutions (e.g., Trino, Dremio, Druid)
- Experience working with business intelligence solutions (e.g., Tableau)
- Experience working with ML/agentic AI pipelines (e.g., LangChain, LlamaIndex)
- Understanding of Domain-Driven Design concepts and accompanying microservice architecture
- Passion for data, analytics, or machine learning
- Focus on value: shipping software that matters to the company and the customer

Bonus Points:
- Experience working with vector databases
- Experience working within a retail or ecommerce environment
- Proficiency in other programming languages such as Scala, Java, Golang, etc.
- Experience working with Apache Arrow and/or other in-memory columnar data technologies

Supervisory Responsibility:
- Provide mentorship to team members on adopted patterns and best practices.
- Organize and lead agile ceremonies such as daily stand-ups, planning, etc.

Additional Information:
EEO STATEMENT: Wiser Solutions, Inc. is an Equal Opportunity Employer and prohibits discrimination, harassment, and retaliation of any kind. Wiser Solutions, Inc. is committed to the principle of equal employment opportunity for all employees and applicants, providing a work environment free of discrimination, harassment, and retaliation. All employment decisions at Wiser Solutions, Inc. are based on business needs, job requirements, and individual qualifications, without regard to race, color, religion, sex, national origin, family or parental status, disability, genetics, age, sexual orientation, veteran status, or any other status protected by state, federal, or local law. Wiser Solutions, Inc. will not tolerate discrimination, harassment, or retaliation based on any of these characteristics.
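Since the stack above includes Apache Airflow, here is a minimal Airflow-style DAG sketch of an extract-then-transform pipeline; the task bodies are stubs and all names are illustrative, not Wiser's actual pipeline. The `schedule` parameter assumes Airflow 2.4 or newer.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw product data from source APIs")

def transform():
    print("normalize and categorize products")

with DAG(
    dag_id="product_data_pipeline",   # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    extract_task >> transform_task   # transform runs only after extract succeeds
```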

Posted 1 day ago

Apply

3.0 - 5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Title: Graphic Designer
Job Location: Noida Office (for projects across India)
Minimum: 3–5 years of experience
Language proficiency in Tamil and Malayalam is mandatory.
Note: This is a short-term contractual role for a period of 6 months to 1 year.

About Varahe Analytics:
Varahe Analytics is one of India's premier integrated political consulting firms specializing in building data-driven 360-degree election campaigns. We help our clients with strategic advice and implementation, combining data-backed insights and in-depth ground intelligence into a holistic electoral campaign. We are passionate about our democracy and the politics that shape our world. We draw on some of the sharpest minds from distinguished institutions and diverse professional backgrounds to help us achieve our goal of building electoral strategies that spark conversations, effect change, and help shape electoral and legislative ecosystems in our country.

About the Team:
We are looking for a creative and talented Graphic Designer to join our team. The ideal candidate will have a strong eye for visual composition and a passion for creating stunning graphics.

Key Responsibilities:
- Create Visual Content: Design high-quality graphics, including logos, brochures, posters, social media visuals, advertisements, and other marketing materials.
- Maintain Brand Consistency: Ensure all designs adhere to company branding guidelines and maintain a consistent visual identity across all platforms.
- Stay Updated: Keep abreast of the latest design trends, tools, and best practices in graphic design to continuously improve design quality and efficiency.
- Revise Designs: Incorporate feedback and make necessary changes to designs, ensuring final deliverables meet the required specifications and standards.

Qualifications:
- Education: A degree in Graphic Design, Visual Communication, or a related field.
- Experience: Minimum of 5 years of experience in graphic design or a related field.

Necessary Skills:
- Proficiency in design software such as Adobe Creative Suite (Illustrator, Photoshop, InDesign) and other relevant tools.
- Strong understanding of typography, color theory, and layout design.
- Ability to create visually appealing and effective designs for print and digital media.
- Excellent communication and collaboration skills.
- Attention to detail and a strong eye for aesthetics.
- Knowledge of motion graphics and video editing software is a plus.
- Proficiency in Tamil, Malayalam, English, and Hindi is a must.

Interested professionals looking for a high-impact challenge, capable of working with a team of like-minded and motivated individuals who think strategically, act decisively, and get things done, are requested to drop an email at openings@varaheanalytics.com

Posted 1 day ago

Apply

13.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Company Description:
Organizations everywhere struggle under the crushing costs and complexities of "solutions" that promise to simplify their lives, create a better experience for their customers and employees, and help them grow. Software is a choice that can make or break a business: it can create better or worse experiences, propel or throttle growth. Business software has become a blocker instead of a way to get work done.

There's another option: Freshworks, with a fresh vision for how the world works. At Freshworks, we build uncomplicated service software that delivers exceptional customer and employee experiences. Our enterprise-grade solutions are powerful, yet easy to use, and quick to deliver results. Our people-first approach to AI eliminates friction, making employees more effective and organizations more productive. Over 72,000 companies, including Bridgestone, New Balance, Nucor, S&P Global, and Sony Music, trust Freshworks' customer experience (CX) and employee experience (EX) software to fuel customer loyalty and service efficiency. And over 4,500 Freshworks employees make this possible, all around the world. Fresh vision. Real impact. Come build it with us.

Job Description:
We are looking for a BI Architect with 13+ years of experience to lead the design and implementation of scalable BI and data architecture solutions. The role involves driving data modeling, cloud-based pipelines, migration projects, and data lake initiatives using technologies like AWS, Kafka, Spark, SQL, and Python. Experience with EDW modeling and architecture is a strong plus.

Key Responsibilities:
- Design and develop scalable BI and data models to support enterprise analytics.
- Lead data platform migration from legacy BI systems to modern cloud architectures.
- Architect and manage data lakes, batch and streaming pipelines, and real-time integrations via Kafka and APIs.
- Support data governance, quality, and access control initiatives.
- Partner with data engineers, analysts, and business stakeholders to deliver reliable, high-performing data solutions.
- Contribute to architecture decisions and platform scalability planning.

Qualifications:
- 10–15 years of relevant experience, with 10+ years in BI, data engineering, or data architecture roles.
- Proficiency in SQL, Python, Apache Spark, and Kafka.
- Strong hands-on experience with AWS data services (e.g., S3, Redshift, Glue, EMR).
- Track record of leading data migration and modernization projects.
- Solid understanding of data governance, security, and scalable pipeline design.
- Excellent collaboration and communication skills.

Good to Have:
- Experience with enterprise data warehouse (EDW) modeling and architecture.
- Familiarity with BI tools like Power BI, Tableau, Looker, or QuickSight.
- Knowledge of lakehouse, data mesh, or modern data stack concepts.

Additional Information:
At Freshworks, we are creating a global workplace that enables everyone to find their true potential, purpose, and passion, irrespective of their background, gender, race, sexual orientation, religion, and ethnicity. We are committed to providing equal opportunity for all and believe that diversity in the workplace creates a more vibrant, richer work environment that advances the goals of our employees, communities, and the business.
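The "streaming pipelines and real-time integrations via Kafka" responsibility often takes the shape below: a hedged Spark Structured Streaming sketch that reads a Kafka topic and lands it in a data-lake path. The broker address, topic, and paths are placeholders, not details from the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-to-lake").getOrCreate()

stream = (spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # placeholder broker
    .option("subscribe", "clickstream")                 # placeholder topic
    .option("startingOffsets", "latest")
    .load())

# Kafka delivers bytes; cast the payload to a string for downstream parsing
events = stream.select(F.col("value").cast("string").alias("payload"),
                       F.col("timestamp"))

query = (events.writeStream
    .format("parquet")
    .option("path", "s3a://lake/raw/clickstream/")
    .option("checkpointLocation", "s3a://lake/_checkpoints/clickstream/")
    .start())

query.awaitTermination()
```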

Posted 1 day ago

Apply

1.0 years

0 Lacs

Mohali district, India

On-site

🚀 We're Hiring – Immediate Openings! 🚀
📍 Location: Mohali
📅 Duration: 1-Year Contract
👥 Experience: 5–6 Years

We're looking for talented professionals to join our team immediately for the following roles:

🔹 Microsoft Dynamics Engineer (D365)
- Strong hands-on experience with Microsoft Dynamics 365 (CE/CRM or F&O)
- Expertise in Power Platform, Power Automate, and integrations
- Proficient in .NET, JavaScript, and CRM customizations

🔹 Hadoop Engineer
- Solid experience with the Hadoop ecosystem (HDFS, Hive, Spark, etc.)
- Skilled in Java/Python/Scala and big data tools
- Experience in managing large-scale distributed data systems

💼 If you or someone you know fits the bill, feel free to connect or share your profile at neha.sehgal@prakharsoftwares.com
📢 Tag & Share – You might help someone land their next big opportunity!

#Hiring #MicrosoftDynamics #Hadoop #MohaliJobs #TechJobs #ImmediateJoiners #D365 #BigData #ContractJobs

Posted 1 day ago

Apply

5.0 years

0 Lacs

Bengaluru East, Karnataka, India

On-site

As a Data Scientist, you'll be part of a team working on cutting-edge machine learning models. You will work side by side with Analysts and Machine Learning Engineers and take full ownership of your work – from the initial idea-generation phase to the implementation of the final product. Our ideal candidate is result-focused and innovative, with a solid quantitative background and a good business understanding.

About The Role:
- Work in a multi-disciplined team where you'll take full ownership of turning discoveries and ideas into machine learning models.
- Prototype diverse machine learning models, from credit scoring to recommendation engines.
- Research and develop ways to improve our models, using our vast and unique set of data.
- Actively contribute to taking Data Science at Kredivo to the next level.

About You:
- Minimum of 5 years of hands-on experience in risk modeling within the financial services industry or other similar industries, and at least 2 years of experience managing a team.
- Strong domain expertise and solid understanding of financial products, risk metrics, and regulatory requirements.
- Understanding of risk modeling best practices, and capable of delivering end-to-end, robust, and impactful data products.
- Proven track record of delivering high-quality data products in a fast-paced, dynamic environment.
- Excellent problem-solving skills with a demonstrated ability to think analytically and strategically.
- Strong communication skills with the ability to convey complex concepts to both technical and non-technical audiences.

Bonus Points (optional):
- Masters, PhD, or equivalent experience in a quantitative field (e.g., Mathematics, Statistics, Econometrics, Artificial Intelligence, Physics).
- Experience working with large-scale datasets and distributed computing frameworks (e.g., Spark).
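For flavor, here is a minimal credit-scoring-style sketch in scikit-learn, roughly the kind of prototyping this role mentions: a logistic regression scored by ROC AUC on synthetic, imbalanced data. It is an illustration under those assumptions, not Kredivo's methodology.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Imbalanced synthetic data, loosely mimicking rare default events
X, y = make_classification(n_samples=5000, n_features=15,
                           weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0
)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

scores = model.predict_proba(X_test)[:, 1]   # probability of the "bad" class
print(f"ROC AUC: {roc_auc_score(y_test, scores):.3f}")
```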

Posted 1 day ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Title: Azure Data Engineer with Databricks
Experience: 5–10 years
Job Level: Senior Engineer / Lead / Architect
Notice Period: Immediate joiner

Role Overview:
Join our dynamic team at Team Geek Solutions, where we specialize in innovative data solutions and cutting-edge technology implementations to empower businesses across various sectors. We are looking for a skilled Azure Data Engineer with expertise in Databricks to join our high-performing data and AI team for a critical client engagement. The ideal candidate will have strong hands-on experience in building scalable data pipelines, data transformation, and real-time data processing using Azure Data Services and Databricks.

Key Responsibilities:
- Design, develop, and deploy end-to-end data pipelines using Azure Databricks, Azure Data Factory, and Azure Synapse Analytics.
- Perform data ingestion, data wrangling, and ETL/ELT processes from various structured and unstructured data sources (e.g., APIs, on-prem databases, flat files).
- Optimize and tune Spark-based jobs and Databricks notebooks for performance and scalability.
- Implement best practices for CI/CD, code versioning, and testing in a Databricks environment using DevOps pipelines.
- Design data lake and data warehouse solutions using Delta Lake and Synapse Analytics.
- Ensure data security, governance, and compliance using Azure-native tools (e.g., Azure Purview, Key Vault, RBAC).
- Collaborate with data scientists to enable feature engineering and model training within Databricks.
- Write efficient SQL and PySpark code for data transformation and analytics.
- Monitor and maintain existing data pipelines and troubleshoot issues in a production environment.
- Document technical solutions, architecture diagrams, and data lineage as part of delivery.

Mandatory Skills & Technologies:
- Azure Cloud Services: Azure Data Factory, Azure Databricks, Azure Synapse Analytics, Azure Data Lake Storage (Gen2), Azure Key Vault, Azure Functions, Azure Monitor
- Databricks Platform: Delta Lake, Databricks notebooks, job clusters, MLflow (optional), Unity Catalog
- Programming Languages: PySpark, SQL, Python
- Data Pipelines: ETL/ELT pipeline design and orchestration
- Version Control & DevOps: Git, Azure DevOps, CI/CD pipelines
- Data Modeling: star/snowflake schema, dimensional modeling
- Performance Tuning: Spark job optimization, data partitioning strategies
- Data Governance & Security: Azure Purview, RBAC, data masking

Nice to Have:
- Experience with Kafka, Event Hub, or other real-time streaming platforms
- Exposure to Power BI or other visualization tools
- Knowledge of Terraform or ARM templates for infrastructure as code
- Experience in MLOps and integration with MLflow for model lifecycle management

Certifications (Good to Have):
- Microsoft Certified: Azure Data Engineer Associate
- Databricks Certified Data Engineer Associate / Professional
- DP-203: Data Engineering on Microsoft Azure

Soft Skills:
- Strong communication and client interaction skills
- Analytical thinking and problem-solving
- Agile mindset with familiarity in Scrum/Kanban
- Team player with mentoring ability for junior engineers

Skills: data partitioning strategies, Azure Functions, data analytics, Unity Catalog, RBAC, Databricks, ELT, DevOps, Azure Data Factory, Delta Lake, Spark job optimization, job clusters, Azure DevOps, ETL/ELT pipeline design and orchestration, data masking, Azure Key Vault, Azure Databricks, Azure Synapse, star/snowflake schema, Azure Data Lake Storage (Gen2), Git, SQL, ETL, Snowflake, Azure, Python, Azure Purview, PySpark, MLflow, CI/CD pipelines, dimensional modeling, SQL Server, big data technologies, Azure Monitor, Databricks notebooks
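A hedged example of one building block named above, Delta Lake on Databricks: an upsert (merge) into a Delta table using the delta-spark API. The paths and join key are hypothetical, and the snippet assumes a runtime where Delta Lake is already configured.

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-upsert").getOrCreate()

updates = spark.read.parquet("/mnt/staging/customers/")     # new batch

target = DeltaTable.forPath(spark, "/mnt/curated/customers/")

(target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()        # refresh existing rows
    .whenNotMatchedInsertAll()     # add new customers
    .execute())
```

The merge keeps the curated table idempotent: re-running the same batch produces the same result, which is why this pattern is common in production pipelines like those described above.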

Posted 1 day ago

Apply

50.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About Gap Inc.:
Our past is full of iconic moments – but our future is going to spark many more. Our brands – Gap, Banana Republic, Old Navy and Athleta – have dressed people from all walks of life and all kinds of families, all over the world, for every occasion for more than 50 years. But we're more than the clothes that we make. We know that business can and should be a force for good, and it's why we work hard to make product that makes people feel good, inside and out. It's why we're committed to giving back to the communities where we live and work. If you're one of the super-talented who thrive on change, aren't afraid to take risks and love to make a difference, come grow with us.

About The Role:
The Manager of Supplier Management will lead the supplier relationship management function within the Accounts Payable (AP) team. This role is responsible for overseeing and managing the company's supplier base, ensuring timely and accurate vendor information, resolving supplier issues, and optimizing supplier payment processes. The ideal candidate will have a deep understanding of supplier management, AP processes, and strong leadership abilities.

What You'll Do:
- Supplier Relationship Management: Develop and maintain strong relationships with key suppliers, ensuring open and effective communication. Address and resolve supplier issues or disputes regarding invoicing, payments, and terms in a timely and professional manner. Work closely with suppliers to understand their needs and improve the overall supplier experience.
- Supplier Onboarding & Information Management: Lead the supplier onboarding process, ensuring that all relevant supplier information is gathered, verified, and entered into the system accurately. Regularly audit and update supplier information to ensure accuracy and compliance. Collaborate with procurement and legal teams to ensure all contracts and supplier agreements are aligned with company policies.
- Accounts Payable Collaboration: Collaborate with the AP team to ensure seamless processing of supplier invoices and payments, optimizing cash flow and vendor satisfaction. Oversee the resolution of any discrepancies between suppliers and internal teams (e.g., procurement, finance) to ensure timely payment. Work closely with AP teams to address supplier inquiries, track payment status, and resolve issues related to invoice processing and payment cycles.
- Process Improvement & Efficiency: Continuously assess and improve supplier management and AP processes to enhance efficiency, reduce errors, and increase automation. Implement and maintain best practices for managing supplier relationships, including effective communication, issue resolution, and performance metrics. Identify opportunities for process optimization within the AP team to support a faster, more efficient payment cycle.
- Supplier Performance Monitoring: Develop and implement metrics and KPIs to measure supplier performance, ensuring timely deliveries, adherence to terms, and quality standards. Track and report on supplier performance, escalating issues when necessary and working with vendors to improve outcomes.
- Reporting & Analysis: Generate regular reports on supplier activity, payment cycles, aging analysis, and discrepancies for senior leadership. Provide data-driven insights and recommendations to improve supplier management and accounts payable processes.
- Compliance & Risk Management: Ensure all supplier management activities comply with internal controls, accounting standards, and regulatory requirements. Identify potential risks in supplier relationships and take proactive steps to mitigate them.
- Collaboration with Cross-Functional Teams: Partner with procurement, legal, and treasury teams to ensure that supplier terms, contracts, and relationships align with corporate goals. Support cross-functional projects that require supplier coordination, such as system upgrades or new process implementation.

Who You Are:
- Bachelor's degree in Business, Finance, Accounting, or a related field.
- 7+ years of experience in supplier management, accounts payable, or procurement, with at least 3 years in a managerial or leadership role.
- Strong knowledge of supplier relationship management, procurement processes, and accounts payable operations.
- Experience with ERP systems (e.g., SAP, Oracle, or similar), supplier management software, and advanced Excel skills.
- Excellent communication, negotiation, and interpersonal skills, with the ability to manage multiple stakeholder relationships effectively.
- Strong analytical skills and the ability to assess and improve processes.
- Demonstrated ability to manage a team, mentor and develop talent, and build cross-functional relationships.
- Knowledge of compliance regulations, internal controls, and audit processes.
- High attention to detail and the ability to work under pressure to meet deadlines in a fast-paced environment.

Benefits At Gap Inc.:
- One of the most competitive paid time off plans in the industry
- Comprehensive health coverage for employees, same-sex partners and their families
- Health and wellness program: free annual health check-ups, fitness center and Employee Assistance Program
- Comprehensive benefits to support the journey of parenthood
- Retirement planning assistance
- See more of the benefits we offer.

Gap Inc. is an equal-opportunity employer and is committed to providing a workplace free from harassment and discrimination. We are committed to recruiting, hiring, training and promoting qualified people of all backgrounds, and make all employment decisions without regard to any protected status. We have received numerous awards for our long-held commitment to equality and will continue to foster a diverse and inclusive environment of belonging. In 2022, we were recognized by Forbes as one of the World's Best Employers and one of the Best Employers for Diversity.

Posted 1 day ago

Apply

8.0 years

0 Lacs

India

Remote


Job Title: Principal SDE
Experience: 8+ Years
Location: Remote

At Shakudo, we’re building the world’s first operating system for data and AI—a unified platform that streamlines powerful open-source and proprietary tools into a seamless, production-ready environment. We’re looking for a Principal Software Development Engineer to lead the development of full end-to-end applications on our platform. This role is ideal for engineers who love solving real customer problems, moving across the stack, and delivering high-impact solutions that showcase what’s possible on Shakudo.

What You’ll Do
• Design and build complete applications, from backend to frontend, using Shakudo and open-source tools like Neo4j, Ollama, Spark, and many more
• Solve real-world data and AI challenges with elegant, production-ready solutions
• Collaborate with Product and Customer Engineering to translate needs into scalable systems
• Drive architecture and design patterns for building on Shakudo, with high autonomy and self-direction
• Set the standard for building efficient, reusable, and impactful solutions

What You Bring
• 8+ years building production systems across the stack
• Strong backend and frontend experience (e.g., Python, React, TypeScript)
• Familiarity with cloud infrastructure, Kubernetes, and data/AI tooling
• A hands-on, solutions-first mindset and a passion for fast, high-quality delivery

Why This Role
You’ll lead by example, building flagship applications that demonstrate the power of Shakudo. This role offers high ownership, high impact, and the chance to shape how modern data and AI solutions are built.

Posted 1 day ago

Apply

4.0 years

0 Lacs

India

On-site


Ascendeum is looking for veterans with extensive hands-on experience in the field of data engineering to build cutting-edge solutions for large-scale data extraction, processing, storage, and retrieval.

About Us:
We provide AdTech strategy consulting to leading Internet websites and apps globally, hosting over 200 million monthly worldwide audiences. Since 2015, our team of consultants and engineers has been consistently delivering intelligent solutions that enable enterprise-level websites and apps to maximize their digital advertising returns.

Job Responsibilities:
• Understand long-term and short-term business requirements to precisely match them with the capabilities of different distributed storage and computing technologies available in the ecosystem.
• Create complex data processing pipelines.
• Design scalable implementations of the models developed by our Data Scientists.
• Deploy data pipelines in production systems based on CI/CD practices.
• Create and maintain clear documentation on data models/schemas as well as transformation/validation rules.
• Troubleshoot and remediate data quality issues raised by pipeline alerts or downstream consumers.

Desired Skills and Experience:
• 4+ years of overall industry experience building and deploying large-scale data processing pipelines in a production environment.
• Experience building data pipelines and data-centric applications using distributed storage platforms such as HDFS, S3, and NoSQL databases (HBase, Cassandra, etc.), and distributed processing platforms such as Hadoop, Spark, Hive, Oozie, Airflow, etc.
• Hands-on experience with MapR, Cloudera, Hortonworks, and/or cloud-based Hadoop distributions (AWS EMR, Azure HDInsight, Qubole, etc.).
• Practical experience working with well-known data engineering tools and platforms such as Kafka, Spark, and Hadoop.
• Solid understanding of data modelling, ML, and AI concepts.
• Fluent in programming languages like Node.js/Java/Python.

Education: B.E./B.Tech/M.Tech/MS.

Thank you for your interest in joining Ascendeum.
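
For context on the orchestration side of a stack like the one above, here is a minimal Airflow DAG sketch that schedules a daily Spark batch job. The DAG id, schedule, and script path are illustrative assumptions, not details from the posting.

```python
# Minimal Airflow (2.x) DAG sketch: schedule a daily Spark batch job.
# The DAG id, script path, and spark-submit arguments are illustrative
# assumptions, not details taken from the posting.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

default_args = {
    "owner": "data-engineering",
    "retries": 2,
    "retry_delay": timedelta(minutes=10),
}

with DAG(
    dag_id="daily_events_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    default_args=default_args,
    catchup=False,
) as dag:
    # Submit a PySpark job that processes the execution date's partition;
    # {{ ds }} is Airflow's built-in macro for the logical date.
    transform_events = BashOperator(
        task_id="transform_events",
        bash_command=(
            "spark-submit --master yarn --deploy-mode cluster "
            "/opt/pipelines/transform_events.py --date {{ ds }}"
        ),
    )
```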

Posted 1 day ago

Apply

5.0 years

0 Lacs

India

Remote


Job Title: Senior Data Engineer
Experience: 5+ Years
Location: Remote
Contract Duration: Short Term
Work Time: IST Shift

Job Description
We are seeking a skilled and experienced Senior Data Engineer to develop scalable and optimized data pipelines using the Databricks Lakehouse platform. The role requires proficiency in Apache Spark, PySpark, cloud data services (AWS, Azure, GCP), and solid programming knowledge in Python and Java. The engineer will collaborate with cross-functional teams to design and deliver high-performing data solutions.

Responsibilities
Data Pipeline Development
• Build efficient ETL/ELT workflows using Databricks and Spark for batch and streaming data
• Utilize Delta Lake and Unity Catalog for structured data management
• Optimize Spark jobs using tuning techniques such as caching, partitioning, and serialization

Cloud-Based Implementation
• Develop and deploy data workflows on AWS (S3, EMR, Glue), Azure (ADLS, ADF, Synapse), and/or GCP (GCS, Dataflow, BigQuery)
• Manage and optimize data storage, access control, and orchestration using native cloud tools
• Implement data ingestion and querying with Databricks Auto Loader and SQL Warehousing

Programming and Automation
• Write clean, reusable, and production-grade code in Python and Java
• Automate workflows using orchestration tools like Airflow, ADF, or Cloud Composer
• Implement testing, logging, and monitoring mechanisms

Collaboration and Support
• Work closely with data analysts, scientists, and business teams to meet data requirements
• Support and troubleshoot production workflows
• Document solutions, maintain version control, and follow Agile/Scrum methodologies

Required Skills
Technical Skills
• Databricks: Experience with notebooks, cluster management, Delta Lake, Unity Catalog, and job orchestration
• Spark: Proficient in transformations, joins, window functions, and tuning
• Programming: Strong in PySpark and Java, with data validation and error handling expertise
• Cloud: Experience with AWS, Azure, or GCP data services and security frameworks
• Tools: Familiarity with Git, CI/CD, Docker (preferred), and data monitoring tools

Experience
• 5–8 years in data engineering or backend development
• Minimum 1–2 years of hands-on experience with Databricks and Spark
• Experience with large-scale data migration, processing, or analytics projects

Certifications (Optional but Preferred)
• Databricks Certified Data Engineer Associate

Working Conditions
• Full-time remote work with availability during IST hours
• Occasional on-site presence may be required during client visits
• No regular travel required
• On-call support expected during deployment phases
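
As a rough, hedged illustration of two items this posting names, Databricks Auto Loader ingestion and partition-aware tuning, the following PySpark sketch assumes a Databricks runtime (the cloudFiles source is Databricks-specific) and uses made-up bucket paths, column names, and table names.

```python
# Hedged sketch, not a definitive implementation: assumes a Databricks
# runtime (the "cloudFiles" Auto Loader source is Databricks-specific),
# Spark 3.3+ for availableNow triggers, and invented paths/columns.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Incremental ingestion with Auto Loader: new files under the raw prefix are
# processed exactly once; the inferred schema is tracked at schemaLocation.
raw_events = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "s3://example-bucket/_schemas/events")
    .load("s3://example-bucket/raw/events/")
)

# One partition-aware tuning move the posting alludes to: repartition by a
# derived date column so writes produce sensibly sized, non-skewed files.
cleaned = (
    raw_events
    .withColumn("event_date", F.to_date("event_ts"))
    .repartition("event_date")
)

# Land the stream in a Delta table; the checkpoint makes the job restartable.
(
    cleaned.writeStream
    .format("delta")
    .option("checkpointLocation", "s3://example-bucket/_checkpoints/events")
    .trigger(availableNow=True)
    .toTable("bronze.events")
)
```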

Posted 1 day ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Title: Data Engineer – Databricks, Delta Live Tables, Data Pipelines
Location: Bhopal / Hyderabad / Pune (On-site)
Experience Required: 5+ Years
Employment Type: Full-Time

Job Summary:
We are seeking a skilled and experienced Data Engineer with a strong background in designing and building data pipelines using Databricks and Delta Live Tables. The ideal candidate should have hands-on experience in managing large-scale data engineering workloads and building scalable, reliable data solutions in cloud environments.

Key Responsibilities:
• Design, develop, and manage scalable and efficient data pipelines using Databricks and Delta Live Tables.
• Work with structured and unstructured data to enable analytics and reporting use cases.
• Implement data ingestion, transformation, and cleansing processes.
• Collaborate with Data Architects, Analysts, and Data Scientists to ensure data quality and integrity.
• Monitor data pipelines and troubleshoot issues to ensure high availability and performance.
• Optimize queries and data flows to reduce costs and increase efficiency.
• Ensure best practices in data security, governance, and compliance.
• Document architecture, processes, and standards.

Required Skills:
• Minimum 5 years of hands-on experience in data engineering.
• Proficient in Apache Spark, Databricks, Delta Lake, and Delta Live Tables.
• Strong programming skills in Python or Scala.
• Experience with cloud platforms such as Azure, AWS, or GCP.
• Proficient in SQL for data manipulation and analysis.
• Experience with ETL/ELT pipelines, data wrangling, and workflow orchestration tools (e.g., Airflow, ADF).
• Understanding of data warehousing, big data ecosystems, and data modeling concepts.
• Familiarity with CI/CD processes in a data engineering context.

Nice to Have:
• Experience with real-time data processing using tools like Kafka or Kinesis.
• Familiarity with machine learning model deployment in data pipelines.
• Experience working in an Agile environment.
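
For readers unfamiliar with Delta Live Tables, a minimal pipeline in the DLT Python API looks roughly like the sketch below. It only runs inside a Databricks DLT pipeline, which provides the global spark session; the source path, column names, and expectation rule are illustrative assumptions.

```python
# Minimal Delta Live Tables sketch: a bronze ingest table feeding a cleaned
# silver table. Runs only inside a Databricks DLT pipeline (which injects
# the global `spark` session); paths, columns, and the expectation rule
# are illustrative assumptions.
import dlt
from pyspark.sql import functions as F


@dlt.table(comment="Raw orders landed from cloud storage via Auto Loader.")
def bronze_orders():
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("s3://example-bucket/raw/orders/")
    )


@dlt.table(comment="Orders with basic cleansing and deduplication applied.")
@dlt.expect_or_drop("valid_amount", "amount > 0")  # declarative data-quality rule
def silver_orders():
    return (
        dlt.read_stream("bronze_orders")
        .withColumn("order_date", F.to_date("order_ts"))
        .dropDuplicates(["order_id"])
    )
```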

Posted 1 day ago

Apply

50.0 years

0 Lacs

Delhi, India

On-site


About Gap Inc.
Our past is full of iconic moments — but our future is going to spark many more. Our brands — Gap, Banana Republic, Old Navy and Athleta — have dressed people from all walks of life and all kinds of families, all over the world, for every occasion for more than 50 years. But we’re more than the clothes that we make. We know that business can and should be a force for good, and it’s why we work hard to make product that makes people feel good, inside and out. It’s why we’re committed to giving back to the communities where we live and work. If you're one of the super-talented who thrive on change, aren't afraid to take risks and love to make a difference, come grow with us.

About The Role
The Merchandiser, Product Development will be responsible for developing the garment samples for the assigned product/style while working closely with the Merchandising teams of the Vendors. As part of the product development team s/he will also be providing support in identifying opportunities of cost saving, onboarding new vendors and executing the innovation agenda for the department in the assigned brand.

What You'll Do
• Downloads tech pack and BOM
• Works with vendors for development sampling and initial costs
• Manages counter costing; handling samples; tracking fit approvals; tracking lab dips
• Follows up with vendor on system updates
• Manages wear-test samples and tailoring samples
• Ensures systems are updated with costs and details
• Coordinates MST legacy ID communications for purchase order
• Partners with brand Design, R&D and key suppliers to develop innovative and quality product
• Partners with Mill management, QA and technical teams to resolve fabric or quality issues
• Resolves issues within a timely manner while working to continuously improve and create internal and external processes and procedures

Who You Are
• Merchandise Sourcing Knowledge – experience in sample development and with offshore production exposure, in a large-sized buying office or trading company
• Planning & Influencing – proven experience in planning, prioritizing and influencing at all levels
• Drive Results – ability to analyze situations and proactively suggest solutions to meet deliverables
• Learning Agility & Experimentation – demonstrates eagerness to learn and explore new ways of approaching goals
• Effective Communicator & Team Player – proven capability to communicate effectively, verbal and written

Benefits at Gap Inc.
• One of the most competitive paid time off plans in the industry
• Comprehensive health coverage for employees, same-sex partners and their families
• Health and wellness program: free annual health check-ups, fitness center and Employee Assistance Program
• Comprehensive benefits to support the journey of parenthood
• Retirement planning assistance
• See more of the benefits we offer.

Gap Inc. is an equal-opportunity employer and is committed to providing a workplace free from harassment and discrimination. We are committed to recruiting, hiring, training and promoting qualified people of all backgrounds, and make all employment decisions without regard to any protected status. We have received numerous awards for our long-held commitment to equality and will continue to foster a diverse and inclusive environment of belonging. In 2022, we were recognized by Forbes as one of the World's Best Employers and one of the Best Employers for Diversity.

Posted 1 day ago

Apply

1.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Location: Gurugram
Role: Sales Development Representative
Experience: 1-4 years
Working Hours: 10:30 AM - 8:00 PM, Monday to Friday (1st and 3rd Saturdays off)

About Darwix AI
Darwix AI is a GenAI-powered platform built for enterprise revenue teams across sales, support, credit, and retail. Our proprietary AI stack ingests multimodal inputs—voice calls, chat logs, emails, and CCTV streams—and delivers real-time nudges, conversation scoring, and performance analytics.

Our product suite includes:
• Transform+: Real-time conversational intelligence for contact centers and field sales
• Sherpa.ai: A multilingual GenAI assistant for live coaching, summaries, and objection handling
• Store Intel: A computer vision tool turning CCTV footage into actionable insights for retail teams

Darwix AI is trusted by leading enterprises such as IndiaMart, Wakefit, Emaar, GIVA, Bank Dofar, and Sobha Realty. We are backed by top institutional and operator investors and are rapidly scaling across India, the Middle East, and Southeast Asia.

Role Summary
This isn’t your average SDR role. You’ll be building pipeline at the enterprise level, targeting decision-makers, and shaping the revenue growth trajectory of Darwix AI.

Key Responsibilities
• Identify and research high-value prospects across India, MENA, and the US
• Launch personalized outbound campaigns using email, LinkedIn, and cold calls
• Book meetings with senior stakeholders: Heads of Sales, CXOs, and VPs
• Qualify inbound leads and convert interest into scheduled product demos
• Run rapid experiments on messaging, channels, and outreach techniques
• Contribute to shaping the SDR playbook and GTM strategies with the founding team

What You Bring
• 1–4 years of experience in SaaS/enterprise B2B sales
• Exceptional verbal and written communication skills
• Resilience, curiosity, and a results-driven mindset
• Ability to personalize outreach and spark conversations with senior leaders
• Familiarity with tools like LinkedIn Sales Navigator, HubSpot, Apollo, Notion
• A passion for GTM strategy, AI, and working at the frontlines of innovation

Tools You'll Use
LinkedIn Sales Navigator | HubSpot | Apollo | Loom | Notion | Google Sheets | Cold Email Templates | GenAI Pitch Assist Tools

Who You’ll Be Talking To
• Founders of high-growth startups
• Sales leaders at unicorns and scaling SaaS businesses
• CXOs at Fortune 500 companies
• Occasionally, VCs and portfolio heads

Your mission: Book the meeting. Own the conversation. Crack the account.

What We Offer
• Competitive base salary + commissions + performance bonuses
• Real growth: path to AE, GTM Strategist, or Revenue Ops roles
• Direct mentorship from founders and leadership
• Deep exposure to enterprise SaaS sales, cold outreach, GTM planning
• Experience in scaling a startup across global markets

This Is NOT:
• A sales support/back-office role
• A repetitive, dial-and-drop cold-calling job
• A “CRM admin” role that passes leads from one tool to another

This is a core GTM role. You will build pipeline, shape our ARR, and grow with us. If you execute well, this role becomes the springboard for your SaaS sales career.

Posted 1 day ago

Apply

2.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Role: SDE 2 - Data
Website: www.trademo.com
Location: Onsite - Gurgaon

What will you be doing here?
● Responsible for the maintenance and growth of a 50TB+ data pipeline serving global SaaS products for businesses, including onboarding new data and collaborating with pre-sales to articulate technical solutions
● Solve complex problems across large datasets by applying algorithms, particularly within the domains of Natural Language Processing (NLP) and Large Language Models (LLMs)
● Leverage bleeding-edge technology to work with large volumes of complex data
● Be hands-on in development: Python, Pandas, NumPy, ETL frameworks
● Preferred exposure to distributed computing frameworks like Apache Spark, Kafka, and Airflow
● Along with individual data engineering contributions, actively help peers and junior team members on architecture and code to ensure the development of scalable, accurate, and highly available solutions
● Collaborate with teams, share knowledge via tech talks, and promote tech and engineering best practices within the team

Requirements
● B.Tech/M.Tech in Computer Science from IIT or equivalent Tier 1 colleges
● 2+ years of relevant work experience in data engineering or related roles
● Proven ability to efficiently work with a high variety and volume of data (50TB+ pipeline experience is a plus)
● Solid understanding of, and preferred exposure to, NoSQL databases, including Elasticsearch, MongoDB, and GraphDB
● Basic understanding of working within cloud infrastructure and cloud-native apps (AWS, Azure, IBM, etc.)
● Exposure to core data engineering concepts and tools: data warehousing, ETL processes, SQL, and NoSQL databases
● Great problem-solving ability over larger sets of data and the ability to apply algorithms, with experience using NLP and LLMs a plus
● Willingness to learn and apply new techniques and technologies to extract intelligence from data, with prior exposure to Machine Learning and NLP being a significant advantage
● Sound understanding of algorithms and data structures
● Ability to write well-crafted, readable, testable, maintainable, and modular code

Desired Profile:
● A hard-working, humble disposition
● Desire to make a strong impact on the lives of millions through your work
● Capacity to communicate well with stakeholders as well as team members and be an effective interface between the Engineering and Product/Business teams
● A quick thinker who can adapt to a fast-paced startup environment and work with minimum supervision
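
As a small, hedged illustration of the Python/Pandas/NumPy ETL style this posting asks for, here is a toy batch job; the file names and columns are invented for the example, and writing Parquet assumes a parquet engine such as pyarrow is installed.

```python
# Toy batch-ETL sketch in the Python/Pandas/NumPy style the posting names.
# File paths and column names are illustrative assumptions.
import numpy as np
import pandas as pd

# Extract: read a day's shipment records from a (hypothetical) CSV drop.
shipments = pd.read_csv("shipments_2024-01-01.csv")

# Transform: normalize country codes, derive a unit-price column,
# and drop rows where the derivation is undefined.
shipments["country"] = shipments["country"].str.upper().str.strip()
shipments["unit_price"] = np.where(
    shipments["quantity"] > 0,
    shipments["value_usd"] / shipments["quantity"],
    np.nan,
)
shipments = shipments.dropna(subset=["unit_price"])

# Load: write a compact columnar file for downstream analytics
# (requires a parquet engine such as pyarrow).
shipments.to_parquet("shipments_2024-01-01.parquet", index=False)
```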

Posted 1 day ago

Apply

Exploring Spark Jobs in India

The demand for professionals with expertise in Spark is on the rise in India. Spark, an open-source distributed computing system, is widely used for big data processing and analytics. Job seekers in India looking to explore opportunities in Spark can find a variety of roles in different industries.

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Chennai
  5. Mumbai

These cities have a high concentration of tech companies and startups actively hiring for Spark roles.

Average Salary Range

The average salary range for Spark professionals in India varies based on experience level:

  • Entry-level: INR 4-6 lakhs per annum
  • Mid-level: INR 8-12 lakhs per annum
  • Experienced: INR 15-25 lakhs per annum

Salaries may vary based on the company, location, and specific job requirements.

Career Path

In the field of Spark, a typical career progression may look like:

  1. Junior Developer
  2. Senior Developer
  3. Tech Lead
  4. Architect

Advancing in this career path often requires gaining experience, acquiring additional skills, and taking on more responsibilities.

Related Skills

Apart from proficiency in Spark, professionals in this field are often expected to have knowledge or experience in:

  • Hadoop
  • Java or Scala programming
  • Data processing and analytics
  • SQL databases

Having a combination of these skills can make a candidate more competitive in the job market.

Interview Questions

  • What is Apache Spark and how is it different from Hadoop? (basic)
  • Explain the difference between RDD, DataFrame, and Dataset in Spark. (medium)
  • How does Spark handle fault tolerance? (medium)
  • What is lazy evaluation in Spark? (basic)
  • Explain the concept of transformations and actions in Spark. (basic)
  • What are the different deployment modes in Spark? (medium)
  • How can you optimize the performance of a Spark job? (advanced)
  • What is the role of a Spark executor? (medium)
  • How does Spark handle memory management? (medium)
  • Explain the Spark shuffle operation. (medium)
  • What are the different types of joins in Spark? (medium)
  • How can you debug a Spark application? (medium)
  • Explain the concept of checkpointing in Spark. (medium)
  • What is lineage in Spark? (basic)
  • How can you monitor and manage a Spark application? (medium)
  • What is the significance of the Spark Driver in a Spark application? (medium)
  • How does Spark SQL differ from traditional SQL? (medium)
  • Explain the concept of broadcast variables in Spark. (medium)
  • What is the purpose of the SparkContext in Spark? (basic)
  • How does Spark handle data partitioning? (medium)
  • Explain the concept of window functions in Spark SQL. (advanced)
  • How can you handle skewed data in Spark? (advanced)
  • What is the use of accumulators in Spark? (advanced)
  • How can you schedule Spark jobs using Apache Oozie? (advanced)
  • Explain the process of Spark job submission and execution. (basic)
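
Several of these questions are easiest to internalize with a runnable snippet. The first sketch below, a minimal local PySpark program with made-up inline data, illustrates lazy evaluation, the transformation/action split, and broadcast joins; the second shows key "salting", one common answer to the skewed-data question. Both are sketches assuming any recent PySpark release, not production code.

```python
# Hedged PySpark sketch for the lazy-evaluation, transformations-vs-actions,
# and broadcast-variable/join questions above. Data is made up inline.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.master("local[*]").appName("demo").getOrCreate()

orders = spark.createDataFrame(
    [(1, "IN", 120.0), (2, "US", 80.0), (3, "IN", 45.0)],
    ["order_id", "country", "amount"],
)
countries = spark.createDataFrame(
    [("IN", "India"), ("US", "United States")],
    ["code", "name"],
)

# Transformations are lazy: this line executes nothing yet; Spark only
# records the operation in the lineage graph.
big_orders = orders.filter(F.col("amount") > 50)

# broadcast() hints that the small dimension table should be shipped whole
# to every executor, avoiding a shuffle join.
joined = big_orders.join(
    F.broadcast(countries), big_orders.country == countries.code
)

# An action (show/collect/count) finally triggers execution of the plan.
joined.select("order_id", "name", "amount").show()
```

Nothing runs until show() is called: that deferred execution is exactly what the lazy-evaluation and transformations-vs-actions questions probe. The second sketch continues from the same SparkSession; the salt factor of 4 is an arbitrary assumption.

```python
# Sketch of key "salting" to handle a skewed join key. The large side gets
# a random salt per row; the small side is replicated once per salt value
# so every salted sub-key still finds its match.
facts = spark.createDataFrame(
    [("hot", 1), ("hot", 2), ("hot", 3), ("cold", 4)], ["key", "val"]
)
dims = spark.createDataFrame([("hot", "H"), ("cold", "C")], ["key", "label"])

SALT = 4
facts_salted = facts.withColumn("salt", (F.rand() * SALT).cast("int"))
dims_salted = dims.withColumn(
    "salt", F.explode(F.array(*[F.lit(i) for i in range(SALT)]))
)

# The hot key's rows now spread across SALT sub-partitions instead of one.
facts_salted.join(dims_salted, ["key", "salt"]).drop("salt").show()
```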

Closing Remark

As you explore opportunities in Spark jobs in India, remember to prepare thoroughly for interviews and showcase your expertise confidently. With the right skills and knowledge, you can excel in this growing field and advance your career in the tech industry. Good luck with your job search!
