8023 Spark Jobs - Page 25

JobPe aggregates job listings for easy access; applications are submitted directly on the original job portal.

100.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Entity: Technology
Job Family Group: IT&S Group

Job Description: You will work with a multi-disciplinary squad and will play a significant role in the design and upkeep of our customer-focused business solutions and integrations.

Let me tell you about the role
As a Senior Solution Architect, you will be responsible for connecting all the digital teams and the consumers and procurers of IT, to build a coordinated, flexible, effective IT architecture for bp's oil & gas application estate. You will also work with other data, integration and platform architects, who specialize in their respective areas, to build fit-for-purpose and multifaceted architecture.

What you will deliver
Architecture: You rigorously develop solution architectures, seeking practical solutions that optimize and re-use capabilities. You will be responsible for building technical designs of services or applications and will care passionately about the integrity of the IT capabilities you develop.
Technology: You are an excellent technologist and have a passion for understanding and learning. You will contribute to digital transformation initiatives from an architectural perspective, facilitating the delivery of solutions. You will bring good hands-on skills in key technologies, and an ability to rapidly assess new technologies with a commercial approach.
Data engineering and analytics: You will have the ability to draw insights from information/knowledge, spanning data analytics and data science, including business intelligence, machine learning pipelines and modelling, and other sophisticated analytics, along with awareness of information modelling of data assets through to their implementation in data pipelines, and the associated data processing and storage techniques.
Safety and compliance: The safety of our people and customers is our highest priority. You will advocate and help ensure our architectures, designs and processes enhance a culture of operational safety and improve our digital security.
Collaboration: You will play an integral role in establishing the team’s abilities while demonstrating your leadership values through delegation, motivation and trust. You will not just lead, but "do". You will build positive relationships across the business and Digital, and advise and influence leaders on technology. You will act as a technology mentor within Digital teams and inspire people to engage with technology as a driver of change. You will understand the long-term needs of the solution you are developing, and enable delivery by building a rapport with team members both inside and outside of bp.

What you will need to be successful (experience and qualifications)
Technical Skills
A Bachelor's (or higher) degree or equivalent work experience.
A confirmed background in architecture with real-world experience of architecting.
Deep functional knowledge of key technology sets, e.g. application, infrastructure, cloud and data.
Experience as part of a tight-knit delivery team, accomplishing outstanding project outcomes in a respectful and supportive culture.
A proven grasp of architecture development and design thinking in an agile environment, adapting delivery techniques to drive outstanding project delivery.
Capability in information architecture and data engineering/management processes, including data governance/modelling techniques and tools, processing methods and technologies.
Capability in data analytics and data science architectures, including business intelligence, machine learning pipelines and modelling, and associated technologies.

Desirable Skills
Systems Design, Capacity Management, Network Design, Service Acceptance, Systems Development Management
Programming Languages – Python, Scala, Spark variants
Business Modelling, Business Risk Management, User Experience Analysis, Emerging Technology Monitoring, IT Strategy and Planning

About bp
Our purpose is to deliver energy to the world, today and tomorrow.
For over 100 years, bp has focused on discovering, developing, and producing oil and gas in the nations where we operate. We are one of the few companies globally that can provide governments and customers with an integrated energy offering. Delivering our strategy sustainably is fundamental to achieving our ambition to be a net zero company by 2050 or sooner!

We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform crucial job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

Additional Information
We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status. Even though the job is advertised as full time, please contact the hiring manager or the recruiter as flexible working arrangements may be considered.

Travel Requirement: Negligible travel should be expected with this role.
Relocation Assistance: This role is eligible for relocation within country.
Remote Type: This position is a hybrid of office/remote working.

Legal Disclaimer: We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, socioeconomic status, neurodiversity/neurocognitive functioning, veteran status or disability status. Individuals with an accessibility need may request an adjustment/accommodation related to bp’s recruiting process (e.g., accessing the job application, completing required assessments, participating in telephone screenings or interviews, etc.).
If you would like to request an adjustment/accommodation related to the recruitment process, please contact us. If you are selected for a position and depending upon your role, your employment may be contingent upon adherence to local policy. This may include pre-placement drug screening, medical review of physical fitness for the role, and background checks.

Posted 2 days ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Role: Sr Data Scientist
Experience level: 5 to 7 years
Location: Chennai

Who are we?
Crayon Data is a leading provider of AI-led revenue acceleration solutions, headquartered in Singapore with a presence in India and the UAE. Founded in 2012, our mission is to simplify the world’s choices. Our flagship platform, maya.ai, helps enterprises in Banking, Fintech, and Travel unlock the value of their data to create hyper-personalized experiences and drive sustainable revenue streams. maya.ai is powered by four “as a Service” components – Data, Recommendation, Customer Experience, and Marketplace – that work in unison to deliver tangible business outcomes.

Why Crayon? Why now?
Crayon is transforming into an AI-first company, and every Crayon (that’s what we call ourselves!) is undergoing a journey of upskilling and expanding their capabilities in the AI space. We’re building an organization where AI is not a department—it’s a way of thinking. If you’re an engineer who’s passionate about building things, experimenting with models, and applying AI to solve real business problems, you’ll feel right at home in our AI squads. Our environment is designed to be a playground for AI practitioners, with access to meaningful data, real-world challenges, and the freedom to innovate. You won’t just be writing models—you’ll be shaping Crayon’s future.

Experience: 5+ years
Industry: Banking, Financial Services, and AI
Team: Data Science

Job Overview
We are seeking a Senior Data Scientist who thrives at the intersection of business and machine learning. In this role, you will develop and deploy data science models that directly drive outcomes for banking and financial institutions – from boosting sales and cross-sell opportunities to reducing attrition and churn. If you love solving real-world business problems using data and turning insights into action, this is the role for you.
What You’ll Do
Build and deploy machine learning and statistical models for key banking use cases: cross-sell, upsell, churn prediction, customer segmentation, lead scoring, etc.
Work with large-scale structured and unstructured datasets to derive meaningful insights.
Translate business problems into analytical solutions and guide clients through data-driven decision-making.
Collaborate closely with product, engineering, and consulting teams to deliver production-ready models.
Continuously monitor, tune, and improve model performance post-deployment.
Mentor junior data scientists and contribute to internal knowledge sharing.

Can you say “Yes, I have!” to the following?
5+ years of experience developing ML/AI solutions on large-scale datasets
Strong academic background – B.E/B.Tech/MS in Machine Learning, Computer Science, Statistics, Applied Math, or related fields; PhD is a bonus
Deep understanding of statistical models – hierarchical, stochastic, time series, survival, and econometric
Expertise in at least one deep learning framework – PyTorch, TensorFlow, or MxNet
Proficiency in Spark (Scala/PySpark) for large-scale data processing and designing scalable ML pipelines
Experience contributing to open-source ML libraries or publishing research is a strong plus.

Can you say “Yes, I will!” to the following?
Innovate with AI-first thinking
Champion scalable, production-grade ML solutions
Collaborate across teams to deliver outcomes aligned with business value
Stay curious, keep learning, and mentor junior team members

Brownie points for:
Alignment with The Crayon Box of Values – because while skills can be learned, values are who we are.
Passion for building reusable ML components and internal tools

Come play, build, and grow with us. Let’s co-create the future of AI at Crayon.
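The churn-prediction work described above follows a standard shape regardless of scale. As a rough, illustrative sketch only (in scikit-learn on synthetic data; the feature names and coefficients are invented, and a production version at this posting's scale would use a Spark MLlib pipeline over real customer tables):

```python
# Toy churn model: synthetic banking features -> logistic regression churn scores.
# All names and effect sizes are illustrative assumptions, not real data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000
tenure = rng.uniform(0, 10, n)   # years as a customer (toy feature)
txns = rng.poisson(20, n)        # monthly transactions (toy feature)

# Toy assumption: churn is likelier with short tenure and low activity.
logit = 1.5 - 0.4 * tenure - 0.05 * txns
churn = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([tenure, txns])
X_tr, X_te, y_tr, y_te = train_test_split(X, churn, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
probs = model.predict_proba(X_te)[:, 1]   # churn scores, e.g. for ranking retention outreach
```

The churn probabilities (not just the hard labels) are what typically drive the business action, since they let a retention team rank customers by risk.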

Posted 2 days ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Senior Data Scientist — Gen AI/ML Expert
Location: Hybrid — Gurugram
Company: Mechademy – Industrial Reliability & Predictive Analytics

About Mechademy
At Mechademy, we are redefining the future of reliability in rotating machinery with our flagship product, Turbomechanica. Built at the intersection of physics-based models, AI, and machine learning, Turbomechanica delivers prescriptive analytics that detect potential equipment issues before they escalate, maximizing uptime, extending asset life, and reducing operational risks for our industrial clients.

The Role
We are seeking a talented and driven Senior Data Scientist (AI/ML) with 3+ years of experience to join our AI team. You will play a critical role in building scalable ML pipelines, integrating cutting-edge language models, and developing autonomous agent-based systems that transform how predictive maintenance is done for industrial equipment. This is a highly technical and hands-on role, with a strong emphasis on real-world AI deployments — working directly with sensor data, time-series analytics, anomaly detection, distributed ML, and LLM-powered agentic workflows.

What Makes This Role Unique
Work on real-world industrial AI problems, combining physics-based models with modern ML/LLM systems.
Collaborate with domain experts, engineers, and product leaders to directly impact critical industrial operations.
Freedom to experiment with new tools, models, and techniques — with full ownership of your work.
Help shape our technical roadmap as we scale our AI-first predictive analytics platform.
Flexible hybrid work culture with high-impact visibility.

Key Responsibilities
Design & Develop ML Pipelines: Build scalable, production-grade ML pipelines for predictive maintenance, anomaly detection, and time-series analysis.
Distributed Model Training: Leverage distributed computing frameworks (e.g. Ray, Dask, Spark, Horovod) for large-scale model training.
LLM Integration & Optimization: Fine-tune, optimize, and deploy large language models (Llama, GPT, Mistral, Falcon, etc.) for applications like summarization, RAG (Retrieval-Augmented Generation), and knowledge extraction.
Agent-Based AI Pipelines: Build intelligent multi-agent systems capable of reasoning, planning, and executing complex tasks via tool usage, memory, and coordination.
End-to-End MLOps: Own the full ML lifecycle — from research and experimentation to deployment, monitoring, and production optimization.
Algorithm Development: Research, evaluate, and implement state-of-the-art ML/DL/statistical algorithms for real-world sensor data.
Collaborative Development: Work closely with cross-functional teams including software engineers, domain experts, product managers, and leadership.

Core Requirements
3+ years of professional experience in AI/ML, data science, or applied ML engineering.
Strong hands-on experience with modern LLMs (Llama, GPT series, Mistral, Falcon, etc.), fine-tuning, prompt engineering, and RAG techniques.
Familiarity with frameworks like LangChain, LlamaIndex, or equivalent for LLM application development.
Practical experience in agentic AI pipelines: tool use, sequential reasoning, and multi-agent orchestration.
Strong proficiency in Python (Pandas, NumPy, Scikit-learn) and at least one deep learning framework (TensorFlow, PyTorch, or JAX).
Exposure to distributed ML frameworks (Ray, Dask, Horovod, Spark ML, etc.).
Experience with containerization and orchestration (Docker, Kubernetes).
Strong problem-solving ability, ownership mindset, and ability to work in fast-paced startup environments.
Excellent written and verbal communication skills.

Bonus / Good to Have
Experience with time-series data, sensor data processing, and anomaly detection.
Familiarity with CI/CD pipelines and MLOps best practices.
Knowledge of cloud deployment, real-time system optimization, and industrial data security standards.
Prior open-source contributions or active GitHub projects.

What We Offer
Opportunity to work on cutting-edge technology transforming industrial AI.
Direct ownership, autonomy, and visibility into product impact.
Flexible hybrid work culture.
Professional development budget and continuous learning opportunities.
Collaborative, fast-moving, and growth-oriented team culture.
Health benefits and performance-linked rewards.
Potential for equity participation for high-impact contributors.

Note: Title and compensation will be aligned with the candidate’s experience and potential impact.

Posted 2 days ago

Apply

1.0 - 4.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Summary: We are looking for a passionate and detail-oriented ETL Developer with 1 to 4 years of experience in building, testing, and maintaining ETL processes. The ideal candidate should have a strong understanding of data warehousing concepts, ETL tools, and database technologies.

Key Responsibilities:
- Design, develop, and maintain ETL workflows and processes using [specify tools, e.g., Informatica / Talend / SSIS / Pentaho / custom ETL frameworks].
- Understand data requirements and translate them into technical specifications and ETL designs.
- Optimize and troubleshoot ETL processes for performance and scalability.
- Ensure data quality, integrity, and security across all ETL jobs.
- Perform data analysis and validation for business reporting.
- Collaborate with Data Engineers, DBAs, and Business Analysts to ensure smooth data operations.

Required Skills:
- 1-4 years of hands-on experience with ETL tools (e.g., Informatica, Talend, SSIS, Pentaho, or equivalent).
- Proficiency in SQL and experience working with RDBMS (e.g., SQL Server, Oracle, MySQL, PostgreSQL).
- Good understanding of data warehousing concepts and data modeling.
- Experience in handling large datasets and performance tuning of ETL jobs.
- Ability to work in Agile environments and participate in code reviews.
- Ability to learn and work with open-source languages like Node.js and AngularJS.

Preferred Skills (Good to Have):
- Experience with cloud ETL solutions (AWS Glue, Azure Data Factory, GCP Dataflow).
- Exposure to big data ecosystems (Hadoop, Spark).

Qualifications:
- Bachelor’s degree in Computer Science, Engineering, Information Technology, or a related field.
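The extract-transform-load cycle this posting centers on can be sketched in a few lines. A toy illustration in Python's stdlib, with sqlite3 standing in for the source system and warehouse (table and column names are invented for the example; the tools named above do the same work declaratively and at scale):

```python
# Toy ETL: extract rows from a source DB, transform them, load into a target table.
# sqlite3 stands in for a real source/warehouse; schema names are illustrative.
import sqlite3

src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE orders (id INTEGER, amount_cents INTEGER, country TEXT)")
src.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [(1, 1999, "in"), (2, 525, "IN"), (3, 99999, None)])

tgt = sqlite3.connect(":memory:")
tgt.execute("CREATE TABLE fact_orders (id INTEGER PRIMARY KEY, amount REAL, country TEXT)")

# Extract
rows = src.execute("SELECT id, amount_cents, country FROM orders").fetchall()

# Transform: cents -> currency units, normalise country codes, default missing values.
clean = [(i, cents / 100.0, (c or "UNKNOWN").upper()) for i, cents, c in rows]

# Load (idempotent upsert, so a re-run of the job does not duplicate rows)
tgt.executemany("INSERT OR REPLACE INTO fact_orders VALUES (?, ?, ?)", clean)
tgt.commit()
```

The idempotent load step matters in practice: ETL jobs get re-run after failures, and "INSERT OR REPLACE" (or a merge/upsert in the target warehouse) keeps reruns from double-counting.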

Posted 2 days ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description
Job Title – Senior Data Scientist
Candidate Specification – 10+ years, Notice Period – Immediate to 30 days, Hybrid.

Job Summary
We are seeking a highly skilled and experienced Senior Data Scientist to join our advanced analytics team. The ideal candidate will possess strong statistical and machine learning expertise, hands-on programming skills, and the ability to transform data into actionable business insights. This role also requires domain understanding to align data science efforts with business objectives in industries such as Oil & Gas, Pharma, Automotive, Desalination, and Industrial Equipment.

Primary Responsibilities
Lead the design, development, and deployment of advanced machine learning and statistical models
Analyze large, complex datasets to uncover trends, patterns, and actionable insights
Collaborate cross-functionally with business, engineering, and domain teams to define analytical problems and deliver impactful solutions
Apply deep understanding of business objectives to drive the application of data science in decision-making
Ensure the quality, integrity, and governance of data used for modeling and analytics
Guide junior data scientists and review code and models for scalability and accuracy

Core Competencies (Primary Skills)
Statistical Analysis & Mathematics: Strong foundation in probability, statistics, linear algebra, and calculus; experience with hypothesis testing, A/B testing, and regression models
Machine Learning & Deep Learning: Proficient in supervised/unsupervised learning and ensemble techniques; hands-on experience with neural networks, NLP, and computer vision
Business Acumen & Domain Knowledge: Proven ability to translate business needs into data science solutions; exposure to domains such as Oil & Gas, Pharma, Automotive, Desalination, and Industrial Pumps/Motors
Technical Proficiency:
Programming Languages: Python, R, SQL
Libraries & Tools: Pandas, NumPy, Scikit-learn, TensorFlow, PyTorch
Data Visualization: Matplotlib, Seaborn, Plotly, Tableau, Power BI
MLOps & Deployment: Docker, Kubernetes, MLflow, Airflow
Cloud & Big Data (Preferred): AWS, GCP, Azure, Spark, Hadoop, Hive, Presto

Secondary Skills (Preferred)
Generative AI: GPT-based models, fine-tuning, open-source LLMs, Agentic AI frameworks
Project Management: Agile methodologies, sprint planning, stakeholder communication

Skills Required
Role: Senior Data Scientist - Contract Hiring
Industry Type: IT/Computers - Software
Functional Area:
Required Education: Bachelor Degree
Employment Type: Full Time, Permanent
Key Skills: Deep Learning, Machine Learning, Python, Statistical Analysis

Other Information
Job Code: GO/JC/375/2025
Recruiter Name: Christopher
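The A/B-testing and hypothesis-testing experience this role asks for boils down to comparisons like the following: a two-proportion z-test on conversion rates, sketched here in pure Python (the counts are made up for illustration):

```python
# Two-proportion z-test for a toy A/B experiment.
# Conversion counts below are invented, purely to illustrate the calculation.
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) for H0: rate_a == rate_b."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)       # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF, Phi(x) = 0.5*(1 + erf(x/sqrt(2))).
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Variant B converts 6.5% vs. A's 5.0% on 2400 users each.
z, p = two_proportion_z(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
```

In practice a library routine (e.g. from scipy or statsmodels) would be used, but the arithmetic is exactly this.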

Posted 2 days ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

Remote

About The Opportunity
We operate at the forefront of India’s Artificial Intelligence & Enterprise Software Solutions sector, building production-grade, large-language-model (LLM) applications that power real-time search, recommendation, and decision-support systems for Fortune-500 clients. Our fully remote engineering pods in Mumbai and Pune transform cutting-edge GenAI research into scalable business value while nurturing a culture of ownership, learning, and rapid iteration.

Role & Responsibilities
Design and ship GenAI products that fuse Retrieval-Augmented Generation (RAG) with LangChain/LangGraph pipelines for chatbots, semantic search, and agentic workflows.
Implement vector-based retrieval by orchestrating FAISS-backed indexes, chunking strategies, and prompt-engineering playbooks that boost LLM precision and recall.
Prototype and harden ML models (classification, regression, clustering) in Scikit-learn or PyTorch, then productionise via micro-checkpointing (MCP) and CI/CD.
Instrument agentic behaviours that call external tools/APIs, manage memory, and evaluate reasoning traces for safety and ROI.
Collaborate cross-functionally with product, design, and MLOps to translate business stories into measurable AI metrics and A/B experiments.
Author technical docs and knowledge shares to uplevel team expertise in GenAI best practices and responsible-AI compliance.

Skills & Qualifications
Must-Have
3–7 yrs hands-on experience building LLM-powered applications with LangChain and/or LangGraph.
Proven mastery of FAISS (or Pinecone/Weaviate) for vector search, plus solid understanding of embeddings and cosine-similarity maths.
Strong foundation in machine-learning algorithms—classification, regression, and model evaluation—with production code in Scikit-learn or equivalent.
Ability to craft, debug, and optimise prompt-engineering and chunking strategies that minimise token cost while maximising answer quality.
Fluency in Python; familiarity with software-engineering best practices (Git, unit tests, Docker, MCP-style model checkpoints).
Excellent written and verbal communication skills to explain complex GenAI concepts to technical and non-technical stakeholders.

Preferred
Experience designing agentic frameworks (tool-calling, planning-&-execution loops, reflection) for autonomous task chains.
Prior contribution to open-source GenAI libraries or research publications.
Exposure to data-pipeline tooling such as Airflow, Spark, or cloud-agnostic serverless runtimes.

Skills: GenAI, LangChain, LLM, LangGraph, FAISS, MCP, Agentic, Machine Learning, Classification, Regression, ScikitLearn
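For context on the "embeddings and cosine-similarity maths" the posting mentions: over unit-normalised vectors, cosine similarity reduces to a plain inner product, which is what a FAISS IndexFlatIP computes at scale. A minimal NumPy sketch, with toy 4-dimensional vectors standing in for real embedding-model output:

```python
# Cosine-similarity retrieval over toy embeddings, using plain NumPy.
# After L2-normalisation, the inner product IS the cosine similarity,
# which is what FAISS's IndexFlatIP searches over at scale.
import numpy as np

def normalise(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Toy 4-d "document embeddings" (a real RAG pipeline gets these from an embedding model).
docs = normalise(np.array([
    [0.9, 0.1, 0.0, 0.0],   # doc 0
    [0.0, 1.0, 0.0, 0.0],   # doc 1
    [0.8, 0.2, 0.1, 0.0],   # doc 2
]))
query = normalise(np.array([1.0, 0.0, 0.0, 0.0]))

scores = docs @ query          # one cosine similarity per document
top = int(np.argmax(scores))   # best-matching chunk to feed the LLM in a RAG prompt
```

The chunking strategies mentioned above decide what each row of the matrix represents (a paragraph, a section, a sliding window); the retrieval step itself is just this ranked inner product.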

Posted 2 days ago

Apply

0 years

0 Lacs

Mumbai Metropolitan Region

Remote

About The Opportunity
We operate at the forefront of India’s Artificial Intelligence & Enterprise Software Solutions sector, building production-grade, large-language-model (LLM) applications that power real-time search, recommendation, and decision-support systems for Fortune-500 clients. Our fully remote engineering pods in Mumbai and Pune transform cutting-edge GenAI research into scalable business value while nurturing a culture of ownership, learning, and rapid iteration.

Role & Responsibilities
Design and ship GenAI products that fuse Retrieval-Augmented Generation (RAG) with LangChain/LangGraph pipelines for chatbots, semantic search, and agentic workflows.
Implement vector-based retrieval by orchestrating FAISS-backed indexes, chunking strategies, and prompt-engineering playbooks that boost LLM precision and recall.
Prototype and harden ML models (classification, regression, clustering) in Scikit-learn or PyTorch, then productionise via micro-checkpointing (MCP) and CI/CD.
Instrument agentic behaviours that call external tools/APIs, manage memory, and evaluate reasoning traces for safety and ROI.
Collaborate cross-functionally with product, design, and MLOps to translate business stories into measurable AI metrics and A/B experiments.
Author technical docs and knowledge shares to uplevel team expertise in GenAI best practices and responsible-AI compliance.

Skills & Qualifications
Must-Have
3–7 yrs hands-on experience building LLM-powered applications with LangChain and/or LangGraph.
Proven mastery of FAISS (or Pinecone/Weaviate) for vector search, plus solid understanding of embeddings and cosine-similarity maths.
Strong foundation in machine-learning algorithms—classification, regression, and model evaluation—with production code in Scikit-learn or equivalent.
Ability to craft, debug, and optimise prompt-engineering and chunking strategies that minimise token cost while maximising answer quality.
Fluency in Python; familiarity with software-engineering best practices (Git, unit tests, Docker, MCP-style model checkpoints).
Excellent written and verbal communication skills to explain complex GenAI concepts to technical and non-technical stakeholders.

Preferred
Experience designing agentic frameworks (tool-calling, planning-&-execution loops, reflection) for autonomous task chains.
Prior contribution to open-source GenAI libraries or research publications.
Exposure to data-pipeline tooling such as Airflow, Spark, or cloud-agnostic serverless runtimes.

Skills: GenAI, LangChain, LLM, LangGraph, FAISS, MCP, Agentic, Machine Learning, Classification, Regression, ScikitLearn

Posted 2 days ago

Apply

4.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Job Description
Skills & Qualifications
4+ years of experience as a Python developer with strong client communication skills and team leadership experience.
In-depth knowledge of Python frameworks such as Django, Flask, or FastAPI.
Strong expertise in cloud technologies (AWS, Azure, GCP).
Deep understanding of microservices architecture, multi-tenant architecture, and best practices in Python development.
Familiarity with serverless architecture and frameworks like AWS Lambda or Azure Functions.
Experience with deployment using Docker, Nginx, Gunicorn, Uvicorn, and Supervisor.
Hands-on experience with SQL and NoSQL databases such as PostgreSQL and AWS DynamoDB.
Proficiency with Object Relational Mappers (ORMs) like SQLAlchemy and Django ORM.
Demonstrated ability to handle multiple API integrations and write modular, reusable code.
Experience with frontend technologies such as React, Vue, HTML, CSS, and JavaScript to enhance full-stack development capabilities.
Strong knowledge of user authentication and authorization mechanisms across multiple systems and environments.
Familiarity with scalable application design principles and event-driven programming in Python.
Solid experience in unit testing, debugging, and code optimization.
Hands-on experience with modern software development methodologies, including Agile and Scrum.
Familiarity with container orchestration tools like Kubernetes.
Understanding of data processing frameworks such as Apache Kafka and Spark (good to have).
Experience with CI/CD pipelines and automation tools like Jenkins, GitLab CI, or CircleCI.
(ref:hirist.tech)

Posted 2 days ago

Apply

6.0 years

0 Lacs

India

On-site

It takes powerful technology to connect our brands and partners with an audience of hundreds of millions of people. Whether you’re looking to write mobile app code, engineer the servers behind our massive ad tech stacks, or develop algorithms to help us process trillions of data points a day, what you do here will have a huge impact on our business—and the world.

About Us
Yahoo delivers delightful, inspiring and entertaining daily-habit experiences to over half a billion people worldwide. Our products include the Yahoo Homepage (www.yahoo.com), AOL, as well as Comscore #1 sites in News, Sports and Finance. Yahoo in Three Words: Inform, connect, and entertain.

The Enterprise Application team is responsible for managing the financial systems along with other custom homegrown applications which cater to the needs of the finance teams. We build and maintain applications to ensure Yahoo is able to serve its customers and finance teams, using Oracle R12 and a combination of open source software and internal tools. We encourage new ideas and continuously experiment and evaluate new technologies to assimilate them into our infrastructure. Our team structure encourages trust, learning from one another, having fun, and attracting people who are passionate about what they do.

About You
You are a self-starter and problem solver who is passionate about velocity, developer productivity and product quality. You are an aggressive troubleshooter who can multitask on problems of varying difficulty, priority and time-sensitivity and get things done. You are smart, self-driven, and spend time trying to figure out how something works, not stopping with knowing just what it does. You like to relentlessly automate everything and anything at scale.
Job Responsibilities/The Role/The Job
This position is for a Production Engineer II with extensive experience in the support and administration of complex applications/systems deployment, infrastructure upgrades, software upgrades, patching and ongoing end-to-end support of mission-critical applications. Some of these applications are homegrown custom applications facing Yahoo’s internal customers, and others are corporate sites facing Yahoo’s external customers. This position will be responsible for defining, implementing and maintaining the standard operating procedures for the operations team within the Corporate Applications group. This position will partner closely with relevant business process owners, application developers in Bangalore and Sunnyvale and other Corporate Applications team members to deliver global solutions with an objective of optimizing processes. The individual must have solid experience and understanding of system, database and integration technologies and be responsible for 24/7/365 availability, scalability and incident response.

Responsibilities include:
Understand existing project design, monitoring setup, and automation.
Provide expert advice and direction in applications and database administration and configuration technologies, including host configuration, monitoring, change and release management, performance tuning, hardware and capacity planning, and upgrades.
Design tools for managing the infrastructure and write clean, re-usable, simple code.
Troubleshoot, resolve, and document production issues and escalate as required.
Proactively maintain and develop all Linux infrastructure technology to maintain a 24x7x365 uptime service.
Develop and implement automation tools for managing production systems.
Be part of a global on-call (12/7) rotation.
Be responsible for database design, performance, and monitoring of various versions of MySQL or SQL Server databases, database tools, and services.
Diagnose and resolve moderate to advanced production issues.
Develop and deploy platform infrastructure tools such as monitoring, alerting, and orchestration.
Build independent web-based tools, microservices, and solutions, writing reusable, testable, and efficient code.
Design and develop a business operations model for large applications to provide support for business needs.
Handle difficult situations and make decisions with a sense of urgency.
Monitor and report metrics related to performance, availability, and other SLA measures.
Develop, implement, and maintain change control and testing processes for modifications to all applications environments.
Design and implement redundant systems, policies, and procedures for disaster recovery and data archiving to ensure effective protection and integrity of data assets.
Work with application development staff to harden, enhance, document, and generally improve the operability of our systems.

Minimum Job Qualifications
Bachelor's degree in Computer Science, Engineering, Information Systems or a similar relevant degree.
6 to 8 years of experience in Linux systems, web applications, distributed computing, and computer networking.
Hands-on experience with various DevOps tools such as Git, Jenkins, Ansible, Terraform, Docker, Jira, Slack, Confluence, Nagios, and Kubernetes.
Experience in container orchestration services, especially Kubernetes.
Fair understanding of major public cloud service providers, like Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform, and private clouds like OpenStack.
Expert in Python, with knowledge of at least one Python web framework such as Django or Flask.
Basic understanding of front-end technologies, such as JavaScript, HTML5, and CSS3.
Understanding of databases, relational and non-relational, their data models and performance trade-offs; hands-on experience in MySQL is preferred.
In-depth knowledge of Linux: RedHat, CentOS, etc. Linux certifications (RHCT, RHCE, and LPIC) will be considered an advantage.
Excellent communication, interpersonal, and team-working skills.
Good coding skills in Bash, Python, and Perl.
Experience in developing web applications and familiarity with at least one framework (Django, Flask) is desired.
Basic web development skills using HTML5 and CSS are mandatory.
Strong desire to learn and understand new concepts, technologies, and systems as part of day-to-day work.
Solid knowledge of principles, concepts, and theories of virtual infrastructure and container platform orchestration.
Ability to apply independent judgment to develop creative, practical, and repeatable solutions.
Knowledge of Hadoop, HBase, and Spark is preferred.
Working knowledge of HTTP, DNS, and DHCP is preferred.

Important notes for your attention
Applications: All applicants must apply for Yahoo openings directly with Yahoo. We do not authorize any external agencies in India to handle candidates’ applications. No agency nor individual may charge candidates for any efforts they make on an applicant’s behalf in the hiring process. Our internal recruiters will reach out to you directly to discuss the next steps if we determine that the role is a good fit for you.
Selected candidates will go through formal interviews and assessments arranged by Yahoo direct. Offer Distributions: Our electronic offer letter and documents will be issued through our system for e-signatures, not via individual emails. Yahoo is proud to be an equal opportunity workplace. All qualified applicants will receive consideration for employment without regard to, and will not be discriminated against based on age, race, gender, color, religion, national origin, sexual orientation, gender identity, veteran status, disability or any other protected category. Yahoo will consider for employment qualified applicants with criminal histories in a manner consistent with applicable law. Yahoo is dedicated to providing an accessible environment for all candidates during the application process and for employees during their employment. If you need accessibility assistance and/or a reasonable accommodation due to a disability, please submit a request via the Accommodation Request Form (www.yahooinc.com/careers/contact-us.html) or call +1.866.772.3182. Requests and calls received for non-disability related issues, such as following up on an application, will not receive a response. Yahoo has a high degree of flexibility around employee location and hybrid working. In fact, our flexible-hybrid approach to work is one of the things our employees rave about. Most roles don’t require specific regular patterns of in-person office attendance. If you join Yahoo, you may be asked to attend (or travel to attend) on-site work sessions, team-building, or other in-person events. When these occur, you’ll be given notice to make arrangements. If you’re curious about how this factors into this role, please discuss with the recruiter. Currently work for Yahoo? Please apply on our internal career site. Show more Show less

Posted 2 days ago

Apply

3.0 - 5.0 years

15 - 30 Lacs

Bengaluru

Work from Office

Naukri logo

Position summary: We are seeking a Senior Software Development Engineer – Data Engineering with 3-5 years of experience to design, develop, and optimize data pipelines and analytics workflows using Snowflake, Databricks, and Apache Spark. The ideal candidate will have a strong background in big data processing, cloud data platforms, and performance optimization to enable scalable data-driven solutions. Key Responsibilities: Work with cloud-based data solutions (Azure, AWS, GCP). Implement data modeling and warehousing solutions. Develop and maintain data pipelines for efficient data extraction, transformation, and loading (ETL) processes. Design and optimize data storage solutions, including data warehouses and data lakes. Ensure data quality and integrity through data validation, cleansing, and error handling. Collaborate with data analysts, data architects, and software engineers to understand data requirements and deliver relevant data sets (e.g., for business intelligence). Implement data security measures and access controls to protect sensitive information. Monitor and troubleshoot issues in data pipelines, notebooks, and SQL queries to ensure seamless data processing. Develop and maintain Power BI dashboards and reports. Work with DAX and Power Query to manipulate and transform data. Basic Qualifications: Bachelor’s or Master’s degree in Computer Science or Data Science. 3-5 years of experience in data engineering, big data processing, and cloud-based data platforms. Proficient in SQL, Python, or Scala for data manipulation and processing. Proficient in developing data pipelines using Azure Synapse, Azure Data Factory, and Microsoft Fabric. Experience with Apache Spark, Databricks, and Snowflake is highly beneficial for handling big data and cloud-based analytics solutions. Preferred Qualifications: Knowledge of streaming data processing (Apache Kafka, Flink, Kinesis, Pub/Sub). Experience in BI and analytics tools (Tableau, Power BI, Looker). 
Familiarity with data observability tools (Monte Carlo, Great Expectations). Contributions to open-source data engineering projects.
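The validation, cleansing, and error-handling responsibilities described in the posting above can be sketched in a few lines. This is a minimal, library-free illustration only; the record shape, field names, and rounding rule are assumptions for the example, and a production pipeline would apply the same pattern inside Spark or Databricks.

```python
# Minimal sketch of the validate-cleanse-load pattern: keep good records,
# route bad ones to an error path instead of failing the whole batch.

def cleanse(records):
    """Drop records missing required fields; normalize the rest."""
    required = {"order_id", "amount"}
    clean, rejected = [], []
    for rec in records:
        if not required.issubset(rec) or rec["amount"] is None:
            rejected.append(rec)  # e.g. write to an error table / dead-letter queue
            continue
        clean.append({
            "order_id": str(rec["order_id"]).strip(),
            "amount": round(float(rec["amount"]), 2),
        })
    return clean, rejected

raw = [
    {"order_id": " A-1 ", "amount": "19.999"},
    {"order_id": "A-2"},                  # missing amount -> rejected
    {"order_id": "A-3", "amount": None},  # null amount    -> rejected
]
clean, rejected = cleanse(raw)
```

Keeping the rejects alongside the clean rows is what makes data-quality monitoring possible: the rejected count per batch is the metric a pipeline alert would watch.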

Posted 2 days ago

Apply

3.0 - 7.0 years

6 - 8 Lacs

Pune, Chennai

Work from Office

Naukri logo

Responsible for application and solution development. Translate complex functional and technical requirements into detailed design. Document detailed application specifications; translate technical requirements into programmed application modules. Identify and resolve application-code-related issues. Write JUnit test cases and perform unit testing. Be passionate about learning and adapting to new technologies and challenges. Work with minimal guidance from team leads to achieve timely delivery. Follow the development processes and be able to work in an Agile-Scrum environment. Ensure that the quality of the application meets expectations. Education: Engineering Degree BE / BTech. Technical certification in multiple technologies is desirable. Mandatory Skills: Working experience with Java REST Web APIs + Spark. Good knowledge of code quality tools like SONAR, Checkstyle, PMD, etc. Good knowledge of developing applications using Java/J2EE, the Spring framework, RESTful Web Services, and Spark. Desired Skills: Proficiency in Java/Scala will be a plus

Posted 2 days ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Job Description JOB DESCRIPTION We are seeking a passionate and experienced Senior Manager of Business Intelligence & Data Engineering to lead and develop a high-performing team of engineers. The scope of this role will be broad and multi-tiered, covering all aspects of the Business Intelligence (BI) ecosystem - designing, building, and maintaining robust data pipelines, enabling advanced analytics, and delivering actionable insights through BI and data visualization tools. You will play a critical role in fostering a collaborative and innovative team environment, while driving continuous improvement across all aspects of the engineering process. Also - key to the success of this role will be an assertiveness and willingness to engage directly with stakeholders, developing relationships while acquiring deep understanding of functional domains (business processes, etc.). Key Responsibilities Lead the design and development of scalable, high-performance data architectures on AWS, leveraging services such as S3, EMR, Glue, Redshift, Lambda, and Kinesis. Architect and manage Data Lakes for handling structured, semi-structured, and unstructured data. Manage Snowflake for cloud data warehousing, ensuring seamless data integration, optimization of queries, and advanced analytics. Implement Apache Iceberg in Data Lakes for managing large-scale datasets with ACID compliance, schema evolution, and versioning. Drive Data Modeling and Productization: Design and implement data models (e.g., star/snowflake schemas) to support analytical use cases, productizing datasets for business consumption and downstream analytics. Work with business stakeholders to create actionable insights using enterprise BI platforms (MicroStrategy, Tableau, Power BI, etc.). Build data models and dashboards that drive key business decisions, ensuring that data is easily accessible and interpretable. 
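The star/snowflake-schema data modeling mentioned above boils down to fact tables joined to conformed dimensions. Here is a minimal sketch in plain Python; the table contents and column names are invented for illustration, and in practice this rollup would be a SQL query against Redshift or Snowflake rather than a Python loop.

```python
# Star-schema shape: a fact table holds measures keyed to dimension rows.

dim_date = {
    20240101: {"date": "2024-01-01", "quarter": "Q1"},
    20240401: {"date": "2024-04-01", "quarter": "Q2"},
}
fact_sales = [
    {"date_key": 20240101, "units": 3, "revenue": 30.0},
    {"date_key": 20240101, "units": 1, "revenue": 10.0},
    {"date_key": 20240401, "units": 2, "revenue": 25.0},
]

# Aggregate revenue by quarter - the kind of rollup a BI dashboard issues as SQL
# (SELECT d.quarter, SUM(f.revenue) ... GROUP BY d.quarter).
revenue_by_quarter = {}
for row in fact_sales:
    q = dim_date[row["date_key"]]["quarter"]
    revenue_by_quarter[q] = revenue_by_quarter.get(q, 0.0) + row["revenue"]
```

Productizing a dataset, in the sense used above, largely means publishing fact/dimension tables like these with stable keys and documented grain so that downstream BI tools can query them safely.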
Ensure that data pipelines, architectures, and systems are thoroughly documented and follow coding and design best practices. Promote knowledge-sharing across the team to maintain high standards for quality and scalability. Call upon breadth of experience spanning many technologies and platforms to help shape architectural direction Assist end-users in optimizing their analytic usage, visualizing data in a more efficient and actionable fashion, beyond data dumps and grid reports Promote ongoing adoption of business intelligence content through an emphasis on user experience, iterative design refinement and regular training Implement Observability and Error Handling: Build frameworks for operational monitoring, error handling, and data quality assurance to ensure high reliability and accountability across the data ecosystem. Stay Ahead of Industry Trends: Keep abreast of the latest techniques, methods, and technologies in data engineering and BI, ensuring the team adopts cutting-edge tools and practices to maintain a competitive edge. Qualifications 10+ years of experience in Data Engineering or a related field, with a proven track record of designing, implementing, and maintaining large-scale distributed data systems 5+ years of work experience in BI/data visualization/analytics 5+ years of people management experience with experience managing global teams Track record of solving business challenges through technical solutions. Be able to articulate the context behind projects and their impact. 
Knowledge of CI/CD tools and practices, particularly in data engineering environments Proficiency in cloud-based data warehousing, data modeling, and query optimization Experience with AWS services (e.g., Lambda, Redshift, Athena, Glue, S3) and managing cloud infrastructure Strong experience in Data Lake architectures on AWS, using services like S3, Glue, EMR, and data management platforms like Apache Iceberg Familiarity with containerization tools like Docker and Kubernetes for managing cloud-based services Hands-on experience with Apache Spark (Scala & PySpark) for distributed data processing and real-time analytics Expertise in SQL for querying relational and NoSQL databases, and experience with database design and optimization Proficiency in creating interactive dashboards and reports using drag-and-drop interfaces in enterprise BI platforms, with a focus on user-friendly design for both technical and non-technical stakeholders Experience in microservices-based architectures, messaging, APIs, and distributed systems. 
Familiarity with embedding BI content into applications or websites using APIs (e.g., Power BI Embedded, MicroStrategy’s HyperIntelligence for zero-code embedding, Tableau’s robust APIs) Able to work in a collaborative environment to support rapid development and delivery of results Exhibit an understanding of business problems and translate those into creative, innovative and practical solutions that deliver high quality services to the business Strong communication and presentation skills, with experience delivering insights to both technical and executive audiences Willing to wear many hats and be flexible with a varying nature of tasks and responsibilities BONUS POINTS Understanding of data science and machine learning concepts, with the ability to collaborate with data science teams Knowledge of Infrastructure as Code (IaC) practices, using tools like Terraform to provision and manage cloud infrastructure (e.g., AWS) for data pipelines and BI systems Familiarity with data governance, security, and compliance practices in cloud environments Domain understanding of Apparel, Retail, Manufacturing, Supply Chain or Logistics About Us Fanatics is building a leading global digital sports platform. We ignite the passions of global sports fans and maximize the presence and reach for our hundreds of sports partners globally by offering products and services across Fanatics Commerce, Fanatics Collectibles, and Fanatics Betting & Gaming, allowing sports fans to Buy, Collect, and Bet. Through the Fanatics platform, sports fans can buy licensed fan gear, jerseys, lifestyle and streetwear products, headwear, and hardgoods; collect physical and digital trading cards, sports memorabilia, and other digital assets; and bet as the company builds its Sportsbook and iGaming platform. 
Fanatics has an established database of over 100 million global sports fans; a global partner network with approximately 900 sports properties, including major national and international professional sports leagues, players associations, teams, colleges, college conferences and retail partners, 2,500 athletes and celebrities, and 200 exclusive athletes; and over 2,000 retail locations, including its Lids retail stores. Our more than 22,000 employees are committed to relentlessly enhancing the fan experience and delighting sports fans globally. About The Team Fanatics Commerce is a leading designer, manufacturer, and seller of licensed fan gear, jerseys, lifestyle and streetwear products, headwear, and hardgoods. It operates a vertically-integrated platform of digital and physical capabilities for leading sports leagues, teams, colleges, and associations globally – as well as its flagship site, www.fanatics.com. Fanatics Commerce has a broad range of online, sports venue, and vertical apparel partnerships worldwide, including comprehensive partnerships with leading leagues, teams, colleges, and sports organizations across the world—including the NFL, NBA, MLB, NHL, MLS, Formula 1, and Australian Football League (AFL); the Dallas Cowboys, Golden State Warriors, Paris Saint-Germain, Manchester United, Chelsea FC, and Tokyo Giants; the University of Notre Dame, University of Alabama, and University of Texas; the International Olympic Committee (IOC), England Rugby, and the Union of European Football Associations (UEFA). At Fanatics Commerce, we infuse our BOLD Leadership Principles in everything we do: Build Championship Teams Obsessed with Fans Limitless Entrepreneurial Spirit Determined and Relentless Mindset Show more Show less

Posted 2 days ago

Apply

1.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Linkedin logo

Equifax is where you can power your possible. If you want to achieve your true potential, chart new paths, develop new skills, collaborate with bright minds, and make a meaningful impact, we want to hear from you. What you’ll do This position is at the forefront of Equifax's post cloud transformation, focusing on developing and enhancing Java applications within the Google Cloud Platform (GCP) environment. The ideal candidate will combine strong Java development skills with cloud expertise to drive innovation and improve existing systems Key Responsibilities Design, develop, test, deploy, maintain, and improve software applications on GCP Enhance existing applications and contribute to new initiatives leveraging cloud-native technologies Implement best practices in serverless computing, microservices, and cloud architecture Collaborate with cross-functional teams to translate functional and technical requirements into detailed architecture and design Participate in code reviews and maintain high development and security standards Provide technical oversight and direction for Java and GCP implementations What Experience You Need Bachelor's or Master's degree in Computer Science or equivalent experience 1+ years of IT experience with a strong focus on Java development Experience in modern Java development and cloud computing concepts Familiarity with agile methodologies and test-driven development (TDD) Strong understanding of software development best practices, including continuous integration and automated testing What could set you apart Experience with GCP or other cloud platforms (AWS, Azure) Active cloud certifications (e.g., Google Cloud Professional certifications) Experience with big data technologies (Spark, Kafka, Hadoop) and NoSQL databases Knowledge of containerization and orchestration tools (Docker, Kubernetes) Familiarity with financial services industry Experience with open-source frameworks (Spring, Ruby, Apache Struts, etc.) 
Experience with Python We offer a hybrid work setting, comprehensive compensation and healthcare packages, attractive paid time off, and organizational growth potential through our online learning platform with guided career tracks. Are you ready to power your possible? Apply today, and get started on a path toward an exciting new career at Equifax, where you can make a difference! Who is Equifax? At Equifax, we believe knowledge drives progress. As a global data, analytics and technology company, we play an essential role in the global economy by helping employers, employees, financial institutions and government agencies make critical decisions with greater confidence. We work to help create seamless and positive experiences during life’s pivotal moments: applying for jobs or a mortgage, financing an education or buying a car. Our impact is real and to accomplish our goals we focus on nurturing our people for career advancement and their learning and development, supporting our next generation of leaders, maintaining an inclusive and diverse work environment, and regularly engaging and recognizing our employees. Regardless of location or role, the individual and collective work of our employees makes a difference and we are looking for talented team players to join us as we help people live their financial best. Equifax is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran. Show more Show less

Posted 2 days ago

Apply

2.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

Remote

Linkedin logo

Job Title: Data Support Specialist Location: Remote Candidate Expectation: Candidate should have 2+ years of experience in data support. Job Description: Candidate should have 2+ years of experience as a data or quality assurance analyst, ideally working with SQL, PySpark, and/or Python. Should have strong attention to detail and be a methodical problem-solver. Should have excellent oral and written communication skills, with the ability to interact effectively with internal teams across time zones and cultures. Should strive to make tasks as efficient as possible. Should be enthusiastic about making a big impact at a rapidly growing company. Experience working with web-scraped data, transaction data, or email data is a plus, though not required. Skills Required Role: Data Support Specialist - Remote Industry Type: IT/Computers - Software Functional Area: Required Education: B.E. Employment Type: Full Time, Permanent Key Skills: DATA SUPPORT, PYSPARK, PYTHON Other Information Job Code: GO/JC/166/2025 Recruiter Name: Devikala D
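As a rough illustration of the data-quality checks this kind of role involves, here is a small Python sketch. The column names and rules (non-empty URL, non-negative price, unique id) are assumptions chosen to mirror the web-scraped and transaction data the posting mentions, not an actual checklist from the employer.

```python
# Run a batch of records through simple quality checks and count failures.

def qa_report(rows):
    """Return a dict of failed-check counts for a batch of scraped records."""
    failures = {"null_url": 0, "negative_price": 0, "duplicate_id": 0}
    seen = set()
    for r in rows:
        if not r.get("url"):
            failures["null_url"] += 1
        if r.get("price", 0) < 0:
            failures["negative_price"] += 1
        if r["id"] in seen:
            failures["duplicate_id"] += 1
        seen.add(r["id"])
    return failures

batch = [
    {"id": 1, "url": "https://example.com/a", "price": 9.5},
    {"id": 2, "url": "", "price": -1.0},
    {"id": 2, "url": "https://example.com/b", "price": 3.0},
]
report = qa_report(batch)
```

The same checks translate directly to SQL (`WHERE url IS NULL`, `GROUP BY id HAVING COUNT(*) > 1`) or PySpark filters when the batches are large.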

Posted 2 days ago

Apply

7.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site

Linkedin logo

Department: Machining and Toolroom (Multiple roles) Job Type: Permanent Work Location: Hosur Interview Location: Coimbatore (Face-to-face round after shortlisting) Education: Any Degree, Diploma, B.E/B.Tech, NTTF, GTTC 1. Conventional Milling Operator Eligibility: 5–7 years of experience Operation of Vertical Turret Milling Machines, Geared Conventional Lathes, Surface Grinding (Vertical & Horizontal), and Radial Drilling Machines 2. Surface Grinding Operator Eligibility: 3–5 years of experience Operation of Surface Grinding (Vertical & Horizontal), Radial Drilling, Tapping, Tool & Cutter Grinding, Pedestal and Drill Bit Grinding Machines 3. VMC Operator (Vertical Machining Centre) Eligibility: 5–7 years of experience Operation of CNC Milling Machines (3-Axis & 5-Axis) 4. WEDM & SEDM Operator Eligibility: 5–7 years of experience Operation of Wire EDM (WEDM) and Spark EDM (SEDM) Machines 5. CAM Programmer Eligibility: BE with 4 years or Diploma with 7 years of CAM programming experience Experience in 3/5 Axis CNC Vertical Machines Familiarity with Heidenhain, Siemens, and Fanuc controls 6. Inspection Shift Leader Eligibility: 4–5 years of experience Hands-on experience with Coordinate Measuring Machines (CMM) Show more Show less

Posted 2 days ago

Apply

9.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Linkedin logo

Job Title: Big Data Location: Bangalore/Mumbai/Pune/Chennai Candidate Specification: Candidate should have 9+ years in Big Data, Java with Scala or Hadoop with Scala. Job Description: Design, develop, and maintain scalable big data architectures and systems. Implement data processing pipelines using technologies such as Hadoop, Spark, and Kafka. Optimize data storage and retrieval processes to ensure high performance and reliability. Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions. Perform data modeling, mining, and production processes to support business needs. Ensure data quality, governance, and security across all data systems. Stay updated with the latest trends and advancements in big data technologies. Experience with real-time data processing and stream analytics. Knowledge of advanced analytics and data visualization tools. Knowledge of DevOps practices and tools for continuous integration and deployment. Experience in managing big data projects and leading technical teams. Skills Required Role: Big Data - Manager Industry Type: IT/Computers - Software Functional Area: Required Education: B.E. Employment Type: Full Time, Permanent Key Skills: BIG DATA, HADOOP, JAVA, SCALA Other Information Job Code: GO/JC/224/2025 Recruiter Name: Devikala D

Posted 2 days ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Linkedin logo

Job Title: Data Architect Location: Chennai Candidate Specification: Candidates must have 5+ years of experience as a Data Architect, Data/System Analyst, or in a similar role. Job Description: Candidates should have an in-depth understanding of database structure principles and experience gathering and analysing system requirements. Candidates should have experience with data mining and segmentation techniques, and good expertise in SQL and Oracle. Candidates should have knowledge of DB tuning, database structure systems, and data mining. Candidates should have experience with business process mapping/documentation, developing use cases, documenting system architecture, and creating concept and context diagrams. Candidates should have hands-on experience with Enterprise Architect or a similar tool for UML modelling; knowledge of big data tools like Kafka and Spark is nice to have. Skills Required Role: Data Architect - Chennai Industry Type: IT/Computers - Software Functional Area: Required Education: Employment Type: Full Time, Permanent Key Skills: DATA ARCHITECT, DB TUNING, KAFKA, SQL, ORACLE Other Information Job Code: GO/JC/049/2025 Recruiter Name:

Posted 2 days ago

Apply

3.0 - 8.0 years

7 - 16 Lacs

Gurugram, Bengaluru

Hybrid

Naukri logo

Role & responsibilities: Strong development experience in Snowflake, Cloud (AWS, GCP), Scala, Python, Spark, Big Data, and SQL. Work closely with stakeholders, including product managers and designers, to align technical solutions with business goals. Maintain code quality through reviews and make architectural decisions that impact scalability and performance. Perform root cause analysis for any critical defects; address technical challenges, optimize workflows, and resolve issues efficiently. Expert in Agile and Waterfall program/project implementation. Manage strategic and tactical relationships with program stakeholders. Successfully execute projects within strict deadlines while managing intense pressure. Good understanding of the SDLC (Software Development Life Cycle). Identify potential technical risks and implement mitigation strategies. Excellent verbal, written, and interpersonal communication abilities, coupled with strong problem-solving, facilitation, and analytical skills. Cloud management activities: good understanding of cloud architecture/containerization and application management on AWS and Kubernetes, with an in-depth understanding of CI/CD pipelines and review tools like Jenkins, Bamboo/DevOps. Skilled at adapting to evolving work conditions and fast-paced challenges. Primary skills: Snowflake, Cloud (AWS/GCP), Scala, Python, Spark, Big Data, and SQL.

Posted 2 days ago

Apply

3.0 - 5.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Linkedin logo

Data Science @Dream Sports: Data Science at Dream Sports comprises seasoned data scientists striving to drive value with data across all our initiatives. The team has developed state-of-the-art solutions for forecasting and optimization, data-driven risk prevention systems, Causal Inference and Recommender Systems to enhance product and user experience. We are a team of Machine Learning Scientists and Research Scientists with a portfolio of projects ranging from production ML systems that we conceptualize, build, support and innovate upon, to longer term research projects with potential game-changing impact for Dream Sports. This is a unique opportunity for highly motivated candidates to work on real-world applications of machine learning in the sports industry, with access to state-of-the-art resources, infrastructure, and data from multiple sources streaming from 250 million users and contributing to our collaboration with the Columbia Dream Sports AI Innovation Center. Your Role: Executing clean experiments rigorously against pertinent performance guardrails and analysing performance metrics to infer actionable findings Developing and maintaining services with proactive monitoring, incorporating industry best practices for optimal service quality and risk mitigation Breaking down complex projects into actionable tasks that adhere to set management practices and ensure stakeholder visibility Managing the end-to-end lifecycle of large-scale ML projects from data preparation, model training, deployment, and monitoring to upgrading of experiments Leveraging a strong foundation in ML, statistics, and deep learning to adeptly implement research-backed techniques for model development Staying abreast of the best ML practices and developments of the industry to mentor and guide team members Qualifiers: 3-5 years of experience in building, deploying and maintaining ML solutions Extensive experience with Python, SQL, TensorFlow/PyTorch and at least one distributed data 
framework (Spark/Ray/Dask). Working knowledge of machine learning, probability & statistics, and deep learning fundamentals. Experience in designing end-to-end machine learning systems that work at scale. About Dream Sports: Dream Sports is India’s leading sports technology company with 250 million users, housing brands such as Dream11, the world’s largest fantasy sports platform, FanCode, a premier sports content & commerce platform, and DreamSetGo, a sports experiences platform. Dream Sports is based in Mumbai and has a workforce of close to 1,000 ‘Sportans’. Founded in 2008 by Harsh Jain and Bhavit Sheth, Dream Sports’ vision is to ‘Make Sports Better’ for fans through the confluence of sports and technology. For more information: https://dreamsports.group/ Dream11 is the world’s largest fantasy sports platform with 230 million users playing fantasy cricket, football, basketball & hockey on it. Dream11 is the flagship brand of Dream Sports, India’s leading Sports Technology company, and has partnerships with several national & international sports bodies and cricketers.
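The "performance guardrails" mentioned in the role above can be made concrete with a small promotion check: a candidate model ships only if it beats the baseline on the primary metric without degrading guardrail metrics beyond an allowed ratio. The metric names and thresholds below are illustrative assumptions, not Dream Sports practice.

```python
# Gate an experiment: improve the primary metric, hold the guardrails.

def passes_guardrails(baseline, candidate, primary="auc",
                      guardrails=(("latency_ms", 1.10), ("error_rate", 1.00))):
    """Candidate must beat the baseline on the primary metric and keep each
    guardrail metric within the allowed relative ratio of the baseline."""
    if candidate[primary] <= baseline[primary]:
        return False
    return all(candidate[m] <= baseline[m] * max_ratio
               for m, max_ratio in guardrails)

baseline  = {"auc": 0.81, "latency_ms": 40.0, "error_rate": 0.02}
candidate = {"auc": 0.83, "latency_ms": 42.0, "error_rate": 0.02}
ok = passes_guardrails(baseline, candidate)
```

Here the candidate is promotable: AUC improves, latency stays within a 10% budget, and the error rate does not regress. In a real experiment platform these comparisons would also carry statistical significance tests, not just point estimates.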

Posted 2 days ago

Apply

0 years

0 Lacs

India

Remote

Linkedin logo

About the Role We at KrtrimaIQ Cognitive Solutions are looking for a highly experienced and results-driven Senior Data Engineer to design and develop scalable, high-performance data pipelines and solutions in a cloud-native, big data environment. This is a fully remote role, ideal for professionals with deep hands-on experience in PySpark, Google Cloud Platform (GCP), and DataProc. Key Responsibilities: Design, build, and maintain scalable ETL/ELT data pipelines using PySpark Develop and optimize data workflows leveraging GCP DataProc, BigQuery, Cloud Storage, and Cloud Composer Ingest, transform, and integrate structured and unstructured data from diverse sources Collaborate with Data Scientists, Analysts, and cross-functional teams to deliver reliable, real-time data solutions Ensure performance, scalability, and reliability of data platforms Implement best practices for data governance, security, and quality Must-Have Skills: Strong hands-on experience in PySpark and the Apache Spark ecosystem Proficiency in working with GCP services, especially DataProc, BigQuery, Cloud Storage, and Cloud Composer Experience with distributed data processing, ETL design, and data warehouse architecture Strong SQL skills and familiarity with NoSQL data stores Knowledge of CI/CD pipelines, version control (Git), and code review processes Ability to work independently in a remote setup with strong communication skills Preferred Skills: Exposure to real-time data processing tools like Kafka or Pub/Sub Familiarity with Airflow, Terraform, or other orchestration/automation tools Experience with data quality frameworks and observability tools Why Join Us? 100% Remote – Work from anywhere High-impact role in a fast-growing AI-driven company Opportunity to work on enterprise-grade, large-scale data systems Collaborative and flexible work culture #SeniorDataEngineer #RemoteJobs #PySpark #GCPJobs #DataProc #BigQuery #CloudDataEngineer #DataEngineeringJobs #ETLPipelines #ApacheSpark 
#BigDataJobs #GoogleCloudJobs #CloudDataEngineer #HiringNow #DataPipelineEngineer #WorkFromHome #KrtrimaIQ #AIDataEngineering #DataJobsIndia
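As a rough sketch of the extract-transform-load flow the posting above describes, here is the same three-stage shape in stdlib Python generators. In the role itself this would be a PySpark job submitted to a DataProc cluster writing to BigQuery; the field names and filter rule here are invented for illustration.

```python
# ETL as three composable stages: parse, filter/project, materialize.
import json

def extract(lines):
    """Parse newline-delimited JSON, skipping malformed records."""
    for line in lines:
        try:
            yield json.loads(line)
        except json.JSONDecodeError:
            continue  # in production: count and route to a dead-letter sink

def transform(events):
    """Keep completed events and project the fields downstream tables need."""
    for e in events:
        if e.get("status") == "complete":
            yield {"user": e["user"], "value": e["value"] * 2}

def load(rows):
    """Stand-in for a warehouse write; here it just materializes the rows."""
    return list(rows)

ndjson = ['{"user": "u1", "status": "complete", "value": 5}',
          'not-json',
          '{"user": "u2", "status": "pending", "value": 1}']
result = load(transform(extract(ndjson)))
```

Because each stage is a generator, records stream through one at a time; that is the same lazy, stage-wise shape a Spark DAG gives you, just without the distribution.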

Posted 2 days ago

Apply

0.0 - 1.0 years

0 Lacs

Delhi, Delhi

On-site

Indeed logo

**** Immediate Joiners are required**** ***Please read the description carefully and then apply*** Note: Female candidates are preferred. Job Location: Paharganj, Delhi Work Timings: 10.30am to 7.30pm (incl. 1hr lunch break) Leaves: 2 per month (encashable) Salary: 25000 - 35000 per month. Company/Profile Overview: We are a leading manufacturer of decorative laminates, offering innovative and stylish surface solutions for interior spaces. Our catalogues are a vital tool in showcasing our collections to designers, architects, dealers, and end-users. We are seeking a highly organized and detail-oriented Catalogue Design Coordinator to lead the development and execution of product catalogues for our laminate brand. This role will coordinate the entire catalogue lifecycle with multiple stakeholders—from initial planning, product selection and presentation to design, production, and distribution. Along with catalogues, the coordinator will also handle other design workflows such as exhibitions, collaterals, adverts, etc. Roles & Responsibilities - In collaboration with the design agency and the top management - Create visually appealing catalogues showcasing the company’s laminate collections. Plan and manage the catalogue project timeline, ensuring deadlines are met at every stage. Act as the primary point of contact for all stakeholders involved in catalogue creation (designers, printers, writers, production, and marketing teams). Liaise with design agencies and printing vendors to obtain quotes, manage proofs, oversee quality checks, track progress and ensure timely delivery. Analyze sales data of the current SKUs to identify design trends and suggest which designs should be discontinued and which patterns should be focused on. Conduct research for competitive analysis pertaining to catalogues of other brands in the market. 
Stay updated on design trends and market preferences by doing market research to collect information on the latest designs in the veneer / furniture / wallpaper / other such markets (may be required to travel to this end). On the basis of the above, recommend décor papers and texture finishes to add to the product range. Maintain various types of records pertaining to catalogue inventory, consumption, and purchase. Reconcile designer vendor accounts. Coordinate with the factory and designer vendors to design all company pamphlets, standees, signages, notepads, stationery, and any other such products / communications. The Candidate should: 1. Have 5-6 yrs of experience in managing interior design / décor / furniture designing activities, or experience related to designing in surface solution industries (such as laminate, acrylic, pvc, tiles, wallpaper, etc) 2. Have strong organizational skills and attention to detail. 3. Have experience in handling multiple projects at the same time. 4. Be comfortable with written English. Female candidates will be preferred for this role. Educational Qualification - Min. college graduate (applicants with design-related degrees / colleges will be preferred). Experience with using Microsoft Office (PowerPoint, Word, Excel, etc.) required. Design experience using Photoshop, Spark, or CorelDRAW will be an advantage. How to Apply: Please send your updated resume and cover letter to madhur@adrianaa.com or send a message to this number: +918010768617 (WhatsApp only) Note: Only candidates who can join immediately will be considered. Job Types: Full-time, Permanent Pay: ₹25,000.00 - ₹35,000.00 per month Schedule: Day shift Application Question(s): Which design software do you know best? (Photoshop, Spark, CorelDRAW) What is your in-hand salary per month? Are you an immediate joiner? Do you have knowledge or working experience in advanced Excel, PowerPoint, etc.? 
Experience: Catalogue Design Coordinator : 1 year (Required) Location: Delhi, Delhi (Required) Work Location: In person

Posted 2 days ago

Apply

1.0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote


Job Title: IT Support Consultant
Location: Chennai Office (for projects across India)

"Kindly Note: This is a short-term contractual role of up to 6 months to 1 year only."

About Varahe Analytics:

Varahe Analytics is one of India's premier integrated political consulting firms, specializing in building data-driven 360-degree election campaigns. We help our clients with strategic advice and implementation, combining data-backed insights and in-depth ground intelligence into a holistic electoral campaign. We are passionate about our democracy and the politics that shape our world. We draw on some of the sharpest minds from distinguished institutions and diverse professional backgrounds to help us achieve our goal of building electoral strategies that spark conversations, effect change, and help shape electoral and legislative ecosystems in our country.

Key Responsibilities:

A. Hardware and Software Troubleshooting:
  1. Respond to and resolve technical issues users report through various channels (phone, email, chat).
  2. Diagnose and troubleshoot hardware and software problems.
  3. Install, configure, and update OS software and applications.
  4. Provide remote assistance to users and utilize remote desktop tools to troubleshoot and resolve problems.
  5. Manage and troubleshoot network devices, ensuring smooth connectivity.
  6. Administer firewall settings to enhance network security.

B. Documentation and Reporting:
  1. Maintain detailed records of hardware and software interventions using Google Docs and Sheets.
  2. Generate reports on recurring issues, resolutions, and preventive measures.

C. End-User Support:
  1. Offer timely and effective support to end-users, addressing IT-related concerns such as login issues, password resets, and account management.
  2. Educate users on basic computer usage and best practices for IT policies and procedures.

D. Vendor Coordination:
  1. Liaise with vendors for hardware and software procurement and support.
  2. Manage relationships with external service providers for network and software-related services.

E. API Utilization:
  1. Leverage knowledge of platform APIs to integrate and streamline IT processes.
  2. Develop and implement solutions that enhance efficiency through API interactions.

Required Skills and Qualifications:
  1. Bachelor's degree in Information Technology, Computer Science, or a related field, with 2+ years of experience.
  2. Proven experience in IT asset management, software and hardware support, and vendor coordination.
  3. Excellent organizational and project management skills.
  4. Ability to work collaboratively with cross-functional teams and vendors.

"Note: Tamil language is a must-have requirement for this role."

If you're an early-career professional looking for a high-impact challenge, and interested in joining a team of like-minded and motivated individuals who think strategically, act decisively, and get things done, drop an email at openings@varaheanalytics.com

Posted 2 days ago

Apply

7.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Job Title: AI/ML Engineer
Location: Pune, India

About the Role:

We're looking for highly analytical, technically strong Artificial Intelligence/Machine Learning Engineers to help build scalable, data-driven systems in the digital marketing space. You'll work alongside a top-tier team on impactful solutions affecting billions of users globally.

Experience Required: 3 - 7 years

Key Responsibilities:

  • Collaborate across Data Science, Ops, and Engineering to tackle large-scale ML challenges.
  • Build and manage robust ML pipelines (ETL, training, deployment) in real-time environments.
  • Optimize models and infrastructure for performance and scalability.
  • Research and implement best practices in ML systems and lifecycle management.
  • Deploy deep learning models using high-performance computing environments.
  • Integrate ML frameworks into cloud/distributed systems.

Required Skills:

  • 2+ years of Python development in a programming-intensive role.
  • 1+ year of hands-on ML experience (e.g., classification, clustering, optimization, deep learning).
  • 2+ years working with distributed frameworks (Spark, Hadoop, Kubernetes).
  • 2+ years with ML tools such as TensorFlow, PyTorch, Keras, MLlib.
  • 2+ years of experience with cloud platforms (AWS, Azure, GCP).
  • Excellent communication skills.

Preferred: Prior experience in AdTech or digital advertising platforms (DSP, Ad Exchange, SSP).

Education: M.Tech or Ph.D. in Computer Science, Software Engineering, Mathematics, or a related discipline.

Why Apply?

  • Join a fast-moving team working at the forefront of AI in advertising.
  • Build technologies that impact billions of users worldwide.
  • Shape the future of programmatic and performance advertising.

Posted 2 days ago

Apply

2.0 years

0 Lacs

Gurgaon, Haryana, India

On-site


Expedia Group brands power global travel for everyone, everywhere. We design cutting-edge tech to make travel smoother and more memorable, and we create groundbreaking solutions for our partners. Our diverse, vibrant, and welcoming community is essential in driving our success.

Why Join Us?

To shape the future of travel, people must come first. Guided by our Values and Leadership Agreements, we foster an open culture where everyone belongs, differences are celebrated, and we know that when one of us wins, we all win. We provide a full benefits package, including exciting travel perks, generous time-off, parental leave, a flexible work model (with some pretty cool offices), and career development resources, all to fuel our employees' passion for travel and ensure a rewarding career journey. We're building a more open world. Join us.

Software Dev Engineer II

Do you love building intelligent, configurable systems using a diverse set of state-of-the-art technologies? Are you interested in building self-service solutions to help security analysts monitor and act on cyber security events from hundreds of thousands of sensors? Are you interested in driving the governance and compliance charter for the whole company? Want to join a team that has a great reputation for addressing issues with Cyber Security?

The Cyber Security development team in Expedia helps secure the company by providing solutions mainly for cybersecurity incident detection and response (i.e. cyber-attacks), Security Vulnerability Management, Physical Security, and Security Compliance & Governance. The team has developed an in-house security data platform (over AWS Cloud infrastructure) to help the Cyber Response, Physical Security, Governance, Risk and Compliance teams perform their security operations with efficiency and speed. You will build highly available systems that scale to hundreds of thousands of security events. You will be an important part of a growing team using the latest technology to protect our business, our customers, and our business partners, and to improve our customer experience, empowering the whole EG Security pillar.

What You'll Do

  • Design and develop new platform services to expand the capabilities of our Security Platform.
  • Create resilient, fault-tolerant, highly available systems.
  • Own and deliver tested and optimized high-performance code for a distributed messaging, event, and vulnerability management environment.
  • Participate in the resolution of production issues and lead efforts toward solutions.
  • Contribute to vigilantly rewriting, refactoring, and perfecting code.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build tools that utilize the data pipeline to deliver meaningful insights into customer acquisition, operational efficiency, and other key business performance metrics.
  • Work with partners including the Architecture, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs.

Technologies we use: Java, Python, Spark, AWS, Azure, Kafka, Airflow, MySQL, React, MongoDB, Redshift, Grafana, ServiceNow, Tableau.

Who Are You

  • Bachelor's in computer science or a related technical field, or equivalent professional experience.
  • 2+ years of experience in software development (SDLC), preferably on Service-Oriented Architecture (SOA).
  • Coding proficiency in at least one modern programming language (preferably Java; Scala, Python, etc.) and exposure to RDBMS/NoSQL solutions.
  • Strong object-oriented programming concepts and a background in data structures and algorithms.
  • Experience with automated testing, including unit, functional, integration & performance/load testing.
  • Experience using cloud services (e.g. AWS, Azure, etc.).
  • Experience working with Agile/Scrum methodologies.
  • Ability to thrive in a dynamic, collaborative and fast-paced environment.
  • Strong interpersonal skills as well as strong problem-solving and analytical skills.
  • Experience with security tools/applications is a plus.
  • Experience in the eCommerce industry is a plus.

Accommodation requests

If you need assistance with any part of the application or recruiting process due to a disability, or other physical or mental health conditions, please reach out to our Recruiting Accommodations Team through the Accommodation Request.

We are proud to be named a Best Place to Work on Glassdoor in 2024 and to be recognized for our award-winning culture by organizations like Forbes, TIME, Disability:IN, and others.

Expedia Group's family of brands includes: Brand Expedia®, Hotels.com®, Expedia® Partner Solutions, Vrbo®, trivago®, Orbitz®, Travelocity®, Hotwire®, Wotif®, ebookers®, CheapTickets®, Expedia Group™ Media Solutions, Expedia Local Expert®, CarRentals.com™, and Expedia Cruises™. © 2024 Expedia, Inc. All rights reserved. Trademarks and logos are the property of their respective owners. CST: 2029030-50

Employment opportunities and job offers at Expedia Group will always come from Expedia Group's Talent Acquisition and hiring teams. Never provide sensitive, personal information to someone unless you're confident who the recipient is. Expedia Group does not extend job offers via email or any other messaging tools to individuals with whom we have not made prior contact. Our email domain is @expediagroup.com. The official website to find and apply for job openings at Expedia Group is careers.expediagroup.com/jobs.

Expedia is committed to creating an inclusive work environment with a diverse workforce. All qualified applicants will receive consideration for employment without regard to race, religion, gender, sexual orientation, national origin, disability or age.

Posted 2 days ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Greetings from TCS!

TCS is hiring for Big Data (PySpark & Scala)

Location: Chennai/Pune/Mumbai
Desired Experience Range: 5+ years

Must-Have:
  • PySpark
  • Hive

Good-to-Have:
  • Spark
  • HBase
  • DQ tool
  • Agile Scrum experience
  • Exposure to data ingestion from disparate sources onto a Big Data platform

Thanks,
Anshika

Posted 2 days ago

Apply

Exploring Spark Jobs in India

The demand for professionals with expertise in Spark is on the rise in India. Spark, an open-source distributed computing system, is widely used for big data processing and analytics. Job seekers in India looking to explore opportunities in Spark can find a variety of roles in different industries.

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Chennai
  5. Mumbai

These cities have a high concentration of tech companies and startups actively hiring for Spark roles.

Average Salary Range

The average salary range for Spark professionals in India varies based on experience level:

  • Entry-level: INR 4-6 lakhs per annum
  • Mid-level: INR 8-12 lakhs per annum
  • Experienced: INR 15-25 lakhs per annum

Salaries may vary based on the company, location, and specific job requirements.

Career Path

In the field of Spark, a typical career progression may look like:

  1. Junior Developer
  2. Senior Developer
  3. Tech Lead
  4. Architect

Advancing in this career path often requires gaining experience, acquiring additional skills, and taking on more responsibilities.

Related Skills

Apart from proficiency in Spark, professionals in this field are often expected to have knowledge or experience in:

  • Hadoop
  • Java or Scala programming
  • Data processing and analytics
  • SQL databases

Having a combination of these skills can make a candidate more competitive in the job market.
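To make the data-processing side of this skill set concrete, here is the classic word-count pipeline, sketched in plain Python with the same flatMap → map → reduceByKey shape a Spark job would use. This is only an illustrative analogue, not Spark's API: `flat_map` and `reduce_by_key` are hypothetical helper names, and no cluster or pyspark installation is assumed.

```python
from collections import defaultdict
from functools import reduce

def flat_map(func, records):
    # Analogue of RDD.flatMap: one input record can yield many outputs.
    for record in records:
        yield from func(record)

def reduce_by_key(func, pairs):
    # Analogue of RDD.reduceByKey: merge all values that share a key.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return {key: reduce(func, values) for key, values in grouped.items()}

lines = ["spark makes big data simple", "big data needs spark"]

# flatMap (split lines into words) -> map (pair each word with 1)
pairs = ((word, 1) for word in flat_map(str.split, lines))
# reduceByKey (sum the 1s per word)
counts = reduce_by_key(lambda a, b: a + b, pairs)
print(counts["spark"])  # 2
```

In a real Spark job the same three steps run partitioned across a cluster, with `reduceByKey` triggering a shuffle to bring equal keys together; the logical shape of the computation is identical.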

Interview Questions

  • What is Apache Spark and how is it different from Hadoop? (basic)
  • Explain the difference between RDD, DataFrame, and Dataset in Spark. (medium)
  • How does Spark handle fault tolerance? (medium)
  • What is lazy evaluation in Spark? (basic)
  • Explain the concept of transformations and actions in Spark. (basic)
  • What are the different deployment modes in Spark? (medium)
  • How can you optimize the performance of a Spark job? (advanced)
  • What is the role of a Spark executor? (medium)
  • How does Spark handle memory management? (medium)
  • Explain the Spark shuffle operation. (medium)
  • What are the different types of joins in Spark? (medium)
  • How can you debug a Spark application? (medium)
  • Explain the concept of checkpointing in Spark. (medium)
  • What is lineage in Spark? (basic)
  • How can you monitor and manage a Spark application? (medium)
  • What is the significance of the Spark Driver in a Spark application? (medium)
  • How does Spark SQL differ from traditional SQL? (medium)
  • Explain the concept of broadcast variables in Spark. (medium)
  • What is the purpose of the SparkContext in Spark? (basic)
  • How does Spark handle data partitioning? (medium)
  • Explain the concept of window functions in Spark SQL. (advanced)
  • How can you handle skewed data in Spark? (advanced)
  • What is the use of accumulators in Spark? (advanced)
  • How can you schedule Spark jobs using Apache Oozie? (advanced)
  • Explain the process of Spark job submission and execution. (basic)
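Several of the questions above (lazy evaluation, transformations vs. actions, lineage) come down to one idea: transformations only record a plan, and no work happens until an action forces a result. A minimal pure-Python sketch of that behaviour, using a toy `LazyPipeline` class (an illustrative stand-in for an RDD, not Spark's actual API; no Spark installation assumed):

```python
class LazyPipeline:
    """Toy model of an RDD: transformations build up a plan, actions execute it."""

    def __init__(self, data, steps=None):
        self._data = data
        self._steps = steps or []  # recorded lineage, not yet executed

    def map(self, func):
        # Transformation: returns a new plan, runs nothing.
        return LazyPipeline(self._data, self._steps + [("map", func)])

    def filter(self, pred):
        # Transformation: returns a new plan, runs nothing.
        return LazyPipeline(self._data, self._steps + [("filter", pred)])

    def collect(self):
        # Action: only now is the recorded plan actually evaluated.
        out = iter(self._data)
        for kind, func in self._steps:
            out = (map if kind == "map" else filter)(func, out)
        return list(out)

calls = []
def traced_double(x):
    calls.append(x)  # record when the work actually runs
    return x * 2

pipeline = LazyPipeline([1, 2, 3, 4]).map(traced_double).filter(lambda x: x > 4)
assert calls == []            # nothing has executed yet: transformations are lazy
result = pipeline.collect()   # the action triggers evaluation of the whole plan
print(result)  # [6, 8]
```

The `_steps` list plays the role of lineage: if a partition is lost, Spark can replay exactly these recorded transformations over the source data to rebuild it, which is the basis of its fault tolerance.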

Closing Remark

As you explore opportunities in Spark jobs in India, remember to prepare thoroughly for interviews and showcase your expertise confidently. With the right skills and knowledge, you can excel in this growing field and advance your career in the tech industry. Good luck with your job search!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies