Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
1.0 - 2.0 years
0 Lacs
Delhi, India
Remote
Skills: IB Mathematics Curriculum, Classroom Management, Lesson Planning, Online Teaching, Mathematics, Student Engagement

IB Maths Faculty (MYP / DP)
Location: Gurgaon (1st month onsite), then Work From Home
Salary: 6-8 LPA
6 days/week | Immediate Joiners Preferred

Hey Math Whiz, Ready to Teach Without the Boring Bits?
If you believe teaching math should be more "aha!" than "ugh!", welcome to your dream job. At Sparkl, we don't just solve equations; we spark curiosity, build logic, and change how learning feels. We're looking for a young, sharp IB Maths Educator who can handle both MYP & DP grades with confidence and creativity. Someone who knows that x is not just a variable, it's a whole vibe.

What You'll Be Doing
- Teach IB Math (MYP or DP) with clarity, confidence, and coolness
- Help students break down complex problems and love the process
- Use Sparkl's resources plus your own flair to design interactive lessons
- Join us in Gurgaon for the 1st month of onboarding, then switch to WFH

Who You Are
- 1-2 years of teaching/tutoring experience in IB, IGCSE, or similar
- Graduate/Postgraduate in Math or a related field
- Great communicator with excellent English
- Calm, curious, collaborative: you love teaching and it shows

Why You'll Love Sparkl
- Gen Z-friendly, mentor-led work culture
- Personalized learning platform with real impact
- Young team, real growth, and no outdated teaching drama

Apply now and let's turn your math mojo into a movement.
Posted 1 day ago
1.0 - 2.0 years
0 Lacs
Delhi, India
Remote
Skills: Physics, IB Physics, Teacher, Storytelling, Classroom Management, Online Teaching, IB Curriculum

IB Physics Faculty (MYP + DP)
Location: Gurgaon (1st month onsite), then Work From Home
Salary: 7-8 LPA
6 days/week | Immediate Joiners Preferred

Physics = Fun. Who Knew? (You Did.)
If you can turn Newton's laws into a Netflix-worthy explanation, and you genuinely love helping teens get the point of Physics, then we want you at Sparkl. We're looking for a young IB Physics Educator to teach both MYP & DP, someone who can go from talking atoms to astrophysics and make it fun.

The Role Includes
- Teaching IB Physics to students in Grades 6-12 (MYP & DP)
- Creating energy in the virtual classroom, minus the resistance
- Using experiments, analogies, and storytelling to explain tough concepts
- Starting your journey with 1 month of training in Gurgaon, then fully remote

You Should Be Someone Who
- Has 1-2 years of teaching or tutoring experience (IB/IGCSE a plus)
- Holds a graduate/postgraduate degree in Physics
- Communicates clearly, creatively, and confidently in English
- Cares deeply about student learning (not just the syllabus)

Why Work With Sparkl?
- Young and fun team, serious about learning
- Teach ambitious, globally-minded students
- Mentorship and training that actually helps you grow
- Work-from-home flexibility after initial onboarding

Don't just teach Physics; spark a love for it. Apply today!
Posted 1 day ago
0 years
0 Lacs
Delhi, India
On-site
Key Responsibilities
- Design, build, and maintain scalable, reliable, and efficient data pipelines to support data analytics and business intelligence needs (see the sketch after this listing).
- Optimize and automate data workflows, enhancing the efficiency of data processing and reducing latency.
- Implement and maintain data storage solutions, ensuring that data is organized, secure, and readily accessible.
- Provide expertise in ETL processes, data wrangling, and data transformation techniques.
- Collaborate with technology teams to ensure that data engineering solutions align with overall business goals.
- Stay current with industry best practices and emerging technologies in data engineering, implementing improvements as needed.

Education
- Bachelor's or Master's degree in Computer Science, Information Technology, Engineering, or a related field.
- Experience with Agile methodologies and software development project management.

Requirements
- Proven experience in data engineering, with expertise in building and managing data pipelines, ETL processes, and data warehousing.
- Proficiency in SQL, Python, and other programming languages commonly used in data engineering.
- Experience with cloud platforms such as AWS, Azure, or Google Cloud, and familiarity with cloud-based data storage and processing tools (e.g., S3, Redshift, BigQuery).
- Good to have: familiarity with big data technologies (e.g., Hadoop, Spark) and real-time data processing.
- Strong understanding of database management systems and data modeling techniques.
- Experience with BI tools like Tableau or Power BI, along with ETL tools like Alteryx or similar, and the ability to work closely with analytics teams.
- High attention to detail and commitment to data quality and accuracy.
- Ability to work independently and as part of a team, with strong collaboration skills.
- Highly adaptive and comfortable working within a complex, fast-paced environment.
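For illustration, here is a minimal sketch of the kind of ETL step this posting describes: extract from a source database, apply a light transformation, and load into a warehouse table. The connection strings, queries, and table names are hypothetical.

```python
import pandas as pd
from sqlalchemy import create_engine

# Hypothetical connection strings; a real pipeline would read these from config/secrets.
source = create_engine("postgresql://user:pass@source-db/sales")
warehouse = create_engine("postgresql://user:pass@warehouse/analytics")

def run_etl() -> None:
    # Extract: pull yesterday's orders from the operational store.
    orders = pd.read_sql(
        "SELECT order_id, customer_id, amount, created_at "
        "FROM orders WHERE created_at >= CURRENT_DATE - INTERVAL '1 day'",
        source,
    )
    # Transform: basic cleaning and enrichment.
    orders = orders.dropna(subset=["customer_id"])
    orders["amount"] = orders["amount"].round(2)
    # Load: append into the warehouse fact table.
    orders.to_sql("fact_orders", warehouse, if_exists="append", index=False)

if __name__ == "__main__":
    run_etl()
```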
Posted 1 day ago
0 years
0 Lacs
Delhi, India
On-site
What You'll Do
- Architect and scale modern data infrastructure: ingestion, transformation, warehousing, and access
- Define and drive enterprise data strategy: governance, quality, security, and lifecycle management
- Design scalable data platforms that support both operational insights and ML/AI applications
- Translate complex business requirements into robust, modular data systems
- Lead cross-functional teams of engineers, analysts, and developers on large-scale data initiatives
- Evaluate and implement best-in-class tools for orchestration, warehousing, and metadata management
- Establish technical standards and best practices for data engineering at scale
- Spearhead integration efforts to unify data across legacy and modern platforms

What You Bring
- Experience in data engineering, architecture, or backend systems
- Strong grasp of system design, distributed data platforms, and scalable infrastructure
- Deep hands-on experience with cloud platforms (AWS, Azure, or GCP) and tools like Redshift, BigQuery, Snowflake, S3, Lambda
- Expertise in data modeling (OLTP/OLAP), ETL pipelines, and data warehousing
- Experience with big data ecosystems: Kafka, Spark, Hive, Presto
- Solid understanding of data governance, security, and compliance frameworks
- Proven track record of technical leadership and mentoring
- Strong collaboration and communication skills to align tech with business
- Bachelor's or Master's in Computer Science, Data Engineering, or a related field

Nice To Have (Your Edge)
- Experience with real-time data streaming and event-driven architectures (a minimal sketch follows this listing)
- Exposure to MLOps and model deployment pipelines
- Familiarity with data DevOps and Infrastructure as Code (Terraform, CloudFormation, CI/CD pipelines)
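The event-driven bullet above lends itself to a small example. A minimal sketch of publishing change events to Kafka with the kafka-python client, assuming a hypothetical broker and topic:

```python
import json
from kafka import KafkaProducer  # pip install kafka-python

# Hypothetical broker address and topic name.
producer = KafkaProducer(
    bootstrap_servers="broker:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Each upstream change is published as an event; downstream consumers
# (warehouse loaders, ML feature jobs) subscribe independently.
event = {"order_id": 42, "status": "shipped"}
producer.send("order-events", value=event)
producer.flush()  # block until the broker acknowledges
```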
Posted 1 day ago
6.0 - 8.0 years
0 Lacs
Greater Kolkata Area
On-site
Role Overview
We are looking for a highly skilled and motivated Senior Data Scientist to join our team. In this role, you will design, develop, and implement advanced data models and algorithms that drive strategic decision-making across the organization. You will work closely with product, engineering, and business teams to uncover insights and deliver data-driven solutions that enhance the performance and scalability of our products and services.

Key Responsibilities
- Develop, deploy, and maintain machine learning models and advanced analytics pipelines.
- Analyze complex datasets to identify trends, patterns, and actionable insights.
- Collaborate with cross-functional teams (Engineering, Product, Marketing) to define and execute data science strategies.
- Build and improve predictive models using supervised and unsupervised learning techniques.
- Translate business problems into data science projects with measurable impact.
- Design and conduct experiments, A/B tests, and statistical analyses to validate hypotheses and guide product development (a minimal sketch follows this listing).
- Create dashboards and visualizations to communicate findings to technical and non-technical stakeholders.
- Stay up-to-date with industry trends, best practices, and emerging technologies in data science and machine learning.
- Ensure data quality and governance standards are maintained across all projects.

Required Skills And Qualifications
- 6-8 years of hands-on experience in Data Science, Machine Learning, and Statistical Modeling.
- Proficiency in programming languages such as Python, R, and SQL.
- Strong foundation in data analysis, data wrangling, and feature engineering.
- Expertise in building and deploying models using tools such as scikit-learn, TensorFlow, PyTorch, or similar frameworks.
- Experience with big data platforms (e.g., Spark, Hadoop) and cloud services (AWS, GCP, Azure) is a plus.
- Deep understanding of statistical techniques including hypothesis testing, regression, and Bayesian methods.
- Excellent communication skills with the ability to explain complex technical concepts to non-technical audiences.
- Proven track record of working on cross-functional projects and delivering data-driven solutions that impact business outcomes.
- Master's or Ph.D. in Data Science, Computer Science, Statistics, Mathematics, or a related field.
- Experience with NLP, computer vision, or deep learning techniques.
- Knowledge of data engineering principles and ETL processes.
- Familiarity with version control (Git), agile methodologies, and CI/CD pipelines.
- Contributions to open-source data science projects or publications in relevant fields.
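To make the A/B testing bullet concrete, a minimal sketch of a two-sample significance test on synthetic conversion data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Synthetic A/B test: 0/1 conversion indicators for control and variant.
control = rng.binomial(1, 0.10, size=5000)
variant = rng.binomial(1, 0.12, size=5000)

# Welch's t-test on the conversion means (at this sample size,
# practically equivalent to a z-test on proportions).
t_stat, p_value = stats.ttest_ind(variant, control, equal_var=False)
print(f"lift={variant.mean() - control.mean():.4f}, p={p_value:.4f}")
```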
Posted 1 day ago
7.0 years
0 Lacs
Greater Kolkata Area
Remote
Omni's team is passionate about Commerce and Digital Transformation. We've been successfully delivering Commerce solutions for clients across North America, Europe, Asia, and Australia. The team has experience executing and delivering projects in B2B and B2C solutions.

Job Description
This is a remote position. We are seeking a Senior Data Engineer to architect and build robust, scalable, and efficient data systems that power AI and Analytics solutions. You will design end-to-end data pipelines, optimize data storage, and ensure seamless data availability for machine learning and business analytics use cases. This role demands deep engineering excellence, balancing performance, reliability, security, and cost to support real-world AI applications.

Key Responsibilities
- Architect, design, and implement high-throughput ETL/ELT pipelines for batch and real-time data processing.
- Build cloud-native data platforms: data lakes, data warehouses, feature stores.
- Work with structured, semi-structured, and unstructured data at petabyte scale.
- Optimize data pipelines for latency, throughput, cost-efficiency, and fault tolerance.
- Implement data governance, lineage, quality checks, and metadata management.
- Collaborate closely with Data Scientists and ML Engineers to prepare data pipelines for model training and inference.
- Implement streaming data architectures using Kafka, Spark Streaming, or AWS Kinesis (see the sketch after this listing).
- Automate infrastructure deployment using Terraform, CloudFormation, or Kubernetes operators.

Requirements
- 7+ years in Data Engineering, Big Data, or Cloud Data Platform roles.
- Strong proficiency in Python and SQL.
- Deep expertise in distributed data systems (Spark, Hive, Presto, Dask).
- Cloud-native engineering experience (AWS, GCP, Azure): BigQuery, Redshift, EMR, Databricks, etc.
- Experience designing event-driven architectures and streaming systems (Kafka, Pub/Sub, Flink).
- Strong background in data modeling (star schema, OLAP cubes, graph databases).
- Proven experience with data security, encryption, and compliance standards (e.g., GDPR, HIPAA).

Preferred Skills
- Experience in MLOps enablement: creating feature stores, versioned datasets.
- Familiarity with real-time analytics platforms (ClickHouse, Apache Pinot).
- Exposure to data observability tools like Monte Carlo, Databand, or similar.
- Passionate about building high-scale, resilient, and secure data systems.
- Excited to support AI/ML innovation with state-of-the-art data infrastructure.
- Obsessed with automation, scalability, and best engineering practices.
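As a sketch of the streaming responsibility above, reading a Kafka topic with Spark Structured Streaming and aggregating per minute; the broker and topic names are hypothetical, and the job assumes the Spark-Kafka connector is on the classpath:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("stream-demo").getOrCreate()

# Hypothetical Kafka cluster and topic.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "order-events")
    .load()
)

# Kafka delivers raw bytes; cast the payload and count events per minute.
counts = (
    events.selectExpr("CAST(value AS STRING) AS payload", "timestamp")
    .groupBy(F.window("timestamp", "1 minute"))
    .count()
)

# Console sink for demonstration; production would write to a lake or warehouse.
query = counts.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()
```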
Posted 1 day ago
5.0 years
0 Lacs
Greater Kolkata Area
On-site
Job Description

Responsibilities
- Architect and design end-to-end data solutions on a cloud platform, focusing on data warehousing and big data platforms.
- Collaborate with clients, developers, and architecture teams to understand requirements and translate them into effective data solutions.
- Develop high-level and detailed data architecture and design documentation.
- Implement data management and data governance strategies, ensuring compliance with industry standards.
- Architect both batch and real-time data solutions, leveraging cloud-native services and technologies.
- Design and manage data pipeline processes for historic data migration and data integration.
- Collaborate with business analysts to understand domain data requirements and incorporate them into the design deliverables.
- Drive innovation in data analytics by leveraging cutting-edge technologies and methodologies.
- Demonstrate excellent verbal and written communication skills to communicate complex ideas and concepts effectively.
- Stay updated on the latest advancements in data analytics, data architecture, and data management techniques.

Requirements
- Minimum of 5 years of experience in a Data Architect role, supporting warehouse and cloud data platforms/environments (Azure).
- Extensive experience with common Azure services such as ADLS, Synapse, Databricks, Azure SQL, etc.
- Experience with Azure services such as ADF, PolyBase, and Azure Stream Analytics.
- Proven expertise in Databricks architecture, Delta Lake, Delta Sharing, Unity Catalog, data pipelines, and Spark tuning (a minimal upsert sketch follows this listing).
- Strong knowledge of Power BI architecture, DAX, and dashboard optimization.
- In-depth experience with SQL, Python, and/or PySpark.
- Hands-on knowledge of data governance, lineage, and cataloging tools such as Azure Purview and Unity Catalog.
- Experience implementing CI/CD pipelines for data and BI components (e.g., using Azure DevOps or GitHub).
- Experience building semantic models in Power BI.
- Strong expertise in data exploration using SQL and a deep understanding of data relationships.
- Extensive knowledge and implementation experience in data management, governance, and security frameworks.
- Proven experience in creating high-level and detailed data architecture and design documentation.
- Strong aptitude for business analysis to understand domain data requirements.
- Proficiency in data modeling using any modeling tool for conceptual, logical, and physical models is preferred.
- Hands-on experience architecting end-to-end data solutions for both batch and real-time designs.
- Ability to collaborate effectively with clients, developers, and architecture teams to implement enterprise-level data solutions.
- Familiarity with Data Fabric and Data Mesh architecture is a plus.
- Excellent verbal and written communication skills.
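To ground the Delta Lake expertise bullet, a minimal upsert sketch using the Delta Lake Python API; the storage paths and join key are hypothetical:

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-upsert").getOrCreate()

# Hypothetical paths; on Databricks these would often be Unity Catalog tables instead.
updates = spark.read.parquet("/mnt/landing/customers_delta")
target = DeltaTable.forPath(spark, "/mnt/curated/customers")

# Upsert: update rows whose key already exists, insert the rest.
(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```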
Posted 1 day ago
8.0 years
0 Lacs
Greater Kolkata Area
On-site
Key Responsibilities
- Data Science Leadership: Utilize in-depth knowledge of data science and data analytics to architect and drive strategic initiatives across departments.
- Stakeholder Collaboration: Work closely with cross-functional teams to define and implement data strategies aligned with business objectives.
- Model Development: Design, build, and deploy predictive models and machine learning algorithms to solve complex business problems and uncover actionable insights (a minimal validation sketch follows this listing).
- Integration and Implementation: Collaborate with IT and domain experts to ensure smooth integration of data science models into existing business workflows and systems.
- Innovation and Optimization: Continuously evaluate new data tools, methodologies, and technologies to enhance analytical capabilities and operational efficiency.
- Data Governance: Promote data quality, consistency, and security standards across the organization.

Required Qualifications
- Bachelor's or Master's degree in Economics, Statistics, Data Science, or a related field.
- A minimum of 8 years of relevant experience in data analysis, data science, or analytics roles.
- At least 3 years of direct experience as a Data Scientist, preferably in enterprise or analytics lab environments.
- Possession of at least one recognized data science certification, such as Certified Analytics Professional (CAP) or Google Professional Data Engineer.
- Proficiency in data visualization and storytelling tools and libraries, such as Matplotlib, Seaborn, and Tableau.
- Strong foundation in statistical modeling and risk analytics, with proven experience building and validating such models.

Preferred Skills And Attributes
- Strong programming skills in Python, R, or similar languages.
- Experience with cloud-based analytics platforms (AWS, GCP, or Azure).
- Familiarity with data engineering concepts and tools (e.g., SQL, Spark, Hadoop).
- Excellent problem-solving, communication, and stakeholder engagement skills.
- Ability to manage multiple projects and mentor junior team members.
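As a small illustration of the model development responsibility, fitting and validating a classifier on synthetic data with scikit-learn:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a business dataset (e.g., churn or risk labels).
X, y = make_classification(n_samples=10_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Validate on held-out data before any deployment decision.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"holdout AUC = {auc:.3f}")
```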
Posted 1 day ago
5.0 years
0 Lacs
Greater Kolkata Area
On-site
About Sleek
Through proprietary software and AI, along with a focus on customer delight, Sleek makes the back office easy for micro SMEs. We give Entrepreneurs time back to focus on what they love doing: growing their business and being with customers. With a surging number of Entrepreneurs globally, we are innovating in a highly lucrative space.

We Operate 3 Business Segments
- Corporate Secretary: Automating the company incorporation, secretarial, filing, Nominee Director, mailroom and immigration processes via custom online robots and SleekSign. We are the market leaders in Singapore with a 5% market share of all new business incorporations.
- Accounting & Bookkeeping: Redefining what it means to do Accounting, Bookkeeping, Tax and Payroll thanks to our proprietary SleekBooks ledger, AI tools and exceptional customer service.
- FinTech payments: Overcoming a key challenge for Entrepreneurs by offering digital banking services to new businesses.

Sleek launched in 2017 and now has around 15,000 customers across our offices in Singapore, Hong Kong, Australia and the UK. We have around 450 staff with an intact startup mindset. We have achieved >70% compound annual growth in revenue over the last 5 years and as a result have been recognised by The Financial Times, The Straits Times, Forbes and LinkedIn as one of the fastest growing companies in Asia. Backed by world-class investors, we are on track to be one of the few cash flow positive, tech-enabled unicorns.

The Role
We are looking for an experienced Senior Data Engineer to join our growing team. As a key member of our data team, you will design, build, and maintain scalable data pipelines and infrastructure to enable data-driven decision-making across the organization. This role is ideal for a proactive, detail-oriented individual passionate about optimizing and leveraging data for impactful business outcomes. You will:
- Work closely with cross-functional teams to translate our business vision into impactful data solutions.
- Drive the alignment of data architecture requirements with strategic goals, ensuring each solution not only meets analytical needs but also advances our core objectives.
- Be pivotal in bridging the gap between business insights and technical execution by tackling complex challenges in data integration, modeling, and security, and by setting the stage for exceptional data performance and insights.
- Shape the data roadmap, influence design decisions, and empower our team to deliver innovative, scalable, high-quality data solutions every day.

Key Outcomes
- Achieve and maintain a data accuracy rate of at least 99% for all business-critical dashboards by start of day (accounting for corrections and job failures), with a 24-business-hour detection-of-error SLA and a 5-day correction SLA.
- 95% of data on dashboards originates from technical data pipelines to mitigate data drift.
- Set up strategic dashboards based on business needs which are robust, scalable, and easy and quick to operate and maintain.
- Reduce costs of data warehousing and pipelines by 30%, then maintain costs as data needs grow.
- Achieve 50 eNPS on data services (e.g. dashboards) from key business stakeholders.

Responsibilities
- Data Pipeline Development: Design, implement, and optimize robust, scalable ETL/ELT pipelines to process large volumes of structured and unstructured data (a minimal orchestration sketch follows this listing).
- Data Modeling: Develop and maintain conceptual, logical, and physical data models to support analytics and reporting requirements.
- Infrastructure Management: Architect, deploy, and maintain cloud-based data platforms (e.g., AWS, GCP).
- Collaboration: Work closely with data analysts, business owners, and stakeholders to understand data requirements and deliver reliable solutions, including designing and implementing robust, efficient and scalable data visualization on Tableau or Looker Studio.
- Data Governance: Ensure data quality, consistency, and security through robust validation and monitoring frameworks.
- Performance Optimization: Monitor, troubleshoot, and optimize the performance of data systems and pipelines.
- Innovation: Stay up to date with the latest industry trends and emerging technologies to continuously improve data engineering.

Skills & Qualifications
- Experience: 5+ years in data engineering, software engineering, or a related field.
- Technical proficiency: Proficiency working with relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, Cassandra). Familiarity with big data frameworks like Hadoop, Hive, Spark, Airflow, BigQuery, etc. Strong expertise in programming languages such as Python, NodeJS, SQL, etc.
- Cloud platforms: Advanced knowledge of cloud platforms (AWS or GCP) and their associated data services.
- Data warehousing: Expertise in modern data warehouses like BigQuery, Snowflake, or Redshift.
- Tools & frameworks: Expertise in version control systems (e.g., Git), CI/CD pipelines, and JIRA.
- Big data ecosystems / BI: BigQuery, Tableau, Looker Studio.
- Industry domain knowledge: Google Analytics (GA), HubSpot, Accounting/Compliance, etc.
- Soft skills: Excellent problem-solving abilities, attention to detail, and strong communication.

Preferred Qualifications
- Degree in Computer Science, Engineering, or a related field.
- Experience with real-time data streaming technologies (e.g., Kafka, Kinesis).
- Familiarity with machine learning pipelines and tools.
- Knowledge of data security best practices and regulatory requirements.

The Interview Process
The successful candidate will participate in the below interview stages (note that the order might be different to what you read below). We anticipate the process to last no more than 3 weeks from start to finish. Whether the interviews are held over video call or in person will depend on your location and the role.
- Case study: a 60-minute chat with the Data Analyst, who will give you some real-life challenges that this role faces and ask for your approach to solving them.
- Career deep dive: a 60-minute chat with the Hiring Manager (COO). They'll discuss your last 1-2 roles to understand your experience in more detail.
- Behavioural fit assessment: a 60-minute chat with our Head of HR or Head of Hiring, who will dive into some of your recent work situations to understand how you think and work.
- Offer + reference interviews: we'll make a non-binding offer verbally or over email, followed by a couple of short phone or video calls with references that you provide.

Background Screening
Please be aware that Sleek is a regulated entity and as such is required to perform different levels of background checks on staff depending on their role. This may include using external vendors to verify the below:
- Your education.
- Any criminal history.
- Any political exposure.
- Any bankruptcy or adverse credit history.
We will ask for your consent before conducting these checks. Depending on your role at Sleek, an adverse result on one of these checks may prohibit you from passing probation.
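A minimal sketch of the pipeline orchestration this role involves, expressed as an Airflow DAG; the task bodies, names, and schedule are placeholders, not Sleek's actual pipeline:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data from source systems")

def load_to_warehouse():
    print("load transformed data into BigQuery")

with DAG(
    dag_id="daily_reporting_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # `schedule_interval` on older Airflow versions
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load_to_warehouse)
    # Run extract first, then load.
    extract_task >> load_task
```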
Posted 1 day ago
7.0 - 10.0 years
0 Lacs
Greater Kolkata Area
Remote
Job Title: Senior Data Scientist (Contract | Remote)
Location: Remote
Experience Required: 7-10 Years

About The Role
We are seeking a highly experienced Senior Data Scientist to join our team on a contract basis. This role is ideal for someone who excels in predictive analytics and has strong hands-on experience with Databricks and PySpark. You will play a key role in building and deploying scalable machine learning models, with a focus on regression, classification, and time-series forecasting.

Key Responsibilities
- Design, build, and deploy predictive models using regression, classification, and time-series techniques.
- Develop and maintain scalable data pipelines using Databricks and PySpark.
- Leverage MLflow for experiment tracking and model versioning (a minimal sketch follows this listing).
- Utilize Delta Lake for efficient data storage and version control.
- Collaborate with cross-functional teams to understand business requirements and translate them into analytical solutions.
- Implement and manage CI/CD pipelines for model deployment.
- Work with cloud platforms such as Azure or AWS to develop and deploy ML solutions.

Required Skills & Qualifications
- Minimum 7 years of experience in predictive analytics and machine learning.
- Strong expertise in Databricks, PySpark, MLflow, and Delta Lake.
- Proficiency in Python, Spark MLlib, and AutoML frameworks.
- Experience working with CI/CD pipelines for model deployment.
- Familiarity with Azure or AWS cloud services.
- Excellent problem-solving skills and ability to work independently.

Nice to Have
- Prior experience in the Life Insurance or Property & Casualty (P&C) insurance domain.
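To make the MLflow bullet concrete, a minimal experiment-tracking sketch; the experiment, parameter, and metric names are illustrative:

```python
import mlflow

mlflow.set_experiment("claims-forecasting")

with mlflow.start_run(run_name="gbm-baseline"):
    # Log hyperparameters and evaluation metrics for this training run.
    mlflow.log_param("max_depth", 6)
    mlflow.log_param("learning_rate", 0.1)
    mlflow.log_metric("rmse", 12.34)
    # Artifacts (plots, serialized models) attach to the same run, e.g.:
    # mlflow.sklearn.log_model(model, "model")
```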
Posted 1 day ago
2.0 - 5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
About The Job
We're Hiring: DevOps Engineer (2-5 Years Exp.) | Noida
Location: Sector 158, Noida | On-site | Full-time
Industry: Internet News | Media | Digital

Are you a seasoned DevOps Engineer with 2-5 years of experience, ready to take ownership of large-scale infrastructure and cloud deployments? We're looking for a hands-on DevOps expert with strong experience in Google Cloud Platform (GCP) to lead our CI/CD pipelines, automate deployments, and manage a microservices-based infrastructure at scale.

What You'll Do
- Own and manage CI/CD pipelines and infrastructure end-to-end.
- Architect and deploy scalable solutions on GCP (preferred).
- Streamline release cycles in coordination with QA, product, and engineering teams.
- Build containerized apps using Docker and manage them via Kubernetes.
- Use Terraform, Ansible, or equivalent tools for Infrastructure-as-Code (IaC).
- Monitor system performance and lead troubleshooting during production issues.
- Drive automation across infrastructure, monitoring, and alerts.
- Ensure microservices run securely and reliably.

What We're Looking For
- 2 to 5 years of experience in DevOps or similar roles.
- Strong GCP (Google Cloud Platform) experience is mandatory.
- Hands-on with Docker, Kubernetes, Jenkins/GitLab CI/CD, Git/GitHub.
- Solid scripting knowledge (Shell, Python, etc.).
- Familiarity with Node.js/React deployments.
- Experience with SQL/NoSQL DBs and tools like Elasticsearch, Spark, or Presto.
- Good understanding of secure development and InfoSec standards.

Immediate joiners preferred.
Posted 1 day ago
2.0 - 5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
DevOps Engineer (2-5 Years Exp.) | Noida
Location: Sector 158, Noida | On-site

Description
Are you a seasoned DevOps Engineer with 2-5 years of experience, ready to take ownership of large-scale infrastructure and cloud deployments? We're looking for a hands-on DevOps expert with strong experience in Google Cloud Platform (GCP) to lead our CI/CD pipelines, automate deployments, and manage a microservices-based infrastructure at scale.

What You'll Do
- Own and manage CI/CD pipelines and infrastructure end-to-end.
- Architect and deploy scalable solutions on GCP (preferred).
- Streamline release cycles in coordination with QA, product, and engineering teams.
- Build containerized apps using Docker and manage them via Kubernetes.
- Use Terraform, Ansible, or equivalent tools for Infrastructure-as-Code (IaC).
- Monitor system performance and lead troubleshooting during production issues.
- Drive automation across infrastructure, monitoring, and alerts.
- Ensure microservices run securely and reliably.

What We're Looking For
- 2-5 years of experience in DevOps or similar roles.
- Strong GCP (Google Cloud Platform) experience is mandatory.
- Hands-on with Docker, Kubernetes, Jenkins/GitLab CI/CD, Git/GitHub.
- Solid scripting knowledge (Shell, Python, etc.).
- Familiarity with Node.js/React deployments.
- Experience with SQL/NoSQL DBs and tools like Elasticsearch, Spark, or Presto.
- Good understanding of secure development and InfoSec standards.

Immediate joiners preferred.
Posted 1 day ago
10.0 - 15.0 years
0 Lacs
Sahibzada Ajit Singh Nagar, Punjab, India
On-site
Job Title: Director, AI Automation & Data Sciences
Experience Required: 10-15 Years
Industry: Legal Technology / Cybersecurity / Data Science
Department: Technology & Innovation

About The Role
We are seeking an exceptional Director of AI Automation & Data Sciences to lead the innovation engine behind our Managed Document Review and Cyber Incident Response services. This is a senior leadership role where you'll leverage advanced AI and data science to drive automation, scalability, and differentiation in service delivery. If you are a visionary leader who thrives at the intersection of technology and operations, this is your opportunity to make a global impact.

Why Join Us
- Cutting-edge AI & Data Science technologies at your fingertips
- Globally recognized Cyber Incident Response Team
- Prestigious clientele of Fortune 500 companies and industry leaders
- Award-winning, inspirational workspaces
- Transparent, inclusive, and growth-driven culture
- Industry-best compensation that recognizes excellence

Key Responsibilities (KRAs)
- Lead and scale AI & data science initiatives across Document Review and Incident Response programs
- Architect intelligent automation workflows to streamline legal review, anomaly detection, and threat analytics
- Drive end-to-end deployment of ML and NLP models into production environments
- Identify and implement AI use cases that deliver measurable business outcomes
- Collaborate with cross-functional teams including Legal Tech, Cybersecurity, Product, and Engineering
- Manage and mentor a high-performing team of data scientists, ML engineers, and automation specialists
- Evaluate and integrate third-party AI platforms and open-source tools for accelerated innovation
- Ensure AI models comply with privacy, compliance, and ethical AI principles
- Define and monitor key metrics to track model performance and automation ROI
- Stay abreast of emerging trends in generative AI, LLMs, and cybersecurity analytics

Technical Skills & Tools
- Proficiency in Python, R, or Scala for data science and automation scripting
- Expertise in Machine Learning, Deep Learning, and NLP techniques
- Hands-on experience with LLMs, Transformer models, and Vector Databases (a toy retrieval sketch follows this listing)
- Strong knowledge of Data Engineering pipelines: ETL, data lakes, and real-time analytics
- Familiarity with Cyber Threat Intelligence, anomaly detection, and event correlation
- Experience with platforms like AWS SageMaker, Azure ML, Databricks, Hugging Face
- Advanced use of TensorFlow, PyTorch, spaCy, scikit-learn, or similar frameworks
- Knowledge of containerization (Docker, Kubernetes) and CI/CD pipelines for MLOps
- Strong command of SQL, NoSQL, and big data tools (Spark, Kafka)

Qualifications
- Bachelor's or Master's in Computer Science, Data Science, AI, or a related field
- 10-15 years of progressive experience in AI, Data Science, or Automation
- Proven leadership of cross-functional technology teams in high-growth environments
- Experience working in LegalTech, Cybersecurity, or related high-compliance industries preferred
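As a toy illustration of the vector-database line above, an exact nearest-neighbour search with FAISS over synthetic embeddings:

```python
import faiss  # pip install faiss-cpu
import numpy as np

d = 128  # embedding dimension (synthetic)
rng = np.random.default_rng(0)

# Index 10k document embeddings with exact L2 search.
docs = rng.standard_normal((10_000, d)).astype("float32")
index = faiss.IndexFlatL2(d)
index.add(docs)

# Retrieve the 5 nearest documents for a query embedding.
query = rng.standard_normal((1, d)).astype("float32")
distances, ids = index.search(query, 5)
print(ids[0], distances[0])
```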
Posted 1 day ago
5.0 years
0 Lacs
Thane, Maharashtra, India
On-site
About The Role
Implement architecture and design from the definition phase to go-live.

Responsibilities
- Work with the business analyst and SMEs to understand the current landscape and priorities.
- Define conceptual and low-level models of using AI technology.
- Review designs to make sure they are aligned with the architecture.
- Hands-on development of AI-led solutions.
- Implement the entire data pipeline: data crawling, ETL, creating fact tables, data quality management, etc.
- Integrate with multiple systems using APIs, web services, or data exchange mechanisms.
- Build interfaces that gather data from various data sources, such as flat files, data extracts, and incoming feeds, as well as directly interfacing with enterprise applications.
- Ensure that the solution is scalable and maintainable and meets best practices for security, performance, and data management.
- Own research assignments and development.
- Lead, develop, and assist developers and other team members.
- Collaborate, validate, and provide frequent updates to internal stakeholders throughout the project.
- Define and deliver against the solution benefits statement.
- Positively and constructively engage with clients and operations teams where needed.

Qualifications
- A Bachelor's degree in Computer Science, Software Engineering, or a related field.

Required Skills
- Minimum 5 years of IT experience, including 3+ years as a full stack developer, preferably using Python.
- 2+ years of hands-on experience in Azure Data Factory, Azure Databricks / Spark (familiarity with Fabric), Azure Data Lake Storage (Gen1/Gen2), and Azure Synapse/SQL DW.
- Expertise in designing/deploying data pipelines, from data crawling and ETL to data warehousing and data applications on Azure.
- Experienced in AI technology including machine learning algorithms, natural language processing, deep learning, image recognition, speech recognition, etc.
- Proficient in programming languages like Python (full stack exposure).
- Proficient in dealing with all the layers in a solution: multi-channel presentation, business logic in middleware, data access layer, and RDBMS and NoSQL databases (e.g., MySQL, MongoDB, Cassandra, SQL Server).
- Familiar with vector DBs such as FAISS, ChromaDB, Pinecone, and Weaviate, and with feature stores.
- Experience in implementing and deploying applications on Azure.
- Proficient in creating technical documents like architecture views, technology architecture blueprints, and design specifications.
- Experienced in using tools like Rational Suite, Enterprise Architect, and Eclipse, and source code versioning systems like Git.
- Experience with different development methodologies (e.g., RUP, Scrum).
Posted 1 day ago
8.0 - 10.0 years
0 Lacs
Greater Bengaluru Area
Remote
About iSOCRATES
Since 2015, iSOCRATES advises on, builds, and manages mission-critical Marketing, Advertising, and Data technologies, platforms, and processes as the Global Leader in MADTECH Resource Planning and Execution(TM). iSOCRATES delivers globally proven, reliable, and affordable Strategy and Operations Consulting and Managed Services for marketers, agencies, publishers, and the data/tech providers that enable them. iSOCRATES is staffed 24/7/365 with proven specialists who save partners money and time while achieving transparent, accountable performance. Savings stem from a low-cost, focused global delivery model at scale that benefits from continuous re-investment in technology and specialized training.

About MADTECH.AI
MADTECH.AI is the Unified Marketing, Advertising, and Data Decision Intelligence Platform purpose-built to deliver speed to value for marketers. At MADTECH.AI, we make real-time AI-driven insights accessible to everyone. Whether you're a global or emerging brand, agency, publisher, or data/tech provider, we give you a single source of truth, so you can capture sharper insights that drive better marketing decisions faster and more affordably than ever before. MADTECH.AI unifies and transforms MADTECH data and centralizes decision intelligence in a single, affordable platform. Leave data wrangling, data model building, proactive problem solving, and data visualization to MADTECH.AI.

Job Description
iSOCRATES is seeking a highly skilled and experienced Lead Data Scientist to spearhead our growing Data Science team. The Lead Data Scientist will be responsible for leading the team that defines, designs, reports on, and analyzes audience, campaign, and programmatic media trading data. This includes working with selected partner-focused Managed Services and Outsourced Services on behalf of our supply-side and demand-side partners. The role will involve collaboration with cross-functional teams and working across a variety of media channels, including digital and offline channels such as display, mobile, video, social, native, and advanced TV/Audio ad products.

Key Responsibilities
- Team Leadership & Management: Lead and mentor a team of data scientists to drive the design, development, and implementation of data-driven solutions for media and marketing campaigns.
- Advanced Analytics & Data Science Expertise: Provide hands-on leadership in applying rigorous statistical, econometric, and Big Data methods to define requirements, design analytics solutions, analyze results, and optimize economic outcomes. Apply modeling techniques including propensity modeling, Media Mix Modeling (MMM), Multi-Touch Attribution (MTA), Recency-Frequency-Monetary (RFM) analysis, Bayesian statistics, and non-parametric methods (an RFM sketch follows this listing).
- Generative AI & NLP: Lead the implementation and development of Generative AI, Large Language Models, and Natural Language Processing (NLP) techniques to enhance data modeling, prediction, and analysis processes.
- Data Architecture & Management: Architect and manage dynamic data systems from diverse sources, ensuring effective integration and optimization of audience, pricing, and contextual data for programmatic and digital advertising campaigns. Oversee the management of DSPs, SSPs, DMPs, and other data systems integral to the ad-tech ecosystem.
- Cross-Functional Collaboration: Work closely with Product, System Development, Yield, Operations, Finance, Sales, Business Development, and other teams to ensure seamless data quality, completeness, and predictive outcomes across campaigns. Design and deliver actionable insights, creating innovative, data-driven solutions and reporting tools for use by both iSOCRATES teams and business partners.
- Predictive Modeling & Optimization: Lead the development of predictive models and analyses to drive programmatic optimization, focusing on revenue, audience behavior, bid actions, and ad inventory optimization (eCPM, fill rate, etc.). Monitor and analyze campaign performance, making data-driven recommendations for optimizations across various media channels including websites, mobile apps, and social media platforms.
- Data Collection & Quality Assurance: Oversee the design, collection, and management of data, ensuring high-quality standards, efficient storage systems, and optimizations for in-depth analysis and visualization. Guide the implementation of tools for complex data analysis, model development, reporting, and visualization, ensuring alignment with business objectives.

Qualifications
- Master's or Ph.D. in Statistics, Engineering, Science, or Business with a strong foundation in mathematics and statistics.
- 8 to 10 years of experience, with at least 5 years of hands-on experience in data science, predictive analytics, media research, and digital analytics, focused on modeling, analysis, and optimization within the media, advertising, or tech industry.
- At least 3 years of hands-on experience with Generative AI, Large Language Models, and Natural Language Processing techniques.
- Minimum 3 years of experience in Publisher and Advertiser Audience Data Analytics and Modeling.
- Proficient in data collection, business intelligence, machine learning, and deep learning techniques using tools such as Python, R, scikit-learn, Hadoop, Spark, MySQL, and AWS S3.
- Expertise in logistic regression, customer segmentation, persona building, and predictive analytics.
- Strong analytical and data modeling skills with a deep understanding of audience behavior, pricing strategies, and programmatic media optimization.
- Experience working with DSPs, SSPs, DMPs, and programmatic systems.
- Excellent communication and presentation skills, with the ability to communicate complex technical concepts to non-technical stakeholders.
- Ability to manage multiple tasks and projects effectively, both independently and in collaboration with remote teams.
- Strong problem-solving skills with the ability to adapt to evolving business needs and deliver solutions proactively.
- Experience in developing analytics dashboards, visualization tools, and reporting systems.
- Background in digital media optimization, audience segmentation, and performance analytics.
- An interest and ability to work in a fast-paced operation on the analytics and revenue side of our business.
- Willing to relocate to Mysuru/Bengaluru.

This is an exciting opportunity to take on a leadership role at the forefront of data science in the digital media and advertising space. If you have a passion for innovation, a strong technical background, and the ability to lead a team toward impactful, data-driven solutions, we encourage you to apply.
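To illustrate the RFM analysis named in the responsibilities, a compact pandas sketch over a synthetic transaction log:

```python
import pandas as pd

# Synthetic transaction log: one row per purchase.
tx = pd.DataFrame({
    "customer_id": [1, 1, 2, 3, 3, 3],
    "date": pd.to_datetime(
        ["2024-01-05", "2024-03-01", "2024-02-10",
         "2024-01-20", "2024-02-25", "2024-03-10"]),
    "amount": [120.0, 80.0, 45.0, 300.0, 150.0, 60.0],
})
snapshot = pd.Timestamp("2024-03-15")

# Recency (days since last purchase), Frequency (purchase count),
# Monetary value (total spend) per customer.
rfm = tx.groupby("customer_id").agg(
    recency=("date", lambda d: (snapshot - d.max()).days),
    frequency=("date", "count"),
    monetary=("amount", "sum"),
)
print(rfm)  # segments would then be cut from these three scores
```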
Posted 1 day ago
50.0 years
0 Lacs
Delhi, India
On-site
About Gap Inc.
Our past is full of iconic moments, but our future is going to spark many more. Our brands (Gap, Banana Republic, Old Navy and Athleta) have dressed people from all walks of life and all kinds of families, all over the world, for every occasion for more than 50 years. But we're more than the clothes that we make. We know that business can and should be a force for good, and it's why we work hard to make product that makes people feel good, inside and out. It's why we're committed to giving back to the communities where we live and work. If you're one of the super-talented who thrive on change, aren't afraid to take risks and love to make a difference, come grow with us.

About The Role
In this role, you will be accountable for the development process and strategy execution for the assigned product departments. You will also be responsible for executing the overall country and mill/vendor strategy for the department in partnership with the relevant internal teams.

What You'll Do
- Manage the product/vendor development process (P2M) in a timely manner (development sampling, initial costs, negotiation/production & capacity planning) to meet the design aesthetic as well as commercially acceptable quality standards
- Manage relationships with mills/vendors and support vendor allocation & aggregated costing, along with overall capacity planning aligned to the cost targets, to drive competitive advantage
- Partner with mills/vendors to drive innovation initiatives and superior quality while resolving any product and quality issues proactively
- Onboard new mills/vendors and provide training to existing mills/vendors, along with supporting the evaluation process
- Look for opportunities for continuous improvement in product/vendor development, process management and overall sourcing procedures
- Able to communicate difficult concepts in a simple manner
- Participate in projects and assignments of diverse scope

Who You Are
- Experience and knowledge of work specific to global product/vendor development, with an understanding of the design, merchandising, and global sourcing landscape
- Ability to drive results through planning and prioritizing, along with influencing others and providing recommendations & solutions
- Present problem analysis and recommended solutions in a creative and logical manner

Benefits at Gap Inc.
- One of the most competitive paid time off plans in the industry
- Comprehensive health coverage for employees, same-sex partners and their families
- Health and wellness program: free annual health check-ups, fitness center and Employee Assistance Program
- Comprehensive benefits to support the journey of parenthood
- Retirement planning assistance
- See more of the benefits we offer.

Gap Inc. is an equal-opportunity employer and is committed to providing a workplace free from harassment and discrimination. We are committed to recruiting, hiring, training and promoting qualified people of all backgrounds, and make all employment decisions without regard to any protected status. We have received numerous awards for our long-held commitment to equality and will continue to foster a diverse and inclusive environment of belonging. In 2022, we were recognized by Forbes as one of the World's Best Employers and one of the Best Employers for Diversity.
Posted 1 day ago
1.0 - 2.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Skills: IB Curriculum, Lesson Planning, Literary Analysis, Classroom Management, Student Engagement, English, Essay Grading

IB English Faculty (DP, Grades 9 to 12)
Location: Gurgaon (1st month onsite), then Work From Home
Salary: 7-8 LPA
Work Days: 6 days/week
Experience: 1-2 years
Education: Must have BA & MA in English (Honours only)

Not Another English Class. A Sparkl-ing Experience.
Do you love teaching literature that makes teenagers think, not just memorize? Do you dream of taking students from Shakespeare to Arundhati Roy with purpose and passion? If yes, Sparkl is looking for you! We're hiring an IB English Faculty for DP (Grades 9-12): someone who brings strong academic grounding, school-teaching experience, and that extra spark that makes stories come alive.

Who We're Looking For
- You must have taught English Literature in a formal school or tuition center (CBSE, ICSE, Cambridge, or IB preferred).
- You've handled school curriculum (not vocational/entrance prep like SAT, TOEFL, SSC, CAT, etc.).
- You have a Bachelor's + Master's degree in English Honours, no exceptions.
- You know how to explain literary devices, build essay-writing skills, and get teens talking about theme, tone, and character arcs.
- You're confident, clear, and love working with high-schoolers.

What You'll Be Doing
- Teach IB DP English for Grades 9-12 (focus on Literature, writing, comprehension).
- Guide students through critical analysis, essay structuring, and academic writing.
- Bring texts alive, from Shakespeare to modern prose, in ways students will remember.
- Begin with 1 month of in-person training at our Gurgaon office, then shift to remote work.

Why Join Sparkl?
- Work with top mentors in the IB space
- Teach smart, curious, high-performing students
- Young, passionate team and a flexible work environment
- Real impact, real growth

Love Literature and Learning? Apply now and let's Sparkl together.
Posted 1 day ago
5.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Job Description: Data Engineer

Role Overview
The Data Engineer will be responsible for ensuring the availability, quality, and transformation of claims and operational data required for model development and integration. The role demands strong data pipeline design and engineering capabilities to support a scalable forecasting and capacity planning framework.

Key Responsibilities
- Gather and process data from multiple sources including claims systems and operational databases.
- Build and maintain data pipelines to support segmentation and forecasting models.
- Ensure data integrity, transformation, and enrichment to align with modeling requirements (a minimal validation sketch follows this listing).
- Collaborate with the Data Scientist to provide model-ready datasets.
- Support data versioning, storage, and automation for periodic refreshes.
- Assist in deployment/integration of data flows into operational dashboards or planning tools.

Skills & Experience
- 5+ years of experience in data engineering or ETL development.
- Proficiency in SQL, Python, and data pipeline tools (e.g., Airflow, dbt, Spark, etc.).
- Experience with cloud-based data platforms (e.g., Azure, AWS, GCP).
- Understanding of data architecture and governance best practices.
- Prior experience working with insurance or operations-related data is a plus.
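As a flavor of the data-integrity responsibility, a minimal sketch of pre-publication quality gates with pandas; the column names are hypothetical:

```python
import pandas as pd

def validate_claims(df: pd.DataFrame) -> pd.DataFrame:
    """Basic quality gates before a modeling dataset is published."""
    # Required columns must exist.
    required = {"claim_id", "claim_date", "amount"}
    missing = required - set(df.columns)
    if missing:
        raise ValueError(f"missing columns: {missing}")
    # No duplicate claim ids, no negative amounts.
    if df["claim_id"].duplicated().any():
        raise ValueError("duplicate claim_id values found")
    if (df["amount"] < 0).any():
        raise ValueError("negative claim amounts found")
    return df
```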
Posted 1 day ago
1.0 - 2.0 years
0 Lacs
Mumbai Metropolitan Region
Remote
Skills: IB Curriculum, Lesson Planning, Literary Analysis, Classroom Management, Student Engagement, English, Essay Grading

IB English Faculty (DP, Grades 9 to 12)
Location: Gurgaon (1st month onsite), then Work From Home
Salary: 7-8 LPA
Work Days: 6 days/week
Experience: 1-2 years
Education: Must have BA & MA in English (Honours only)

Not Another English Class. A Sparkl-ing Experience.
Do you love teaching literature that makes teenagers think, not just memorize? Do you dream of taking students from Shakespeare to Arundhati Roy with purpose and passion? If yes, Sparkl is looking for you! We're hiring an IB English Faculty for DP (Grades 9-12): someone who brings strong academic grounding, school-teaching experience, and that extra spark that makes stories come alive.

Who We're Looking For
- You must have taught English Literature in a formal school or tuition center (CBSE, ICSE, Cambridge, or IB preferred).
- You've handled school curriculum (not vocational/entrance prep like SAT, TOEFL, SSC, CAT, etc.).
- You have a Bachelor's + Master's degree in English Honours, no exceptions.
- You know how to explain literary devices, build essay-writing skills, and get teens talking about theme, tone, and character arcs.
- You're confident, clear, and love working with high-schoolers.

What You'll Be Doing
- Teach IB DP English for Grades 9-12 (focus on Literature, writing, comprehension).
- Guide students through critical analysis, essay structuring, and academic writing.
- Bring texts alive, from Shakespeare to modern prose, in ways students will remember.
- Begin with 1 month of in-person training at our Gurgaon office, then shift to remote work.

Why Join Sparkl?
- Work with top mentors in the IB space
- Teach smart, curious, high-performing students
- Young, passionate team and a flexible work environment
- Real impact, real growth

Love Literature and Learning? Apply now and let's Sparkl together.
Posted 1 day ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Key Responsibilities
- Set up and maintain monitoring dashboards for ETL jobs using Datadog, including metrics, logs, and alerts (a minimal metric-emission sketch follows this listing).
- Monitor daily ETL workflows and proactively detect and resolve data pipeline failures or performance issues.
- Create Datadog monitors for job status (success/failure), job duration, resource utilization, and error trends.
- Work closely with Data Engineering teams to onboard new pipelines and ensure observability best practices.
- Integrate Datadog with tools.
- Conduct root cause analysis of ETL failures and performance bottlenecks.
- Tune thresholds, baselines, and anomaly detection settings in Datadog to reduce false positives.
- Document incident handling procedures and contribute to improving overall ETL monitoring maturity.
- Participate in on-call rotations or scheduled support windows to manage ETL health.

Required Skills & Qualifications
- 3+ years of experience in ETL/data pipeline monitoring, preferably in a cloud or hybrid environment.
- Proficiency in using Datadog for metrics, logging, alerting, and dashboards.
- Strong understanding of ETL concepts and tools (e.g., Airflow, Informatica, Talend, AWS Glue, or dbt).
- Familiarity with SQL and querying large datasets.
- Experience working with Python, Shell scripting, or Bash for automation and log parsing.
- Understanding of cloud platforms (AWS/GCP/Azure) and services like S3, Redshift, BigQuery, etc.
- Knowledge of CI/CD and DevOps principles related to data infrastructure monitoring.

Preferred Qualifications
- Experience with distributed tracing and APM in Datadog.
- Prior experience monitoring Spark, Kafka, or streaming pipelines.
- Familiarity with ticketing tools (e.g., Jira, ServiceNow) and incident management workflows.
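A minimal sketch of emitting custom ETL job metrics with the `datadog` Python package (the legacy HTTP API client); the keys, metric names, and tags are hypothetical:

```python
import time
from datadog import initialize, api

# Hypothetical credentials; real jobs would read these from a secret store.
initialize(api_key="DD_API_KEY", app_key="DD_APP_KEY")

def report_job(job_name: str, success: bool, duration_s: float) -> None:
    now = time.time()
    # Duration gauge plus a 0/1 status metric that Datadog monitors can alert on.
    api.Metric.send(metric="etl.job.duration_seconds",
                    points=[(now, duration_s)],
                    tags=[f"job:{job_name}"])
    api.Metric.send(metric="etl.job.success",
                    points=[(now, 1 if success else 0)],
                    tags=[f"job:{job_name}"])

report_job("daily_orders_load", success=True, duration_s=412.0)
```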
Posted 1 day ago
8.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Job Description Support the day-to-day operations of these GCP-based data pipelines, ensuring data governance, reliability, and performance optimization. Hands-on experience with GCP data services such as Dataflow, BigQuery, Dataproc, Pub/Sub, and real-time streaming architectures is preferred.The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. This role requires a flexible working schedule, including potential weekend support for critical operations, while maintaining a 40-hour work week.The Data Engineer will support our software developers, database architects, data analysts and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems and products. The right candidate will be excited by the prospect of optimizing or even re-designing our company’s data architecture to support our next generation of products and data initiatives. A key aspect of the MDLZ Google cloud BigQuery platform is handling the complexity of inbound data, which often does not follow a global design (e.g., variations in channel inventory, customer PoS, hierarchies, distribution, and promo plans). You will assist in ensuring the robust operation of pipelines that translate this varied inbound data into the standardized o9 global design. This also includes man '8+ years of overall industry experience and minimum of 8-10 years of experience building and deploying large scale data processing pipelines in a production environment Focus on excellence: Has practical experience of Data-Driven Approaches, Is familiar with the application of Data Security strategy, Is familiar with well know data engineering tools and platforms Technical depth and breadth : Able to build and operate Data Pipelines, Build and operate Data Storage, Has worked on big data architecture within Distributed Systems. Is familiar with Infrastructure definition and automation in this context. Is aware of adjacent technologies to the ones they have worked on. Can speak to the alternative tech choices to that made on their projects. Implementation and automation of Internal data extraction from SAP BW / HANA Implementation and automation of External data extraction from openly available internet data sources via APIs Data cleaning, curation and enrichment by using Alteryx, SQL, Python, R, PySpark, SparkR Preparing consolidated DataMart for use by Data Scientists and managing SQL Databases Exposing data via Alteryx, SQL Database for consumption in Tableau Data documentation maintenance/update Collaboration and workflow using a version control system (e.g., Git Hub) Learning ability : Is self-reflective, Has a hunger to improve, Has a keen interest to drive their own learning. Applies theoretical knowledge to practice Flexible Working Hours: This role requires the flexibility to work non-traditional hours, including providing support during off-hours or weekends for critical data pipeline job runs, deployments, or incident response, while ensuring the total work commitment remains a 40-hour week. Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.Data engineering Concepts: Experience in working with data lake, data warehouse, data mart and Implemented ETL/ELT and SCD concepts. 
ETL or Data Integration Tool: Experience in Talend is highly desirable.
Analytics: Fluent with SQL and PL/SQL and has used analytics tools like BigQuery for data analytics.
Cloud Experience: Experienced in GCP services such as Cloud Functions, Cloud Run, Dataflow, Dataproc and BigQuery.
Data Sources: Experience working with structured data sources like SAP, BW, flat files, RDBMS, etc. and semi-structured data sources like PDF, JSON, XML, etc.
Programming: Understanding of OOP concepts and hands-on experience with Python/Java for programming and scripting.
Data Processing: Experience working with data processing platforms like Dataflow or Databricks.
Orchestration: Experience orchestrating/scheduling data pipelines using tools like Airflow and Alteryx.
Keep our data separated and secure across national boundaries through multiple data centers and Azure regions.
Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
Work with data and analytics experts to strive for greater functionality in our data systems.
Skills And Experience
Rich experience working in the FMCG industry.
Deep knowledge of manipulating, processing, and extracting value from datasets.
5+ years of experience in data engineering, business intelligence, data science, or a related field.
Proficiency with programming languages: SQL, Python, R, Spark, PySpark, SparkR for data processing.
Strong project management skills and the ability to plan and prioritize work in a fast-paced environment.
Experience with: MS Azure Data Factory, MS Azure Data Lake Store, SQL Database, SAP BW / ECC / HANA, Alteryx, Tableau.
Ability to think creatively; highly driven and self-motivated.
Knowledge of SAP BW for HANA (Extractors, Transformations, Modeling aDSOs, Queries, OpenHubs).
No relocation support available.
Business Unit Summary
Headquartered in Singapore, Mondelēz International’s Asia, Middle East and Africa (AMEA) region comprises six business units, has more than 21,000 employees and operates in more than 27 countries, including Australia, China, Indonesia, Ghana, India, Japan, Malaysia, New Zealand, Nigeria, Philippines, Saudi Arabia, South Africa, Thailand, United Arab Emirates and Vietnam. Seventy-six nationalities work across a network of more than 35 manufacturing plants, three global research and development technical centers and offices stretching from Auckland, New Zealand to Casablanca, Morocco. Mondelēz International in the AMEA region is the proud maker of global and local iconic brands such as Oreo and belVita biscuits, Kinh Do mooncakes, Cadbury, Cadbury Dairy Milk and Milka chocolate, Halls candy, Stride gum, Tang powdered beverage and Philadelphia cheese. We are also proud to be named a Top Employer in many of our markets.
Mondelēz International is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, gender, sexual orientation or preference, gender identity, national origin, disability status, protected veteran status, or any other characteristic protected by law.
Job Type: Regular
Analytics & Modelling
Analytics & Data Science
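As a purely illustrative aside on the ETL/ELT and SCD concepts this role calls for, the sketch below shows one common way to apply a Type 2 slowly changing dimension on BigQuery from Python. The project, dataset, tables, and columns (dim_customer, stg_customer, segment) are hypothetical assumptions, not details from the posting.

```python
# Hedged SCD Type 2 sketch on BigQuery; table and column names are hypothetical.
# Requires: pip install google-cloud-bigquery (and default GCP credentials).
from google.cloud import bigquery

client = bigquery.Client()

# Step 1: close out current dimension rows whose attributes changed in staging,
# and insert rows for brand-new customers.
merge_sql = """
MERGE `my_project.my_dataset.dim_customer` AS dim
USING `my_project.my_dataset.stg_customer` AS stg
ON dim.customer_id = stg.customer_id AND dim.is_current = TRUE
WHEN MATCHED AND dim.segment != stg.segment THEN
  UPDATE SET dim.is_current = FALSE, dim.valid_to = CURRENT_DATE()
WHEN NOT MATCHED THEN
  INSERT (customer_id, segment, valid_from, valid_to, is_current)
  VALUES (stg.customer_id, stg.segment, CURRENT_DATE(), DATE '9999-12-31', TRUE)
"""
client.query(merge_sql).result()  # blocks until the merge job finishes

# Step 2: insert the new current version for customers closed out in step 1.
insert_sql = """
INSERT INTO `my_project.my_dataset.dim_customer`
  (customer_id, segment, valid_from, valid_to, is_current)
SELECT stg.customer_id, stg.segment, CURRENT_DATE(), DATE '9999-12-31', TRUE
FROM `my_project.my_dataset.stg_customer` AS stg
JOIN `my_project.my_dataset.dim_customer` AS dim
  ON dim.customer_id = stg.customer_id
 AND dim.is_current = FALSE
 AND dim.valid_to = CURRENT_DATE()
"""
client.query(insert_sql).result()
```

A production pipeline would add change detection across all tracked columns and NULL-safe comparisons; the two-statement pattern above is just the core idea.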
Posted 1 day ago
6.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Job Description
Support the day-to-day operations of the company's GCP-based data pipelines, ensuring data governance, reliability, and performance optimization. Hands-on experience with GCP data services such as Dataflow, BigQuery, Dataproc, Pub/Sub, and real-time streaming architectures is preferred. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up.
The Data Engineer will support our software developers, database architects, data analysts and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems and products. The right candidate will be excited by the prospect of optimizing or even re-designing our company’s data architecture to support our next generation of products and data initiatives. This role requires a flexible working schedule, including potential weekend support for critical operations, while maintaining a 40-hour work week.
A key aspect of the MDLZ DataHub Google BigQuery platform is handling the complexity of inbound data, which often does not follow a global design (e.g., variations in channel inventory, customer PoS, hierarchies, distribution, and promo plans). You will assist in ensuring the robust operation of pipelines that translate this varied inbound data into the standardized o9 global design. This also includes managing pipelines for different data drivers (> 6 months vs. 0-6 months), ensuring consistent input to o9.
6+ years of overall industry experience and a minimum of 6-8 years of experience building and deploying large-scale data processing pipelines in a production environment
Focus on excellence: Has practical experience of data-driven approaches; is familiar with the application of data security strategy; is familiar with well-known data engineering tools and platforms
Technical depth and breadth: Able to build and operate data pipelines and data storage; has worked on big data architecture within distributed systems; is familiar with infrastructure definition and automation in this context; is aware of technologies adjacent to the ones they have worked on and can speak to the alternative tech choices to those made on their projects
Implementation and automation of internal data extraction from SAP BW / HANA
Implementation and automation of external data extraction from openly available internet data sources via APIs
Data cleaning, curation and enrichment using Alteryx, SQL, Python, R, PySpark, SparkR
Preparing consolidated DataMarts for use by Data Scientists and managing SQL databases
Exposing data via Alteryx and SQL Database for consumption in Tableau
Data documentation maintenance/updates
Collaboration and workflow using a version control system (e.g., GitHub)
Learning ability: Is self-reflective; has a hunger to improve; has a keen interest in driving their own learning; applies theoretical knowledge to practice
Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
Data Engineering Concepts: Experience in working with data lakes, data warehouses, and data marts; has implemented ETL/ELT and SCD concepts.
ETL or Data Integration Tool: Experience in Talend is highly desirable.
Analytics: Fluent with SQL and PL/SQL and has used analytics tools like BigQuery for data analytics.
Cloud Experience: Experienced in GCP services such as Cloud Functions, Cloud Run, Dataflow, Dataproc and BigQuery.
Data Sources: Experience working with structured data sources like SAP, BW, flat files, RDBMS, etc. and semi-structured data sources like PDF, JSON, XML, etc.
Flexible Working Hours: This role requires the flexibility to work non-traditional hours, including providing support during off-hours or weekends for critical data pipeline job runs, deployments, or incident response, while ensuring the total work commitment remains a 40-hour week.
Data Processing: Experience working with data processing platforms like Dataflow or Databricks.
Orchestration: Experience orchestrating/scheduling data pipelines using tools like Airflow and Alteryx.
Keep our data separated and secure across national boundaries through multiple data centers and Azure regions.
Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
Work with data and analytics experts to strive for greater functionality in our data systems.
Skills And Experience
Deep knowledge of manipulating, processing, and extracting value from datasets.
At least 2 years of FMCG/CPG industry experience.
5+ years of experience in data engineering, business intelligence, data science, or a related field.
Proficiency with programming languages: SQL, Python, R, Spark, PySpark, SparkR for data processing.
Strong project management skills and the ability to plan and prioritize work in a fast-paced environment.
Experience with: MS Azure Data Factory, MS Azure Data Lake Store, SQL Database, SAP BW / ECC / HANA, Alteryx, Tableau.
Ability to think creatively; highly driven and self-motivated.
Knowledge of SAP BW for HANA (Extractors, Transformations, Modeling aDSOs, Queries, OpenHubs).
No relocation support available.
Business Unit Summary
Headquartered in Singapore, Mondelēz International’s Asia, Middle East and Africa (AMEA) region comprises six business units, has more than 21,000 employees and operates in more than 27 countries, including Australia, China, Indonesia, Ghana, India, Japan, Malaysia, New Zealand, Nigeria, Philippines, Saudi Arabia, South Africa, Thailand, United Arab Emirates and Vietnam. Seventy-six nationalities work across a network of more than 35 manufacturing plants, three global research and development technical centers and offices stretching from Auckland, New Zealand to Casablanca, Morocco. Mondelēz International in the AMEA region is the proud maker of global and local iconic brands such as Oreo and belVita biscuits, Kinh Do mooncakes, Cadbury, Cadbury Dairy Milk and Milka chocolate, Halls candy, Stride gum, Tang powdered beverage and Philadelphia cheese. We are also proud to be named a Top Employer in many of our markets.
Mondelēz International is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, gender, sexual orientation or preference, gender identity, national origin, disability status, protected veteran status, or any other characteristic protected by law.
Job Type: Regular
Analytics & Modelling
Analytics & Data Science
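Since this posting highlights orchestrating pipelines with tools like Airflow, here is a minimal, hedged Airflow sketch of a daily extract-then-load run. The DAG id, schedule, and task callables are invented placeholders, not anything specified by the role.

```python
# Minimal Airflow 2.x DAG sketch; ids, schedule, and callables are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_from_sap():
    # Placeholder: pull deltas from SAP BW / HANA into a staging area.
    print("extracting from SAP BW")

def load_to_bigquery():
    # Placeholder: load curated staging data into BigQuery.
    print("loading into BigQuery")

with DAG(
    dag_id="daily_demand_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",  # 02:00 daily (Airflow 2.4+ "schedule" argument)
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_sap", python_callable=extract_from_sap)
    load = PythonOperator(task_id="load_bq", python_callable=load_to_bigquery)

    extract >> load  # load runs only after extract succeeds
```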
Posted 1 day ago
5.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Job Description
Looking for a savvy Data Engineer to join a team of modeling/architecture experts. The hire will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up.
The Data Engineer will support our software developers, database architects, data analysts and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems and products. The right candidate will be excited by the prospect of optimizing or even re-designing our company’s data architecture to support our next generation of products and data initiatives. This role requires a flexible working schedule, including potential weekend support for critical operations, while maintaining a 40-hour work week.
In this role, you will assist in maintaining the MDLZ DataHub Google BigQuery data pipelines and corresponding platforms (on-prem and cloud), working closely with global teams on DataOps initiatives. The D4GV platform spans three key GCP instances (NALA, MEU, and AMEA), supporting the global rollout of o9 across all Mondelēz BUs over the next three years.
5+ years of overall industry experience and a minimum of 2-4 years of experience building and deploying large-scale data processing pipelines in a production environment
Focus on excellence: Has practical experience of data-driven approaches; is familiar with the application of data security strategy; is familiar with well-known data engineering tools and platforms
Technical depth and breadth: Able to build and operate data pipelines and data storage; has worked on big data architecture within distributed systems; is familiar with infrastructure definition and automation in this context; is aware of technologies adjacent to the ones they have worked on and can speak to the alternative tech choices to those made on their projects
Implementation and automation of internal data extraction from SAP BW / HANA
Implementation and automation of external data extraction from openly available internet data sources via APIs
Data cleaning, curation and enrichment using Alteryx, SQL, Python, R, PySpark, SparkR
Data ingestion and management in Hadoop / Hive
Preparing consolidated DataMarts for use by Data Scientists and managing SQL databases
Exposing data via Alteryx and SQL Database for consumption in Tableau
Data documentation maintenance/updates
Collaboration and workflow using a version control system (e.g., GitHub)
Learning ability: Is self-reflective; has a hunger to improve; has a keen interest in driving their own learning; applies theoretical knowledge to practice
Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
Flexible Working Hours: This role requires the flexibility to work non-traditional hours, including providing support during off-hours or weekends for critical data pipeline job runs, deployments, or incident response, while ensuring the total work commitment remains a 40-hour week.
Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
Work with data and analytics experts to strive for greater functionality in our data systems.
Skills And Experience
Deep knowledge of manipulating, processing, and extracting value from datasets.
Support the day-to-day operations of the GCP-based data pipelines, ensuring data governance, reliability, and performance optimization. Hands-on experience with GCP data services such as Dataflow, BigQuery, Dataproc, Pub/Sub, and real-time streaming architectures is preferred.
5+ years of experience in data engineering, business intelligence, data science, or a related field.
Proficiency with programming languages: SQL, Python, R, Spark, PySpark, SparkR for data processing.
Strong project management skills and the ability to plan and prioritize work in a fast-paced environment.
Experience with: MS Azure Data Factory, MS Azure Data Lake Store, SQL Database, SAP BW / ECC / HANA, Alteryx, Tableau.
Ability to think creatively; highly driven and self-motivated.
Knowledge of SAP BW for HANA (Extractors, Transformations, Modeling aDSOs, Queries, OpenHubs).
No relocation support available.
Business Unit Summary
Headquartered in Singapore, Mondelēz International’s Asia, Middle East and Africa (AMEA) region comprises six business units, has more than 21,000 employees and operates in more than 27 countries, including Australia, China, Indonesia, Ghana, India, Japan, Malaysia, New Zealand, Nigeria, Philippines, Saudi Arabia, South Africa, Thailand, United Arab Emirates and Vietnam. Seventy-six nationalities work across a network of more than 35 manufacturing plants, three global research and development technical centers and offices stretching from Auckland, New Zealand to Casablanca, Morocco. Mondelēz International in the AMEA region is the proud maker of global and local iconic brands such as Oreo and belVita biscuits, Kinh Do mooncakes, Cadbury, Cadbury Dairy Milk and Milka chocolate, Halls candy, Stride gum, Tang powdered beverage and Philadelphia cheese. We are also proud to be named a Top Employer in many of our markets.
Mondelēz International is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, gender, sexual orientation or preference, gender identity, national origin, disability status, protected veteran status, or any other characteristic protected by law.
Job Type: Regular
Analytics & Modelling
Analytics & Data Science
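To make the "data cleaning, curation and enrichment" duty concrete, the hedged PySpark sketch below shows a typical inbound-feed cleanup. The bucket paths and column names (store_id, sku, units, promo_flag) are invented for illustration, not taken from the posting.

```python
# Illustrative PySpark cleanup of an inbound PoS feed; paths and columns are invented.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("pos_cleaning").getOrCreate()

raw = spark.read.option("header", True).csv("gs://example-bucket/inbound/pos/*.csv")

cleaned = (
    raw
    .dropDuplicates(["store_id", "sku", "sales_date"])           # de-dupe the feed
    .withColumn("sales_date", F.to_date("sales_date", "yyyy-MM-dd"))
    .withColumn("units", F.col("units").cast("int"))
    .filter(F.col("units") >= 0)                                  # drop bad quantities
    .fillna({"promo_flag": "N"})                                  # default promo flag
)

cleaned.write.mode("overwrite").parquet("gs://example-bucket/curated/pos/")
```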
Posted 1 day ago
1.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
At TVL Media, we specialize in driving innovative digital strategies and creative storytelling that captivates, converts, and builds lasting brand equity. We’re on a mission to elevate brands through powerful content and data-backed digital marketing strategies. If you're a passionate writer who can craft compelling content across channels and formats, this is your chance to grow with a fast-paced, creative team.
We are looking for a Content Writer who thrives in the digital world and knows how to turn ideas into impactful content across platforms. The ideal candidate will have a solid grasp of content strategy, digital storytelling, and platform-optimized writing, with experience producing blog posts, LinkedIn content, carousels, ebooks, and more.
Key Responsibilities
Content Creation & Strategy
Write engaging blog posts tailored for SEO and reader value.
Craft LinkedIn posts and carousels that spark engagement and build authority.
Research, outline, and develop long-form content such as ebooks and whitepapers.
Collaborate with designers to shape content for visual platforms (social media, carousels, infographics).
Digital Marketing Alignment
Work closely with the marketing team to support campaigns with aligned messaging.
Develop persuasive copy for landing pages, email marketing, and paid ads.
Stay updated with digital marketing trends, tools, and tone.
Content Optimization
Use SEO best practices, tools (like Surfer SEO, Clearscope, or SEMrush), and analytics to optimize performance.
Conduct keyword research and implement strategies to boost search visibility.
Ensure consistency in brand voice and adherence to content guidelines.
Cross-functional Collaboration
Coordinate with social media managers, designers, and campaign leads.
Attend brainstorming sessions and contribute ideas for new formats and series.
Qualifications
Minimum 1 year of proven experience in content writing, preferably in a digital marketing or agency setup.
Excellent command of English (written and verbal).
Portfolio demonstrating versatility across blogs, ebooks, LinkedIn posts, carousels, and more.
Working knowledge of content management systems (e.g., WordPress), SEO tools, and basic analytics.
Ability to adapt tone and style based on target audiences and platforms.
About Company: TVL Media is a values-driven digital marketing agency dedicated to empowering our customers. Over the years, we have worked with Fortune 100s and brand-new startups. We help ambitious businesses like yours generate more profits by building awareness, driving web traffic, connecting with customers, and growing overall sales.
Posted 1 day ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Big Data, Oracle, PySpark
Experience in SQL and understanding of ETL best practices
Good hands-on experience in ETL/Big Data development
Extensive hands-on experience in Scala
Experience in Spark/YARN, troubleshooting Spark, Linux, Python
Setting up a Hadoop cluster; backup, recovery, and maintenance
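For orientation only, one way the "Big Data, Oracle, PySpark" combination above shows up in practice is reading an Oracle table into Spark over JDBC. The host, credentials, and table below are placeholders, and the Oracle JDBC driver jar must be supplied separately (e.g., via spark-submit --jars).

```python
# Hedged sketch: ingest an Oracle table into Spark over JDBC; connection
# details and table names are placeholders, not a known environment.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("oracle_ingest").getOrCreate()

orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:oracle:thin:@//db-host:1521/ORCLPDB1")
    .option("dbtable", "sales.orders")
    .option("user", "etl_user")
    .option("password", "change_me")  # use a secrets manager in real pipelines
    .option("driver", "oracle.jdbc.OracleDriver")
    .load()
)

orders.write.mode("overwrite").parquet("hdfs:///curated/orders/")  # e.g., on YARN/HDFS
```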
Posted 1 day ago
The demand for professionals with expertise in Spark is on the rise in India. Spark, an open-source distributed computing system, is widely used for big data processing and analytics. Job seekers in India looking to explore opportunities in Spark can find a variety of roles in different industries.
India's major tech hubs have a high concentration of tech companies and startups actively hiring for Spark roles.
The average salary range for Spark professionals in India varies based on experience level:
- Entry-level: INR 4-6 lakhs per annum
- Mid-level: INR 8-12 lakhs per annum
- Experienced: INR 15-25 lakhs per annum
Salaries may vary based on the company, location, and specific job requirements.
In the field of Spark, a typical career progression may look like:
- Junior Developer
- Senior Developer
- Tech Lead
- Architect
Advancing in this career path often requires gaining experience, acquiring additional skills, and taking on more responsibilities.
Apart from proficiency in Spark, professionals in this field are often expected to have knowledge or experience in:
- Hadoop
- Java or Scala programming
- Data processing and analytics
- SQL databases
Having a combination of these skills can make a candidate more competitive in the job market.
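To ground these skills, here is a minimal, self-contained PySpark example of the everyday DataFrame work most Spark roles involve; the sales data is invented purely for illustration.

```python
# Minimal PySpark aggregation; the sales data is invented for illustration.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("spark_demo").getOrCreate()

sales = spark.createDataFrame(
    [("Delhi", 120), ("Mumbai", 250), ("Delhi", 80), ("Pune", 60)],
    ["city", "amount"],
)

# Total sales per city, highest first: the bread-and-butter Spark task.
totals = (
    sales.groupBy("city")
    .agg(F.sum("amount").alias("total_amount"))
    .orderBy(F.desc("total_amount"))
)
totals.show()
```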
As you explore opportunities in Spark jobs in India, remember to prepare thoroughly for interviews and showcase your expertise confidently. With the right skills and knowledge, you can excel in this growing field and advance your career in the tech industry. Good luck with your job search!