0.0 - 2.0 years
0 Lacs
Pune, Maharashtra, India
On-site
The Applications Development Programmer Analyst is an intermediate level position responsible for participation in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. The overall objective of this role is to contribute to applications systems analysis and programming activities.

Responsibilities:
- Utilize knowledge of applications development procedures and concepts, and basic knowledge of other technical areas, to identify and define necessary system enhancements
- Identify and analyze issues, make recommendations, and implement solutions
- Utilize knowledge of business processes, system processes, and industry standards to solve complex issues
- Analyze information and make evaluative judgements to recommend solutions and improvements
- Conduct testing and debugging, utilize script tools, and write basic code for design specifications
- Assess applicability of similar experiences and evaluate options under circumstances not covered by procedures
- Develop working knowledge of Citi's information systems, procedures, standards, client server application development, network operations, database administration, systems administration, data center operations, and PC-based applications
- Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency.

Additional Job Description

We are looking for a Big Data Engineer to work on collecting, storing, processing, and analyzing huge sets of data. The primary focus will be on choosing optimal solutions for these purposes, then maintaining, implementing, and monitoring them. You will also be responsible for integrating them with the architecture used across the company.

Responsibilities:
- Selecting and integrating any Big Data tools and frameworks required to provide requested capabilities
- Implementing data wrangling, scraping, and cleaning using Java or Python
- Strong experience with data structures; extensive work on API integration
- Monitoring performance and advising on any necessary infrastructure changes
- Defining data retention policies

Skills And Qualifications:
- Proficient understanding of distributed computing principles
- Proficient in Java or Python, with some machine learning exposure
- Proficiency with Hadoop v2, MapReduce, HDFS, PySpark, Spark
- Experience with building stream-processing systems, using solutions such as Storm or Spark Streaming (see the sketch below)
- Good knowledge of Big Data querying tools, such as Pig, Hive, and Impala
- Experience with Spark
- Experience with integration of data from multiple data sources
- Experience with NoSQL databases, such as HBase, Cassandra, MongoDB
- Knowledge of various ETL techniques and frameworks, such as Flume
- Experience with various messaging systems, such as Kafka or RabbitMQ
- Experience with Big Data ML toolkits, such as Mahout, SparkML, or H2O
- Good understanding of Lambda Architecture, along with its advantages and drawbacks
- Experience with Cloudera/MapR/Hortonworks

Qualifications:
- 0-2 years of relevant experience
- Experience in programming/debugging used in business applications
- Working knowledge of industry practice and standards
- Comprehensive knowledge of the specific business area for application development
- Working knowledge of program languages
- Consistently demonstrates clear and concise written and verbal communication

Education: Bachelor's degree/University degree or equivalent experience

This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.

Job Family Group: Technology
Job Family: Applications Development
Time Type: Full time
Most Relevant Skills: Please see the requirements listed above.
Other Relevant Skills: For complementary skills, please see above and/or contact the recruiter.

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.
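Since the posting centers on stream-processing systems (Kafka, Spark Streaming), here is a minimal, illustrative PySpark Structured Streaming sketch of that kind of pipeline. The broker address and topic name are placeholders, not details from the posting, and the Kafka source assumes the spark-sql-kafka connector package is available on the Spark classpath.

```python
# Minimal sketch: consume events from a Kafka topic and keep a running
# count per event type. Requires the spark-sql-kafka connector package.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = (SparkSession.builder
         .appName("event-count-stream")
         .getOrCreate())

# Read a Kafka topic as an unbounded streaming DataFrame.
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")  # placeholder broker
          .option("subscribe", "events")                        # placeholder topic
          .load())

# Kafka values arrive as bytes; cast to string before aggregating.
counts = (events
          .select(col("value").cast("string").alias("event_type"))
          .groupBy("event_type")
          .count())

# 'complete' output mode re-emits the full aggregate table on every trigger.
query = (counts.writeStream
         .outputMode("complete")
         .format("console")
         .start())
query.awaitTermination()
```

A production version would swap the console sink for a durable one (e.g., a data lake table) and add checkpointing for fault tolerance.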
Posted 3 days ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Position Overview

We are seeking a highly skilled and motivated Data Engineer to join our dynamic team. The ideal candidate will have extensive experience with AWS Glue, Apache Airflow, Kafka, SQL, Python, and DataOps tools and technologies. Knowledge of SAP HANA and Snowflake is a plus. This role is critical for designing, developing, and maintaining our clients' data pipeline architecture, ensuring the efficient and reliable flow of data across the organization.

Key Responsibilities

Design, Develop, and Maintain Data Pipelines:
- Develop robust and scalable data pipelines using AWS Glue, Apache Airflow, and other relevant technologies (a minimal sketch follows this posting).
- Integrate various data sources, including SAP HANA, Kafka, and SQL databases, to ensure seamless data flow and processing.
- Optimize data pipelines for performance and reliability.

Data Management and Transformation:
- Design and implement data transformation processes to clean, enrich, and structure data for analytical purposes.
- Utilize SQL and Python for data extraction, transformation, and loading (ETL) tasks.
- Ensure data quality and integrity through rigorous testing and validation processes.

Collaboration and Communication:
- Work closely with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions that meet their needs.
- Collaborate with cross-functional teams to implement DataOps practices and improve data life cycle management.

Monitoring and Optimization:
- Monitor data pipeline performance and implement improvements to enhance efficiency and reduce latency.
- Troubleshoot and resolve data-related issues, ensuring minimal disruption to data workflows.
- Implement and manage monitoring and alerting systems to proactively identify and address potential issues.

Documentation and Best Practices:
- Maintain comprehensive documentation of data pipelines, transformations, and processes.
- Adhere to best practices in data engineering, including code versioning, testing, and deployment procedures.
- Stay up-to-date with the latest industry trends and technologies in data engineering and DataOps.

Required Skills and Qualifications

Technical Expertise:
- Extensive experience with AWS Glue for data integration and transformation.
- Proficient in Apache Airflow for workflow orchestration.
- Strong knowledge of Kafka for real-time data streaming and processing.
- Advanced SQL skills for querying and managing relational databases.
- Proficiency in Python for scripting and automation tasks.
- Experience with SAP HANA for data storage and management.
- Familiarity with DataOps tools and methodologies for continuous integration and delivery in data engineering.

Preferred Skills:
- Knowledge of Snowflake for cloud-based data warehousing solutions.
- Experience with other AWS data services such as Redshift, S3, and Athena.
- Familiarity with big data technologies such as Hadoop, Spark, and Hive.

Soft Skills:
- Strong analytical and problem-solving skills.
- Excellent communication and collaboration abilities.
- Detail-oriented with a commitment to data quality and accuracy.
- Ability to work independently and manage multiple projects simultaneously.
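As a concrete illustration of the Glue-plus-Airflow orchestration the role describes, here is a minimal Airflow DAG sketch. The DAG ID, Glue job name, and region are hypothetical, and it assumes the apache-airflow-providers-amazon package is installed; it is a sketch of the pattern, not this employer's actual pipeline.

```python
# Minimal sketch: an Airflow DAG that triggers an AWS Glue job, then runs
# a placeholder data-quality gate. Names are illustrative only.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator


def validate_load():
    # Placeholder check; a real gate might compare warehouse row counts
    # or run a data-quality suite before downstream consumers see the data.
    print("running post-load validation")


with DAG(
    dag_id="daily_glue_etl",           # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                 # use schedule_interval on older Airflow 2.x
    catchup=False,
) as dag:
    run_glue = GlueJobOperator(
        task_id="run_glue_job",
        job_name="orders-etl",         # hypothetical Glue job name
        region_name="us-east-1",       # hypothetical region
    )
    validate = PythonOperator(
        task_id="validate_load",
        python_callable=validate_load,
    )

    run_glue >> validate               # validation only runs after the Glue job succeeds
```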
Posted 3 days ago
7.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Designation: Solution Architect
Office Location: Gurugram

Position Description

As a Solution Architect, you will be responsible for leading the development and delivery of the platforms. This includes overseeing the entire product lifecycle from solution definition through execution and launch, building the right team, and collaborating closely with business and product teams.

Primary Responsibilities
- Design end-to-end solutions that meet business requirements and align with the enterprise architecture.
- Define the architecture blueprint, including integration, data flow, application, and infrastructure components.
- Evaluate and select appropriate technology stacks, tools, and frameworks.
- Ensure proposed solutions are scalable, maintainable, and secure.
- Collaborate with business and technical stakeholders to gather requirements and clarify objectives.
- Act as a bridge between business problems and technology solutions.
- Guide development teams during the execution phase to ensure solutions are implemented according to design.
- Identify and mitigate architectural risks and issues.
- Ensure compliance with architecture principles, standards, policies, and best practices.
- Document architectures, designs, and implementation decisions clearly and thoroughly.
- Identify opportunities for innovation and efficiency within existing and upcoming solutions.
- Conduct regular performance and code reviews, and provide feedback to development team members to support their professional development.
- Lead proof-of-concept initiatives to evaluate new technologies.

Additional Responsibilities
- Facilitate daily stand-up meetings, sprint planning, sprint reviews, and retrospective meetings.
- Work closely with the product owner to prioritize the product backlog and ensure that user stories are well-defined and ready for development.
- Identify and address issues or conflicts that may impact project delivery or team morale.
- Experience with Agile project management tools such as Jira and Trello.

Required Skills
- Bachelor's degree in Computer Science, Engineering, or related field.
- 7+ years of experience in software engineering, with at least 3 years in a solution architecture or technical leadership role.
- Proficiency with the AWS or GCP cloud platform.
- Strong implementation knowledge of the JS tech stack: NodeJS and ReactJS.
- Experience with database engines (MySQL and PostgreSQL), with proven knowledge of database migrations and high-throughput, low-latency use cases.
- Experience with key-value stores like Redis, MongoDB, and similar.
- Preferred knowledge of distributed technologies (Kafka, Spark, Trino, or similar) with proven experience in event-driven data pipelines.
- Proven experience with setting up big data pipelines to handle high-volume transactions and transformations.
- Experience with BI tools: Looker, PowerBI, Metabase, or similar.
- Experience with data warehouses like BigQuery, Redshift, or similar.
- Familiarity with CI/CD pipelines, containerization (Docker/Kubernetes), and IaC.

Good to Have
- Certifications such as AWS Certified Solutions Architect, Azure Solutions Architect Expert, TOGAF, etc.
- Experience setting up analytical pipelines using BI tools (Looker, PowerBI, Metabase, or similar) and low-level Python tools like Pandas, NumPy, PyArrow.
- Experience with data transformation tools like DBT, SQLMesh, or similar.
- Experience with data orchestration tools like Apache Airflow, Kestra, or similar.

Work Environment Details

About Affle: Affle is a global technology company with a proprietary consumer intelligence platform that delivers consumer engagement, acquisitions, and transactions through relevant mobile advertising. The platform aims to enhance returns on marketing investment through contextual mobile ads and also by reducing digital ad fraud. While Affle's Consumer platform is used by online and offline companies for measurable mobile advertising, its Enterprise platform helps offline companies to go online through platform-based app development, enablement of O2O commerce, and through its customer data platform. Affle India successfully completed its IPO in India on 08 Aug 2019 and now trades on the stock exchanges (BSE: 542752 & NSE: AFFLE). Affle Holdings is the Singapore-based promoter of Affle India, and its investors include Microsoft and Bennett Coleman & Company (BCCL), among others. For more details: www.affle.com

About BU Ultra: Access deals, coupons, and walled-garden-based user acquisition on a single platform to offer bottom-funnel optimization across multiple inventory sources. For more details, please visit: https://www.ultraplatform.io/
Posted 3 days ago
1.0 - 2.0 years
0 Lacs
Gurgaon, Haryana, India
Remote
Skills: IB Curriculum, Lesson Planning, Literary Analysis, Classroom Management, Student Engagement, English, Essay Grading, IB English Faculty (DP Grades 9 to 12)

Location: Gurgaon (1st month onsite), then Work From Home
Salary: 7-8 LPA
Work Days: 6 days/week
Experience: 1-2 years
Education: Must have BA & MA in English (Honours only)

Not Another English Class. A Sparkl-ing Experience.

Do you love teaching literature that makes teenagers think, not just memorize? Do you dream of taking students from Shakespeare to Arundhati Roy with purpose and passion? If yes, Sparkl is looking for you! We're hiring an IB English Faculty for DP (Grades 9-12): someone who brings strong academic grounding, school-teaching experience, and that extra spark that makes stories come alive.

Who We're Looking For
- You must have taught English Literature in a formal school or tuition center (CBSE, ICSE, Cambridge, or IB preferred).
- You've handled school curriculum (not vocational/entrance prep like SAT, TOEFL, SSC, CAT, etc.).
- You have a Bachelor's + Master's degree in English (Honours); no exceptions.
- You know how to explain literary devices, build essay-writing skills, and get teens talking about theme, tone, and character arcs.
- You're confident, clear, and love working with high-schoolers.

What You'll Be Doing
- Teach IB DP English for Grades 9-12 (focus on Literature, writing, comprehension).
- Guide students through critical analysis, essay structuring, and academic writing.
- Bring texts alive, from Shakespeare to modern prose, in ways students will remember.
- Begin with 1 month of in-person training at our Gurgaon office, then shift to remote work.

Why Join Sparkl?
- Work with top mentors in the IB space
- Teach smart, curious, high-performing students
- Young, passionate team and a flexible work environment
- Real impact, real growth

Love Literature and Learning? Apply now and let's Sparkl together.
Posted 3 days ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Company: Hiredahead.com
Website: Visit Website
Business Type: Enterprise
Company Type: Product & Service
Business Model: B2B
Funding Stage: Pre-seed
Industry: Software
Salary Range: ₹20-32 Lacs PA

Job Description

Key Responsibilities:

Backend Development:
- Design, develop, and maintain scalable backend systems and APIs using Go or C#.
- Implement and optimize cloud-based BI products and solutions.
- Collaborate with frontend teams to ensure seamless integration of backend functionalities.

API Development:
- Develop robust RESTful and GraphQL APIs using Go or C#.
- Utilize frameworks like Gin for Go and maintain industry standards in API security and performance.

Cloud and Infrastructure:
- Deploy and maintain applications in cloud environments such as AWS, Azure, or GCP.
- Implement serverless computing, containerization, and database optimization.
- Optimize backend infrastructure for performance and scalability in the cloud.

Data Management:
- Work on data-driven backend systems with efficient data modeling and storage solutions.
- Utilize relational and columnar databases (e.g., DuckDB, Apache Pinot, Snowflake, BigQuery).
- Handle large-scale data processing using distributed computing frameworks like Apache Spark.

Development Practices:
- Participate in code reviews and collaborate with QA teams to identify and resolve defects.
- Conduct testing and debugging to ensure application reliability and performance.
- Stay updated with emerging technologies and industry trends.

Collaboration and Training:
- Work closely with cross-functional teams, including senior developers and project stakeholders.
- Contribute to task estimations and provide technical expertise to the team.
- Mentor and support team members while actively enhancing your technical skills.

Skills & Expertise

Programming Languages & Frameworks:
- Strong experience in Go programming with expertise in microservices and gRPC.
- Proficiency in Go or C#, including .NET and .NET 8.
- Familiarity with front-end technologies (HTML, CSS, JavaScript) is a plus.

Cloud Platforms:
- Hands-on experience with AWS (EC2, ECS, EKS, Lambda), Azure, or GCP.
- Familiarity with cloud services like S3, Data Lake, and Cloud Storage.

Databases:
- Proficient in SQL, relational databases, and optimization techniques.
- Experience with modern data warehouses and columnar databases.

Development Tools:
- Knowledge of CI/CD systems (e.g., Jenkins, Bitbucket Pipelines).
- Expertise in containerization technologies (Docker, Kubernetes).
- Exposure to version control systems (e.g., Git).

Soft Skills:
- Strong problem-solving abilities and effective communication skills.
- Passion for technology and a commitment to continuous learning.

Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 5+ years of experience in backend development using Go or C#.
- Familiarity with best practices in software development and the SDLC.
Posted 3 days ago
3.0 - 8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Designation: Data Scientist
Experience: 3-8 Years
Location: Chennai

Job Description

We are seeking a skilled Data Scientist to join our team. In this role, you will leverage advanced analytics and machine learning techniques to extract actionable insights from complex datasets, driving data-informed decision-making across the organization.

Key Responsibilities
- Data Collection & Preprocessing: Gather, clean, and preprocess structured and unstructured data from various sources to ensure high-quality datasets for analysis.
- Model Development: Design, develop, and deploy predictive models and machine learning algorithms to address business challenges (a minimal sketch follows this posting).
- Data Analysis: Conduct exploratory data analysis to identify trends, patterns, and anomalies, providing insights to inform strategic decisions.
- Visualization & Reporting: Create compelling data visualizations and reports to effectively communicate findings to both technical and non-technical stakeholders.
- Collaboration: Work closely with cross-functional teams, including engineering and product development, to integrate data-driven solutions into business processes.

Required Skills & Qualifications
- Technical Proficiency: Strong programming skills in Python or R; experience with SQL for data querying.
- Machine Learning Expertise: Solid understanding of machine learning algorithms and frameworks; experience with libraries such as scikit-learn, TensorFlow, or PyTorch.
- Statistical Analysis: Strong foundation in statistics and data analysis techniques.
- Data Visualization: Proficiency in tools like Tableau, Power BI, or Matplotlib for creating insightful visualizations.

Preferred Qualifications
- Big Data Technologies: Familiarity with big data platforms like Hadoop or Spark.
- Cloud Platforms: Experience with cloud services such as AWS, Azure, or Google Cloud.
- Business Acumen: Ability to translate complex data findings into actionable business strategies.
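For the model-development loop this posting describes, here is a minimal scikit-learn sketch: synthetic data stands in for real datasets, and the pipeline/estimator choices are illustrative rather than prescribed by the role.

```python
# Minimal sketch: train and evaluate a classifier with scikit-learn,
# using synthetic data in place of a real business dataset.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=1_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

# Bundling preprocessing and the estimator into one pipeline keeps the
# workflow reproducible and avoids leaking test-set statistics into scaling.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1_000))
model.fit(X_train, y_train)

print(classification_report(y_test, model.predict(X_test)))
```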
Posted 3 days ago
1.0 - 2.0 years
0 Lacs
Delhi, India
Remote
Skills: IB Mathematics Curriculum, Classroom Management, Lesson Planning, Online Teaching, Mathematics, Student Engagement, IB Maths Faculty (MYP / DP)

Location: Gurgaon (1st month onsite), then Work From Home
Salary: 6-8 LPA
6 days/week | Immediate Joiners Preferred

Hey Math Whiz, Ready to Teach Without the Boring Bits?

If you believe teaching math should be more "aha!" than "ugh!", welcome to your dream job. At Sparkl, we don't just solve equations; we spark curiosity, build logic, and change how learning feels. We're looking for a young, sharp IB Maths Educator who can handle both MYP & DP grades with confidence and creativity. Someone who knows that x is not just a variable, it's a whole vibe.

What You'll Be Doing
- Teach IB Math (MYP or DP) with clarity, confidence, and coolness
- Help students break down complex problems and love the process
- Use Sparkl's resources plus your own flair to design interactive lessons
- Join us in Gurgaon for the 1st month of onboarding, then switch to WFH

Who You Are
- 1-2 years of teaching/tutoring experience in IB, IGCSE, or similar
- Graduate/Postgraduate in Math or a related field
- Great communicator with excellent English
- Calm, curious, collaborative: you love teaching and it shows

Why You'll Love Sparkl
- Gen Z-friendly, mentor-led work culture
- Personalized learning platform with real impact
- Young team, real growth, and no outdated teaching drama

Apply now and let's turn your math mojo into a movement.
Posted 3 days ago
1.0 - 2.0 years
0 Lacs
Delhi, India
Remote
Skills: Physics, IB Physics, Teacher, Storytelling, Classroom Management, Online Teaching, IB Curriculum, IB Physics Faculty (MYP + DP)

Location: Gurgaon (1st month onsite), then Work From Home
Salary: 7-8 LPA
6 days/week | Immediate Joiners Preferred

Physics = Fun. Who Knew? (You Did.)

If you can turn Newton's laws into a Netflix-worthy explanation, and you genuinely love helping teens get the point of Physics, then we want you at Sparkl. We're looking for a young IB Physics Educator to teach both MYP & DP, someone who can go from talking atoms to astrophysics and make it fun.

The Role Includes
- Teaching IB Physics to students in Grades 6-12 (MYP & DP)
- Creating energy in the virtual classroom, minus the resistance
- Using experiments, analogies, and storytelling to explain tough concepts
- Starting your journey with 1 month of training in Gurgaon, then fully remote

You Should Be Someone Who
- Has 1-2 years of teaching or tutoring experience (IB/IGCSE a plus)
- Holds a graduate/postgraduate degree in Physics
- Communicates clearly, creatively, and confidently in English
- Cares deeply about student learning (not just the syllabus)

Why Work With Sparkl?
- Young and fun team, serious about learning
- Teach ambitious, globally-minded students
- Mentorship and training that actually helps you grow
- Work-from-home flexibility after initial onboarding

Don't just teach Physics; spark a love for it. Apply today!
Posted 3 days ago
0 years
0 Lacs
Delhi, India
On-site
Key Responsibilities
- Design, build, and maintain scalable, reliable, and efficient data pipelines to support data analytics and business intelligence needs.
- Optimize and automate data workflows, enhancing the efficiency of data processing and reducing latency.
- Implement and maintain data storage solutions, ensuring that data is organized, secure, and readily accessible.
- Provide expertise in ETL processes, data wrangling, and data transformation techniques.
- Collaborate with technology teams to ensure that data engineering solutions align with overall business goals.
- Stay current with industry best practices and emerging technologies in data engineering, implementing improvements as needed.

Qualifications
- Bachelor's or Master's degree in Computer Science, Information Technology, Engineering, or a related field.
- Experience with Agile methodologies and software development projects.

Required Skills
- Proven experience in data engineering, with expertise in building and managing data pipelines, ETL processes, and data warehousing.
- Proficiency in SQL, Python, and other programming languages commonly used in data engineering.
- Experience with cloud platforms such as AWS, Azure, or Google Cloud, and familiarity with cloud-based data storage and processing tools (e.g., S3, Redshift, BigQuery, etc.).
- Good to have: familiarity with big data technologies (e.g., Hadoop, Spark) and real-time data processing.
- Strong understanding of database management systems and data modeling techniques.
- Experience with BI tools like Tableau or Power BI, along with ETL tools like Alteryx or similar, and the ability to work closely with analytics teams.
- High attention to detail and commitment to data quality and accuracy.
- Ability to work independently and as part of a team, with strong collaboration skills.
- Highly adaptive and comfortable working within a complex, fast-paced environment.
Posted 3 days ago
0 years
0 Lacs
Delhi, India
On-site
What You'll Do
- Architect and scale modern data infrastructure: ingestion, transformation, warehousing, and access
- Define and drive enterprise data strategy: governance, quality, security, and lifecycle management
- Design scalable data platforms that support both operational insights and ML/AI applications
- Translate complex business requirements into robust, modular data systems
- Lead cross-functional teams of engineers, analysts, and developers on large-scale data initiatives
- Evaluate and implement best-in-class tools for orchestration, warehousing, and metadata management
- Establish technical standards and best practices for data engineering at scale
- Spearhead integration efforts to unify data across legacy and modern platforms

What You Bring
- Experience in data engineering, architecture, or backend systems
- Strong grasp of system design, distributed data platforms, and scalable infrastructure
- Deep hands-on experience with cloud platforms (AWS, Azure, or GCP) and tools like Redshift, BigQuery, Snowflake, S3, Lambda
- Expertise in data modeling (OLTP/OLAP), ETL pipelines, and data warehousing
- Experience with big data ecosystems: Kafka, Spark, Hive, Presto
- Solid understanding of data governance, security, and compliance frameworks
- Proven track record of technical leadership and mentoring
- Strong collaboration and communication skills to align tech with business
- Bachelor's or Master's in Computer Science, Data Engineering, or a related field

Nice To Have (Your Edge)
- Experience with real-time data streaming and event-driven architectures
- Exposure to MLOps and model deployment pipelines
- Familiarity with data DevOps and Infrastructure as Code (Terraform, CloudFormation, CI/CD pipelines)
Posted 3 days ago
6.0 - 8.0 years
0 Lacs
Greater Kolkata Area
On-site
Role Overview

We are looking for a highly skilled and motivated Senior Data Scientist to join our team. In this role, you will design, develop, and implement advanced data models and algorithms that drive strategic decision-making across the organization. You will work closely with product, engineering, and business teams to uncover insights and deliver data-driven solutions that enhance the performance and scalability of our products and services.

Key Responsibilities
- Develop, deploy, and maintain machine learning models and advanced analytics pipelines.
- Analyze complex datasets to identify trends, patterns, and actionable insights.
- Collaborate with cross-functional teams (Engineering, Product, Marketing) to define and execute data science strategies.
- Build and improve predictive models using supervised and unsupervised learning techniques.
- Translate business problems into data science projects with measurable impact.
- Design and conduct experiments, A/B tests, and statistical analyses to validate hypotheses and guide product development (a small worked example follows this posting).
- Create dashboards and visualizations to communicate findings to technical and non-technical stakeholders.
- Stay up-to-date with industry trends, best practices, and emerging technologies in data science and machine learning.
- Ensure data quality and governance standards are maintained across all projects.

Required Skills and Qualifications
- 6-8 years of hands-on experience in Data Science, Machine Learning, and Statistical Modeling.
- Proficiency in programming languages such as Python, R, and SQL.
- Strong foundation in data analysis, data wrangling, and feature engineering.
- Expertise in building and deploying models using tools such as scikit-learn, TensorFlow, PyTorch, or similar frameworks.
- Experience with big data platforms (e.g., Spark, Hadoop) and cloud services (AWS, GCP, Azure) is a plus.
- Deep understanding of statistical techniques including hypothesis testing, regression, and Bayesian methods.
- Excellent communication skills with the ability to explain complex technical concepts to non-technical audiences.
- Proven track record of working on cross-functional projects and delivering data-driven solutions that impact business outcomes.
- Master's or Ph.D. in Data Science, Computer Science, Statistics, Mathematics, or a related field.
- Experience with NLP, computer vision, or deep learning techniques.
- Knowledge of data engineering principles and ETL processes.
- Familiarity with version control (Git), agile methodologies, and CI/CD pipelines.
- Contributions to open-source data science projects or publications in relevant fields.
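As a small worked example of the A/B-test analysis this role calls for, here is a sketch of a two-sample significance test in Python. The metric values are simulated, the effect size is made up for illustration, and Welch's t-test is just one reasonable choice of test.

```python
# Illustrative A/B-test readout: Welch's two-sample t-test on simulated
# per-user conversion metrics for a control and a variant group.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control = rng.normal(loc=0.100, scale=0.02, size=5_000)   # simulated baseline metric
variant = rng.normal(loc=0.103, scale=0.02, size=5_000)   # simulated treated metric

# equal_var=False gives Welch's t-test, which does not assume equal variances.
t_stat, p_value = stats.ttest_ind(variant, control, equal_var=False)
lift = variant.mean() - control.mean()
print(f"lift={lift:.4f}  t={t_stat:.2f}  p={p_value:.4f}")
```

In practice the choice between a t-test, a proportion z-test, or a Bayesian readout depends on the metric's distribution and the experiment design.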
Posted 3 days ago
7.0 years
0 Lacs
Greater Kolkata Area
Remote
Omni's team is passionate about Commerce and Digital Transformation. We've been successfully delivering Commerce solutions for clients across North America, Europe, Asia, and Australia. The team has experience executing and delivering projects in B2B and B2C solutions.

Job Description

This is a remote position. We are seeking a Senior Data Engineer to architect and build robust, scalable, and efficient data systems that power AI and Analytics solutions. You will design end-to-end data pipelines, optimize data storage, and ensure seamless data availability for machine learning and business analytics use cases. This role demands deep engineering excellence, balancing performance, reliability, security, and cost to support real-world AI applications.

Key Responsibilities
- Architect, design, and implement high-throughput ETL/ELT pipelines for batch and real-time data processing.
- Build cloud-native data platforms: data lakes, data warehouses, feature stores.
- Work with structured, semi-structured, and unstructured data at petabyte scale.
- Optimize data pipelines for latency, throughput, cost-efficiency, and fault tolerance.
- Implement data governance, lineage, quality checks, and metadata management.
- Collaborate closely with Data Scientists and ML Engineers to prepare data pipelines for model training and inference.
- Implement streaming data architectures using Kafka, Spark Streaming, or AWS Kinesis.
- Automate infrastructure deployment using Terraform, CloudFormation, or Kubernetes operators.

Requirements
- 7+ years in Data Engineering, Big Data, or Cloud Data Platform roles.
- Strong proficiency in Python and SQL.
- Deep expertise in distributed data systems (Spark, Hive, Presto, Dask).
- Cloud-native engineering experience (AWS, GCP, Azure): BigQuery, Redshift, EMR, Databricks, etc.
- Experience designing event-driven architectures and streaming systems (Kafka, Pub/Sub, Flink).
- Strong background in data modeling (star schema, OLAP cubes, graph databases).
- Proven experience with data security, encryption, and compliance standards (e.g., GDPR, HIPAA).

Preferred Skills
- Experience in MLOps enablement: creating feature stores, versioned datasets.
- Familiarity with real-time analytics platforms (ClickHouse, Apache Pinot).
- Exposure to data observability tools like Monte Carlo, Databand, or similar.
- Passionate about building high-scale, resilient, and secure data systems.
- Excited to support AI/ML innovation with state-of-the-art data infrastructure.
- Obsessed with automation, scalability, and best engineering practices.
Posted 3 days ago
5.0 years
0 Lacs
Greater Kolkata Area
On-site
Job Description

Responsibilities:
- Architect and design end-to-end data solutions on the cloud platform, focusing on data warehousing and big data platforms.
- Collaborate with clients, developers, and architecture teams to understand requirements and translate them into effective data solutions.
- Develop high-level and detailed data architecture and design documentation.
- Implement data management and data governance strategies, ensuring compliance with industry standards.
- Architect both batch and real-time data solutions, leveraging cloud-native services and technologies.
- Design and manage data pipeline processes for historic data migration and data integration.
- Collaborate with business analysts to understand domain data requirements and incorporate them into the design deliverables.
- Drive innovation in data analytics by leveraging cutting-edge technologies and methodologies.
- Demonstrate excellent verbal and written communication skills to communicate complex ideas and concepts effectively.
- Stay updated on the latest advancements in data analytics, data architecture, and data management techniques.

Requirements
- Minimum of 5 years of experience in a Data Architect role, supporting warehouse and cloud data platforms/environments (Azure).
- Extensive experience with common Azure services such as ADLS, Synapse, Databricks, Azure SQL, etc.
- Experience with Azure services such as ADF, PolyBase, and Azure Stream Analytics.
- Proven expertise in Databricks architecture, Delta Lake, Delta Sharing, Unity Catalog, data pipelines, and Spark tuning.
- Strong knowledge of Power BI architecture, DAX, and dashboard optimization.
- In-depth experience with SQL, Python, and/or PySpark.
- Hands-on knowledge of data governance, lineage, and cataloging tools such as Azure Purview and Unity Catalog.
- Experience in implementing CI/CD pipelines for data and BI components (e.g., using DevOps or GitHub).
- Experience building semantic models in Power BI.
- Strong expertise in data exploration using SQL and a deep understanding of data relationships.
- Extensive knowledge and implementation experience in data management, governance, and security frameworks.
- Proven experience in creating high-level and detailed data architecture and design documentation.
- Strong aptitude for business analysis to understand domain data requirements.
- Proficiency in data modeling using any modeling tool for conceptual, logical, and physical models is preferred.
- Hands-on experience with architecting end-to-end data solutions for both batch and real-time designs.
- Ability to collaborate effectively with clients, developers, and architecture teams to implement enterprise-level data solutions.
- Familiarity with Data Fabric and Data Mesh architecture is a plus.
- Excellent verbal and written communication skills.
Posted 3 days ago
8.0 years
0 Lacs
Greater Kolkata Area
On-site
Key Responsibilities
- Data Science Leadership: Utilize in-depth knowledge of data science and data analytics to architect and drive strategic initiatives across departments.
- Stakeholder Collaboration: Work closely with cross-functional teams to define and implement data strategies aligned with business objectives.
- Model Development: Design, build, and deploy predictive models and machine learning algorithms to solve complex business problems and uncover actionable insights.
- Integration and Implementation: Collaborate with IT and domain experts to ensure smooth integration of data science models into existing business workflows and systems.
- Innovation and Optimization: Continuously evaluate new data tools, methodologies, and technologies to enhance analytical capabilities and operational efficiency.
- Data Governance: Promote data quality, consistency, and security standards across the organization.

Required Qualifications
- Bachelor's or Master's Degree in Economics, Statistics, Data Science, or a related field.
- A minimum of 8 years of relevant experience in data analysis, data science, or analytics roles.
- At least 3 years of direct experience as a Data Scientist, preferably in enterprise or analytics lab environments.
- Possession of at least one recognized data science certification, such as Certified Analytics Professional (CAP) or Google Professional Data Engineer.
- Proficiency in data visualization and storytelling tools and libraries, such as Matplotlib, Seaborn, and Tableau.
- Strong foundation in statistical modeling and risk analytics, with proven experience building and validating such models.

Preferred Skills and Attributes
- Strong programming skills in Python, R, or similar languages.
- Experience with cloud-based analytics platforms (AWS, GCP, or Azure).
- Familiarity with data engineering concepts and tools (e.g., SQL, Spark, Hadoop).
- Excellent problem-solving, communication, and stakeholder engagement skills.
- Ability to manage multiple projects and mentor junior team members.
Posted 3 days ago
5.0 years
0 Lacs
Greater Kolkata Area
On-site
About Sleek

Through proprietary software and AI, along with a focus on customer delight, Sleek makes the back-office easy for micro SMEs. We give Entrepreneurs time back to focus on what they love doing: growing their business and being with customers. With a surging number of Entrepreneurs globally, we are innovating in a highly lucrative space.

We Operate 3 Business Segments:
- Corporate Secretary: Automating the company incorporation, secretarial, filing, Nominee Director, mailroom and immigration processes via custom online robots and SleekSign. We are the market leaders in Singapore with a 5% market share of all new business incorporations.
- Accounting & Bookkeeping: Redefining what it means to do Accounting, Bookkeeping, Tax and Payroll thanks to our proprietary SleekBooks ledger, AI tools and exceptional customer service.
- FinTech payments: Overcoming a key challenge for Entrepreneurs by offering digital banking services to new businesses.

Sleek launched in 2017 and now has around 15,000 customers across our offices in Singapore, Hong Kong, Australia and the UK. We have around 450 staff with an intact startup mindset. We have achieved >70% compound annual growth in Revenue over the last 5 years and as a result have been recognised by The Financial Times, The Straits Times, Forbes and LinkedIn as one of the fastest growing companies in Asia. Backed by world-class investors, we are on track to be one of the few cash flow positive, tech-enabled unicorns.

The Role

We are looking for an experienced Senior Data Engineer to join our growing team. As a key member of our data team, you will design, build, and maintain scalable data pipelines and infrastructure to enable data-driven decision-making across the organization. This role is ideal for a proactive, detail-oriented individual passionate about optimizing and leveraging data for impactful business outcomes. You will:
1. Work closely with cross-functional teams to translate our business vision into impactful data solutions.
2. Drive the alignment of data architecture requirements with strategic goals, ensuring each solution not only meets analytical needs but also advances our core objectives.
3. Be pivotal in bridging the gap between business insights and technical execution by tackling complex challenges in data integration, modeling, and security, and by setting the stage for exceptional data performance and insights.
4. Shape the data roadmap, influence design decisions, and empower our team to deliver innovative, scalable, high-quality data solutions every day.

Objectives:
- Achieve and maintain a data accuracy rate of at least 99% for all business-critical dashboards by start of day (accounting for corrections and job failures), with 24-business-hour detection of errors and a 5-day correction SLA.
- 95% of data on dashboards originates from technical data pipelines to mitigate data drift.
- Set up strategic dashboards based on business needs which are robust, scalable, and easy and quick to operate and maintain.
- Reduce costs of data warehousing and pipelines by 30%, then maintain costs as data needs grow.
- Achieve 50 eNPS on data services (e.g. dashboards) from key business stakeholders.

Responsibilities:
- Data Pipeline Development: Design, implement, and optimize robust, scalable ETL/ELT pipelines to process large volumes of structured and unstructured data.
- Data Modeling: Develop and maintain conceptual, logical, and physical data models to support analytics and reporting requirements.
- Infrastructure Management: Architect, deploy, and maintain cloud-based data platforms (e.g., AWS, GCP).
- Collaboration: Work closely with data analysts, business owners, and stakeholders to understand data requirements and deliver reliable solutions, including designing and implementing robust, efficient and scalable data visualization on Tableau or Looker Studio.
- Data Governance: Ensure data quality, consistency, and security through robust validation and monitoring frameworks.
- Performance Optimization: Monitor, troubleshoot, and optimize the performance of data systems and pipelines.
- Innovation: Stay up to date with the latest industry trends and emerging technologies to continuously improve data engineering practices.

Skills & Qualifications:
- Experience: 5+ years in data engineering, software engineering, or a related field.
- Technical Proficiency: Proficiency in working with relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, Cassandra). Familiarity with big data frameworks like Hadoop, Hive, Spark, Airflow, BigQuery, etc. Strong expertise in programming languages such as Python, NodeJS, SQL, etc.
- Cloud Platforms: Advanced knowledge of cloud platforms (AWS or GCP) and their associated data services.
- Data Warehousing: Expertise in modern data warehouses like BigQuery, Snowflake, or Redshift.
- Tools & Frameworks: Expertise in version control systems (e.g., Git) and CI/CD and JIRA pipelines.
- Big Data Ecosystems / BI: BigQuery, Tableau, Looker Studio.
- Industry Domain Knowledge: Google Analytics (GA), HubSpot, Accounting/Compliance, etc.
- Soft Skills: Excellent problem-solving abilities, attention to detail, and strong communication skills.

Preferred Qualifications:
- Degree in Computer Science, Engineering, or a related field.
- Experience with real-time data streaming technologies (e.g., Kafka, Kinesis).
- Familiarity with machine learning pipelines and tools.
- Knowledge of data security best practices and regulatory requirements.

The Interview Process

The successful candidate will participate in the below interview stages (note that the order might be different to what you read below). We anticipate the process to last no more than 3 weeks from start to finish. Whether the interviews are held over video call or in person will depend on your location and the role.
- Case study: a 60-minute chat with the Data Analyst, where they will give you some real-life challenges that this role faces and will ask for your approach to solving them.
- Career deep dive: a 60-minute chat with the Hiring Manager (COO). They'll discuss your last 1-2 roles to understand your experience in more detail.
- Behavioural fit assessment: a 60-minute chat with our Head of HR or Head of Hiring, where they will dive into some of your recent work situations to understand how you think and work.
- Offer + reference interviews: we'll make a non-binding offer verbally or over email, followed by a couple of short phone or video calls with references that you provide.

Background Screening

Please be aware that Sleek is a regulated entity and as such is required to perform different levels of background checks on staff depending on their role. This may include using external vendors to verify the below:
- Your education.
- Any criminal history.
- Any political exposure.
- Any bankruptcy or adverse credit history.

We will ask for your consent before conducting these checks. Depending on your role at Sleek, an adverse result on one of these checks may prohibit you from passing probation.
Posted 3 days ago
7.0 - 10.0 years
0 Lacs
Greater Kolkata Area
Remote
Job Title: Senior Data Scientist (Contract | Remote)
Location: Remote
Experience Required: 7-10 Years

About The Role

We are seeking a highly experienced Senior Data Scientist to join our team on a contract basis. This role is ideal for someone who excels in predictive analytics and has strong hands-on experience with Databricks and PySpark. You will play a key role in building and deploying scalable machine learning models, with a focus on regression, classification, and time-series forecasting.

Key Responsibilities
- Design, build, and deploy predictive models using regression, classification, and time-series techniques.
- Develop and maintain scalable data pipelines using Databricks and PySpark.
- Leverage MLflow for experiment tracking and model versioning (a minimal sketch follows this posting).
- Utilize Delta Lake for efficient data storage and version control.
- Collaborate with cross-functional teams to understand business requirements and translate them into analytical solutions.
- Implement and manage CI/CD pipelines for model deployment.
- Work with cloud platforms such as Azure or AWS to develop and deploy ML solutions.

Required Skills & Qualifications
- Minimum 7 years of experience in predictive analytics and machine learning.
- Strong expertise in Databricks, PySpark, MLflow, and Delta Lake.
- Proficiency in Python, Spark MLlib, and AutoML frameworks.
- Experience working with CI/CD pipelines for model deployment.
- Familiarity with Azure or AWS cloud services.
- Excellent problem-solving skills and ability to work independently.

Nice to Have
- Prior experience in the Life Insurance or Property & Casualty (P&C) insurance domain.
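For the experiment-tracking piece of this stack, here is a minimal MLflow sketch. A scikit-learn model stands in for a Spark MLlib one to keep the example self-contained, the run and artifact names are illustrative, and on Databricks the tracking URI and experiment would come from the workspace rather than local defaults.

```python
# Minimal sketch: log a model's parameters, metric, and artifact with MLflow
# so runs can be compared and models versioned. Names are illustrative.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=10, noise=5.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

with mlflow.start_run(run_name="ridge-baseline"):       # hypothetical run name
    alpha = 1.0
    model = Ridge(alpha=alpha).fit(X_train, y_train)
    mae = mean_absolute_error(y_test, model.predict(X_test))

    mlflow.log_param("alpha", alpha)                    # hyperparameter for comparison
    mlflow.log_metric("mae", mae)                       # evaluation metric per run
    mlflow.sklearn.log_model(model, "model")            # versioned model artifact
```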
Posted 3 days ago
2.0 - 5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
About The Job

We're Hiring: DevOps Engineer (2-5 Years Exp.) | Noida
Location: Sector 158, Noida | On-site | Full-time
Industry: Internet News | Media | Digital

Are you a seasoned DevOps Engineer with 2-5 years of experience, ready to take ownership of large-scale infrastructure and cloud deployments? We're looking for a hands-on DevOps expert with strong experience in Google Cloud Platform (GCP) to lead our CI/CD pipelines, automate deployments, and manage a microservices-based infrastructure at scale.

What You'll Do
- Own and manage CI/CD pipelines and infrastructure end-to-end.
- Architect and deploy scalable solutions on GCP (preferred).
- Streamline release cycles in coordination with QA, product, and engineering teams.
- Build containerized apps using Docker and manage them via Kubernetes.
- Use Terraform, Ansible, or equivalent tools for Infrastructure-as-Code (IaC).
- Monitor system performance and lead troubleshooting during production issues.
- Drive automation across infrastructure, monitoring, and alerts.
- Ensure microservices run securely and reliably.

What We're Looking For
- 2 to 5 years of experience in DevOps or similar roles.
- Strong GCP (Google Cloud Platform) experience is mandatory.
- Hands-on with Docker, Kubernetes, Jenkins/GitLab CI/CD, Git/GitHub.
- Solid scripting knowledge (Shell, Python, etc.).
- Familiarity with Node.js/React deployments.
- Experience with SQL/NoSQL DBs and tools like Elasticsearch, Spark, or Presto.
- Good understanding of secure development and InfoSec standards.

Immediate joiners preferred.
Posted 3 days ago
2.0 - 5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
DevOps Engineer (2-5 Years Exp.) | Noida
Location: Sector 158, Noida | On-site

Description:

Are you a seasoned DevOps Engineer with 2-5 years of experience, ready to take ownership of large-scale infrastructure and cloud deployments? We're looking for a hands-on DevOps expert with strong experience in Google Cloud Platform (GCP) to lead our CI/CD pipelines, automate deployments, and manage a microservices-based infrastructure at scale.

What You'll Do
- Own and manage CI/CD pipelines and infrastructure end-to-end.
- Architect and deploy scalable solutions on GCP (preferred).
- Streamline release cycles in coordination with QA, product, and engineering teams.
- Build containerized apps using Docker and manage them via Kubernetes.
- Use Terraform, Ansible, or equivalent tools for Infrastructure-as-Code (IaC).
- Monitor system performance and lead troubleshooting during production issues.
- Drive automation across infrastructure, monitoring, and alerts.
- Ensure microservices run securely and reliably.

What We're Looking For
- 2-5 years of experience in DevOps or similar roles.
- Strong GCP (Google Cloud Platform) experience is mandatory.
- Hands-on with Docker, Kubernetes, Jenkins/GitLab CI/CD, Git/GitHub.
- Solid scripting knowledge (Shell, Python, etc.).
- Familiarity with Node.js/React deployments.
- Experience with SQL/NoSQL DBs and tools like Elasticsearch, Spark, or Presto.
- Good understanding of secure development and InfoSec standards.

Immediate joiners preferred.
Posted 3 days ago
10.0 - 15.0 years
0 Lacs
Sahibzada Ajit Singh Nagar, Punjab, India
On-site
Job Title: Director, AI Automation & Data Sciences
Experience Required: 10-15 Years
Industry: Legal Technology / Cybersecurity / Data Science
Department: Technology & Innovation

About The Role

We are seeking an exceptional Director of AI Automation & Data Sciences to lead the innovation engine behind our Managed Document Review and Cyber Incident Response services. This is a senior leadership role where you'll leverage advanced AI and data science to drive automation, scalability, and differentiation in service delivery. If you are a visionary leader who thrives at the intersection of technology and operations, this is your opportunity to make a global impact.

Why Join Us
- Cutting-edge AI & Data Science technologies at your fingertips
- Globally recognized Cyber Incident Response Team
- Prestigious clientele of Fortune 500 companies and industry leaders
- Award-winning, inspirational workspaces
- Transparent, inclusive, and growth-driven culture
- Industry-best compensation that recognizes excellence

Key Responsibilities (KRAs)
- Lead and scale AI & data science initiatives across Document Review and Incident Response programs
- Architect intelligent automation workflows to streamline legal review, anomaly detection, and threat analytics
- Drive end-to-end deployment of ML and NLP models into production environments
- Identify and implement AI use cases that deliver measurable business outcomes
- Collaborate with cross-functional teams including Legal Tech, Cybersecurity, Product, and Engineering
- Manage and mentor a high-performing team of data scientists, ML engineers, and automation specialists
- Evaluate and integrate third-party AI platforms and open-source tools for accelerated innovation
- Ensure AI models comply with privacy, compliance, and ethical AI principles
- Define and monitor key metrics to track model performance and automation ROI
- Stay abreast of emerging trends in generative AI, LLMs, and cybersecurity analytics

Technical Skills & Tools
- Proficiency in Python, R, or Scala for data science and automation scripting
- Expertise in Machine Learning, Deep Learning, and NLP techniques
- Hands-on experience with LLMs, Transformer models, and vector databases
- Strong knowledge of data engineering pipelines: ETL, data lakes, and real-time analytics
- Familiarity with Cyber Threat Intelligence, anomaly detection, and event correlation
- Experience with platforms like AWS SageMaker, Azure ML, Databricks, Hugging Face
- Advanced use of TensorFlow, PyTorch, spaCy, scikit-learn, or similar frameworks
- Knowledge of containerization (Docker, Kubernetes) and CI/CD pipelines for MLOps
- Strong command of SQL, NoSQL, and big data tools (Spark, Kafka)

Qualifications
- Bachelor's or Master's in Computer Science, Data Science, AI, or a related field
- 10-15 years of progressive experience in AI, Data Science, or Automation
- Proven leadership of cross-functional technology teams in high-growth environments
- Experience working in LegalTech, Cybersecurity, or related high-compliance industries preferred
Posted 3 days ago
5.0 years
0 Lacs
Thane, Maharashtra, India
On-site
About The Role
- Implement architecture and design from the definition phase to go-live.
- Work with the business analyst and SMEs to understand the current landscape priorities.
- Define conceptual and low-level models using AI technology.
- Review designs to make sure they are aligned with the architecture.
- Hands-on development of AI-led solutions.
- Implement the entire data pipeline: data crawling, ETL, creating fact tables, data quality management, etc.
- Integrate with multiple systems using APIs, web services, or data exchange mechanisms.
- Build interfaces that gather data from various data sources such as flat files, data extracts, and incoming feeds, as well as directly interfacing with enterprise applications.
- Ensure that the solution is scalable, maintainable, and meets best practices for security, performance, and data management.
- Own research assignments and development.
- Lead, develop, and assist developers and other team members.
- Collaborate, validate, and provide frequent updates to internal stakeholders throughout the project.
- Define and deliver against the solution benefits statement.
- Positively and constructively engage with clients' and operations teams' efforts where needed.

Qualifications
- A Bachelor's degree in Computer Science, Software Engineering, or a related field.

Required Skills
- Minimum 5 years of IT experience, including 3+ years as a full stack developer, preferably using Python.
- 2+ years of hands-on experience in Azure Data Factory, Azure Databricks/Spark (familiarity with Fabric), Azure Data Lake Storage (Gen1/Gen2), Azure Synapse/SQL DW.
- Expertise in designing/deploying data pipelines on Azure, from data crawling through ETL, data warehousing, and data applications.
- Experienced in AI technology, including machine learning algorithms, natural language processing, deep learning, image recognition, speech recognition, etc.
- Proficient in programming languages like Python (full stack exposure).
- Proficient in dealing with all the layers in a solution: multi-channel presentation, business logic in middleware, data access layer, RDBMS/NoSQL (e.g., MySQL, MongoDB, Cassandra, SQL Server databases).
- Familiar with vector DBs such as FAISS, ChromaDB, Pinecone, Weaviate, and feature stores (a small FAISS sketch follows this posting).
- Experience in implementing and deploying applications on Azure.
- Proficient in creating technical documents like architecture views, technology architecture blueprints, and design specifications.
- Experienced in using tools like Rational Suite, Enterprise Architect, Eclipse, and source code versioning systems like Git.
- Experience with different development methodologies (RUP | Scrum).
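For the vector-database item above, here is a small FAISS sketch: it indexes random embeddings and runs a nearest-neighbour query. The dimensionality and data are illustrative stand-ins; real usage would index embeddings produced by a model.

```python
# Minimal sketch: build an exact L2 FAISS index over synthetic embeddings
# and retrieve the top-5 nearest neighbours for a query vector.
import faiss
import numpy as np

dim = 384                                    # e.g., a typical sentence-embedding size
rng = np.random.default_rng(0)
vectors = rng.random((10_000, dim), dtype=np.float32)  # stand-in corpus embeddings

index = faiss.IndexFlatL2(dim)               # exact L2 search; no training step needed
index.add(vectors)                           # store the corpus in the index

query = rng.random((1, dim), dtype=np.float32)
distances, ids = index.search(query, k=5)    # top-5 nearest neighbours
print(ids[0], distances[0])
```

At larger scale one would typically switch to an approximate index (e.g., IVF or HNSW variants) to trade a little recall for much faster search.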
Posted 3 days ago
8.0 - 10.0 years
0 Lacs
Greater Bengaluru Area
Remote
About ISOCRATES Since 2015, iSOCRATES advises on, builds and manages mission-critical Marketing, Advertising and Data technologies, platforms, and processes as the Global Leader in MADTECH Resource Planning and Execution(TM). iSOCRATES delivers globally proven, reliable, and affordable Strategy and Operations Consulting and Managed Services for marketers, agencies, publishers, and the data/tech providers that enable them. iSOCRATES is staffed 24/7/365 with its proven specialists who save partners money, and time and achieve transparent, accountable, performance while delivering extraordinary value. Savings stem from a low-cost, focused global delivery model at scale that benefits from continuous re-investment in technology and specialized training. About MADTECH.AI MADTECH.AI is the Unified Marketing, Advertising, and Data Decision Intelligence Platform purpose-built to deliver speed to value for marketers. At MADTECH.AI, we make real-time AI-driven insights accessible to everyone. Whether you’re a global or emerging brand, agency, publisher, or data/tech provider, we give you a single source of truth - so you can capture sharper insights that drive better marketing decisions faster and more affordable than ever before. MADTECH.AI unifies and transforms MADTECH data and centralizes decision intelligence in a single, affordable platform. Leave data wrangling, data model building, proactive problem solving, and data visualization to MADTECH.AI. Job Description iSOCRATES is seeking a highly skilled and experienced Lead Data Scientist to spearhead our growing Data Science team. The Lead Data Scientist will be responsible for leading the team that defines, designs, reports on, and analyzes audience, campaign, and programmatic media trading data. This includes working with selected partner-focused Managed Services and Outsourced Services on behalf of our supply-side and demand-side partners. The role will involve collaboration with cross-functional teams and working across a variety of media channels, including digital and offline channels such as display, mobile, video, social, native, and advanced TV/Audio ad products. Key Responsibilities Team Leadership & Management: Lead and mentor a team of data scientists to drive the design, development, and implementation of data-driven solutions for media and marketing campaigns. Advanced Analytics & Data Science Expertise: Provide hands-on leadership in applying rigorous statistical, econometric, and Big Data methods to define requirements, design analytics solutions, analyze results, and optimize economic outcomes. Expertise in modeling techniques including propensity modeling, Media Mix Modeling (MMM), Multi-Touch Attribution (MTA), Recency, Frequency, Monetary (RFM) analysis, Bayesian statistics, and non-parametric methods. Generative AI & NLP: Lead the implementation and development of Generative AI, Large Language Models, and Natural Language Processing (NLP) techniques to enhance data modeling, prediction, and analysis processes. Data Architecture & Management: Architect and manage dynamic data systems from diverse sources, ensuring effective integration and optimization of audience, pricing, and contextual data for programmatic and digital advertising campaigns. Oversee the management of DSPs, SSPs, DMPs, and other data systems integral to the ad-tech ecosystem. 
Cross-Functional Collaboration: Work closely with Product, System Development, Yield, Operations, Finance, Sales, Business Development, and other teams to ensure seamless data quality, completeness, and predictive outcomes across campaigns. Design and deliver actionable insights, creating innovative, data-driven solutions and reporting tools for use by both iSOCRATES teams and business partners.
Predictive Modeling & Optimization: Lead the development of predictive models and analyses to drive programmatic optimization, focusing on revenue, audience behavior, bid actions, and ad inventory optimization (eCPM, fill rate, etc.). Monitor and analyze campaign performance, making data-driven recommendations for optimizations across media channels including websites, mobile apps, and social media platforms.
Data Collection & Quality Assurance: Oversee the design, collection, and management of data, ensuring high-quality standards, efficient storage systems, and optimizations for in-depth analysis and visualization. Guide the implementation of tools for complex data analysis, model development, reporting, and visualization, ensuring alignment with business objectives.
Qualifications
Master’s or Ph.D. in Statistics, Engineering, Science, or Business with a strong foundation in mathematics and statistics.
8 to 10 years of experience, with at least 5 years of hands-on experience in data science, predictive analytics, media research, and digital analytics, focused on modeling, analysis, and optimization within the media, advertising, or tech industry.
At least 3 years of hands-on experience with Generative AI, Large Language Models, and Natural Language Processing techniques.
Minimum 3 years of experience in Publisher and Advertiser Audience Data Analytics and Modeling.
Proficient in data collection, business intelligence, machine learning, and deep learning techniques using tools such as Python, R, scikit-learn, Hadoop, Spark, MySQL, and AWS S3.
Expertise in logistic regression, customer segmentation, persona building, and predictive analytics.
Strong analytical and data modeling skills with a deep understanding of audience behavior, pricing strategies, and programmatic media optimization.
Experience working with DSPs, SSPs, DMPs, and programmatic systems.
Excellent communication and presentation skills, with the ability to communicate complex technical concepts to non-technical stakeholders.
Ability to manage multiple tasks and projects effectively, both independently and in collaboration with remote teams.
Strong problem-solving skills with the ability to adapt to evolving business needs and deliver solutions proactively.
Experience in developing analytics dashboards, visualization tools, and reporting systems.
Background in digital media optimization, audience segmentation, and performance analytics.
An interest in, and the ability to work in, a fast-paced operation on the analytics and revenue side of our business.
Willing to relocate to Mysuru/Bengaluru.
This is an exciting opportunity to take on a leadership role at the forefront of data science in the digital media and advertising space. If you have a passion for innovation, a strong technical background, and the ability to lead a team toward impactful, data-driven solutions, we encourage you to apply.
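As a purely illustrative aside, the propensity modeling and RFM analysis named in this listing might look something like the following minimal scikit-learn sketch; all data, feature names, and coefficients here are synthetic assumptions, not iSOCRATES code.

    # Minimal propensity-model sketch: predict conversion probability from
    # synthetic RFM-style features (recency, frequency, monetary value).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(42)
    n = 5_000
    X = np.column_stack([
        rng.exponential(30, n),   # recency: days since last interaction
        rng.poisson(5, n),        # frequency: impression count
        rng.gamma(2.0, 50.0, n),  # monetary: past spend
    ])
    # Synthetic labels loosely tied to the features, for the demo only.
    logits = -2.0 - 0.02 * X[:, 0] + 0.15 * X[:, 1] + 0.004 * X[:, 2]
    y = rng.random(n) < 1 / (1 + np.exp(-logits))

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    scores = model.predict_proba(X_test)[:, 1]  # per-user conversion propensity
    print(f"holdout AUC: {roc_auc_score(y_test, scores):.3f}")

In practice the scores would feed downstream decisions such as bid shading or audience segmentation, which is where roles like this one spend most of their time.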
Posted 3 days ago
50.0 years
0 Lacs
Delhi, India
On-site
About Gap Inc.
Our past is full of iconic moments — but our future is going to spark many more. Our brands — Gap, Banana Republic, Old Navy and Athleta — have dressed people from all walks of life and all kinds of families, all over the world, for every occasion for more than 50 years. But we’re more than the clothes that we make. We know that business can and should be a force for good, and it’s why we work hard to make product that makes people feel good, inside and out. It’s why we’re committed to giving back to the communities where we live and work. If you're one of the super-talented who thrive on change, aren't afraid to take risks and love to make a difference, come grow with us.
About The Role
In this role, you will be accountable for the development process and strategy execution for the assigned product departments. You will also be responsible for executing the overall country and mill/vendor strategy for the department in partnership with the relevant internal teams.
What You'll Do
Manage the product/vendor development process (P2M) in a timely manner (development sampling, initial costs, negotiation/production and capacity planning) to meet the design aesthetic as well as commercially acceptable quality standards
Manage relationships with mills/vendors and support vendor allocation and aggregated costing, along with overall capacity planning aligned to cost targets, to drive competitive advantage
Partner with mills/vendors to drive innovation initiatives and superior quality while resolving any product and quality issues proactively
Onboard new mills/vendors and provide training to existing mills/vendors, along with supporting the evaluation process
Look for opportunities for continuous improvement in product/vendor development, process management, and overall sourcing procedures
Able to communicate difficult concepts in a simple manner
Participate in projects and assignments of diverse scope
Who You Are
Experience and knowledge of work specific to global product/vendor development, with an understanding of the design, merchandising, and global sourcing landscape
Ability to drive results through planning and prioritizing, along with influencing others and providing recommendations and solutions
Present problem analysis and recommended solutions in a creative and logical manner
Benefits at Gap Inc.
One of the most competitive paid time off plans in the industry
Comprehensive health coverage for employees, same-sex partners and their families
Health and wellness program: free annual health check-ups, fitness center and Employee Assistance Program
Comprehensive benefits to support the journey of parenthood
Retirement planning assistance
See more of the benefits we offer.
Gap Inc. is an equal-opportunity employer and is committed to providing a workplace free from harassment and discrimination. We are committed to recruiting, hiring, training and promoting qualified people of all backgrounds, and make all employment decisions without regard to any protected status. We have received numerous awards for our long-held commitment to equality and will continue to foster a diverse and inclusive environment of belonging. In 2022, we were recognized by Forbes as one of the World's Best Employers and one of the Best Employers for Diversity.
Posted 3 days ago
12.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Skills: IB Curriculum, Lesson Planning, Literary Analysis, Classroom Management, Student Engagement, English, Essay Grading
IB English Faculty (DP Grades 9 to 12)
Location: Gurgaon (1st month onsite), then Work From Home
Salary: 78 LPA
Work Days: 6 days/week
Experience: 12 years
Education: Must have BA & MA in English (Honours only)
Not Another English Class. A Sparkl-ing Experience.
Do you love teaching literature that makes teenagers think, not just memorize? Do you dream of taking students from Shakespeare to Arundhati Roy with purpose and passion? If yes, Sparkl is looking for you! We're hiring an IB English Faculty for DP (Grades 9-12), someone who brings strong academic grounding, school-teaching experience, and that extra spark that makes stories come alive.
Who We're Looking For
You must have taught English Literature in a formal school or tuition center (CBSE, ICSE, Cambridge, or IB preferred).
You've handled school curriculum (not vocational/entrance prep like SAT, TOEFL, SSC, CAT, etc.).
You have a Bachelor's + Master's degree in English Honours, no exceptions.
You know how to explain literary devices, build essay-writing skills, and get teens talking about theme, tone, and character arcs.
You're confident, clear, and love working with high-schoolers.
What You'll Be Doing
Teach IB DP English for Grades 9-12 (focus on literature, writing, and comprehension).
Guide students through critical analysis, essay structuring, and academic writing.
Bring texts alive, from Shakespeare to modern prose, in ways students will remember.
Begin with 1 month of in-person training at our Gurgaon office, then shift to remote work.
Why Join Sparkl?
Work with top mentors in the IB space
Teach smart, curious, high-performing students
Young, passionate team and a flexible work environment
Real impact, real growth
Love literature and learning? Apply now and let's Sparkl together.
Posted 3 days ago
5.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Job Description: Data Engineer
Role Overview
The Data Engineer will be responsible for ensuring the availability, quality, and transformation of the claims and operational data required for model development and integration. The role demands strong data pipeline design and engineering capabilities to support a scalable forecasting and capacity planning framework.
Key Responsibilities
Gather and process data from multiple sources, including claims systems and operational databases.
Build and maintain data pipelines to support segmentation and forecasting models.
Ensure data integrity, transformation, and enrichment to align with modeling requirements.
Collaborate with the Data Scientist to provide model-ready datasets.
Support data versioning, storage, and automation for periodic refreshes.
Assist in deployment/integration of data flows into operational dashboards or planning tools.
Skills & Experience
5+ years of experience in data engineering or ETL development.
Proficiency in SQL, Python, and data pipeline tools (e.g., Airflow, dbt, or Spark).
Experience with cloud-based data platforms (e.g., Azure, AWS, GCP).
Understanding of data architecture and governance best practices.
Prior experience working with insurance or operations-related data is a plus.
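As an illustration of the model-ready dataset work this listing describes, here is a minimal Python sketch of a claims ETL step. The file paths and column names (claim_id, opened_at, closed_at) are hypothetical, invented for the example rather than taken from any real system; writing Parquet via pandas assumes pyarrow is installed.

    # Minimal claims ETL sketch (hypothetical paths and columns): read a raw
    # extract, enforce basic integrity, enrich, and write a model-ready file.
    from pathlib import Path
    import pandas as pd

    RAW = Path("data/raw/claims.csv")             # hypothetical source extract
    OUT = Path("data/processed/claims.parquet")   # hypothetical model-ready output

    def build_claims_dataset(raw_path: Path, out_path: Path) -> pd.DataFrame:
        df = pd.read_csv(raw_path, parse_dates=["opened_at", "closed_at"])
        # Integrity: require a key and an open date, and drop duplicate claims.
        df = df.dropna(subset=["claim_id", "opened_at"]).drop_duplicates("claim_id")
        # Enrichment: cycle time and a monthly key for segmentation/forecasting.
        df["cycle_days"] = (df["closed_at"] - df["opened_at"]).dt.days
        df["open_month"] = df["opened_at"].dt.to_period("M").astype(str)
        out_path.parent.mkdir(parents=True, exist_ok=True)
        df.to_parquet(out_path, index=False)
        return df

    if __name__ == "__main__":
        build_claims_dataset(RAW, OUT)

In a production setting a step like this would typically run inside an orchestrator such as Airflow, with the periodic refreshes and versioning the listing mentions handled by the scheduler and storage layer.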
Posted 3 days ago
12.0 years
0 Lacs
Mumbai Metropolitan Region
Remote
Skills: IB Curriculum, Lesson Planning, Literary Analysis, Classroom Management, Student Engagement, English, Essay Grading
IB English Faculty (DP Grades 9 to 12)
Location: Gurgaon (1st month onsite), then Work From Home
Salary: 78 LPA
Work Days: 6 days/week
Experience: 12 years
Education: Must have BA & MA in English (Honours only)
Not Another English Class. A Sparkl-ing Experience.
Do you love teaching literature that makes teenagers think, not just memorize? Do you dream of taking students from Shakespeare to Arundhati Roy with purpose and passion? If yes, Sparkl is looking for you! We're hiring an IB English Faculty for DP (Grades 9-12), someone who brings strong academic grounding, school-teaching experience, and that extra spark that makes stories come alive.
Who We're Looking For
You must have taught English Literature in a formal school or tuition center (CBSE, ICSE, Cambridge, or IB preferred).
You've handled school curriculum (not vocational/entrance prep like SAT, TOEFL, SSC, CAT, etc.).
You have a Bachelor's + Master's degree in English Honours, no exceptions.
You know how to explain literary devices, build essay-writing skills, and get teens talking about theme, tone, and character arcs.
You're confident, clear, and love working with high-schoolers.
What You'll Be Doing
Teach IB DP English for Grades 9-12 (focus on literature, writing, and comprehension).
Guide students through critical analysis, essay structuring, and academic writing.
Bring texts alive, from Shakespeare to modern prose, in ways students will remember.
Begin with 1 month of in-person training at our Gurgaon office, then shift to remote work.
Why Join Sparkl?
Work with top mentors in the IB space
Teach smart, curious, high-performing students
Young, passionate team and a flexible work environment
Real impact, real growth
Love literature and learning? Apply now and let's Sparkl together.
Posted 3 days ago
The demand for professionals with expertise in Spark is on the rise in India. Spark, an open-source distributed computing system, is widely used for big data processing and analytics. Job seekers in India looking to explore opportunities in Spark can find a variety of roles in different industries.
Cities with a high concentration of tech companies and startups are actively hiring for Spark roles.
The average salary range for Spark professionals in India varies by experience level:
Entry-level: INR 4-6 lakhs per annum
Mid-level: INR 8-12 lakhs per annum
Experienced: INR 15-25 lakhs per annum
Salaries may vary based on the company, location, and specific job requirements.
In the field of Spark, a typical career progression may look like:
Junior Developer
Senior Developer
Tech Lead
Architect
Advancing in this career path often requires gaining experience, acquiring additional skills, and taking on more responsibilities.
Apart from proficiency in Spark, professionals in this field are often expected to have knowledge or experience in:
Hadoop
Java or Scala programming
Data processing and analytics
SQL databases
Having a combination of these skills can make a candidate more competitive in the job market.
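For a concrete feel of the day-to-day work these roles involve, here is a small PySpark sketch of a typical DataFrame aggregation; the dataset and values are invented purely for illustration.

    # Small PySpark sketch: group, aggregate, and sort a DataFrame,
    # the bread and butter of Spark analytics work. (Illustrative data.)
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("spark-demo").getOrCreate()

    orders = spark.createDataFrame(
        [("Pune", 120.0), ("Pune", 80.0), ("Bengaluru", 200.0)],
        ["city", "amount"],
    )
    summary = (
        orders.groupBy("city")
              .agg(F.sum("amount").alias("total"), F.count("*").alias("orders"))
              .orderBy(F.desc("total"))
    )
    summary.show()  # prints totals and order counts per city
    spark.stop()

The same groupBy/agg pattern scales from a laptop to a cluster unchanged, which is a large part of why interviewers probe for fluency with the DataFrame API.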
As you explore opportunities in Spark jobs in India, remember to prepare thoroughly for interviews and showcase your expertise confidently. With the right skills and knowledge, you can excel in this growing field and advance your career in the tech industry. Good luck with your job search!