Get alerts for new jobs matching your selected skills, preferred locations, and experience range. Manage Job Alerts
0 years
0 Lacs
Lakhtar, Gujarat, India
On-site
Job Requirements Safety Deliver Health & Safety objectives in line with company “Must Win Battles” and ensure that site procedures are strictly followed by the team and contractors in line with site/Company standards and safety improvement plans. Planning To set up and manage the Maintenance Department with a long term view of continuous improvement. To review internal performance and strategy in order to optimize the plant performance and efficiency. Predict the anticipated consumptions & purchasing requirements. Ensure adherence to Effective Maintenance Planning , preventive (SEF’s & WP) Co-ordinate maintenance engineers efforts to make sure machinery / equipment is kept up to reliability and condition standards Identify areas for improvement and assign resources /time to address Contact and schedule contract resources and extra resourcing as needed To formulate and establish optimum spares holding levels To formulate and establish annual budgets for department. To formulate and develop capital expenditure plans for maintenance / replacement future investment needs. Define the needs, forward purchasing requirements and liaise with purchasing department. Monitor and record the variance of all maintenance budgets Oversee the installation, testing, operation, maintenance, and repair of facilities and production equipment. MWBs/Core Values Staff. Ensure the Maintenance Department is adequately resourced to allow the maintenance Day / Shift schedules to be maintained at all times. Define, implement and sustain an effective Maintenance Organisation Manage the Maintenance team To provide tight control and coordination of the development of all engineers To enhance the workforce training development and skill levels Ensure employees receive the appropriate training, with the appropriate modules, including 5S, TPM, 6 sigma, Kaizen, OEE. To establish and define training needs; to coordinate the training to ensure trainer and trainee understand and know the expectation/requirement of the training activity To measure the value and effectiveness of training provision To ensure the trainer has the required skills to train. To provide direct training to trainee as required. Taking into account the needs for shift cover define the roster and crewing levels to allow all operations to operate on time. Define and coordinate any overtime or as appropriate, the use of temporary/ agency workers Coordinate the placing of temporary workers, as necessary, with the employment agencies and HR dept Recruit employees; assign, direct, and evaluate their work; and oversee the development and maintenance of staff competence. Lean & 5S Initiatives Development of Lean systems and structures to aid and facilitate efficient maintenance process. Development of modern management techniques (Lean systems, value stream mapping, Kanban etc). Decide the necessary corrective actions and implement them to achieve all KPIs. Define and implement suggestions to improve the OEE of each line/function. All audits and controls for systems of work are executed at the desired frequency. Collaborate with the other departments: Operations, Purchasing, Sales, Engineering. Develop and implement reliability systems including preventive and condition monitoring activities to improve plant reliability. Customer & Quality Support the Customer focused vision of the Company. Maintain the fundamentals; Quality System, ISO 9001:2015 , Environmental standards, Health & Safety standards. Actively participate in new product release process to ensure manufacturing is capable to achieve required specification and ensure ongoing Continuous Improvement / line efficiencies. Make sure quality assurance procedures are respected. Ensure non-conforming machinery / testing equipment are properly maintained. Take part in process improvement, equipment development and investments:- Technical Norm Performance and Reporting The manufacturing / engineering standards are respected . Monitor the maintenance engineers’ performance with regard to MRP and technical norm performance for downtime and yield and other KPIs. Monitor for incorrect performance reporting. Take all required actions to correct and then prevent inaccurate reporting. Establish rules and procedures for this. Organise the maintenance schedule to optimise manning / equipment / cost Ensure adequate personnel cover for all aspects of maintenance operations. In case of process drift, define corrective actions. Analyse the daily report (24 hours) and maintain management reporting protocols and reports Produce the required management reports. Review / report on KPI performance and identify areas to improve. Act upon these improvements via the maintenance team and other resources. Take into account all the KPI Indicators which the department impacts (workflow, spare parts, workshop, lubrication, breakdown management, equipment reliability etc) develop strategies for improvement and implement. Process and analyse the data; report on developments and findings. Propose corrective actions as necessary during Morning Meeting. Take decisions within his field of remit, while keeping plant manager informed of the activities. Ensure budget constraints are respected. Make and implement improvement proposals. Develop the practice of Continuous Improvement throughout areas of responsibility. Data and records Development of continuous improvement processes (OEE). Develop and maintain accurate written procedures for the department and ensure these are followed. Report on the performance losses / Break Downs and implement corrective actions. All equipment has appropriate records, manuals, certification, PUWER assessments etc. These records are kept up-to-date and are current. Ensure all modifications to plant and process equipment are recorded, approved and compliant with all standards, follow MOC (Management of Change) Process, follow internal and external regulations. Hygiene of Internal External Areas associated to your Responsibility & Waste Control The plant is a safe environment to work in The plant hygiene is maintained to high standards at all times. Change work ethic and culture towards a principle of self-starting and continuous improvement behaviour. Undertake regular plant tours. Ensure 5S & cleanliness procedures in the work shop are adhered to. React to any drift; liaise with shop floor to maintain standards. Direct and facilitate the resources to ensure standards are maintained. Ensure equipment is fit for purpose. Propose new measures to take away drift in behavior. Management Activity The company policies are distributed and explained to all personnel. Employees are competent and motivated. Any fall in standards is arrested and rectified immediately. Make improvement proposals. Give opinion on the performance of engineers. Propose sanctions. Motivate employees. Act as an interface role. Support Management. Participate within Plant Management – operations, quality meetings etc. Participate in the implementation of corrective actions. Others Develop and implement strategies that accelerate and improve current maintenance practices and processes to improve equipment performance, reliability and lower repair costs. Initiate, implement, and manage the plant maintenance programs based on best practices in our industry, with an emphasis on equipment condition inspections, planning/scheduling, high quality maintenance repairs, and safety, health & environmental policies and procedures. Analyze operational data and equipment performance history to deliver improvements in critical maintenance related metrics including: unplanned downtime, PM compliance, schedule compliance, Mean-Time-Between-Failures, and maintenance related costs. Coordinate with cross-functional departments (Engineering, R&D, Supply Chain, etc) to ensure operational effectiveness in life cycle cost considerations in equipment procurement activities. Develop and deliver comprehensive maintenance and reliability tactical training to maintenance resources. Leverage company subject matter experience/experts to advance current maintenance and reliability efforts through enhanced communication and best practice sharing by driving their application. Partner with worldwide operations group to coordinate maintenance activities in support of operational excellence. Regularly respond with advice to maintenance/equipment related questions, ensure access to up-to-date maintenance/operating procedures, and facilitate strong team communications activities. Establish, maintain, and leverage value from a computerized maintenance management system (CMMS) for tracking work orders, planned/predictive maintenance. Identify required equipment and process upgrades and effectively manage associated projects. Ensure accuracy in spare parts inventory and develop system as appropriate. Show more Show less
Posted 1 month ago
3.0 - 5.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Job Description Build robust ML pipelines and automate model training, evaluation, and deployment. Optimize and tune models for financial time-series, pricing engines, and fraud detection. Collaborate with data scientists and data engineers to deploy scalable and secure ML models. Monitor model drift, data drift, and ensure models are retrained and updated as per regulatory norms. Implement CI/CD for ML and integrate with enterprise applications. Tech Stack Languages: Python ML Platforms: MLflow, Kubeflow MLOps Tools: Airflow, MLReef, Seldon Libraries: scikit-learn, XGBoost, LightGBM Cloud: GCP AI Platform Containerization: Docker, Kubernetes Job Category: AI/ML Engineer Job Type: Full Time Job Location: Mumbai Exp-Level: 3 to 5 Years Apply for this position Full Name * Email * Phone * Cover Letter * Upload CV/Resume *Allowed Type(s): .pdf, .doc, .docx By using this form you agree with the storage and handling of your data by this website. * Recent Comments Show more Show less
Posted 1 month ago
0.0 - 40.0 years
0 Lacs
Gurugram, Haryana
On-site
Additional Locations: India-Haryana, Gurgaon Diversity - Innovation - Caring - Global Collaboration - Winning Spirit - High Performance At Boston Scientific, we’ll give you the opportunity to harness all that’s within you by working in teams of diverse and high-performing employees, tackling some of the most important health industry challenges. With access to the latest tools, information and training, we’ll help you in advancing your skills and career. Here, you’ll be supported in progressing – whatever your ambitions. GenAI / AI Platform Architect Join Boston Scientific at the forefront of innovation as we embrace AI to transform healthcare and deliver cutting‑edge solutions. We are seeking an experienced GenAI / AI Platform Architect to define, build, and continuously improve a secure, governable, and enterprise‑grade Generative‑AI platform that underpins copilots, RAG search, intelligent document processing, agentic workflows, and other high‑value use cases. Your responsibilities will include: Own the reference architecture for GenAI: LLM hosting, vector DBs, orchestration layer, real‑time inference, and evaluation pipelines. Design and govern Retrieval‑Augmented Generation (RAG) pipelines—embedding generation, indexing, hybrid retrieval, and prompt assembly—for authoritative, auditable answers. Select and integrate toolchains (LangChain, LangGraph, LlamaIndex, MLflow, Kubeflow, Airflow) and ensure compatibility with cloud GenAI services (Azure OpenAI, Amazon Bedrock, Vertex AI). Implement MLOps / LLMOps: automated CI/CD for model fine‑tuning, evaluation, rollback, and blue‑green deployments; integrate model‑performance monitoring and drift detection. Embed “shift‑left” security and responsible‑AI guardrails—PII redaction, model‑output moderation, lineage logging, bias checks, and policy‑based access controls—working closely with CISO and compliance teams. Optimize cost‑to‑serve through dynamic model routing, context‑window compression, and GPU / Inferentia auto‑scaling; publish charge‑back dashboards for business units. Mentor solution teams on prompt engineering, agentic patterns (ReAct, CrewAI), and multi‑modal model integration (vision, structured data). Establish evaluation frameworks (e.g., LangSmith, custom BLEU/ROUGE/BERT‑Score pipelines, human‑in‑the‑loop) to track relevance, hallucination, toxicity, latency, and carbon footprint. Report KPIs (MTTR for model incidents, adoption growth, cost per 1k tokens) and iterate roadmap in partnership with product, data, and infrastructure leads. Required Qualifications: 10+ years designing cloud‑native platforms or AI/ML systems; 3+ years leading large‑scale GenAI, LLM, or RAG initiatives. Deep knowledge of LLM internals, fine‑tuning, RLHF, and agentic orchestration patterns (ReAct, Chain‑of‑Thought, LangGraph). Proven delivery on vector‑database architectures (Pinecone, Weaviate, FAISS, pgvector, Milvus) and semantic search optimization. Mastery of Python and API engineering; hands‑on with LangChain, LlamaIndex, FastAPI, GraphQL, gRPC. Strong background in security, governance, and observability across distributed AI services (IAM, KMS, audit trails, OpenTelemetry). Preferred Qualifications: Certifications: AWS Certified GenAI Engineer – Bedrock or Microsoft Azure AI Engineer Associate. Experience orchestrating multimodal models (images, video, audio) and streaming inference on edge devices or medical sensors. Published contributions to open‑source GenAI frameworks or white‑papers on responsible‑AI design. Familiarity with FDA or HIPAA compliance for AI solutions in healthcare. Demonstrated ability to influence executive stakeholders and lead cross‑functional tiger teams in a fast‑moving AI market. Requisition ID: 608452 As a leader in medical science for more than 40 years, we are committed to solving the challenges that matter most – united by a deep caring for human life. Our mission to advance science for life is about transforming lives through innovative medical solutions that improve patient lives, create value for our customers, and support our employees and the communities in which we operate. Now more than ever, we have a responsibility to apply those values to everything we do – as a global business and as a global corporate citizen. So, choosing a career with Boston Scientific (NYSE: BSX) isn’t just business, it’s personal. And if you’re a natural problem-solver with the imagination, determination, and spirit to make a meaningful difference to people worldwide, we encourage you to apply and look forward to connecting with you!
Posted 1 month ago
0.6 - 2.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Junior Project Manager — IDfy We don’t need experience; we need relentless execution. Role Overview: Support project delivery by coordinating teams, tracking progress, and clearing roadblocks. Learn on the job. Deliver on time. Own your part like a pro. Key Responsibilities: Assist in planning and executing projects under senior guidance. Communicate effectively with cross-functional teams to keep projects moving. Monitor timelines and raise flags early when things drift off course. Manage project documentation and action items with discipline. Participate in meetings, track decisions, and drive follow-ups. Learn project management tools and frameworks on the job. Adapt quickly and maintain urgency in a fast-paced environment. Qualifications: Bachelor’s degree in any field. 0.6-2 years of experience; willingness to learn is non-negotiable. Strong organizational and communication skills. Proactive, detail-oriented, and accountable. Comfortable working under pressure and managing multiple priorities. Show more Show less
Posted 1 month ago
0 years
0 Lacs
India
Remote
About the Role You’ll join a small, fast team turning cutting-edge AI research into shippable products across text, vision, and multimodal domains. One sprint you’ll be distilling an LLM for WhatsApp chat-ops; the next you’ll be converting CAD drawings to BOM stories, or training a computer-vision model that flags onsite safety risks. You own the model life-cycle end-to-end: data prep ➞ fine-tune/distil ➞ evaluate ➞ deploy ➞ monitor. Key Responsibilities Model Engineering • Fine-tune and quantise open-weight LLMs (Llama 3, Mistral, Gemma) and SLMs for low-latency edge inference. • Train or adapt computer-vision models (YOLO, Segment Anything, SAM-DINO) to detect site hazards, drawings anomalies, or asset states. Multimodal Pipelines • Build retrieval-augmented-generation (RAG) stacks: loaders → vector DB (FAISS / OpenSearch) → ranking prompts. • Combine vision + language outputs into single “scene → story” responses for dashboards and WhatsApp bots. Serving & MLOps • Package models as Docker images, SageMaker endpoints, or ONNX edge bundles; expose FastAPI/GRPC handlers with auth, rate-limit, telemetry. • Automate CI/CD: GitHub Actions → Terraform → blue-green deploys. Evaluation & Guardrails • Design automatic eval harnesses (BLEU, BERTScore, CLIP similarity, toxicity & bias checks). • Monitor drift, hallucination, latency; implement rollback triggers. Enablement & Storytelling • Write prompt playbooks & model cards so other teams can reuse your work. • Run internal workshops: “From design drawing to narrative” / “LLM safety by example”. Required Skills & Experience 3+ yrs ML/NLP/CV in production; at least 1 yr hands-on with Generative AI . Strong Python (FastAPI, Pydantic, asyncio) and HuggingFace Transformers OR diffusers . Experience with minimal-footprint models (LoRA, QLoRA, GGUF, INT-4) and vector search. Comfortable on AWS/GCP/Azure for GPU instances, serverless endpoints, IaC. Solid grasp of evaluation/guardrail frameworks (Helm, PromptLayer, Guardrails-AI, Triton metrics). Bonus Points Built a RAG or function-calling agent used by 500+ users. Prior CV pipeline (object-detection, segmentation) or speech-to-text real-time project. Live examples of creative prompt engineering or story-generation. Familiarity with LangChain, LlamaIndex, or BentoML. Why You’ll Love It Multidomain playground – text, vision, storytelling, decision-support. Tech freedom – pick the right model & stack; justify it; ship it. Remote-first – work anywhere ±4 hrs of IST; quarterly hack-weeks in Hyderabad. Top-quartile pay – base + milestone bonus + conference stipend. How to Apply Send a resume and link to GitHub / HF / Kaggle showcasing LLM or CV work. Include a 200-word note describing your favourite prompt or model tweak and the impact it had. Short-listed candidates complete a practical take-home (fine-tune tiny model, build RAG or vision demo, brief write-up) and a 45-min technical chat. We hire builders, not resume keywords. Show us you can ship AI that works in the real world—and explain it clearly—and you’re in. Show more Show less
Posted 1 month ago
9.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Description Job Description : We are seeking a highly experienced and innovative Senior Data Engineer with a strong background in hybrid cloud data integration, pipeline orchestration, and AI-driven data modeling. This role is responsible for designing, building, and optimizing robust, scalable, and production-ready data pipelines across both AWS and Azure platforms, supporting modern data architectures such as CEDM and Data Vault 2.0. Responsibilities Design and develop hybrid ETL/ELT pipelines using AWS Glue and Azure Data Factory (ADF). Process files from AWS S3 and Azure Data Lake Gen2, including schema validation and data profiling. Implement event-based orchestration using AWS Step Functions and Apache Airflow (Astronomer). Develop and maintain bronze → silver → gold data layers using DBT or Coalesce. Create scalable ingestion workflows using Airbyte, AWS Transfer Family, and Rivery. Integrate with metadata and lineage tools like Unity Catalog and OpenMetadata. Build reusable components for schema enforcement, EDA, and alerting (e.g., MS Teams). Work closely with QA teams to integrate test automation and ensure data quality. Collaborate with cross-functional teams including data scientists and business stakeholders to align solutions with AI/ML use cases. Document architectures, pipelines, and workflows for internal stakeholders. Requirements Essential Skills: Job Experience with cloud platforms: AWS (Glue, Step Functions, Lambda, S3, CloudWatch, SNS, Transfer Family) and Azure (ADF, ADLS Gen2, Azure Functions,Event Grid). Skilled in transformation and ELT tools: Databricks (PySpark), DBT, Coalesce, and Python. Proficient in data ingestion using Airbyte, Rivery, SFTP/Excel files, and SQL Server extracts. Strong understanding of data modeling techniques including CEDM, Data Vault 2.0, and Dimensional Modeling. Hands-on experience with orchestration tools such as AWS Step Functions, Airflow (Astronomer), and ADF Triggers. Expertise in monitoring and logging with CloudWatch, AWS Glue Metrics, MS Teams Alerts, and Azure Data Explorer (ADX). Familiar with data governance and lineage tools: Unity Catalog, OpenMetadata, and schema drift detection. Proficient in version control and CI/CD using GitHub, Azure DevOps, CloudFormation, Terraform, and ARM templates. Experienced in data validation and exploratory data analysis with pandas profiling, AWS Glue Data Quality, and Great Expectations. Personal Excellent communication and interpersonal skills, with the ability to engage with teams. Strong problem-solving, decision-making, and conflict-resolution abilities. Proven ability to work independently and lead cross-functional teams. Ability to work in a fast-paced, dynamic environment and handle sensitive issues with discretion and professionalism. Ability to maintain confidentiality and handle sensitive information with attention to detail with discretion. The candidate must have strong work ethics and trustworthiness Must be highly collaborative and team oriented with commitment to excellence. Preferred Skills Job Proficiency in SQL and at least one programming language (e.g., Python, Scala). Experience with cloud data platforms (e.g., AWS, Azure, GCP) and their data and AI services. Knowledge of ETL tools and frameworks (e.g., Apache NiFi, Talend, Informatica). Deep understanding of AI/Generative AI concepts and frameworks (e.g., TensorFlow, PyTorch, Hugging Face, OpenAI APIs). Experience with data modeling, data structures, and database design. Proficiency with data warehousing solutions (e.g., Redshift, BigQuery, Snowflake). Hands-on experience with big data technologies (e.g., Hadoop, Spark, Kafka). Personal Demonstrate proactive thinking Should have strong interpersonal relations, expert business acumen and mentoring skills Have the ability to work under stringent deadlines and demanding client conditions Ability to work under pressure to achieve the multiple daily deadlines for client deliverables with a mature approach Other Relevant Information Bachelor’s in Engineering with specialization in Computer Science or Artificial Intelligence or Information Technology or a related field. 9+ years of experience in data engineering and data architecture. LeewayHertz is an equal opportunity employer and does not discriminate based on race, color, religion, sex, age, disability, national origin, sexual orientation, gender identity, or any other protected status. We encourage a diverse range of applicants. check(event) ; career-website-detail-template-2 => apply(record.id,meta)" mousedown="lyte-button => check(event)" final-style="background-color:#6875E2;border-color:#6875E2;color:white;" final-class="lyte-button lyteBackgroundColorBtn lyteSuccess" lyte-rendered=""> Show more Show less
Posted 1 month ago
0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
Job Description: The AI/ML engineer role requires a blend of expertise in machine learning operations (MLOps), ML Engineering, Data Science, Large Language Models (LLMs), and software engineering principles. Skills you'll need to bring: Experience building production-quality ML and AI systems. Experience in MLOps and real-time ML and LLM model deployment and evaluation. Experience with RAG frameworks and Agentic workflows valuable. Proven experience deploying and monitoring large language models (e.g., Llama, Mistral, etc.). Improve evaluation accuracy and relevancy using creative, cutting-edge techniques from both industry and new research Solid understanding of real-time data processing and monitoring tools for model drift and data validation. Knowledge of observability best practices specific to LLM outputs, including semantic similarity, compliance, and output quality. Strong programming skills in Python and familiarity with API-based model serving. Experience with LLM management and optimization platforms (e.g., LangChain, Hugging Face). Familiarity with data engineering pipelines for real-time input-output logging and analysis. Qualifications: Experience working with common AI-related models, frameworks and toolsets like LLMs, Vector Databases, NLP, prompt engineering and agent architectures. Experience in building AI and ML solutions. Strong software engineering skills for the rapid and accurate development of AI models and systems. Prominent in programming language like Python. Hands-on experience with technologies like Databricks, and Delta Tables. Broad understanding of data engineering (SQL, NoSQL, Big Data), Agile, UX, Cloud, software architecture, and ModelOps/MLOps. Experience in CI/CD and testing, with experience building container-based stand-alone applications using tools like GitHub, Jenkins, Docker and Kubernetes Responsibilities: Participate in research and innovation of data science projects that have impact to our products and customers globally. Apply ML expertise to train models, validates the accuracy of the models, and deploys the models at scale to production. Apply best practices in MLOps, LLMOps, Data Science, and software engineering to ensure the delivery of clean, efficient, and reliable code. Aggregate huge amounts of data from disparate sources to discover patterns and features necessary to automate the analytical models. About Company Improva is a global IT solution provider and outsourcing company with contributions across several domains including FinTech, Healthcare, Insurance, Airline, Ecommerce & Retail, Logistics, Education, Insurance, Startups, Government & Semi-Government, and more. Show more Show less
Posted 1 month ago
0 years
0 Lacs
Hyderabad, Telangana, India
Remote
We are united in our mission to make a positive impact on healthcare. Join Us! South Florida Business Journal, Best Places to Work 2024 Inc. 5000 Fastest-Growing Private Companies in America 2024 2024 Black Book Awards, ranked #1 EHR in 11 Specialties 2024 Spring Digital Health Awards, “Web-based Digital Health” category for EMA Health Records (Gold) 2024 Stevie American Business Award (Silver), New Product and Service: Health Technology Solution (Klara) Who We Are We Are Modernizing Medicine (WAMM)! We’re a team of bright, passionate, and positive problem-solvers on a mission to place doctors and patients at the center of care through an intelligent, specialty-specific cloud platform. Our vision is a world where the software we build increases medical practice success and improves patient outcomes. Founded in 2010 by Daniel Cane and Dr. Michael Sherling, we have grown to over 3400 combined direct and contingent team members serving eleven specialties, and we are just getting started! ModMed is based in Boca Raton, FL, with office locations in Santiago, Chile, Berlin, Germany, Hyderabad, India, and a robust remote workforce with team members across the US. ModMed is hiring a driven ML Ops Engineer 2 to join our positive, passionate, and high-performing team focused on scalable ML Systems. This is an exciting opportunity to You as you will collaborate with data scientists, engineers, and other cross-functional teams to ensure seamless model deployment, monitoring, and automation. If you're passionate about cloud infrastructure, automation, and optimizing ML pipelines, this is the role for you within a fast-paced Healthcare IT company that is truly Modernizing Medicine! Key Responsibilities Model Deployment & Automation: Develop, deploy, and manage ML models on Databricks using MLflow for tracking experiments, managing models, and registering them in a centralized repository. Infrastructure & Environment Management: Set up scalable and fault-tolerant infrastructure to support model training and inference in cloud environments such as AWS, GCP, or Azure. Monitoring & Performance Optimization: Implement monitoring systems to track model performance, accuracy, and drift over time. Create automated systems for re-training and continuous learning to maintain optimal performance. Data Pipeline Integration: Collaborate with the data engineering team to integrate model pipelines with real-time and batch data processing frameworks, ensuring seamless data flow for training and inference. Skillset & Qualification Model Deployment: Experience with deploying models in production using cloud platforms like AWS Sagemaker, GCP AI Platform, or Azure ML Studio. Version Control & Automation: Experience with MLOps tools such as MLflow, Kubeflow, or Airflow to automate and monitor the lifecycle of machine learning models. Cloud Expertise: Experience with cloud-based machine learning services on AWS, Google Cloud, or Azure, ensuring that models are scalable and efficient. Engineers must be skilled in measuring and optimizing model performance through metrics like AUC, precision, recall, and F1-score, ensuring that models are robust and reliable in production settings. Education: Bachelor’s or Master’s degree in Data Science, Statistics, Mathematics, or a related technical field. ModMed In India Benefit Highlights High growth, collaborative, transparent, fun, and award-winning culture Comprehensive benefits package including medical for you, your family, and your dependent parents The company supported community engagement opportunities along with a paid Voluntary Time Off day to use for volunteering in your community of interest Global presence, and in-person collaboration opportunities; dog-friendly HQ (US), Hybrid office-based roles and remote availability Company-sponsored Employee Resource Groups that provide engaged and supportive communities within ModMed ModMed Benefits Highlight: At ModMed, we believe it’s important to offer a competitive benefits package designed to meet the diverse needs of our growing workforce. Eligible Modernizers can enroll in a wide range of benefits: India Meals & Snacks: Enjoy complimentary office lunches & dinners on select days and healthy snacks delivered to your desk, Insurance Coverage: Comprehensive health, accidental, and life insurance plans, including coverage for family members, all at no cost to employees, Allowances: Annual wellness allowance to support your well-being and productivity, Earned, casual, and sick leaves to maintain a healthy work-life balance, Bereavement leave for difficult times and extended medical leave options, Paid parental leaves, including maternity, paternity, adoption, surrogacy, and abortion leave, Celebration leave to make your special day even more memorable, and company-paid holidays to recharge and unwind. United States Comprehensive medical, dental, and vision benefits, including a company Health Savings Account contribution, 401(k): ModMed provides a matching contribution each payday of 50% of your contribution deferred on up to 6% of your compensation. After one year of employment with ModMed, 100% of any matching contribution you receive is yours to keep. Generous Paid Time Off and Paid Parental Leave programs, Company paid Life and Disability benefits, Flexible Spending Account, and Employee Assistance Programs, Company-sponsored Business Resource & Special Interest Groups that provide engaged and supportive communities within ModMed, Professional development opportunities, including tuition reimbursement programs and unlimited access to LinkedIn Learning, Global presence and in-person collaboration opportunities; dog-friendly HQ (US), Hybrid office-based roles and remote availability for some roles, Weekly catered breakfast and lunch, treadmill workstations, Zen, and wellness rooms within our BRIC headquarters. PHISHING SCAM WARNING: ModMed is among several companies recently made aware of a phishing scam involving imposters posing as hiring managers recruiting via email, text and social media. The imposters are creating misleading email accounts, conducting remote "interviews," and making fake job offers in order to collect personal and financial information from unsuspecting individuals. Please be aware that no job offers will be made from ModMed without a formal interview process, and valid communications from our hiring team will come from our employees with a ModMed email address (first.lastname@modmed.com). Please check senders’ email addresses carefully. Additionally, ModMed will not ask you to purchase equipment or supplies as part of your onboarding process. If you are receiving communications as described above, please report them to the FTC website. Show more Show less
Posted 1 month ago
3.0 years
0 Lacs
India
Remote
🚀 About the Role We're looking for a relentlessly curious Data Engineer to join our team as a Marketing Platform Specialist. This role is for someone who will dedicate themselves to extracting every possible byte of data from marketing platforms - the kind of person who gets excited about discovering hidden API endpoints and undocumented features. You'll craft pristine, high-resolution datasets in Snowflake that fuel advanced analytics and machine learning across the business. 🎯 The Mission Your singular focus: extract every drop of value from the world’s most powerful marketing platforms. Where others use 10% of a tool’s capabilities, you’ll uncover the hidden 90%, from granular auction insights to ephemeral algorithm data. You’ll build the intelligence layer that enables others to scale smarter, faster, and with precision. 🧪 What You’ll Actually Do Platform Data Extraction & Monitoring Reverse-engineer APIs across Meta, Google, TikTok, and others to extract hidden attribution data, auction signals, and algorithmic behavior Exploit beta features and undocumented endpoints to unlock advanced data streams Capture ephemeral data before it disappears: attribution snapshots, pacing drift, algorithm shifts Build real-time monitoring datasets to detect anomalies, pacing issues, and creative decay Scrape competitor websites and dissect tracking logic to reveal platform strategies Business Scaling & Optimization Datasets Build granular spend and performance datasets with dayparting, marginal ROAS, and cost efficiency metrics Develop lookalike audience models enriched with seed performance, overlap scores, and fatigue indicators Create auction intelligence datasets with hour/geo/placement granularity, bid behaviors, and overlap tracking Design optimization datasets for portfolio performance, campaign cannibalization, and creative lifecycle decay Extract machine learning signals from Meta Advantage+, Smart Bidding configs, and TikTok optimization logs Build competitive intelligence datasets with SOV trends, auction pressure, and creative benchmarks Advanced Feature & AI Data Engineering Extract structured data from advanced features like predictive analytics, customer match, and A/B testing tools Build multimodal datasets (ad copy, landing pages, video) ready for machine learning workflows Enrich historical marketing data using automated pipelines and AI-powered data cleaning Unified Customer & Attribution Data Build comprehensive GA4 datasets using precise tagging logic and event architecture Unify data from Firebase, Klaviyo, Tealium, and offline sources into full-funnel CDP datasets Engineer identity resolution pipelines using hashed emails, device IDs, and privacy-safe matching Map cross-device customer journeys with detailed attribution paths and timestamp precision Snowflake Architecture & Analytics Enablement Design and maintain scalable, semantic-layered Snowflake datasets with historical tracking and versioning Use S3 and data lakes to manage large-scale, unstructured data across channels Implement architectures suited for exploration, BI, real-time streaming, and ML modeling, including star schema, medallion, and Data Vault patterns Build dashboards and tools that reveal inefficiencies, scaling triggers, and creative performance decay 🎓 Skills & Experience Required 3+ years as a Data Engineer with deep experience in MarTech or growth/performance marketing Advanced Python for API extraction, automation, and orchestration JavaScript proficiency for tracking and GTM customization Expert-level experience with GA4 implementation and data handling Hands-on experience with Snowflake, including performance tuning and scaling Comfortable working with S3 and data lakes for semi/unstructured data Strong SQL and understanding of scalable data models 🤞Bonus Points if You Have Experience with dbt for transformation and modeling CI/CD pipelines using GitHub Actions or similar Knowledge of vector databases (e.g., Pinecone, Weaviate) for AI/ML readiness Familiarity with GPU computing for high-performance data workflows Data app development using R Shiny or Streamlit 🚀 You'll Excel Here If You Are: Relentlessly curious : You don’t wait for someone to show you the API docs; you find the endpoints yourself Detail-obsessed : You notice the subtle changes, the disappearing fields, the data drift others overlook Self-directed: You don’t need micromanagement. Just give you the mission, and you’ll reverse-engineer the map Comfortable with ambiguity: You can navigate undocumented features, partial datasets, and platform quirks without panic Great communicator: You explain technical decisions clearly, with just enough detail for analysts, marketers, and fellow engineers to move forward Product-minded: You think in terms of impact, not just pipelines. Every dataset you build is a stepping stone to better strategy, smarter automation, or faster growth 🔥Why the Conqueror: ⭐️Shape the data infrastructure powering real business growth 💡Join a purpose-driven, fast-moving team 📈 Work with fast-scaling e-commerce brands 🧠Personal development budget to continuously sharpen your skills 🏠Work remotely from anywhere 💼 3000 - 4200 euros Gross Salary/month 💛 About us We are a growing team of passionate, performance-driven individuals on a mission to be the best at growing multiple e-commerce businesses with great products. For the past 7 years, we’ve gathered a powerful community of over 1 million people globally, empowering people to build and sustain healthy habits in an enjoyable way. Our digital and physical products, the Conqueror Challenges App and epic medals, have helped users walk, run, cycle, swim, and move through the virtual equivalents of iconic routes. We’ve partnered with Warner Bros., Disney, and others to launch global hits like THE LORD OF THE RINGS™, HARRY POTTER™, and STAR WARS™ virtual challenges. Now, we’re stepping into an exciting new chapter! While continuing to grow our core business, we’re actively acquiring and building new e-commerce brands. We focus on using our marketing and operational strengths to scale these businesses, striving to always be outstanding in everything we do and delivering more value to more people. Show more Show less
Posted 1 month ago
3.0 years
0 Lacs
India
Remote
The Market Development Programs team at Red Hat is seeking a skilled Reporting Analyst to enhance our analytics and reporting capabilities, supporting global and regional stakeholders across the Market Development organization. In this pivotal role, you will create and manage Salesforce reports, CRM Analytics (CRMA) analytical reports, and Tableau dashboards, delivering insightful data for strategic decision-making and operational excellence. Operating in a dynamic, results-driven environment, you'll work closely with cross-functional teams and senior leadership to provide accurate, actionable insights crucial for Quarterly Business Reviews (QBRs), regional meetings, and other essential reporting needs. This role reports directly to the Manager of Market Development Programs and offers an opportunity to significantly influence the organization's analytical practices and performance reporting. What Will You Do Design, develop, and maintain comprehensive Salesforce reports to support day-to-day business operations and performance measurement Create advanced CRM Analytics (CRMA) reports and dashboards, providing detailed analysis to facilitate informed decision-making at global and regional levels Develop interactive and insightful Tableau dashboards that clearly visualize key performance metrics and trends Collaborate closely with stakeholders across various regions and functions to gather requirements and deliver tailored reporting solutions Provide analytical support and key data insights for Quarterly Business Reviews (QBRs), regional meetings, and ad-hoc reporting requests Ensure data accuracy, consistency, and timeliness across all reporting deliverables, maintaining the highest standards of quality Identify opportunities to streamline reporting processes, enhance efficiency, and drive continuous improvement within reporting and analytics practices Act as a trusted advisor to stakeholders, proactively identifying trends, insights, and recommendations based on data analysis What Will You Bring Bachelor’s degree in Business, Analytics, Information Systems, or a related field Minimum of 3-5 years experience in a reporting or analytics role, preferably within sales, marketing, or related operational teams Proven expertise in creating and managing Salesforce reports and dashboards Solid experience with CRM Analytics (CRMA), developing detailed analytical reports and insights Proficiency with Tableau or similar data visualization tools; demonstrated capability in creating intuitive and impactful dashboards Strong analytical and problem-solving skills, with an ability to interpret complex data and translate insights into actionable business recommendations Excellent communication and interpersonal skills, able to effectively engage with diverse, global stakeholder groups High level of attention to detail, accuracy, and commitment to delivering high-quality work under tight deadlines Ability to thrive in a fast-paced, agile environment, adapting swiftly to shifting priorities Familiarity with sales and marketing systems such as Outreach, Drift, Marketo, and other related platforms is an added advantage About Red Hat Red Hat is the world’s leading provider of enterprise open source software solutions, using a community-powered approach to deliver high-performing Linux, cloud, container, and Kubernetes technologies. Spread across 40+ countries, our associates work flexibly across work environments, from in-office, to office-flex, to fully remote, depending on the requirements of their role. Red Hatters are encouraged to bring their best ideas, no matter their title or tenure. We're a leader in open source because of our open and inclusive environment. We hire creative, passionate people ready to contribute their ideas, help solve complex problems, and make an impact. Inclusion at Red Hat Red Hat’s culture is built on the open source principles of transparency, collaboration, and inclusion, where the best ideas can come from anywhere and anyone. When this is realized, it empowers people from different backgrounds, perspectives, and experiences to come together to share ideas, challenge the status quo, and drive innovation. Our aspiration is that everyone experiences this culture with equal opportunity and access, and that all voices are not only heard but also celebrated. We hope you will join our celebration, and we welcome and encourage applicants from all the beautiful dimensions that compose our global village. Equal Opportunity Policy (EEO) Red Hat is proud to be an equal opportunity workplace and an affirmative action employer. We review applications for employment without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, citizenship, age, veteran status, genetic information, physical or mental disability, medical condition, marital status, or any other basis prohibited by law. Red Hat does not seek or accept unsolicited resumes or CVs from recruitment agencies. We are not responsible for, and will not pay, any fees, commissions, or any other payment related to unsolicited resumes or CVs except as required in a written contract between Red Hat and the recruitment agency or party requesting payment of a fee. Red Hat supports individuals with disabilities and provides reasonable accommodations to job applicants. If you need assistance completing our online job application, email application-assistance@redhat.com. General inquiries, such as those regarding the status of a job application, will not receive a reply. Show more Show less
Posted 1 month ago
2.0 - 5.0 years
6 Lacs
Hyderābād
On-site
Must-Have Skills & Traits Core Engineering Advanced Python skills with a strong grasp of clean, modular, and maintainable code practices Experience building production-ready backend services using frameworks like FastAPI, Flask, or Django Strong understanding of software architecture , including RESTful API design, modularity, testing, and versioning. Experience working with databases (SQL/NoSQL), caching layers, and background job queues. AI/ML & GenAI Expertise Hands-on experience with machine learning workflows: data preprocessing, model training, evaluation, and deployment Practical experience with LLMs and GenAI tools such as OpenAI APIs, Hugging Face, LangChain, or Transformers Understanding of how to integrate LLMs into applications through prompt engineering, retrieval-augmented generation (RAG), and vector search Comfortable working with unstructured data (text, images) in real-world product environments Bonus: experience with model fine-tuning, evaluation metrics, or vector databases like FAISS, Pinecone, or Weaviate Ownership & Execution Demonstrated ability to take full ownership of features or modules from architecture to delivery Able to work independently in ambiguous situations and drive solutions with minimal guidance Experience collaborating cross-functionally with designers, PMs, and other engineers to deliver user-focused solutions Strong debugging, systems thinking, and decision-making skills with an eye toward scalability and performance Nice-to-Have Skills Experience in startup or fast-paced product environments. 2-5 years of relevant experience. Familiarity with asynchronous programming patterns in Python. Exposure to event-driven architecture and tools such as Kafka, RabbitMQ, or AWS EventBridge Data science exposure: exploratory data analysis (EDA), statistical modeling, or experimentation Built or contributed to agentic systems, ML/AI pipelines, or intelligent automation tools Understanding of MLOps: model deployment, monitoring, drift detection, or retraining pipelines Frontend familiarity (React, Tailwind) for prototyping or contributing to full-stack features
Posted 1 month ago
6.0 years
0 Lacs
India
Remote
Who we are We're a leading, global security authority that's disrupting our own category. Our encryption is trusted by the major ecommerce brands, the world's largest companies, the major cloud providers, entire country financial systems, entire internets of things and even down to the little things like surgically embedded pacemakers. We help companies put trust - an abstract idea - to work. That's digital trust for the real world. Job summary As a DevOps Engineer, you will play a pivotal role in designing, implementing, and maintaining our infrastructure and deployment processes. You will collaborate closely with our development, operations, and security teams to ensure seamless integration of code releases, infrastructure automation, and continuous improvement of our DevOps practices. This role places a strong emphasis on infrastructure as code with Terraform, including module design, remote state management, policy enforcement, and CI/CD integration. You will manage authentication via Auth0, maintain secure network and identity configurations using AWS IAM and Security Groups, and oversee the lifecycle and upgrade management of AWS RDS and MSK clusters. Additional responsibilities include managing vulnerability remediation, containerized deployments via Docker, and orchestrating production workloads using AWS ECS and Fargate. What you will do Design, build, and maintain scalable, reliable, and secure infrastructure solutions on cloud platforms such as AWS, Azure, or GCP. Implement and manage continuous integration and continuous deployment (CI/CD) pipelines for efficient and automated software delivery. Develop and maintain infrastructure as code (IaC) — with a primary focus on Terraform — including building reusable, modular, and parameterized modules for scalable infrastructure. Securely manage Terraform state using remote backends (e.g., S3 with DynamoDB locks) and establish best practices for drift detection and resolution. Integrate Terraform into CI/CD pipelines with automated plan, apply, and policy-check gating Conduct testing and validation of Terraform code using tools such as Terratest, Checkov, or equivalent frameworks. Design and manage network infrastructure, including VPCs, subnets, routing, NAT gateways, and load balancers. Configure and manage AWS IAM roles, policies, and Security Groups to enforce least-privilege access control and secure application environments. Administer and maintain Auth0 for user authentication and authorization, including rule scripting, tenant settings, and integration with identity providers. Build and manage containerized applications using Docker, deployed through AWS ECS and Fargate for scalable and cost-effective orchestration. Implement vulnerability management workflows, including image scanning, patching, dependency management, and CI-integrated security controls. Manage RDS and MSK infrastructure, including lifecycle and version upgrades, high availability setup, and performance tuning. Monitor system health, performance, and capacity using tools like Prometheus, ELK, or Splunk; proactively resolve bottlenecks and incidents. Collaborate with development and security teams to resolve infrastructure issues, streamline delivery, and uphold compliance. What you will have Bachelor's degree in Computer Science, Engineering, or related field, or equivalent work experience. 6+ years in DevOps or similar role, with strong experience in infrastructure architecture and automation. Advanced proficiency in Terraform, including module creation, backend management, workspaces, and integration with version control and CI/CD. Experience with remote state management using S3 and DynamoDB, and implementing Terraform policy-as-code with OPA/Sentinel. Familiarity with Terraform testing/validation tools such as Terratest, InSpec, or Checkov. Strong background in cloud networking, VPC design, DNS, and ingress/egress control. Proficient with AWS IAM, Security Groups, EC2, RDS, S3, Lambda, MSK, and ECS/Fargate. Hands-on experience with Auth0 or equivalent identity management platforms. Proficient in container technologies like Docker, with production deployments via ECS/Fargate. Solid experience in vulnerability and compliance management across the infrastructure lifecycle. Skilled in scripting (Python, Bash, PowerShell) for automation and tooling development. Experience in monitoring/logging using Prometheus, ELK stack, Grafana, or Splunk. Excellent troubleshooting skills in cloud-native and distributed systems. Effective communicator and cross-functional collaborator in Agile/Scrum environments. Nice to have Terraform (Intermediate) • AWS (IAM, Security Groups, RDS, MSK, ECS/Fargate, Cloudwatch) • Docker • CI/CD (GitLab, Jenkins) • Auth0 • Python/Bash Benefits Generous time off policies Top shelf benefits Education, wellness and lifestyle support Show more Show less
Posted 1 month ago
8.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Role name: Automation Test Lead Years of exp : 5 - 8 yrs About Dailoqa Dailoqa’s mission is to bridge human expertise and artificial intelligence to solve the challenges facing financial services. Our founding team of 20+ international leaders, including former CIOs and senior industry experts, combines extensive technical expertise with decades of real-world experience to create tailored solutions that harness the power of combined intelligence. With a focus on Financial Services clients, we have deep expertise across Risk & Regulations, Retail & Institutional Banking, Capital Markets, and Wealth & Asset Management. Dailoqa has global reach in UK, Europe, Africa, India, ASEAN, and Australia. We integrate AI into business strategies to deliver tangible outcomes and set new standards for the financial services industry. Working at Dailoqa will be hard work, our environment is fluid and fast-moving and you'll be part of a community that values innovation, collaboration, and relentless curiosity. We’re looking at people who: Are proactive, curious adaptable, and patient Shape the company's vision and will have a direct impact on its success. Have the opportunity for fast career growth. Have the opportunity to participate in the upside of an ultra-growth venture. Have fun 🙂 Don’t apply if: You want to work on a single layer of the application. You prefer to work on well-defined problems. You need clear, pre-defined processes. You prefer a relaxed and slow paced environment. Role Overview As an Automation Test Lead at Dailoqa, you’ll architect and implement robust testing frameworks for both software and AI/ML systems. You’ll bridge the gap between traditional QA and AI-specific validation, ensuring seamless integration of automated testing into CI/CD pipelines while addressing unique challenges like model accuracy, GenAI output validation, and ethical AI compliance. Key Responsibilities Test Automation Strategy & Framework Design Design and implement scalable test automation frameworks for frontend (UI/UX), backend APIs, and AI/ML model-serving endpoints using tools like Selenium, Playwright, Postman, or custom Python/Java solutions. Build GenAI-specific test suites for validating prompt outputs, LLM-based chat interfaces, RAG systems, and vector search accuracy. Develop performance testing strategies for AI pipelines (e.g., model inference latency, resource utilization). Continuous Testing & CI/CD Integration Establish and maintain continuous testing pipelines integrated with GitHub Actions, Jenkins, or GitLab CI/CD. Implement shift-left testing by embedding automated checks into development workflows (e.g., unit tests, contract testing). AI/ML Model Validation Collaborate with data scientists to test AI/ML models for accuracy, fairness, stability, and bias mitigation using tools like TensorFlow Model Analysis or MLflow. Validate model drift and retraining pipelines to ensure consistent performance in production. Quality Metrics & Reporting Define and track KPIs: Test coverage (code, data, scenarios) Defect leakage rate Automation ROI (time saved vs. maintenance effort) Model accuracy thresholds Report risks and quality trends to stakeholders in sprint reviews. Drive adoption of AI-specific testing tools (e.g., LangChain for LLM testing, Great Expectations for data validation). Technical Requirements Must-Have 5–8 years in test automation, with 2+ years validating AI/ML systems. Expertise in: Automation tools: Selenium, Playwright, Cypress, REST Assured, Locust/JMeter CI/CD: Jenkins, GitHub Actions, GitLab AI/ML testing: Model validation, drift detection, GenAI output evaluation Languages: Python, Java, or JavaScript Certifications: ISTQB Advanced, CAST, or equivalent. Experience with MLOps tools: MLflow, Kubeflow, TFX Familiarity with vector databases (Pinecone, Milvus) and RAG workflows. Strong programming/scripting experience in JavaScript, Python, Java, or similar Experience with API testing, UI testing, and automated pipelines Understanding of AI/ML model testing, output evaluation, and non-deterministic behavior validation Experience with testing AI chatbots, LLM responses, prompt engineering outcomes, or AI fairness/bias Familiarity with MLOps pipelines and automated validation of model performance in production Exposure to Agile/Scrum methodology and tools like Azure Boards Soft Skills Strong problem-solving skills for balancing speed and quality in fast-paced AI development. Ability to communicate technical risks to non-technical stakeholders. Collaborative mindset to work with cross-functional teams (data scientists, ML engineers, DevOps). Show more Show less
Posted 1 month ago
10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Sr Director/ VP AI & Machine Learning – Strategy Overview The next evolution of AI-powered cyber defense is here. With the rise of cloud and modern technologies, organizations struggle with the vast amount of data and thereby security alerts generated by their existing security tools. Cyberattacks continue to get more sophisticated and harder to detect in the sea of alerts and false positives. According to the Forrester 2023 Enterprise Breach Benchmark Report, a security breach costs organizations an average of $3M and takes organizations over 200 days to investigate and respond. AiStrike’s platform aims to reduce the time to investigate and respond to threats by over 90%. Our approach is to leverage the power of AI and machine learning to adopt an attacker mindset to prioritize and automate cyber threat investigation and response. The platform reduces alerts by 100:5 and provides detailed context and link analysis capabilities to investigate the alert. The platform also provides collaborative workflow and no code automation to cut down the time to respond to threats significantly. We are looking for a forward-thinking Leader for AI to define and lead the AI and ML strategy for our next-generation cybersecurity platform. This role sits at the intersection of data science, cybersecurity operations, and product innovation, responsible for transforming security telemetry into intelligent workflows, automated decisions, and self-improving systems. You will lead the vision and execution for how classification, clustering, correlation, and feedback loops are built into our AI-powered threat investigation and response engine. Your work will directly impact how analysts investigate alerts, how automation adapts over time, and how customers operationalize AI safely and effectively in high-stakes security environments. Key Responsibilities ● Define the AI Strategy & Roadmap: Own and drive the strategic direction for AI/ML across investigation, prioritization, alert triage, and autonomous response. ● Architect Feedback-Driven AI Systems: Design scalable feedback loops where analyst input, alert outcomes, and system performance continuously refine models. ● Operationalize ML for Security: Work with detection engineering, platform, and data teams to apply clustering, classification, and anomaly detection on massive datasets—logs, alerts, identities, cloud events—not images or media. ● Guide Complex Security Workflows: Translate noisy, high-volume telemetry into structured workflows powered by AI—spanning enrichment, correlation, and decisioning. ● Collaborate Across Functions: Partner with product managers, detection engineers, threat researchers, and ML engineers to define use cases, data needs, and modeling approaches. ● Ensure Explainability and Trust: Prioritize model transparency, accuracy, and control—enabling human-in-the-loop or override in high-risk environments. ● Lead AI Governance and Deployment Frameworks: Define policies, versioning, validation, and release processes for customer-safe AI usage in production environments. Requirements ● 10+ years of experience in data science, applied ML, or AI product leadership, with at least 3–5 years in cybersecurity, enterprise SaaS, or complex data domains. ● Demonstrated experience applying classification, clustering, correlation, and anomaly detection on structured/semi-structured data (e.g., logs, alerts, network events). ● Strong understanding of cybersecurity workflows: detection, investigation, triage, threat hunting, incident response, etc. ● Experience in building data feedback pipelines or reinforcement learning-like systems where user input improves future predictions or decisions. ● Proven ability to scale AI/ML systems across multi-tenant environments or customer-facing platforms. ● Familiarity with platforms such as Snowflake, Google Chronicle, Sentinel (KQL), or SIEM/SOAR tools is a strong plus. ● Exceptional communication and storytelling skills: able to communicate AI strategy to technical and executive stakeholders alike. ● Experience with security-specific ML tooling or frameworks (e.g., security data lakes, Sigma correlation engines, MITRE ATT&CK mapping). ● Prior work in multi-modal learning environments (signals from logs, identity, cloud infra, etc.). ● Deep familiarity with model evaluation, drift detection, and automated retraining in production settings. ● Exposure to or leadership in building agentic AI workflows or co-pilot-style assistant models in the security space. AiStrike is committed to providing equal employment opportunities. All qualified applicants and employees will be considered for employment and advancement without regard to race, color, religion, creed, national origin, ancestry, sex, gender, gender identity, gender expression, physical or mental disability, age, genetic information, sexual or affectional orientation, marital status, status regarding public assistance, familial status, military or veteran status or any other status protected by applicable law. Show more Show less
Posted 1 month ago
0.0 - 40.0 years
0 Lacs
Gurugram, Haryana
On-site
Additional Locations: India-Haryana, Gurgaon Diversity - Innovation - Caring - Global Collaboration - Winning Spirit - High Performance At Boston Scientific, we’ll give you the opportunity to harness all that’s within you by working in teams of diverse and high-performing employees, tackling some of the most important health industry challenges. With access to the latest tools, information and training, we’ll help you in advancing your skills and career. Here, you’ll be supported in progressing – whatever your ambitions. Senior Engineer - Agentic AI Join Boston Scientific at the forefront of innovation as we embrace AI to transform healthcare and deliver cutting‑edge solutions. As a Principal / Senior Engineer – Agentic AI, you will architect and deliver autonomous, goal‑driven agents powered by large language models (LLMs) and multi‑agent frameworks. Your responsibilities will include: Design and implement agentic AI systems leveraging LLMs for reasoning, multi‑step planning, and tool execution. Evaluate and build upon multi‑agent frameworks such as LangGraph, AutoGen, and CrewAI to coordinate distributed problem‑solving agents. Develop context‑handling, memory, and API‑integration layers enabling agents to interact reliably with internal services and third‑party tools. Create feedback‑loop and evaluation pipelines (LangSmith, RAGAS, custom metrics) that measure factual grounding, safety, and latency. Own backend services that scale agent workloads, optimize GPU / accelerator utilization, and enforce cost governance. Embed observability, drift monitoring, and alignment guardrails throughout the agent lifecycle. Collaborate with research, product, and security teams to translate emerging agentic patterns into production‑ready capabilities. Mentor engineers on prompt engineering, tool‑use chains, and best practices for agent deployment in regulated environments. Required Qualifications: 8+ years of software engineering experience, including 3+ years building AI/ML or NLP systems. Expertise in Python and modern LLM APIs (OpenAI, Anthropic, etc.), plus agentic orchestration frameworks (LangGraph, AutoGen, CrewAI, LangChain, LlamaIndex). Proven delivery of agentic systems or LLM‑powered applications that invoke external APIs or tools. Deep knowledge of vector databases (Azure AI Search, Weaviate, Pinecone, FAISS, pgvector) and Retrieval‑Augmented Generation (RAG) pipelines. Hands‑on experience with LLMOps: CI/CD for fine‑tuning, model versioning, performance monitoring, and drift detection. Strong background in cloud‑native micro‑services, security, and observability. Preferred Qualifications: Experience integrating multimodal agents (vision, audio) and reinforcement‑learning feedback loops. • Contributions to open‑source agent frameworks or white papers on autonomous AI. Certifications in cloud GenAI services (AWS Bedrock, Azure OpenAI). Domain knowledge of healthcare, cybersecurity, or other regulated industries. Requisition ID: 608518 As a leader in medical science for more than 40 years, we are committed to solving the challenges that matter most – united by a deep caring for human life. Our mission to advance science for life is about transforming lives through innovative medical solutions that improve patient lives, create value for our customers, and support our employees and the communities in which we operate. Now more than ever, we have a responsibility to apply those values to everything we do – as a global business and as a global corporate citizen. So, choosing a career with Boston Scientific (NYSE: BSX) isn’t just business, it’s personal. And if you’re a natural problem-solver with the imagination, determination, and spirit to make a meaningful difference to people worldwide, we encourage you to apply and look forward to connecting with you!
Posted 1 month ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
About Frontier: At Frontier, we help companies grow full-time, cross-functional teams abroad. We hire the smartest people, and we place them in the best companies. We have placed over 800 hires across 50 different US based startups and high growth companies. About FlyFlat FlyFlat is a premium travel company that helps founders, investors, and executives book international business and first-class flights at unbeatable rates—often at 30 to 80 percent less than the retail price. We combine proprietary booking methods with a 24/7 white-glove concierge service, making it incredibly easy for clients and their teams to manage travel without friction. We've grown from a team of 12 to over 75 in the past year, 5x'ed our revenue year over year, and recently raised an oversubscribed round led by Bessemer Venture Partners. Our mission is to make premium travel more accessible, scalable, and efficient for the modern executive class. What began as a 100 percent B2C offering has evolved into a hybrid model serving B2C, B2B2C, and B2B segments. As we build our enterprise and tech platform, design, operations, and people infrastructure have become central to scaling our impact. FlyFlat's Cultural Values Client-First Thinking: No shortcuts when it comes to care, context, and follow-through. Extreme Ownership: You don't wait—you act, fix, follow up, and then improve. Clarity & Candor: You write clearly, speak up early, and communicate proactively. Calm Under Pressure: You stay solution-oriented and composed, even in client-critical moments. Relentless in Standards: "Close enough" is never good enough. We define best-in-class every day. About the Role We are looking for a proactive and systems-oriented People Operations Manager to build leverage across our people function. You will play a key role in creating repeatable systems that support onboarding, training, documentation, and performance reviews, allowing our leadership team to focus on strategic growth rather than manual coordination. This role is ideal for someone who: Loves translating chaos into order Can independently build and manage AI-powered process automation (e.g., Airtable, Notion AI, Zapier, ChatGPT) Has strong emotional intelligence and understands how to support people through structured, well-documented systems Is both a systems thinker and a doer who follows through on details without micromanagement Core Responsibilities Knowledge Management & Documentation Maintain and update our internal knowledge base (e.g., Notion, Google Docs, Connect Team) Translate unstructured inputs from leadership into clear, accessible documentation Identify and fill gaps in team-wide or role-specific documentation Create and version-control templates for SOPs, onboarding, and training Onboarding & Offboarding Ownership Fully own the onboarding/offboarding lifecycle using Connect Team or equivalent tools Coordinate setup of accounts, welcome materials, and checklist-based onboarding Track onboarding step completion and chase blockers to reduce dependency on exec follow-ups Collect feedback at 1-week, 3-week, and exit stages to identify process gaps Training & Shadowing Progress Tracking Maintain a live tracker of each new hire's progress during training and shadowing phases Raise flags on delays, lack of clarity, or underperformance during ramp-up Update training content and documentation based on real-time feedback Performance Review Support Coordinate scheduling and preparation of quarterly and annual performance reviews Maintain templates, timelines, and documentation of review outcomes Ensure consistent documentation and follow-up across teams and cycles Internal Process QA & Ops Hygiene Audit and clean up outdated documents and internal systems Maintain role maps, org charts, and SOP documentation Routinely check that internal processes match what's documented—and update accordingly Training Feedback & Improvement Survey all new hires post-onboarding to gather insights Turn common confusion points into revised documentation or process changes Coordinate short refresher sessions with team leads when process drift is detected Internal Communication & Culture Infrastructure Draft internal communications for process changes, onboarding messages, or reminders Maintain a clean org-wide calendar of performance cycles, onboarding start dates, etc. Help structure async rituals (e.g., shout-outs, wins, onboarding intros) Hiring Funnel – Interview Round 1 Ownership Conduct structured first-round interviews with candidates across roles to assess alignment, motivation, and role clarity Follow a consistent interview script aligned with the role's expectations and hiring manager input Flag misalignments, red flags, or key strengths with clear summaries for next-stage reviewers Identify improvements to the interview process based on patterns in candidate performance or feedback Ideal Candidate Profile 3+ years of experience in people operations, HR, or internal ops roles in a high-growth or remote-first environment Worked with a fast growing startup. Demonstrated ability to design and implement internal systems at scale High proficiency with Notion, Google Workspace, and automation tools like Zapier. Strong written communication and documentation skills Strong judgment, discretion, and interpersonal skills Comfort working with and building simple AI-powered tools to improve documentation and operations Location: Remote (Preference for candidates based in or near Hyderabad for future in-person collaboration) Show more Show less
Posted 1 month ago
5.0 years
0 Lacs
Greater Kolkata Area
On-site
About Sleek Through proprietary software and AI, along with a focus on customer delight, Sleek makes the back-office easy for micro SMEs. We give Entrepreneurs time back to focus on what they love doing growing their business and being with customers. With a surging number of Entrepreneurs globally, we are innovating in a highly lucrative space. We Operate 3 Business Segments Corporate Secretary : Automating the company incorporation, secretarial, filing, Nominee Director, mailroom and immigration processes via custom online robots and SleekSign. We are the market leaders in Singapore with : 5% market share of all new business incorporations. Accounting & Bookkeeping : Redefining what it means to do Accounting, Bookkeeping, Tax and Payroll thanks to our proprietary SleekBooks ledger, AI tools and exceptional customer service. FinTech payments : Overcoming a key challenge for Entrepreneurs by offering digital banking services to new businesses. Sleek launched in 2017 and now has around 15,000 customers across our offices in Singapore, Hong Kong, Australia and the UK. We have around 450 staff with an intact startup mindset. We have achieved >70% compound annual growth in Revenue over the last 5 years and as a result have been recognised by The Financial Times, The Straits Times, Forbes and LinkedIn as one of the fastest growing companies in Asia. Role Backed by world-class investors, we are on track to be one of the few cash flow positive, tech-enabled unicorns based out of The Role : We are looking for an experienced Senior Data Engineer to join our growing team. As a key member of our data team, you will design, build, and maintain scalable data pipelines and infrastructure to enable data-driven decision-making across the organization. This role is ideal for a proactive, detail-oriented individual passionate about optimizing and leveraging data for impactful business : Work closely with cross-functional teams to translate our business vision into impactful data solutions. Drive the alignment of data architecture requirements with strategic goals, ensuring each solution not only meets analytical needs but also advances our core objectives. 3, Be pivotal in bridging the gap between business insights and technical execution by tackling complex challenges in data integration, modeling, and security, and by setting the stage for exceptional data performance and insights. Shape the data roadmap, influence design decisions, and empower our team to deliver innovative, scalable, high-quality data solutions every : Achieve and maintain a data accuracy rate of at least 99% for all business-critical dashboards by start of day (accounting for corrections and job failures), with a 24-business hour detection of error and 5-day correction SLA. 95% of data on dashboards originates from technical data pipelines to mitigate data drift. Set up strategic dashboards based on Business Needs which are robust, scalable, easy and quick to operate and maintain. Reduce costs of data warehousing and pipelines by 30%, then maintaining costs as data needs grow. Achieve 50 eNPS on data services (e.g. dashboards) from key business : Data Pipeline Development : Design, implement, and optimize robust, scalable ETL/ELT pipelines to process large volumes of structured and unstructured data. Data Modeling : Develop and maintain conceptual, logical, and physical data models to support analytics and reporting requirements. Infrastructure Management : Architect, deploy, and maintain cloud-based data platforms (e.g. , AWS, GCP). Collaboration : Work closely with data analysts, business owners, and stakeholders to understand data requirements and deliver reliable solutions, including designing and implementing robust, efficient and scalable data visualization on Tableau or LookerStudio. Data Governance : Ensure data quality, consistency, and security through robust validation and monitoring frameworks. Performance Optimization : Monitor, troubleshoot, and optimize the performance of data systems and pipelines. Innovation : Stay up to date with the latest industry trends and emerging technologies to continuously improve data engineering & Qualifications : Experience : 5+ years in data engineering, software engineering, or a related field. Technical Proficiency Proficiency in working with relational databases (e.g. , PostgreSQL, MySQL) and NoSQL databases (e.g. , MongoDB, Cassandra). Familiarity with big data frameworks like Hadoop, Hive, Spark, Airflow, BigQuery, etc. Strong expertise in programming languages such as Python, NodeJS, SQL etc. Cloud Platforms : Advanced knowledge of cloud platforms (AWS, or GCP) and their associated data services. Data Warehousing : Expertise in modern data warehouses like BigQuery, Snowflake or Redshift, etc. Tools & Frameworks : Expertise in version control systems (e.g. , Git), CI/CD, JIRA pipelines. Big Data Ecosystems / BI : BigQuery, Tableau, LookerStudio. Industry Domain Knowledge : Google Analytics (GA), Hubspot, Accounting/Compliance etc. Soft Skills : Excellent problem-solving abilities, attention to detail, and strong communication Qualifications : Degree in Computer Science, Engineering, or a related field. Experience with real-time data streaming technologies (e.g. , Kafka, Kinesis). Familiarity with machine learning pipelines and tools. Knowledge of data security best practices and regulatory The Interview Process : The successful candidate will participate in the below interview stages (note that the order might be different to what you read below). We anticipate the process to last no more than 3 weeks from start to finish. Whether the interviews are held over video call or in person will depend on your location and the role. Case study. A : 60 minute chat with the Data Analyst, where they will give you some real-life challenges that this role faces, and will ask for your approach to solving them. Career deep dive. A : 60 minute chat with the Hiring Manager (COO). They'll discuss your last 1-2 roles to understand your experience in more detail. Behavioural fit assessment. A : 60 minute chat with our Head of HR or Head of Hiring, where they will dive into some of your recent work situations to understand how you think and work. Offer + reference interviews. We'll Make a Non-binding Offer Verbally Or Over Email, Followed By a Couple Of Short Phone Or Video Calls With References That You Provide To For Background Screening Please be aware that Sleek is a regulated entity and as such is required to perform different levels of background checks on staff depending on their role. This may include using external vendors to verify the below : Your education. Any criminal history. Any political exposure. Any bankruptcy or adverse credit history. We will ask for your consent before conducting these checks. Depending on your role at Sleek, an adverse result on one of these checks may prohibit you from passing probation. (ref:hirist.tech) Show more Show less
Posted 1 month ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Overview We are looking for a skilled and passionate Flutter Engineer (SDE 2) to join our mobile development team. In this role, you'll be responsible for building high-quality, cross-platform mobile applications that offer seamless and engaging user experiences. You will take ownership of key product features, collaborate with cross-functional teams, and apply engineering best practices to deliver scalable and maintainable code. This is a great opportunity to grow your expertise while making a meaningful impact in a fast-paced, product-driven environment. Responsibilities Design, develop, and maintain cross-platform mobile applications using Flutter and Dart. Collaborate with product managers, designers, and backend engineers to implement new features from API integration to UI/UX. Write clean, maintainable, and testable code while following industry best practices and architecture patterns. Troubleshoot and resolve bugs, performance bottlenecks, and technical issues. Maintain a customer-first mindset, ensuring a great user experience across all devices. Take ownership of modules or components, working both independently and collaboratively with the team. Stay updated with the latest Flutter and mobile development trends and technologies. Use version control tools like Git for efficient code collaboration and management. Participate in code reviews and provide thoughtful feedback to improve code quality and consistency. Contribute to CI/CD pipelines to ensure smooth and reliable app releases. Requirements Must Have Proven experience in developing and deploying mobile applications using Flutter and Dart. Strong understanding of Flutter architecture patterns such as BLoC, Provider, Riverpod, or MVVM. Good knowledge of mobile development principles, UI/UX design, and app architecture. Experience with RESTful API integration and a solid grasp of API design. Proficiency in debugging, performance profiling, and optimization. Strong problem-solving skills with a “build fast and iterate” mindset. Excellent communication and collaboration skills. Comfortable working in a dynamic, fast-paced environment. Good to Have Experience with state management solutions like Riverpod, GetX, or MobX. Familiarity with Flutter’s new features such as Flutter Web, Flutter Desktop, or integration with native modules. Exposure to automated testing (unit, widget, and integration tests) using tools like Mockito, flutter_test, etc. Understanding of local databases (e.g., SQLite, Hive, Drift). Experience with CI/CD tools and deployment to Play Store and App Store. Familiarity with animations and building rich UI/UX experiences. Understanding of SOLID principles and clean code practices. APPLY NOW Show more Show less
Posted 1 month ago
2.0 - 5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Must-Have Skills & Traits Core Engineering Advanced Python skills with a strong grasp of clean, modular, and maintainable code practices Experience building production-ready backend services using frameworks like FastAPI, Flask, or Django Strong understanding of software architecture, including RESTful API design, modularity, testing, and versioning. Experience working with databases (SQL/NoSQL), caching layers, and background job queues. AI/ML & GenAI Expertise Hands-on experience with machine learning workflows: data preprocessing, model training, evaluation, and deployment Practical experience with LLMs and GenAI tools such as OpenAI APIs, Hugging Face, LangChain, or Transformers Understanding of how to integrate LLMs into applications through prompt engineering, retrieval-augmented generation (RAG), and vector search Comfortable working with unstructured data (text, images) in real-world product environments Bonus: experience with model fine-tuning, evaluation metrics, or vector databases like FAISS, Pinecone, or Weaviate Ownership & Execution Demonstrated ability to take full ownership of features or modules from architecture to delivery Able to work independently in ambiguous situations and drive solutions with minimal guidance Experience collaborating cross-functionally with designers, PMs, and other engineers to deliver user-focused solutions Strong debugging, systems thinking, and decision-making skills with an eye toward scalability and performance Nice-to-Have Skills Experience in startup or fast-paced product environments. 2-5 years of relevant experience. Familiarity with asynchronous programming patterns in Python. Exposure to event-driven architecture and tools such as Kafka, RabbitMQ, or AWS EventBridge Data science exposure: exploratory data analysis (EDA), statistical modeling, or experimentation Built or contributed to agentic systems, ML/AI pipelines, or intelligent automation tools Understanding of MLOps: model deployment, monitoring, drift detection, or retraining pipelines Frontend familiarity (React, Tailwind) for prototyping or contributing to full-stack features Show more Show less
Posted 1 month ago
2.0 - 6.0 years
5 - 11 Lacs
India
On-site
We are looking for an experienced AI Engineer to join our team. The ideal candidate will have a strong background in designing, deploying, and maintaining advanced AI/ML models with expertise in Natural Language Processing (NLP), Computer Vision, and architectures like Transformers and Diffusion Models. You will play a key role in developing AI-powered solutions, optimizing performance, and deploying and managing models in production environments. Key Responsibilities AI Model Development and Optimization: Design, train, and fine-tune AI models for NLP, Computer Vision, and other domains using frameworks like TensorFlow and PyTorch. Work on advanced architectures, including Transformer-based models (e.g., BERT, GPT, T5) for NLP tasks and CNN-based models (e.g., YOLO, VGG, ResNet) for Computer Vision applications. Utilize techniques like PEFT (Parameter-Efficient Fine-Tuning) and SFT (Supervised Fine-Tuning) to optimize models for specific tasks. Build and train RLHF (Reinforcement Learning with Human Feedback) and RL-based models to align AI behavior with real-world objectives., Explore multimodal AI solutions combining text, vision, and audio using generative deep learning architectures. Natural Language Processing (NLP): Develop and deploy NLP solutions, including language models, text generation, sentiment analysis, and text-to-speech systems. Leverage advanced Transformer architectures (e.g., BERT, GPT, T5) for NLP tasks. AI Model Deployment and Frameworks: Deploy AI models using frameworks like VLLM, Docker, and MLFlow in production-grade environments. Create robust data pipelines for training, testing, and inference workflows. Implement CI/CD pipelines for seamless integration and deployment of AI solutions. Production Environment Management: Deploy, monitor, and manage AI models in production, ensuring performance, reliability, and scalability. Set up monitoring systems using Prometheus to track metrics like latency, throughput, and model drift. Data Engineering and Pipelines: Design and implement efficient data pipelines for preprocessing, cleaning, and transformation of large datasets. Integrate with cloud-based data storage and retrieval systems for seamless AI workflows. Performance Monitoring and Optimization: Optimize AI model performance through hyperparameter tuning and algorithmic improvements. Monitor performance using tools like Prometheus, tracking key metrics (e.g., latency, accuracy, model drift, error rates etc.) Solution Design and Architecture: Collaborate with cross-functional teams to understand business requirements and translate them into scalable, efficient AI/ML solutions. Design end-to-end AI systems, including data pipelines, model training workflows, and deployment architectures, ensuring alignment with business objectives and technical constraints. Conduct feasibility studies and proof-of-concepts (PoCs) for emerging technologies to evaluate their applicability to specific use cases. Stakeholder Engagement: Act as the technical point of contact for AI/ML projects, managing expectations and aligning deliverables with timelines. Participate in workshops, demos, and client discussions to showcase AI capabilities and align solutions with client needs. Experience: 2 - 6 years of experience Salary : 5-11 LPA Job Types: Full-time, Permanent Pay: ₹500,000.00 - ₹1,100,000.00 per year Schedule: Day shift Work Location: In person
Posted 1 month ago
7.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Description AWS Sales, Marketing, and Global Services (SMGS) is responsible for driving revenue, adoption, and growth from the largest and fastest growing small- and mid-market accounts to enterprise-level customers including public sector. Excited by using massive amounts of data to develop Machine Learning (ML) and Deep Learning (DL) models? Want to help the largest global enterprises derive business value through the adoption of Artificial Intelligence (AI)? Eager to learn from many different enterprise’s use cases of AWS ML and DL? Thrilled to be key part of Amazon, who has been investing in Machine Learning for decades, pioneering and shaping the world’s AI technology? At AWS ProServe India LLP (“ProServe India”), we are helping large enterprises build ML and DL models on the AWS Cloud. We are applying predictive technology to large volumes of data and against a wide spectrum of problems. Our Professional Services organization works together with our internal customers to address business needs of AWS customers using AI. AWS Professional Services is a unique consulting team in ProServe India. We pride ourselves on being customer obsessed and highly focused on the AI enablement of our customers. If you have experience with AI, including building ML or DL models, we’d like to have you join our team. You will get to work with an innovative company, with great teammates, and have a lot of fun helping our customers. If you do not live in a market where we have an open Data Scientist position, please feel free to apply. Our Data Scientists can live in any location where we have a Professional Service office. Key job responsibilities Responsibilities A successful candidate will be a person who enjoys diving deep into data, doing analysis, discovering root causes, and designing long-term solutions. It will be a person who likes to have fun, loves to learn, and wants to innovate in the world of AI. Major responsibilities include: Understand the internal customer’s business need and guide them to a solution using our AWS AI Services, AWS AI Platforms, AWS AI Frameworks, and AWS AI EC2 Instances . Assist internal customers by being able to deliver a ML / DL project from beginning to end, including understanding the business need, aggregating data, exploring data, building & validating predictive models, and deploying completed models to deliver business impact to the organization. Use Deep Learning frameworks like MXNet, Caffe 2, Tensorflow, Theano, CNTK, and Keras to help our internal customers build DL models. Use SparkML and Amazon Machine Learning (AML) to help our internal customers build ML models. Work with our Professional Services Big Data consultants to analyze, extract, normalize, and label relevant data. Work with our Professional Services DevOps consultants to help our internal customers operationalize models after they are built. Assist internal customers with identifying model drift and retraining models. Research and implement novel ML and DL approaches, including using FPGA. This role is open for Mumbai/Pune/Bangalore/Chennai/Hyderabad/Delhi/Pune. About The Team Diverse Experiences AWS values diverse experiences. Even if you do not meet all of the qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying. Why AWS? Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses. Inclusive Team Culture Here at AWS, it’s in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empower us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences, inspire us to never stop embracing our uniqueness. Mentorship & Career Growth We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional. Work/Life Balance We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve in the cloud. Basic Qualifications 7+ years of professional or military experience, including a Bachelor's degree. 7+ years managing complex, large-scale projects with internal or external customers. Assist internal customers by being able to deliver a ML / DL project from beginning to end, including understanding the business need, aggregating data, exploring data, building & validating predictive models, and deploying completed models to deliver business impact to the organization. Skilled in using Deep Learning frameworks (MXNet, Caffe2, TensorFlow, Theano, CNTK, Keras) and ML tools (SparkML, Amazon Machine Learning) to build models for internal customers. Preferred Qualifications 7+ years of IT platform implementation in a technical and analytical role experience. Experience in consulting, design and implementation of serverless distributed solutions. Experienced in databases (SQL, NoSQL, Hadoop, Spark, Kafka, Kinesis) and managing complex, large-scale customer-facing projects. Experienced as a technical specialist in design and architecture, with expertise in cloud-based solutions (AWS or equivalent), systems, networks, and operating systems. Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - AWS ProServe IN - Karnataka Job ID: A3009199 Show more Show less
Posted 1 month ago
7.0 years
40 Lacs
India
Remote
Experience : 7.00 + years Salary : INR 4000000.00 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: MatchMove) (*Note: This is a requirement for one of Uplers' client - MatchMove) What do you need for this opportunity? Must have skills required: Gen AI, AWS data stack, Kinesis, open table format, Pyspark, stream processing, Kafka, MySQL, Python MatchMove is Looking for: Technical Lead - Data Platform Data, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight. You will contribute to Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services. Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark. Structuring and evolving data into OTF formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services. Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases. Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment. Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM). Using Generative AI tools to enhance developer productivity — including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights. Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines. Responsibilities:: Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR. Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication. Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3, and cataloged with Glue and Lake Formation. Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g. fraud engines) and human-readable interfaces (e.g. dashboards). Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations. Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership. Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform. Requirements At-least 7 years of experience in data engineering. Deep hands-on experience with AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs. Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation. Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions. Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments. Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains — with strong engineering hygiene. Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders. Brownie Points:: Experience working in a PCI DSS or any other central bank regulated environment with audit logging and data retention requirements. Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection. Familiarity with data contracts, data mesh patterns, and data as a product principles. Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases. Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3. Experience building data platforms for ML/AI teams or integrating with model feature stores. Engagement Model: : Direct placement with client This is remote role Shift timings ::10 AM to 7 PM How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you! Show more Show less
Posted 1 month ago
7.0 years
40 Lacs
Kochi, Kerala, India
Remote
Experience : 7.00 + years Salary : INR 4000000.00 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: MatchMove) (*Note: This is a requirement for one of Uplers' client - MatchMove) What do you need for this opportunity? Must have skills required: Gen AI, AWS data stack, Kinesis, open table format, Pyspark, stream processing, Kafka, MySQL, Python MatchMove is Looking for: Technical Lead - Data Platform Data, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight. You will contribute to Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services. Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark. Structuring and evolving data into OTF formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services. Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases. Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment. Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM). Using Generative AI tools to enhance developer productivity — including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights. Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines. Responsibilities:: Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR. Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication. Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3, and cataloged with Glue and Lake Formation. Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g. fraud engines) and human-readable interfaces (e.g. dashboards). Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations. Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership. Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform. Requirements At-least 7 years of experience in data engineering. Deep hands-on experience with AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs. Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation. Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions. Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments. Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains — with strong engineering hygiene. Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders. Brownie Points:: Experience working in a PCI DSS or any other central bank regulated environment with audit logging and data retention requirements. Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection. Familiarity with data contracts, data mesh patterns, and data as a product principles. Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases. Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3. Experience building data platforms for ML/AI teams or integrating with model feature stores. Engagement Model: : Direct placement with client This is remote role Shift timings ::10 AM to 7 PM How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you! Show more Show less
Posted 1 month ago
7.0 years
40 Lacs
Greater Bhopal Area
Remote
Experience : 7.00 + years Salary : INR 4000000.00 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: MatchMove) (*Note: This is a requirement for one of Uplers' client - MatchMove) What do you need for this opportunity? Must have skills required: Gen AI, AWS data stack, Kinesis, open table format, Pyspark, stream processing, Kafka, MySQL, Python MatchMove is Looking for: Technical Lead - Data Platform Data, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight. You will contribute to Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services. Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark. Structuring and evolving data into OTF formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services. Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases. Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment. Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM). Using Generative AI tools to enhance developer productivity — including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights. Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines. Responsibilities:: Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR. Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication. Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3, and cataloged with Glue and Lake Formation. Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g. fraud engines) and human-readable interfaces (e.g. dashboards). Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations. Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership. Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform. Requirements At-least 7 years of experience in data engineering. Deep hands-on experience with AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs. Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation. Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions. Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments. Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains — with strong engineering hygiene. Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders. Brownie Points:: Experience working in a PCI DSS or any other central bank regulated environment with audit logging and data retention requirements. Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection. Familiarity with data contracts, data mesh patterns, and data as a product principles. Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases. Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3. Experience building data platforms for ML/AI teams or integrating with model feature stores. Engagement Model: : Direct placement with client This is remote role Shift timings ::10 AM to 7 PM How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you! Show more Show less
Posted 1 month ago
7.0 years
40 Lacs
Indore, Madhya Pradesh, India
Remote
Experience : 7.00 + years Salary : INR 4000000.00 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: MatchMove) (*Note: This is a requirement for one of Uplers' client - MatchMove) What do you need for this opportunity? Must have skills required: Gen AI, AWS data stack, Kinesis, open table format, Pyspark, stream processing, Kafka, MySQL, Python MatchMove is Looking for: Technical Lead - Data Platform Data, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight. You will contribute to Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services. Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark. Structuring and evolving data into OTF formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services. Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases. Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment. Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM). Using Generative AI tools to enhance developer productivity — including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights. Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines. Responsibilities:: Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR. Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication. Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3, and cataloged with Glue and Lake Formation. Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g. fraud engines) and human-readable interfaces (e.g. dashboards). Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations. Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership. Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform. Requirements At-least 7 years of experience in data engineering. Deep hands-on experience with AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs. Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation. Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions. Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments. Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains — with strong engineering hygiene. Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders. Brownie Points:: Experience working in a PCI DSS or any other central bank regulated environment with audit logging and data retention requirements. Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection. Familiarity with data contracts, data mesh patterns, and data as a product principles. Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases. Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3. Experience building data platforms for ML/AI teams or integrating with model feature stores. Engagement Model: : Direct placement with client This is remote role Shift timings ::10 AM to 7 PM How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you! Show more Show less
Posted 1 month ago
Upload Resume
Drag or click to upload
Your data is secure with us, protected by advanced encryption.
Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.
We have sent an OTP to your contact. Please enter it below to verify.
Accenture
39817 Jobs | Dublin
Wipro
19388 Jobs | Bengaluru
Accenture in India
15458 Jobs | Dublin 2
EY
14907 Jobs | London
Uplers
11185 Jobs | Ahmedabad
Amazon
10459 Jobs | Seattle,WA
IBM
9256 Jobs | Armonk
Oracle
9226 Jobs | Redwood City
Accenture services Pvt Ltd
7971 Jobs |
Capgemini
7704 Jobs | Paris,France