
1477 Vertex Jobs - Page 5

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

7.0 years

0 Lacs

Pune, Maharashtra, India

On-site

We're on the lookout for an experienced MLOps Engineer to support our growing AI/ML initiatives, including GenAI platforms, agentic AI systems, and large-scale model deployments.

Experience - 7+ years
Location - Pune
Notice Period - Short joiners

Primary Skills
- Cloud (Google Cloud preferred, but any cloud will do)
- ML deployment using Jenkins/Harness
- Python
- Kubernetes
- Terraform

Key Responsibilities
- Build and manage CI/CD pipelines for ML model training, RAG systems, and LLM workflows
- Optimize GPU-powered Kubernetes environments for distributed compute
- Manage cloud-native infrastructure across AWS or GCP using Terraform
- Deploy vector databases, feature stores, and observability tools
- Ensure security, scalability, and high availability of AI workloads
- Collaborate cross-functionally with AI/ML engineers and data scientists
- Enable agentic AI workflows using tools like LangChain, LangGraph, CrewAI, etc.

What We're Looking For
- 4+ years in DevOps/MLOps/Infra Engineering, including 2+ years in AI/ML setups
- Hands-on with AWS SageMaker, GCP Vertex AI, or Azure ML
- Proficient in Python, Bash, and CI/CD tools (GitHub Actions, ArgoCD, Jenkins)
- Deep Kubernetes expertise and experience managing GPU infra
- Strong grasp of monitoring, logging, and secure deployment practices

💡 Bonus Points For
🔸 Familiarity with MLflow, Kubeflow, or similar
🔸 Experience with RAG, prompt engineering, or model fine-tuning
🔸 Knowledge of model drift detection and rollback strategies

Ready to take your MLOps career to the next level? Apply now or email me at amruta.bu@peoplefy.com for more details.
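For a sense of what the deployment end of such a CI/CD pipeline can involve, here is a minimal, hedged sketch of registering and deploying a trained model with the Vertex AI Python SDK (the posting names GCP as the preferred cloud). The project ID, bucket path, and container image below are illustrative placeholders, not details from the posting.

```python
# Minimal sketch: register a trained model and deploy it to a Vertex AI endpoint.
# Project, bucket, and serving image are illustrative placeholders.
from google.cloud import aiplatform

aiplatform.init(project="my-gcp-project", location="us-central1")

# Register the artifact produced by the training stage of the CI/CD pipeline.
model = aiplatform.Model.upload(
    display_name="demo-model",
    artifact_uri="gs://my-bucket/models/demo/",  # exported model files
    serving_container_image_uri=(
        "us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-3:latest"  # example prebuilt image
    ),
)

# Deploy onto an autoscaling endpoint.
endpoint = model.deploy(
    machine_type="n1-standard-4",
    # For GPU-served deep models, a GPU serving image would be used instead, together with
    # accelerator_type="NVIDIA_TESLA_T4" and accelerator_count=1.
    min_replica_count=1,
    max_replica_count=3,
)
print(endpoint.resource_name)
```

In practice a step like this would sit at the end of a Jenkins/Harness or GitHub Actions job, gated on tests and model-quality checks.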

Posted 4 days ago

Apply

4.0 years

0 Lacs

Madhya Pradesh

On-site

About Alphanext
Alphanext is a global talent solutions company with offices in London, Pune, and Indore. We connect top-tier technical talent with forward-thinking organizations to drive innovation and transformation through technology.

Position Summary
Alphanext is hiring an AI Data Scientist with deep expertise in machine learning, deep learning, and statistical modeling. This role requires end-to-end experience in designing, developing, and deploying AI solutions in production environments. Ideal candidates will bring a strong analytical foundation, hands-on technical capability in Python/SQL, and a passion for solving complex business problems using AI.

Key Responsibilities
- Research, design, and implement advanced machine learning and deep learning models for predictive and generative AI use cases.
- Apply statistical methods to ensure model interpretability, robustness, and reproducibility.
- Conduct large-scale data analysis to uncover trends, patterns, and opportunities for AI-based automation.
- Collaborate with ML engineers to validate, train, optimize, and deploy AI models into production environments.
- Continuously improve model performance using techniques such as hyperparameter tuning, feature engineering, and ensemble methods.
- Keep up to date with the latest advancements in AI, deep learning, and statistical modeling to propose innovative solutions.
- Translate complex analytical outputs into clear insights for both technical and non-technical stakeholders.

Required Skills
- 4+ years of experience in machine learning and deep learning with a focus on model development, optimization, and deployment.
- Strong programming skills in Python and SQL.
- Proven experience in developing and applying mathematical/statistical models for business applications.
- Proficiency in data visualization tools such as Power BI.
- Hands-on experience in at least one of the following domains: finance, trading, biomedical modeling, image-based AI, or recommender systems.
- Strong understanding of statistical theory and modern AI algorithms.
- Excellent communication skills and the ability to explain complex models in a simple, impactful manner.

Nice to Have / Preferred Skills
- Experience deploying AI solutions in production environments using MLOps practices.
- Familiarity with cloud-based AI/ML services (e.g., AWS SageMaker, Azure ML, GCP Vertex AI).
- Exposure to compliance, data privacy, or ethical AI practices in enterprise settings.
- Background in applied mathematics or statistics with a focus on AI model interpretability.

Qualifications
- Master's or PhD in Statistics, Mathematics, Computer Science, or a related technical field.
- Demonstrated track record of AI model deployment in real-world business applications.
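Since the role stresses improving performance through hyperparameter tuning and ensemble methods, a small generic scikit-learn sketch of cross-validated tuning over a gradient-boosting ensemble is shown below. The dataset and parameter grid are illustrative only, not anything prescribed by Alphanext.

```python
# Illustrative only: cross-validated hyperparameter tuning of an ensemble classifier.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

param_grid = {
    "n_estimators": [100, 300],
    "learning_rate": [0.05, 0.1],
    "max_depth": [2, 3],
}
search = GridSearchCV(
    GradientBoostingClassifier(random_state=42),
    param_grid,
    cv=5,
    scoring="roc_auc",
    n_jobs=-1,
)
search.fit(X_train, y_train)

print("Best params:", search.best_params_)
print(classification_report(y_test, search.best_estimator_.predict(X_test)))
```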

Posted 4 days ago

Apply

0.0 - 3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About Kapiva: Kapiva is a modern ayurvedic nutrition brand focused on bringing selectively sourced, natural foods to Indian consumers. We help bring the wisdom of India’s ancient food traditions to modern-day consumers. Kapiva's high-quality portfolio offers products across various therapy areas including skin & hair, digestion, weight management, immunity, chronic issues, & much more. Our products are top performers on online marketplaces (Amazon, Flipkart, Big Basket, Nykaa, etc.), we're growing our presence offline in a big way (Reliance, Nature’s Basket, Noble Plus, etc.), & we are funded by India’s best consumer VC funds – Orbimed, Fireside Ventures, Vertex Ventures, Sharrp Ventures, 3One4 Capital & Jetty Ventures.

The team at Kapiva is headed by the Co-Founders:
- Ameve Sharma: worked @McKinsey, @Baidyanath; studied @INSEAD & NYU – https://www.linkedin.com/in/amevesharma/
- Shantanu S: Ex-PG & CMO Uniqlo India; IIM C Alum – https://www.linkedin.com/in/shantanu-b535a010/
- Anuj Sharma: Ex-Hotstar, Ex-Myntra; ISB – https://www.linkedin.com/in/anuj-sharma-28033a40/

We’re creating a new brand with a unique story & are looking for a team to help us grow our business. You’d be joining a young, hungry team with a culture of honesty, humility, & hard work.

Role Overview
The Head of App & Website will be responsible for overseeing the app and web platforms, ensuring seamless user experiences, managing personalization initiatives, driving app growth, and enhancing customer retention through merchandising and conversion optimization strategies.

Key Responsibilities:

App Growth and Feature Marketing: Product Marketing
- Identify the key features to be developed on the App based on consumer insights and company objectives.
- Develop and execute app growth strategies including user acquisition, activation, and retention.
- Collaborate with the marketing team to promote new app features and service offerings to consumers.
- Implement in-app marketing campaigns and work with the CRM team on personalized push notifications.
- Leverage data insights to recommend new features and enhancements to improve app stickiness and user engagement.
- Monitor app store performance, optimize ASO (App Store Optimization), and ensure a strong app rating.

Conversion Rate and AOV Optimization on App and Store: CRO Specialist
- Drive initiatives to improve Conversion Rate (CVR) and Average Order Value (AOV).
- Implement cross-sell and upsell strategies, leveraging customer data and behavioral insights.
- Develop engaging content that complements product and service offerings.

Store & App Operations Management
- Manage Product Detail Pages (PDPs), category pages, storefronts, and content.
- Ensure accurate product and service information is displayed.
- Implement dynamic pricing strategies in collaboration with category and pricing teams.
- Leverage App/Website merchandising to drive revenue:
  1. Collaborate with Category, CRM, and Tech teams to execute effective merchandising strategies.
  2. Conduct A/B tests on banner placements, CTA buttons, and user flows to enhance user experience and conversion.
  3. Continuously analyse and refine the customer journey using performance data.

Personalized Journey for Consumers on App and Website
- Optimize homepages, category pages, and landing pages to maximize conversions.
- Manage dynamic banners and product placements based on user behaviour and purchase history.
- Develop and implement personalized product recommendations using AI-driven tools.
Cross-Functional Collaboration
- Align closely with the Retention, Product, Category, and Technology teams.
- Provide feedback on platform improvements to enhance user satisfaction.
- Partner with the analytics team to measure campaign performance and implement data-driven decisions.

Key Performance Indicators (KPIs)
- App User Growth (downloads, activations, and active users)
- App Feature Adoption and Engagement
- App Retention and Churn Rates
- Conversion Rate (CVR) and Average Order Value (AOV)
- Personalization Impact on Revenue
- App Store Ratings and Reviews
- A/B Testing Success Metrics

Qualifications:
- Bachelor’s or Master’s degree in Business, Marketing, E-commerce, or a related field.
- 0-3 years of experience in e-commerce operations, preferably in the healthcare or wellness sector.
- Strong problem solver with first-principles thinking and a strategy background with product understanding (preferred).
- Strong understanding of personalization platforms, A/B testing tools, and web analytics (not mandatory).

Key Competencies:
- Data-driven decision-making
- Customer-centric mindset
- Strong analytical and problem-solving abilities
- Effective communication and stakeholder management
- Adaptability and continuous learning
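The A/B testing work described above usually comes down to checking whether an observed conversion-rate difference between variants is statistically meaningful. As a purely illustrative aside (the counts below are made up, not Kapiva data), a two-proportion z-test is one common way to do that:

```python
# Hypothetical example: significance check for an A/B test on conversion rate (CVR).
from statsmodels.stats.proportion import proportions_ztest

conversions = [480, 530]      # variant A, variant B (made-up numbers)
visitors = [10_000, 10_000]

z_stat, p_value = proportions_ztest(count=conversions, nobs=visitors)
cvr_a, cvr_b = conversions[0] / visitors[0], conversions[1] / visitors[1]

print(f"CVR A = {cvr_a:.2%}, CVR B = {cvr_b:.2%}")
print(f"z = {z_stat:.2f}, p-value = {p_value:.4f}")
# A small p-value (e.g. < 0.05) suggests the CVR difference is unlikely to be noise.
```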

Posted 4 days ago

Apply

3.0 years

0 Lacs

India

Remote

Now Hiring: Automation Developer
Location: India (Remote-first | Global team)
Team: Randstad Enterprise – Global Automation Centre of Excellence (CoE)

At Randstad Enterprise’s Global Automation CoE, we’re not just automating processes — we’re transforming how work gets done. Using GenAI, RPA, and low-code technologies, we build solutions that drive meaningful impact across global teams. We're looking for an experienced Automation Developer to join our team and help deliver intelligent automation solutions across North America, EMEA, and APAC.

Key Responsibilities
- Design, build, and support automation workflows using RPA (Automation Anywhere), low-code platforms, and AI
- Translate business requirements into scalable, secure, and maintainable solutions
- Collaborate with global stakeholders on architecture, design, and deployment
- Conduct testing, documentation, and performance tuning
- Stay current with emerging automation and AI technologies

What You’ll Bring
- 3+ years of experience in automation development (RPA, ML/AI, low-code)
- Proficiency in Python, C#, JavaScript, SQL, and integration APIs
- Familiarity with GenAI, NLP, LLMs, and cloud platforms (GCP Vertex AI preferred)
- Ability to work independently and contribute to a global, cross-functional team

Why Join Us
- Work on high-impact, global automation projects
- Access to the latest AI and automation tools
- Collaborative, innovation-driven culture
- Opportunities to grow, learn, and lead

Ready to build the future of work? Let’s talk.

Posted 4 days ago

Apply

4.0 - 7.0 years

5 - 9 Lacs

Chennai

Work from Office

Responsibilities
SAP S/4 HANA FI TAX with Vertex knowledge

Technical skill sets:
- Should have worked on at least one implementation & two support projects on SAP S/4 HANA with tax in O2C and P2P.
- Should have good experience with withholding tax (TDS) and VAT.
- Must have experience in VAT configuration such as tax procedures, tax keys, tax conditions, and input/output tax codes.
- Must have experience in withholding tax configuration such as WHT codes, types, keys, and master data.
- Perform systems review and analysis for the conversion of in-house developed business applications and master data, and re-engineer business practices to facilitate standardization to a single SAP platform.
- Responsible for the SAP configuration for external tax calculation in O2C and P2P.
- Configuration of SAP pricing with tax procedures for business organizations.
- Develop and update business process documentation utilizing confidential technical WRICEF project management methodology.
- Complete process flow documentation for the support organization and end-user guides - illustrated BPP and FAQ sheets.
- Develop closed-loop regression testing procedures for inbound/outbound processing with legacy systems utilizing iDocs and XML documents.
- Design custom reports for the balancing and reconciliation of SAP financial account data of tax and Vertex Reporting and Returns databases.

Requirements
- Must be an expert in writing Functional Specifications independently and creating Custom Objects from scratch to deployment.
- Should have good experience with interfaces to third-party systems.

Vertex
- Must have knowledge of Vertex (Tax Engine) and the mapping concept.
- Must have knowledge of tax calculations in Vertex and comparison to the SAP S/4 HANA tax module.
- Provide technical guidance for development and coding for industry-specific excise tax processing, compliance, and reporting.

General knowledge and tools:
- Excellent communication & strong collaboration skills
- Flexible to adapt to a fast-changing environment and self-motivated
- Creating technical design specifications to ensure compliance with the functional teams and IT Management
- Analytical thinking, a high level of comprehension, and an independent working style
- Seeking candidates who are flexible and willing to work in shifts as required

Posted 5 days ago

Apply

0 years

0 Lacs

India

Remote

Job Title: Machine Learning Developer
Company: Lead India
Location: Remote
Job Type: Full-Time
Salary: ₹3.5 LPA

About Lead India: Lead India is a forward-thinking organization focused on creating social impact through technology, innovation, and data-driven solutions. We believe in empowering individuals and building platforms that make governance more participatory and transparent.

Job Summary: We are looking for a Machine Learning Developer to join our remote team. You will be responsible for building and deploying predictive models, working with large datasets, and delivering intelligent solutions that enhance our platform’s capabilities and user experience.

Key Responsibilities:
- Design and implement machine learning models for classification, regression, and clustering tasks
- Collect, clean, and preprocess data from various sources
- Evaluate model performance using appropriate metrics
- Deploy machine learning models into production environments
- Collaborate with data engineers, analysts, and software developers
- Continuously research and implement state-of-the-art ML techniques
- Maintain documentation for models, experiments, and code

Required Skills and Qualifications:
- Bachelor’s degree in Computer Science, Data Science, or a related field (or equivalent practical experience)
- Solid understanding of machine learning algorithms and statistical techniques
- Hands-on experience with Python libraries such as scikit-learn, pandas, NumPy, and matplotlib
- Familiarity with Jupyter notebooks and experimentation workflows
- Experience working with datasets using tools like SQL or Excel
- Strong problem-solving skills and attention to detail
- Ability to work independently in a remote environment

Nice to Have:
- Experience with deep learning frameworks like TensorFlow or PyTorch
- Exposure to cloud-based ML platforms (e.g., AWS SageMaker, Google Vertex AI)
- Understanding of model deployment using Flask, FastAPI, or Docker
- Knowledge of natural language processing or computer vision

What We Offer:
- Fixed annual salary of ₹3.5 LPA
- 100% remote work and flexible hours
- Opportunity to work on impactful, mission-driven projects using real-world data
- Supportive and collaborative environment for continuous learning and innovation
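As a generic illustration of the classification-and-evaluation workflow this role describes (not code from Lead India), a short scikit-learn pipeline with preprocessing, training, and metric reporting might look like this:

```python
# Generic sketch: train a classifier and evaluate it with standard metrics.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

# Preprocessing and model in one pipeline keeps training and inference consistent,
# which also simplifies later deployment behind Flask/FastAPI or in Docker.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)

pred = clf.predict(X_test)
print("Accuracy:", accuracy_score(y_test, pred))
print("Macro F1:", f1_score(y_test, pred, average="macro"))
```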

Posted 5 days ago

Apply

0.0 - 3.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Job Description – AI Developer (Agentic AI Frameworks, Computer Vision & LLMs)
Location: Hybrid - Bangalore

About the Role
We’re seeking an AI Developer who specializes in agentic AI frameworks — LangChain, LangGraph, CrewAI, or equivalents — and who can take both vision and language models from prototype to production. You will lead the design of multi-agent systems that coordinate perception (image classification & extraction), reasoning, and action, while owning the end-to-end deep-learning life-cycle (training, scaling, deployment, and monitoring).

Key Responsibilities
- Agentic AI Frameworks (Primary Focus): Architect and implement multi-agent workflows using LangChain, LangGraph, CrewAI, or similar. Design role hierarchies, state graphs, and tool integrations that enable autonomous data processing, decision-making, and orchestration. Benchmark and optimize agent performance (cost, latency, reliability).
- Image Classification & Extraction: Build and fine-tune CNN/ViT models for classification, detection, OCR, and structured data extraction. Create scalable data-ingestion, labeling, and augmentation pipelines.
- LLM Fine-Tuning & Retrieval-Augmented Generation (RAG): Fine-tune open-weight LLMs with LoRA/QLoRA and PEFT; perform SFT, DPO, or RLHF as needed. Implement RAG pipelines using vector databases (FAISS, Weaviate, pgvector) and domain-specific adapters.
- Deep Learning at Scale: Develop reproducible training workflows in PyTorch/TensorFlow with experiment tracking (MLflow, W&B). Serve models via TorchServe/Triton/KServe on Kubernetes, SageMaker, or GCP Vertex AI.
- MLOps & Production Excellence: Build robust APIs/micro-services (FastAPI, gRPC). Establish CI/CD, monitoring (Prometheus, Grafana), and automated retraining triggers. Optimize inference on CPU/GPU/Edge with ONNX/TensorRT, quantization, and pruning.
- Collaboration & Mentorship: Translate product requirements into scalable AI services. Mentor junior engineers, conduct code and experiment reviews, and evangelize best practices.

Minimum Qualifications
- B.S./M.S. in Computer Science, Electrical Engineering, Applied Math, or related discipline.
- 5+ years building production ML/DL systems with strong Python & Git.
- Demonstrable expertise in at least one agentic AI framework (LangChain, LangGraph, CrewAI, or comparable).
- Proven delivery of computer-vision models for image classification/extraction.
- Hands-on experience fine-tuning LLMs and deploying RAG solutions.
- Solid understanding of containerization (Docker) and cloud AI stacks (AWS/Azure).
- Knowledge of distributed training, GPU acceleration, and performance optimization.

Job Type: Full-time
Pay: Up to ₹1,200,000.00 per year
Experience:
- AI, LLM, RAG: 4 years (Preferred)
- Vector database, image classification: 4 years (Preferred)
- Containerization (Docker): 3 years (Preferred)
- ML/DL systems with strong Python & Git: 3 years (Preferred)
- LangChain, LangGraph, CrewAI: 3 years (Preferred)
Location: Bangalore, Karnataka (Preferred)
Work Location: In person
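The RAG responsibilities above boil down to embedding documents into a vector index and retrieving the nearest passages to ground an LLM prompt. A minimal, framework-free sketch with sentence-transformers and FAISS is shown below; the model name and documents are placeholders, and a production pipeline would add chunking, metadata filtering, and the actual LLM call.

```python
# Minimal RAG retrieval sketch: embed documents, index them in FAISS, retrieve for a query.
# Model name and documents are illustrative placeholders.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

documents = [
    "The invoice OCR pipeline extracts totals and vendor names from scanned images.",
    "The vision model classifies vehicle damage into five severity categories.",
    "LoRA fine-tuning adapts the base LLM to domain-specific terminology.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = encoder.encode(documents, normalize_embeddings=True)

index = faiss.IndexFlatIP(doc_vectors.shape[1])  # inner product == cosine on normalized vectors
index.add(np.asarray(doc_vectors, dtype="float32"))

query = "How are damaged vehicles categorised?"
query_vec = encoder.encode([query], normalize_embeddings=True)
scores, ids = index.search(np.asarray(query_vec, dtype="float32"), k=2)

context = "\n".join(documents[i] for i in ids[0])
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)  # in a full pipeline this prompt would be sent to the (fine-tuned) LLM
```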

Posted 5 days ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

About Position: Persistent is scaling up its global Digital Trust practice. Digital Trust encompasses the domains of Data Privacy, Responsible AI (RAI), GRC (Governance, Risk & Compliance), and other related areas. This is a rapidly evolving domain globally that sits at the intersection of technology, law, ethics, and compliance. Team members of this practice get an opportunity to work on innovative and cutting-edge solutions. We are looking for a highly motivated and technically skilled Responsible AI Testing Analyst with 1–3 years of experience to join our Digital Trust team. In this role, you will be responsible for conducting technical testing and validation of AI systems or agents against regulatory and ethical standards, such as the EU AI Act, AI Verify (Singapore), NIST AI RMF, and ISO 42001. This is a technical position requiring knowledge of AI/ML models, testing frameworks, fairness auditing, explainability techniques, and a regulatory understanding of Responsible AI.

Role: AI Testing Analyst
Location: All PSL locations
Experience: 1-3 years
Job Type: Full-Time Employment

What You’ll Do:
- Perform technical testing of AI systems and agents using pre-defined test cases aligned with regulatory and ethical standards.
- Conduct model testing for risks such as bias, robustness, explainability, and data drift using AI assurance tools or libraries.
- Support the execution of AI impact assessments and document the test results for internal and regulatory audits.
- Collaborate with stakeholders to define assurance metrics and ensure adherence to RAI principles.
- Assist in setting up automated pipelines for continuous testing and monitoring of AI/ML models.
- Prepare compliance-aligned reports and dashboards showcasing test results and conformance to RAI principles.

Expertise You’ll Bring:
- 1 to 3 years of hands-on experience in AI/ML model testing, validation, or AI assurance roles.
- Experience with testing AI principles such as fairness, bias detection, robustness, accuracy, explainability, and human oversight.
- Practical experience with tools like AI Fairness 360, SHAP, LIME, What-If Tool, or commercial RAI platforms.
- Ability to run basic model tests using Python libraries (e.g., scikit-learn, pandas, numpy, tensorflow/keras, PyTorch).
- Understanding of the regulatory implications of high-risk AI systems and how to test for compliance.
- Strong documentation skills to communicate test findings in an auditable and regulatory-compliant manner.

Preferred Certifications (any one or more):
- AI Verify testing framework training (preferred)
- IBM AI Fairness 360 Toolkit Certification
- AI Certification (Google Cloud) – Vertex AI + SHAP/LIME
- ModelOps/MLOps Monitoring with Bias Detection – AWS SageMaker / Azure ML / GCP Vertex AI
- TensorFlow Developer / Python for Data Science and AI / Applied Machine Learning in Python

Benefits:
- Competitive salary and benefits package
- Culture focused on talent development with quarterly promotion cycles and company-sponsored higher education and certifications
- Opportunity to work with cutting-edge technologies
- Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards
- Annual health check-ups
- Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents

Inclusive Environment: Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace.
We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. We offer hybrid work options and flexible working hours to accommodate various needs and preferences. Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities. If you are a person with disabilities and have specific requirements, please inform us during the application process or at any time during your employment. We are committed to creating an inclusive environment where all employees can thrive. Our company fosters a values-driven and people-centric work environment that enables our employees to: Accelerate growth, both professionally and personally Impact the world in powerful, positive ways, using the latest technologies Enjoy collaborative innovation, with diversity and work-life wellbeing at the core Unlock global opportunities to work and learn with the industry’s best Let’s unleash your full potential at Persistent “Persistent is an Equal Opportunity Employer and prohibits discrimination and harassment of any kind.”
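Much of the fairness and bias testing described in this role reduces to computing group-level metrics on model predictions. As a simple hedged illustration, and not the toolchain Persistent actually prescribes, the disparate-impact ratio between two groups can be computed directly with pandas:

```python
# Illustrative fairness check: selection rates and disparate-impact ratio by group.
# Data below is synthetic; real audits would typically use a tool such as AI Fairness 360.
import pandas as pd

results = pd.DataFrame({
    "group": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "model_approved": [1, 1, 0, 1, 1, 0, 0, 0],
})

selection_rates = results.groupby("group")["model_approved"].mean()
disparate_impact = selection_rates["B"] / selection_rates["A"]

print(selection_rates)
print(f"Disparate impact (B vs A): {disparate_impact:.2f}")
# A common rule of thumb flags ratios below 0.8 (the "four-fifths rule") for review.
```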

Posted 5 days ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Primary Purpose: To be in overall charge of the School Development Cell and serve as a fulcrum between the organization and parents (existing and potential), thereby ensuring excellent parent service and satisfaction.

Key Accountabilities/Activities:

Primary Responsibilities: Parent Relationship Management
- Ensure that each parent is a delighted parent and that all their requests, concerns, and complaints are handled in a timely and effective manner.
- Effectively maintain and develop the parent-organization relationship by ensuring appropriate solutions to all parent inquiries across mediums, i.e. in person, over the phone, email, company website, etc.
- Oversee the resolution of all parent queries which are outside the purview of the RE cell and be a point of escalation/support where necessary.
- Ensure all complaints are registered in the CS Tracker and oversee the resolution of complaints to meet TAT.
- Periodically review the past parent queries repository/CRM and innovate to develop nifty solutions for prompt resolutions.
- Reach out to parents (over the phone) post query resolution to seek feedback and improve, thereby creating parent delight and a positive brand image.
- Ensure the front desk/relationship desk is manned at all points in time during operational hours so that no parent is left unattended.
- Manage the setup of the lobby area accentuating the organization brand; placement of posters/standees etc. with assistance from the admin department.
- Be cognizant of the latest achievements of the organization/center and cascade them as part of parent interactions/sales conversations.
- Efficiently guide parents on school systems and processes and ensure that the repository of updated information is available at all points in time.
- Keep track of all organization advertising manuals/brochures/admission kits and ensure effective information flow.
- Adroitly handle irate parents and ensure that each parent interface ends in parent delight as far as possible.
- Efficiently make use of all aids available, i.e. hand-outs and audio-visual support, to educate parents on the USPs of the organization and the child education pedagogy followed.
- Be ready to facilitate information on all elements of a child’s life cycle in the school as well as post-school activities, summer camps, etc.

Sales and Marketing:
- Be actively involved in the complete sales cycle; lead the RE team to meet its sales and revenue goals.
- Carry out experiential marketing for all walk-ins, i.e. School Tour, Discovery Room, etc., and parent engagements.
- Effectively speak about the social media presence of the school and the efforts taken to ensure the child gets necessary recognition across relevant media.
- Devise plans to achieve sales goals and create strategies to meet the annual center targets.
- Adroitly oversee the entire sales process and interject where necessary for all potential parents, from first interface to closure, thus positively augmenting conversions from inquiry to admissions.
- Create presentations on the organization’s growth, values, and strengths and use them during marketing/promotional activities, under the supervision of the Centre Head.
- Carry out promotional events and activations in schools – RWA, Parenting Seminar, Hand Bills Distribution, Selfie, and any other initiative.
- Be updated on upcoming seminars/promotional events and nominate yourself as the organization’s representative.
- Be abreast of competitive school offerings and prevalent market practices.
- Introduce and work on pre-school and corporate tie-ups and support the teams by providing leads and helping in faster closures.
- Initiate and participate in marketing initiatives to create brand awareness and promote USPs like summer camps, day care, PSA activities, etc.

Administrative Responsibilities
- Manage admission registration manually and on ERP as per the process guidelines and generate MIS.
- Keep track of all parent grievances resolved at the Centre.
- Work closely with the Vertex Marketing Team for any updates/intimations.
- Collate and report the Parent Enquiry and Follow-up trackers in a timely manner, based on the internally agreed turnaround time.
- Scrutinize and maintain records for all admission forms and documents.
- Ensure seamless execution of all Leaving Certificate (LC) requests, i.e. verification of LC request forms, request generation, intra-department liaising, etc.
- Ensure that each LC request is personally attended to and that every effort is made to retain the student. ZERO LC should be the focus (except transfers).
- Be an active participant in school events like VIVA, Coffee Meet, etc.

Secondary Responsibilities:

People Management and Up-skilling:
- Be an effective planner and organize the day to ensure all opportunities are maximized.
- Effectively manage the RE cell team; coach, inspire, and provide actionable and constructive feedback, and provide on-the-job training to improve team performance.
- Train the RE cell team on the new USPs being introduced in the organization.
- Motivate and inspire the teams to perform better.

Business Acumen Enhancement:
- Be updated on past sales trends and records and consistently upgrade one’s understanding.
- Keep aware of the latest news in the education industry and make use of it wherever necessary for team knowledge enhancement.
- Have a detailed understanding of the school manual with respect to staff, children, etc.
- Participate in training workshops on sales and marketing and keep updated on the latest trends.

Work Relations:
Internal:
- Reporting to the Principal for all administrative issues and to the Sales and Service Head for functional reporting
- Interfacing with Vertex Academics Management (Principal and Coordinators)
- Interfacing with Finance, Technology, and HR for any people or other operational issues
External:
- Interface with potential and existing parents
- Interface with external vendors for any marketing initiative execution

Qualification: Graduate/Postgraduate in any discipline, preferably in Business Administration or Marketing
Span of Control: Relationship Executive, Counsellors
Experience: 4-6 years, with prior work experience in the education and marketing space.

Expected Competencies:
- Strong people manager
- Strong conviction skills
- Ability to manage multiple tasks/processes
- Ability to prioritize workload; work effectively under pressure and stringent deadlines
- Ability to present, discuss, and respond to parent inquiries
- Strong understanding of business concepts and the dynamics of the organization
- Exceptional time management skills and strong attention to detail
- Strong parent-oriented approach; articulate and friendly personality
- Strong communication skills and telephone etiquette
- Demonstrated track record of initiative, creativity, and motivation
- Highly flexible, resilient, and eager to work in ambiguous work environments

Posted 5 days ago

Apply

0 years

0 Lacs

India

On-site

The candidate should have experience in AI development, including developing, deploying, and optimizing AI and Generative AI solutions. The ideal candidate will have a strong technical background, hands-on experience with modern AI tools and platforms, and a proven ability to build innovative applications that leverage advanced AI techniques. You will work collaboratively with cross-functional teams to deliver AI-driven products and services that meet business needs and delight end-users.

Key Prerequisites
- Experience in AI and Generative AI development
- Experience designing, developing, and deploying AI models for various use cases, such as predictive analytics, recommendation systems, and natural language processing (NLP)
- Experience building and fine-tuning Generative AI models for applications like chatbots, text summarization, content generation, and image synthesis
- Experience in the implementation and optimization of large language models (LLMs) and transformer-based architectures (e.g., GPT, BERT)
- Experience in ingestion and cleaning of data
- Feature engineering and data engineering
- Experience in the design and implementation of data pipelines for ingesting, processing, and storing large datasets
- Experience in model training and optimization
- Exposure to deep learning models and fine-tuning pre-trained models using frameworks like TensorFlow, PyTorch, or Hugging Face
- Exposure to optimization of models for performance, scalability, and cost efficiency on cloud platforms (e.g., AWS SageMaker, Azure ML, Google Vertex AI)
- Hands-on experience in monitoring and improving model performance through retraining and evaluation metrics like accuracy, precision, and recall

AI Tools and Platform Expertise
- OpenAI, Hugging Face
- MLOps tools
- Generative AI-specific tools and libraries for innovative applications

Technical Skills
1. Strong programming skills in Python (preferred) or other languages like Java, R, or Julia.
2. Expertise in AI frameworks and libraries such as TensorFlow, PyTorch, Scikit-learn, and Hugging Face.
3. Proficiency in working with transformer-based models (e.g., GPT, BERT, T5, DALL-E).
4. Experience with cloud platforms (AWS, Azure, Google Cloud) and containerization tools (Docker, Kubernetes).
5. Solid understanding of ...
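To make one of the Generative AI items above concrete, here is a minimal, hedged sketch of text summarization using the Hugging Face pipeline API; the model choice is an illustrative placeholder, and production use would add batching, evaluation, and deployment steps.

```python
# Minimal sketch: abstractive text summarization with a pretrained transformer.
# The model name is an illustrative choice, not one mandated by the posting.
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

article = (
    "Generative AI systems are increasingly used for chatbots, content generation, "
    "and document summarization. Teams fine-tune pretrained transformer models on "
    "domain data, then deploy them behind APIs with monitoring for accuracy and drift."
)

summary = summarizer(article, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```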

Posted 5 days ago

Apply

0 years

0 Lacs

India

On-site

You might be a fit if you have
● 5+ yrs production ML / data-platform engineering (Python or Go/Kotlin).
● Deployed agentic or multi-agent systems (e.g., micro-policy nets, bandit ensembles) and reinforcement-learning pipelines at scale (ad budget, recommender, or game AI).
● Fluency with BigQuery / Snowflake SQL & ML plus streaming (Kafka / Pub/Sub).
● Hands-on LLM fine-tuning using LoRA/QLoRA and proven prompt-engineering skills (system / assistant hierarchies, few-shot, prompt compression).
● Comfort running GPU & CPU model serving on GCP (Vertex AI, GKE, or bare-metal K8s).
● Solid causal-inference experience (CUPED, diff-in-diff, synthetic control, uplift).
● CI/CD, IaC (Terraform or Pulumi) & observability chops (Prometheus, Grafana).
● Bias toward shipping working software over polishing research papers.

Bonus points for:
● Postal/geo datasets, ad-tech, or martech domain exposure.
● Packaging RL models as secure micro-services.
● VPC-SC, NIST, or SOC-2 controls in a regulated data environment.
● Green-field impact – architect the learning stack from scratch.
● Moat-worthy data – 260M+ US consumer graph tying offline & online behavior.
● Tight feedback loops – your models go live in weeks, optimizing large amounts of marketing spend daily.
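Of the causal-inference techniques listed, CUPED is simple enough to show in a few lines: it uses a pre-experiment covariate to strip predictable variance out of the experiment metric before comparing arms. The synthetic numbers below are illustrative only.

```python
# Illustrative CUPED variance reduction on a synthetic experiment metric.
import numpy as np

rng = np.random.default_rng(7)
n = 10_000
pre_spend = rng.gamma(shape=2.0, scale=10.0, size=n)        # pre-period covariate
treatment = rng.integers(0, 2, size=n)
post_spend = 0.8 * pre_spend + treatment * 0.5 + rng.normal(0, 5, size=n)

# CUPED adjustment: y_cuped = y - theta * (x - mean(x)), with theta = cov(y, x) / var(x)
theta = np.cov(post_spend, pre_spend)[0, 1] / np.var(pre_spend)
post_cuped = post_spend - theta * (pre_spend - pre_spend.mean())

def lift(metric):
    """Difference in means between treatment and control."""
    return metric[treatment == 1].mean() - metric[treatment == 0].mean()

print("Raw lift:   %.3f (variance %.1f)" % (lift(post_spend), post_spend.var()))
print("CUPED lift: %.3f (variance %.1f)" % (lift(post_cuped), post_cuped.var()))
# Same unbiased lift estimate, but with materially lower variance.
```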

Posted 5 days ago

Apply

2.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Who are we?
At Spyne, we are transforming how cars are marketed and sold with cutting-edge Generative AI. What started as a bold idea—using AI-powered visuals to help auto dealers sell faster online—has now evolved into a full-fledged, AI-first automotive retail ecosystem. Backed by $16M in Series A funding from Accel, Vertex Ventures, and other top investors, we’re scaling at breakneck speed:
- Launched industry-first AI-powered Image, Video & 360° solutions for automotive dealers
- Launching a Gen AI powered Automotive Retail Suite to power Inventory, Marketing, and CRM for dealers
- Onboarded 1500+ dealers across the US, EU, and other key markets in the past 2 years since launch
- Gearing up to onboard 10K+ dealers across a global market of 200K+ dealers
- 150+ member team with a near-equal split between R&D and GTM

Learn more about our products: Spyne AI Products - StudioAI, RetailAI
Series A Announcement - CNBC-TV18, Yourstory
More about us - ASOTU, CNBC Awaaz

JOB DESCRIPTION
Spyne isn’t just growing—we’re exploding! With a 300% YoY revenue surge and a rapidly expanding global footprint, we’re on a mission to revolutionize the global automotive industry. We’re seeking a dynamic, razor-sharp Chief of Staff to be the ultimate force multiplier for our CEO and leadership team. This isn’t just a role—it’s a career-defining ride. You’ll work shoulder-to-shoulder with the CEO and business heads to drive high-impact strategy, execution, and scale. From pioneering data-driven decisions (think: optimizing a business worth double-digit millions) to spearheading cross-functional projects, you’ll be the strategic powerhouse ensuring Spyne dominates the market.

JOB RESPONSIBILITIES

1) Strategic Leadership & Execution
- Be the CEO’s Right Hand: Partner directly with the CEO to set company-wide strategy, drive execution, and amplify decision-making—turning vision into scalable, high-impact results.
- Data-Driven Growth Architect: Own the company’s analytics engine—design dashboards, uncover game-changing insights, and supercharge performance with actionable intelligence.
- Financial & Operational Mastermind: Craft bulletproof budgets, forecasts, and operating plans while optimizing cost efficiency, revenue growth, and profitability.

2) Performance & Scale
- Own the Numbers: Lead MIS reporting, revenue/cost analysis, and variance tracking—ensuring every metric aligns with aggressive growth targets.
- Turbocharge Execution: Define company-wide OKRs, establish razor-sharp tracking mechanisms, and relentlessly drive accountability across teams.
- Elevate Leadership Cadence: Streamline executive meetings, board prep, and quarterly reviews—keeping Spyne laser-focused on winning.

3) Talent & Culture Amplifier
- Be a Talent Magnet: Attract and secure top 1% talent—selling Spyne’s vision to elite candidates who will fuel our next growth phase.
- Bridge the Gap: Serve as the critical link between the CEO and teams—ensuring seamless communication, alignment, and explosive collaboration.
- Process Innovator: Build scalable systems that keep Spyne agile, efficient, and unstoppable as we 10X our impact.

4) Drive Internal Automation with Agentic AI
- Build internal tools that function as co-pilots for Sales, CS, Marketing, Finance, and HR
- Enable AI agents to generate reports, schedule follow-ups, escalate risk, and suggest campaign ideas without human intervention
- Automate onboarding, revenue leakage tracking, payout reconciliations, and internal approvals through intelligent workflows

JOB REQUIREMENTS

1) Unmatched Drive with High Pedigree
- Top-Tier Education: Graduated from IIT, Ivy League, or a top global MBA program (think Harvard, Stanford, Wharton, INSEAD)
- Proven Track Record: 4+ years in strategy consulting (McKinsey, BCG, Bain) or a high-growth startup’s Founder’s Office, where you’ve moved needles and scaled chaos into order.

2) The Mindset of a Unicorn
- Thrives in Ambiguity: You navigate complexity like a pro, making high-stakes decisions with limited data and maximum impact.
- Obsessive Precision: Your attention to detail is legendary—no spreadsheet error, strategic blind spot, or execution gap escapes you.
- No Task Too Small (or Too Big): You roll up your sleeves with zero ego—whether it’s refining a board deck or overhauling company strategy.

3) Execution at Warp Speed
- Relentless Problem-Solver: You anticipate fires before they start and build solutions before anyone asks.
- Master Juggler: You balance competing priorities effortlessly, delivering flawless results under tight deadlines.
- Calm in the Storm: When things move at 1000 mph, you’re the steady hand guiding Spyne through hyper-growth.

Why Spyne? Because Exceptional Talent Deserves Exceptional Growth
- Culture: High-ownership, zero-politics, execution-first
- Growth: $5M to $20M ARR trajectory
- Learning: Work with top GTM leaders and startup veterans
- Exposure: Global exposure across U.S., EU, and India markets
- Compensation: Competitive base + performance incentives + stock options

This isn’t a job—it’s a career accelerator. At Spyne, high performers don’t just succeed—they 10X themselves.

Posted 5 days ago

Apply

1.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Overview of 66degrees
66degrees is a leading consulting and professional services company specializing in developing AI-focused, data-led solutions leveraging the latest advancements in cloud technology. With our unmatched engineering capabilities and vast industry experience, we help the world's leading brands transform their business challenges into opportunities and shape the future of work. At 66degrees, we believe in embracing the challenge and winning together. These values not only guide us in achieving our goals as a company but also for our people. We are dedicated to creating a significant impact for our employees by fostering a culture that sparks innovation and supports professional and personal growth along the way.

Overview of Role
As a Data Engineer specializing in AI/ML, you'll be instrumental in designing, building, and maintaining the data infrastructure crucial for training, deploying, and serving our advanced AI and Machine Learning models. You'll work closely with Data Scientists, ML Engineers, and Cloud Architects to ensure data is accessible, reliable, and optimized for high-performance AI/ML workloads, primarily within the Google Cloud ecosystem.

Responsibilities
- Data Pipeline Development: Design, build, and maintain robust, scalable, and efficient ETL/ELT data pipelines to ingest, transform, and load data from various sources into data lakes and data warehouses, specifically optimized for AI/ML consumption.
- AI/ML Data Infrastructure: Architect and implement the underlying data infrastructure required for machine learning model training, serving, and monitoring within GCP environments.
- Google Cloud Ecosystem: Leverage a broad range of Google Cloud Platform (GCP) data services including BigQuery, Dataflow, Dataproc, Cloud Storage, Pub/Sub, Vertex AI, Composer (Airflow), and Cloud SQL.
- Data Quality & Governance: Implement best practices for data quality, data governance, data lineage, and data security to ensure the reliability and integrity of AI/ML datasets.
- Performance Optimization: Optimize data pipelines and storage solutions for performance, cost-efficiency, and scalability, particularly for large-scale AI/ML data processing.
- Collaboration with AI/ML Teams: Work closely with Data Scientists and ML Engineers to understand their data needs, prepare datasets for model training, and assist in deploying models into production.
- Automation & MLOps Support: Contribute to the automation of data pipelines and support MLOps initiatives, ensuring seamless integration from data ingestion to model deployment and monitoring.
- Troubleshooting & Support: Troubleshoot and resolve data-related issues within the AI/ML ecosystem, ensuring data availability and pipeline health.
- Documentation: Create and maintain comprehensive documentation for data architectures, pipelines, and data models.

Qualifications
- 1-2+ years of experience in Data Engineering, with at least 2-3 years directly focused on building data pipelines for AI/ML workloads.
- Deep, hands-on experience with core GCP data services such as BigQuery, Dataflow, Dataproc, Cloud Storage, Pub/Sub, and Composer/Airflow.
- Strong proficiency in at least one relevant programming language for data engineering (Python is highly preferred).
- SQL skills for complex data manipulation, querying, and optimization.
- Solid understanding of data warehousing concepts, data modeling (dimensional, 3NF), and schema design for analytical and AI/ML purposes.
- Proven experience designing, building, and optimizing large-scale ETL/ELT processes.
- Familiarity with big data processing frameworks (e.g., Apache Spark, Hadoop) and concepts.
- Exceptional analytical and problem-solving skills, with the ability to design solutions for complex data challenges.
- Excellent verbal and written communication skills, capable of explaining complex technical concepts to both technical and non-technical stakeholders.

66degrees is an Equal Opportunity employer. All qualified applicants will receive consideration for employment without regard to actual or perceived race, color, religion, sex, gender, gender identity, national origin, age, weight, height, marital status, sexual orientation, veteran status, disability status or other legally protected class.
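For the pipeline-building responsibilities in this role, a hedged, simplified sketch of an Airflow (Cloud Composer) DAG with ingest → transform → load steps might look like the following; the task bodies are stubs, and the schedule, names, and BigQuery details are assumptions rather than 66degrees specifics.

```python
# Simplified sketch of an ELT-style Airflow DAG; task bodies are stubs for illustration.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest_raw_data(**_):
    # e.g. pull files from an API or bucket into a raw landing zone
    print("ingesting raw data")

def transform_features(**_):
    # e.g. clean, join, and derive ML features (Dataflow/Spark/SQL in practice)
    print("transforming features")

def load_to_warehouse(**_):
    # e.g. load curated tables into BigQuery for training and analytics
    print("loading to BigQuery")

with DAG(
    dag_id="ml_feature_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="ingest", python_callable=ingest_raw_data)
    transform = PythonOperator(task_id="transform", python_callable=transform_features)
    load = PythonOperator(task_id="load", python_callable=load_to_warehouse)

    ingest >> transform >> load
```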

Posted 5 days ago

Apply

4.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Before you apply to a job, select your language preference from the options available at the top right of this page. Explore your next opportunity at a Fortune Global 500 organization. Envision innovative possibilities, experience our rewarding culture, and work with talented teams that help you become better every day. We know what it takes to lead UPS into tomorrow—people with a unique combination of skill + passion. If you have the qualities and drive to lead yourself or teams, there are roles ready to cultivate your skills and take you to the next level.

Job Description
Role: Site Reliability Engineers (SREs) in Google Cloud Platform (GCP) and RedHat OpenShift administration.

Responsibilities
- System Reliability: Ensure the reliability and uptime of critical services and infrastructure.
- Google Cloud Expertise: Design, implement, and manage cloud infrastructure using Google Cloud services.
- Automation: Develop and maintain automation scripts and tools to improve system efficiency and reduce manual intervention.
- Monitoring and Incident Response: Implement monitoring solutions and respond to incidents to minimize downtime and ensure quick recovery.
- Collaboration: Work closely with development and operations teams to improve system reliability and performance.
- Capacity Planning: Conduct capacity planning and performance tuning to ensure systems can handle future growth.
- Documentation: Create and maintain comprehensive documentation for system configurations, processes, and procedures.

Qualifications
Education: Bachelor’s degree in computer science, Engineering, or a related field.
Experience: 4+ years of experience in site reliability engineering or a similar role.

Skills
- Proficiency in Google Cloud services (Compute Engine, Kubernetes Engine, Cloud Storage, BigQuery, Pub/Sub, etc.).
- Familiarity with Google BI and AI/ML tools (Looker, BigQuery ML, Vertex AI, etc.).
- Experience with automation tools (Terraform, Ansible, Puppet).
- Familiarity with CI/CD pipelines and tools (Azure Pipelines, Jenkins, GitLab CI, etc.).
- Strong scripting skills (Python, Bash, etc.).
- Knowledge of networking concepts and protocols.
- Experience with monitoring tools (Prometheus, Grafana, etc.).

Preferred Certifications
- Google Cloud Professional DevOps Engineer
- Google Cloud Professional Cloud Architect
- Red Hat Certified Engineer (RHCE) or similar Linux certification

Employee Type: Permanent
UPS is committed to providing a workplace free of discrimination, harassment, and retaliation.
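Reliability and uptime targets like the ones described here are commonly tracked as SLOs with an error budget. As a small back-of-the-envelope illustration (not UPS tooling), the remaining budget for a 99.9% availability objective can be computed directly from request counts:

```python
# Back-of-the-envelope SLO / error-budget arithmetic for an availability target.
slo_target = 0.999                  # 99.9% availability objective
total_requests = 25_000_000         # requests served this period (example figures)
failed_requests = 18_500            # 5xx or timed-out requests

allowed_failures = total_requests * (1 - slo_target)
budget_consumed = failed_requests / allowed_failures
availability = 1 - failed_requests / total_requests

print(f"Measured availability : {availability:.4%}")
print(f"Error budget consumed : {budget_consumed:.1%}")
print(f"Error budget remaining: {1 - budget_consumed:.1%}")
# Burning budget faster than the period elapses is a common trigger for paging or freezing releases.
```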

Posted 5 days ago

Apply

4.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Before you apply to a job, select your language preference from the options available at the top right of this page. Explore your next opportunity at a Fortune Global 500 organization. Envision innovative possibilities, experience our rewarding culture, and work with talented teams that help you become better every day. We know what it takes to lead UPS into tomorrow—people with a unique combination of skill + passion. If you have the qualities and drive to lead yourself or teams, there are roles ready to cultivate your skills and take you to the next level.

Job Description

Responsibilities:
- System Reliability: Ensure the reliability and uptime of critical services and infrastructure.
- Google Cloud Expertise: Design, implement, and manage cloud infrastructure using Google Cloud services.
- Automation: Develop and maintain automation scripts and tools to improve system efficiency and reduce manual intervention.
- Monitoring and Incident Response: Implement monitoring solutions and respond to incidents to minimize downtime and ensure quick recovery.
- Collaboration: Work closely with development and operations teams to improve system reliability and performance.
- Capacity Planning: Conduct capacity planning and performance tuning to ensure systems can handle future growth.
- Documentation: Create and maintain comprehensive documentation for system configurations, processes, and procedures.

Qualifications
Education: Bachelor’s degree in computer science, Engineering, or a related field.
Experience: 4+ years of experience in site reliability engineering or a similar role.

Skills
- Proficiency in Google Cloud services (Compute Engine, Kubernetes Engine, Cloud Storage, BigQuery, Pub/Sub, etc.).
- Familiarity with Google BI and AI/ML tools (Looker, BigQuery ML, Vertex AI, etc.).
- Experience with automation tools (Terraform, Ansible, Puppet).
- Familiarity with CI/CD pipelines and tools (Azure Pipelines, Jenkins, GitLab CI, etc.).
- Strong scripting skills (Python, Bash, etc.).
- Knowledge of networking concepts and protocols.
- Experience with monitoring tools (Prometheus, Grafana, etc.).

Preferred Certifications
- Google Cloud Professional DevOps Engineer
- Google Cloud Professional Cloud Architect
- Red Hat Certified Engineer (RHCE) or similar Linux certification

Employee Type: Permanent
UPS is committed to providing a workplace free of discrimination, harassment, and retaliation.

Posted 5 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Before you apply to a job, select your language preference from the options available at the top right of this page. Explore your next opportunity at a Fortune Global 500 organization. Envision innovative possibilities, experience our rewarding culture, and work with talented teams that help you become better every day. We know what it takes to lead UPS into tomorrow—people with a unique combination of skill + passion. If you have the qualities and drive to lead yourself or teams, there are roles ready to cultivate your skills and take you to the next level.

Job Description
Job Summary: We are seeking a highly skilled MLOps Engineer to design, deploy, and manage machine learning pipelines in Google Cloud Platform (GCP). In this role, you will be responsible for automating ML workflows, optimizing model deployment, ensuring model reliability, and implementing CI/CD pipelines for ML systems. You will work with Vertex AI, Kubernetes (GKE), BigQuery, and Terraform to build scalable and cost-efficient ML infrastructure. The ideal candidate must have a good understanding of ML algorithms and experience in model monitoring, performance optimization, Looker dashboards, and infrastructure as code (IaC), ensuring ML models are production-ready, reliable, and continuously improving. You will be interacting with multiple technical teams, including architects and business stakeholders, to develop state-of-the-art machine learning systems that create value for the business.

Responsibilities
- Managing the deployment and maintenance of machine learning models in production environments and ensuring seamless integration with existing systems.
- Monitoring model performance using metrics such as accuracy, precision, recall, and F1 score, and addressing issues like performance degradation, drift, or bias.
- Troubleshooting and resolving problems, maintaining documentation, and managing model versions for audit and rollback.
- Analyzing monitoring data to preemptively identify potential issues and providing regular performance reports to stakeholders.
- Optimization of queries and pipelines.
- Modernization of applications whenever required.

Qualifications
- Expertise in programming languages like Python and SQL.
- Solid understanding of best MLOps practices and concepts for deploying enterprise-level ML systems.
- Understanding of Machine Learning concepts, models, and algorithms, including traditional regression, clustering models, and neural networks (including deep learning, transformers, etc.).
- Understanding of model evaluation metrics, model monitoring tools, and practices.
- Experienced with GCP tools like BigQuery ML, MLOps tooling, Vertex AI Pipelines (Kubeflow Pipelines on GCP), Model Versioning & Registry, Cloud Monitoring, Kubernetes, etc.
- Solid oral and written communication skills and the ability to prepare detailed technical documentation of new and existing applications.
- Strong ownership and collaborative qualities in their domain. Takes initiative to identify and drive opportunities for improvement and process streamlining.
- Bachelor’s Degree in a quantitative field of mathematics, computer science, physics, economics, engineering, statistics (operations research, quantitative social science, etc.), an international equivalent, or equivalent job experience.

Bonus Qualifications
- Experience in Azure MLOps; familiarity with Cloud Billing.
- Experience in setting up or supporting NLP, Gen AI, and LLM applications with MLOps features.
- Experience working in an Agile environment; understanding of Lean Agile principles.
Employee Type: Permanent
UPS is committed to providing a workplace free of discrimination, harassment, and retaliation.
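The monitoring duties in this role (accuracy, precision, recall, F1, and drift) can be illustrated with a small generic snippet; real monitoring at this scale would run inside Vertex AI / Cloud Monitoring rather than a standalone script, and the data here is synthetic.

```python
# Generic sketch: compute model-quality metrics and a simple PSI-style drift score.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 1, 0])

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("f1       :", f1_score(y_true, y_pred))

def population_stability_index(expected, actual, bins=10):
    """Rough PSI between a training-time feature sample and a live sample."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected) + 1e-6
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual) + 1e-6
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

train_feature = np.random.default_rng(0).normal(0, 1, 5000)
live_feature = np.random.default_rng(1).normal(0.3, 1.1, 5000)   # drifted distribution
print("PSI      :", round(population_stability_index(train_feature, live_feature), 3))
# Rule of thumb: a PSI above roughly 0.2 is often treated as drift worth investigating.
```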

Posted 5 days ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

Remote

About SAIGroup
SAIGroup is a private investment firm that has committed $1 billion to incubate and scale revolutionary AI-powered enterprise software application companies. Our portfolio, a testament to our success, comprises rapidly growing AI companies that collectively cater to over 2,000+ major global customers, approaching $800 million in annual revenue, and employing a global workforce of over 4,000 individuals. SAIGroup invests in new ventures based on breakthrough AI-based products that have the potential to disrupt existing enterprise software markets. SAIGroup’s latest investment, JazzX AI, is a pioneering technology company on a mission to shape the future of work through an AGI platform purpose-built for the enterprise. JazzX AI is not just building another AI tool—it’s reimagining business processes from the ground up, enabling seamless collaboration between humans and intelligent systems. The result is a dramatic leap in productivity, efficiency, and decision velocity, empowering enterprises to become pacesetters who lead their industries and set new benchmarks for innovation and excellence.

Job Title: AGI Solutions Engineer (Junior) – GTM Solution Delivery (Full-time, remote-first, with periodic travel to client sites & JazzX hubs)

Role Overview
As an Artificial General Intelligence Engineer you are the hands-on technical force that turns JazzX’s AGI platform into working, measurable solutions for customers. You will:
- Build and integrate LLM-driven features, vector search pipelines, and tool-calling agents into client environments.
- Collaborate with solution architects, product, and customer-success teams from discovery through production rollout.
- Contribute field learnings back to the core platform, accelerating time-to-value across all deployments.
You are as comfortable writing production-quality Python as you are debugging Helm charts, and you enjoy explaining your design decisions to both peers and client engineers.

Key Responsibilities
- Solution Implementation: Develop and extend JazzX AGI services (LLM orchestration, retrieval-augmented generation, agents) within customer stacks. Integrate data sources, APIs, and auth controls; ensure solutions meet security and compliance requirements. Pair with Solution Architects on design reviews; own component-level decisions.
- Delivery Lifecycle: Drive proofs-of-concept, pilots, and production rollouts with an agile, test-driven mindset. Create reusable deployment scripts (Terraform, Helm, CI/CD) and operational runbooks. Instrument services for observability (tracing, logging, metrics) and participate in on-call rotations.
- Collaboration & Support: Work closely with product and research teams to validate new LLM techniques in real-world workloads. Troubleshoot customer issues, triage bugs, and deliver patches or performance optimisations. Share best practices through code reviews, internal demos, and technical workshops.
- Innovation & Continuous Learning: Evaluate emerging frameworks (e.g., LlamaIndex, AutoGen, WASM inferencing) and pilot promising tools. Contribute to internal knowledge bases and GitHub templates that speed future projects.

Qualifications

Must-Have
- 2+ years of professional software engineering experience; 1+ years working with ML or data-intensive systems.
- Proficiency in Python (or Java/Go) with strong software-engineering fundamentals (testing, code reviews, CI/CD).
- Hands-on experience deploying containerised services on AWS, GCP, or Azure using Kubernetes & Helm.
- Practical knowledge of LLM / Gen-AI frameworks (LangChain, LlamaIndex, PyTorch, or TensorFlow) and vector databases.
- Familiarity integrating REST/GraphQL APIs, streaming platforms (Kafka), and SQL/NoSQL stores.
- Clear written and verbal communication skills; ability to collaborate with distributed teams.
- Willingness to travel 10–20% for key customer engagements.

Nice-to-Have
- Experience delivering RAG or agent-based AI solutions in regulated domains (finance, healthcare, telecom).
- Cloud or Kubernetes certifications (AWS SA-Assoc/Pro, CKA, CKAD).
- Exposure to MLOps stacks (Kubeflow, MLflow, Vertex AI) or data-engineering tooling (Airflow, dbt).

Attributes
- Empathy & Ownership: You listen carefully to user needs and take full ownership of delivering great experiences.
- Startup Mentality: You move fast, learn quickly, and are comfortable wearing many hats.
- Detail-Oriented Builder: You care about the little things.
- Mission-Driven: You want to solve important, high-impact problems that matter to real people.
- Team-Oriented: Low ego, collaborative, and excited to build alongside highly capable engineers, designers, and domain experts.

Travel
This position requires the ability to travel to client sites as needed for on-site deployments and collaboration. Travel is estimated at approximately 20–30% of the time (varying by project), and flexibility is expected to accommodate key client engagement activities.

Why Join Us
At JazzX AI, you have the opportunity to join the foundational team that is pushing the boundaries of what’s possible to create an autonomous-intelligence-driven future. We encourage our team to pursue bold ideas, foster continuous learning, and embrace the challenges and rewards that come with building something truly innovative. Your work will directly contribute to pioneering solutions that have the potential to transform industries and redefine how we interact with technology. As an early member of our team, your voice will be pivotal in steering the direction of our projects and culture, offering an unparalleled chance to leave your mark on the future of AI. We offer a competitive salary, equity options, and an attractive benefits package, including health, dental, and vision insurance, flexible working arrangements, and more.
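As a hedged illustration of the APIs-and-observability side of this role (not JazzX's actual stack), a minimal FastAPI micro-service with health and prediction endpoints plus basic request logging could start like this:

```python
# Minimal FastAPI micro-service sketch with health check, prediction stub, and logging.
import logging
import time

from fastapi import FastAPI
from pydantic import BaseModel

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("inference-service")

app = FastAPI(title="demo-inference-service")

class PredictRequest(BaseModel):
    text: str

@app.get("/health")
def health() -> dict:
    return {"status": "ok"}

@app.post("/predict")
def predict(req: PredictRequest) -> dict:
    start = time.perf_counter()
    # Placeholder for a real model / LLM call (RAG lookup, agent step, etc.).
    answer = f"echo: {req.text}"
    latency_ms = (time.perf_counter() - start) * 1000
    logger.info("predict latency_ms=%.2f chars=%d", latency_ms, len(req.text))
    return {"answer": answer, "latency_ms": latency_ms}

# Run locally with:  uvicorn service:app --reload
```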

Posted 5 days ago

Apply

3.0 - 6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Title: Digital Engineer – AI/ML & Automation
Location: Hyderabad, Telangana
Experience: 3 to 6 years of hands-on experience in digital engineering, business intelligence, AI/ML solutions, or automation technologies.
Employment Type: Full-Time | 6 Days Working
Notice Period: Immediate Joiners Preferred

Key Responsibilities
AI/ML & Generative AI Solutions: Design, develop, and deploy AI/ML models, intelligent agents, and tools using LLMs, Vertex AI, OpenAI APIs, and other cloud-based AI platforms.
Data Engineering & Integration: Build and optimize data pipelines leveraging data lakes, data warehouses, and both structured and unstructured data sources to ensure high-quality, accessible data for analytics and automation.
Business Intelligence (BI): Develop and maintain interactive dashboards using Power BI and SAP Analytics Cloud (SAC), integrated with SAP S/4HANA and other enterprise systems.
Application Development: Build responsive, user-centric web and mobile applications using low-code/no-code platforms such as Microsoft Power Apps, Google AppSheet, or equivalent tools.
Automation & RPA: Identify and implement intelligent workflow automation using tools like UiPath, Zapier, Python scripts, and conversational AI agents (e.g., ChatGPT).
Digital Process Optimization: Design and implement digital SOPs, automate business workflows, and support enterprise-wide digital transformation initiatives.

Candidate Profile
Technical Skills
Proficient in Power BI or SAP Analytics Cloud dashboard development
Hands-on with LLMs, generative AI tools, and APIs (e.g., ChatGPT, OpenAI)
Skilled in Python, REST APIs, and RPA tools (e.g., UiPath)
Strong grasp of SQL/NoSQL, data modeling, and API integrations
Platform Expertise
Experience with low-code platforms (Power Apps, AppSheet)
Familiar with WordPress/Drupal CMS, DNS, and web hosting
Working knowledge of Google Workspace administration
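As a rough illustration of the LLM-API side of this role, here is a minimal Python sketch using the OpenAI SDK (v1.x). The model name, prompt, and helper function are illustrative assumptions rather than part of the employer's stack; an equivalent call could target Vertex AI or another provider.

```python
# Hedged example: calling a hosted LLM through the OpenAI Python SDK (v1.x).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarise_ticket(ticket_text: str) -> str:
    """Ask the model for a one-sentence summary, e.g. for a BI dashboard field."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; swap for whatever is approved
        messages=[
            {"role": "system", "content": "Summarise support tickets in one sentence."},
            {"role": "user", "content": ticket_text},
        ],
    )
    return response.choices[0].message.content
```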

Posted 6 days ago

Apply

2.0 - 6.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

Genpact is a global professional services and solutions firm dedicated to delivering outcomes that shape the future. With a workforce of over 125,000 individuals across 30+ countries, we are driven by innate curiosity, entrepreneurial agility, and a commitment to creating lasting value for our clients. Our purpose, the relentless pursuit of a world that works better for people, guides us as we serve and transform leading enterprises worldwide, including the Fortune Global 500. We leverage our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI to drive growth and success.

We are currently seeking applications for the position of Management Trainee/Assistant Manager in US Sales and Use Tax compliance. In this role, you will be responsible for ensuring the timeliness and accuracy of all deliverables. You will provide guidance to your team on the correct accounting treatment and act as an escalation point when necessary.

Your primary responsibilities will include:
- Coordinating with the Business License and Sales Tax teams to execute business license renewals
- Tracking and adhering to US Property Tax Returns filing deadlines
- Managing exemptions and exceptions for applicable states
- Monitoring new store openings and closures
- Reconciling assessment notices to the PTMS Property Manager
- Handling PPTX accrual accounts and booking monthly accrual and adjustment entries
- Managing vendor queries and maintaining related documents for US Sales Tax
- Investigating and resolving open items by collaborating with different teams
- Ensuring adherence to internal and external US GAAP/SOX audits

Qualifications we are looking for:

Minimum qualifications:
- Accounting graduates with relevant experience; CA/CMA preferred
- Strong written and verbal communication skills
- Working experience with ERPs, particularly Oracle, is preferred
- Familiarity with tools like Alteryx, AS400, PTMS, Vertex, Sovos, Bloomberg sites, and Middleware for tax rates
- Previous experience with retail clients of similar size is preferred
- Strong interpersonal skills, ability to manage complex tasks, and effective communication

Preferred qualifications:
- Strong accounting and analytical skills
- Good understanding of accounting GAAP principles, preferably US GAAP
- Ability to prioritize tasks, multitask, and drive projects to completion
- Proficiency in Microsoft Excel and other applications
- Experience collaborating with systems, customers, and key stakeholders
- Previous experience working remotely in a US time zone is a plus

This is a full-time position based in Noida, India. The ideal candidate will hold a Bachelor's degree or equivalent and should be able to demonstrate mastery in Operations. If you meet the qualifications and are ready to make an impact, we encourage you to apply.

Posted 6 days ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Position Title: Specialty Development Consultant 34321
Job Type: Contract
Location: Chennai
Budget: ₹23 LPA
Notice Period: Immediate Joiners Only

Role Overview
We are looking for a Specialty Development Consultant with hands-on experience in full stack development using React or Angular, cloud deployment, and DevOps practices. The ideal candidate will play a critical role in designing and delivering end-to-end scalable applications, collaborating with cross-functional teams and integrating modern ML platforms with cloud services.

Key Responsibilities
Develop full stack applications using React/Angular and integrate with cloud-native components.
Collaborate with Tech Anchors, Product Managers, and cross-functional teams to implement business features end-to-end.
Understand and incorporate technical, functional, non-functional, and security requirements into the software design and delivery.
Apply Test-Driven Development (TDD) methodologies to ensure code quality and maintainability.
Work extensively with Google Cloud Platform (GCP) products and services for deployment and scalability.
Integrate with open-source tools and frameworks to support ML platform integration.
Participate in CI/CD workflows using Tekton, SonarQube, and Terraform.

Mandatory Skills
React and/or Angular (3+ years)
Full stack development
Google Cloud Platform (GCP)
Tekton, SonarQube, Terraform, GCS
Experience with Kubernetes or OpenShift (preferred)
Strong grasp of CI/CD, DevOps, and cloud-native development practices

Experience Required
2 to 5 years overall software development experience
Minimum 3 years of hands-on experience in React/Angular full stack development
Experience with GCP products and deployments, including Vertex AI and GCS
Proven track record in API development, cloud deployments, and integrating with modern DevOps pipelines

Education
Bachelor's degree in Computer Science, Engineering, or a related technical field

Skills: React, Google Cloud Platform (GCP), DevOps, full stack development, cloud, cloud-native development, Kubernetes, Terraform, CI/CD, Tekton, OpenShift, Angular, SonarQube
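To illustrate the ML-platform integration mentioned above, here is a minimal backend sketch that calls a model deployed on a Vertex AI endpoint, the kind of service a React/Angular front end would reach through an API layer. The project, region, and endpoint ID are placeholder assumptions.

```python
# Hedged sketch: querying a model deployed on a Vertex AI endpoint.
# Project, region, and endpoint ID below are illustrative placeholders.
from google.cloud import aiplatform

def predict_from_vertex(instances: list[dict],
                        project: str = "my-project",    # assumed project ID
                        location: str = "us-central1",  # assumed region
                        endpoint_id: str = "1234567890") -> list:
    """Send prediction instances to a deployed endpoint and return its predictions."""
    aiplatform.init(project=project, location=location)
    endpoint = aiplatform.Endpoint(endpoint_id)
    return endpoint.predict(instances=instances).predictions
```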

Posted 6 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

About Atos
Atos is a global leader in digital transformation with c. 78,000 employees and annual revenue of c. €10 billion. European number one in cybersecurity, cloud and high-performance computing, the Group provides tailored end-to-end solutions for all industries in 68 countries. A pioneer in decarbonization services and products, Atos is committed to a secure and decarbonized digital for its clients. Atos is a SE (Societas Europaea) and listed on Euronext Paris.

The purpose of Atos is to help design the future of the information space. Its expertise and services support the development of knowledge, education and research in a multicultural approach and contribute to the development of scientific and technological excellence. Across the world, the Group enables its customers and employees, and members of societies at large, to live, work and develop sustainably in a safe and secure information space.

Responsibilities
SAP S/4HANA FI Tax with Vertex knowledge.

Technical Skill Set
Should have worked on at least one implementation and two support projects on SAP S/4HANA with tax in O2C and P2P.
Should have good experience with withholding tax (TDS) and VAT.
Must have experience in VAT configuration such as tax procedures, tax keys, tax conditions and input/output tax codes.
Must have experience in withholding tax configuration such as WHT codes, types, keys and master data.
Perform systems review and analysis for the conversion of in-house developed business applications and master data, and re-engineer business practices to facilitate standardization on a single SAP platform.
Responsible for the SAP configuration for external tax calculation in O2C and P2P.
Configuration of SAP pricing with tax procedures for business organizations.
Develop and update business process documentation utilizing confidential technical WRICEF project management methodology.
Complete process flow documentation for the support organization and end-user guides, including illustrated BPPs and FAQ sheets.
Develop closed-loop regression testing procedures for inbound/outbound processing with legacy systems utilizing iDocs and XML documents.
Design custom reports for the balancing and reconciliation of SAP financial account data against the Vertex Reporting and Returns databases.

Requirements
Must be expert in writing functional specifications independently and creating custom objects from scratch through deployment.
Should have good experience with interfaces to third-party systems.

Vertex
Must have knowledge of Vertex (tax engine) and its mapping concept.
Must have knowledge of tax calculations in Vertex and how they compare to the SAP S/4HANA tax module.
Provide technical guidance for development and coding for industry-specific excise tax processing, compliance and reporting.

General Knowledge and Tools
Excellent communication and strong collaboration skills.
Flexible to adapt to a fast-changing environment and self-motivated.
Creating technical design specifications to ensure compliance with the functional teams and IT management.
Analytical thinking, a high level of comprehension and an independent working style.
Seeking candidates who are flexible and willing to work in shifts as required.

What We Offer
Competitive salary package.
Leave policies: 10 days of public holidays (including 2 optional days), 22 days of earned leave (EL) and 11 days for sick or caregiving leave.
Office requirement: 3 days WFO.

Here at Atos, diversity and inclusion are embedded in our DNA. Read more about our commitment to a fair work environment for all. Atos is a recognized leader in its industry across Environment, Social and Governance (ESG) criteria. Find out more about our CSR commitment. Choose your future. Choose Atos.

Posted 6 days ago

Apply

4.0 years

4 - 20 Lacs

Chennai

On-site

Job Summary:
We are seeking an experienced and results-driven GCP Data Engineer with over 4 years of hands-on experience in building and optimizing data pipelines and architectures using Google Cloud Platform (GCP). The ideal candidate will have strong expertise in data integration, transformation, and modeling, with a focus on delivering scalable, efficient, and secure data solutions. This role requires a deep understanding of GCP services, big data processing frameworks, and modern data engineering practices.

Key Responsibilities:
Design, develop, and deploy scalable and reliable data pipelines on Google Cloud Platform.
Build data ingestion processes from various structured and unstructured sources using Cloud Dataflow, Pub/Sub, BigQuery, and other GCP tools.
Optimize data workflows for performance, reliability, and cost-effectiveness.
Implement data transformations, cleansing, and validation using Apache Beam, Spark, or Dataflow.
Work closely with data analysts, data scientists, and business stakeholders to understand data needs and translate them into technical solutions.
Ensure data security and compliance with company and regulatory standards.
Monitor, troubleshoot, and enhance data systems to ensure high availability and accuracy.
Participate in code reviews, design discussions, and continuous integration/deployment processes.
Document data processes, workflows, and technical specifications.

Required Skills:
Minimum 4 years of experience in data engineering, with at least 2 years working on GCP.
Strong proficiency in GCP services such as BigQuery, Cloud Storage, Dataflow, Pub/Sub, Cloud Composer, Cloud Functions, and Vertex AI (preferred).
Hands-on experience in SQL, Python, and Java/Scala for data processing and transformation.
Experience with ETL/ELT development, data modeling, and data warehousing concepts.
Familiarity with CI/CD pipelines, version control (Git), and DevOps practices.
Solid understanding of data security, IAM, encryption, and compliance within cloud environments.
Experience with performance tuning, workload management, and cost optimization in GCP.

Preferred Qualifications:
GCP Professional Data Engineer Certification.
Experience with real-time data processing using Kafka, Dataflow, or Pub/Sub.
Familiarity with Terraform, Cloud Build, or infrastructure-as-code tools.
Exposure to data quality frameworks and observability tools.
Previous experience in an agile development environment.

Job Types: Full-time, Permanent
Pay: ₹473,247.51 - ₹2,000,000.00 per year
Schedule: Monday to Friday
Application Question(s): Mention your last working date
Experience: Google Cloud Platform: 4 years (Preferred); Python: 4 years (Preferred); ETL: 4 years (Preferred)
Work Location: In person
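As a rough sketch of the kind of pipeline this role describes, the example below wires Pub/Sub ingestion through a simple parse-and-filter step into BigQuery using the Apache Beam Python SDK, runnable on Dataflow. The topic, table, and field names are illustrative assumptions.

```python
# Hedged sketch of a streaming ingestion pipeline: Pub/Sub -> parse -> BigQuery.
import json
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def run():
    options = PipelineOptions(streaming=True)  # add --runner/--project/--region on the CLI
    with beam.Pipeline(options=options) as p:
        (
            p
            | "ReadFromPubSub" >> beam.io.ReadFromPubSub(topic="projects/my-project/topics/events")
            | "ParseJson" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
            | "KeepValid" >> beam.Filter(lambda row: "event_id" in row)  # drop malformed rows
            | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                table="my-project:analytics.events",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
                create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
            )
        )

if __name__ == "__main__":
    run()
```

Submitting the same script with `--runner=DataflowRunner` (plus project and region flags) would run it as a managed Dataflow job rather than locally.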

Posted 6 days ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

General Summary:
The Senior AI Engineer (2–5 years' experience) is responsible for designing and implementing intelligent, scalable AI solutions with a focus on Retrieval-Augmented Generation (RAG), Agentic AI, and Modular Cognitive Processes (MCP). This role is ideal for individuals who are passionate about the latest AI advancements and eager to apply them in real-world applications. The engineer will collaborate with cross-functional teams to deliver high-quality, production-ready AI systems aligned with business goals and technical standards.

Essential Duties & Responsibilities:
Design, develop, and deploy AI-driven applications using RAG and Agentic AI frameworks.
Build and maintain scalable data pipelines and services to support AI workflows.
Implement RESTful APIs using Python frameworks (e.g., FastAPI, Flask) for AI model integration.
Collaborate with product and engineering teams to translate business needs into AI solutions.
Debug and optimize AI systems across the stack to ensure performance and reliability.
Stay current with emerging AI tools, libraries, and research, and integrate them into projects.
Contribute to the development of internal AI standards, reusable components, and best practices.
Apply MCP principles to design modular, intelligent agents capable of autonomous decision-making.
Work with vector databases, embeddings, and LLMs (e.g., GPT-4, Claude, Mistral) for intelligent retrieval and reasoning.
Participate in code reviews, testing, and validation of AI components using frameworks like pytest or unittest.
Document technical designs, workflows, and research findings for internal knowledge sharing.
Adapt quickly to evolving technologies and business requirements in a fast-paced environment.

Knowledge, Skills, and/or Abilities Required:
2–5 years of experience in AI/ML engineering, with at least 2 years in RAG and Agentic AI.
Strong Python programming skills with a solid foundation in OOP and software engineering principles.
Hands-on experience with AI frameworks such as LangChain, LlamaIndex, Haystack, or Hugging Face.
Familiarity with MCP (Modular Cognitive Processes) and their application in agent-based systems.
Experience with REST API development and deployment.
Proficiency in CI/CD tools and workflows (e.g., Git, Docker, Jenkins, Airflow).
Exposure to cloud platforms (AWS, Azure, or GCP) and services like S3, SageMaker, or Vertex AI.
Understanding of vector databases (e.g., OpenSearch, Pinecone, Weaviate) and embedding techniques.
Strong problem-solving skills and ability to work independently or in a team.
Interest in exploring and implementing cutting-edge AI tools and technologies.
Experience with SQL/NoSQL databases and data manipulation.
Ability to communicate technical concepts clearly to both technical and non-technical audiences.

Educational/Vocational/Previous Experience Recommendations:
Bachelor's or Master's degree in a related field.
2+ years of relevant experience.

Working Conditions:
Hybrid - Pune location.
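To illustrate the REST-API-plus-RAG integration work listed above, here is a minimal FastAPI sketch. The `retrieve_context` and `generate_answer` helpers are placeholder assumptions standing in for a real vector-store lookup and LLM call.

```python
# Hedged sketch: a RAG-style query endpoint exposed with FastAPI.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Query(BaseModel):
    question: str
    top_k: int = 3

def retrieve_context(question: str, top_k: int) -> list[str]:
    # Assumption: replace with a real vector-store lookup (OpenSearch, Pinecone, Weaviate, ...).
    return ["placeholder passage"] * top_k

def generate_answer(question: str, context: list[str]) -> str:
    # Assumption: replace with a real LLM call grounded on the retrieved context.
    return f"Answer to '{question}' based on {len(context)} passages."

@app.post("/query")
def query(payload: Query) -> dict:
    context = retrieve_context(payload.question, payload.top_k)
    return {"answer": generate_answer(payload.question, context), "sources": context}
```

Running it with `uvicorn app:app` (assuming the file is saved as `app.py`) exposes the endpoint locally for testing.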

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Haryana

On-site

The key responsibilities for this position include collaborating with finance, IT, and operations teams to ensure end-to-end U.S. indirect tax compliance. You will be responsible for developing and maintaining SOPs, internal controls, and documentation for indirect tax processes. Additionally, you will manage tax determination logic, engine configurations (Avalara, Vertex), and ERP mapping (SAP/Oracle).

In this role, you will oversee compliance for U.S. sales & use tax, property tax, business licenses, and exemption certificates. You will also be expected to support tax audits, notices, nexus reviews, and state authority responses. Furthermore, you will serve as a subject matter expert on U.S. transaction taxes for internal teams and auditors.

If you are interested in this position, please contact us at 9205999380.

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

Hyderabad, Telangana

On-site

As an Associate Consultant in the SALT Tax Tech team based in Bangalore, you will be an individual contributor reporting to the Manager. You will primarily support the US region and work from 11:30 AM to 8:30 PM IST. Your main responsibilities will include applying your functional and technical knowledge of SAP and third-party tax engines like Onesource and Vertex.

Your role will involve establishing and maintaining relationships with business leaders, driving engagement on major tax integration projects, gathering business requirements, leading analysis, and driving high-level end-to-end design. You will also be responsible for driving configuration and development activities to meet business requirements, managing external software vendors and system integrators, and ensuring adherence to established SLAs.

In addition, you will take a domain lead role on IT projects to ensure all business stakeholders are included and receive sufficient and timely communications. Providing leadership to teams, integrating technical expertise and business understanding to create superior solutions, consulting with team members and other organizations, customers, and vendors on complex issues, and mentoring others in the team on process and technical issues are also part of your responsibilities.

You are expected to have a BE/B.Tech/MCA qualification with 4 to 7 years of relevant work experience. Mandatory skills for this role include SAP, Onesource, Vertex, and indirect tax integration. Preferred skills include knowledge of indirect tax concepts, SAP native tax, and hands-on experience in tax engine integration with ERP systems.

Key behavioural attributes required for this role include hands-on experience in tax engine integration with ERP systems, understanding of indirect tax concepts, good knowledge of O2C and P2P processes, experience in tax engine configurations, troubleshooting tax-related issues, and solution design. Knowledge of Avalara and VAT compliance is considered a plus.

The interview process for this role includes two technical rounds followed by an HR round. Travel may be involved in this role, while the busy season does not apply.

Posted 1 week ago

Apply



Featured Companies