Support Manager
Location: Pune
Salary: INR 8.0 - 18.0 Lacs P.A.
Work Mode: Remote
Employment Type: Full Time
Location: Remote - Work from home
Shift: Primarily 02:00 PM - 10:30 PM IST (with flexibility for overlap and critical escalations across time zones as needed)
Experience: Minimum 5 years in technical support, including 2-3 years in a leadership position

Overview:
Are you driven by the thrill of solving complex client challenges in a dynamic and innovative tech landscape? Join us! We're looking for a skilled and proactive Support Manager to lead our technical support team across multiple levels (L1, L2 & L3). This role requires a strong background in AWS services and a practical understanding of managed IT support operations. The ideal candidate will not only guide the team in adhering to established processes and procedures but also actively engage in problem-solving and continuous process optimization. This is a hands-on leadership position, ensuring seamless service delivery and prompt issue resolution in accordance with defined Service Level Agreements (SLAs).

Key Responsibilities:
- Lead & Develop Support Team: Oversee, manage, mentor, and schedule a distributed L1, L2, and L3 support team, optimizing workload and resource allocation to meet service targets.
- Drive Incident & Problem Resolution: Ensure timely incident resolution, adhere to SLAs and escalation procedures, serve as a key escalation point for critical technical issues, and lead post-incident reviews (RCA).
- Optimize Service Delivery: Manage and improve support operations (Incident, Problem, and Change Management), identify trends, and implement process enhancements or automation, adjusting staffing as operations scale (e.g., to 16x5 or 24x7).
- Ensure Effective Communication & Reporting: Establish client communication channels, lead weekly/monthly review meetings to discuss performance and trends, and prepare regular reports on key service metrics.

Skills and Experience Required:
- Proven experience managing technical support teams in fast-paced environments
- Strong knowledge of core AWS Cloud services such as EC2, Lambda, RDS, S3, IAM, and CloudWatch
- Practical understanding of ITIL-based workflows (Incident, Problem, and Change Management)
- Hands-on experience with ticketing platforms such as Jira, Freshdesk, or Zendesk
- Calm and confident; skilled at managing team performance and customer expectations
- Ability to manage and oversee support across multiple shifts, including potential night and weekend coverage as operations scale to 16x5 or 24x7
- Ability to partner with fellow Support Managers and Support Leads to strategically scale and optimize team resources as operations expand
- Excellent communication skills

Bonus Points If You Have:
- AWS certifications
- Familiarity with observability tools such as Datadog, ELK, or Prometheus, especially a combination of platform-native, third-party, and internally developed tools for monitoring and alerting
- Experience with CI/CD processes, Linux system administration, and basic scripting (Python or Bash)
- Working knowledge of tools such as Jenkins, Git, or Docker

How to Apply:
Link to apply: https://bit.ly/arosupmgr
We welcome modern formats such as a video resume; if you have one, please share the URL along with your application/resume. Your privacy is important to us, so we will not be contacting candidates by phone. If your application is selected, we will email you a link to schedule your interview at a time convenient to you. Please check your email regularly, including your spam folder.
Arocom encourages work from home and has a Bring Your Own Device (BYOD) policy.
Employees / Consultants working from home are expected to use their personal laptops/desktops for work-related tasks.
AI Engineer
Location: Pune
Salary: INR 6.5 - 15.0 Lacs P.A.
Work Mode: Remote
Employment Type: Full Time
Location: Any / Remote / Work From Home
Employment Type: Full-time

Job Summary:
We are seeking skilled and innovative AI Engineers with a minimum of 2-4 years of experience in developing and deploying AI solutions. The ideal candidate should have practical knowledge of AWS, Amazon Bedrock, LLMs, Claude AI, and custom AI development. You will contribute to the design and implementation of cutting-edge Generative AI (GenAI) and mainstream AI applications across diverse business domains. This is an excellent opportunity to work with emerging technologies in an agile, fast-paced environment with full ownership of AI engineering pipelines.

Key Responsibilities:
- Develop custom machine learning models and fine-tune foundation models for enterprise use
- Design and implement LLM-based and Generative AI solutions tailored to client use cases
- Work with OpenAI, Anthropic, and HuggingFace models and their APIs to build secure, scalable AI systems
- Perform prompt engineering and experiment with parameter tuning and context optimization
- Build APIs and backend services to integrate AI models into product workflows
- Collaborate with product, data, and DevOps teams to deliver end-to-end AI-powered solutions
- Ensure best practices for AI model governance, security, performance, and ethical use
- Stay current with advancements in LLMs, vector databases (e.g., Pinecone, FAISS), and GenAI tooling
- Document architecture, experiments, and processes clearly and effectively
- Participate in peer reviews, client meetings, and cross-functional planning discussions

Required Qualifications:
- Minimum 2-4 years of professional experience in AI/ML, with a specific focus on LLMs and GenAI
- Strong experience with AWS Bedrock, Lambda, and related services
- Proficiency in Python, including experience with libraries such as HuggingFace and LangChain
- Familiarity with Anthropic, OpenAI, or other foundation models in production settings
- Experience with custom AI model development and model fine-tuning
- Good understanding of cloud-native AI deployments and API integrations
- Ability to apply prompt engineering techniques to optimize LLM responses
- Exposure to vector databases and RAG pipelines
- Strong problem-solving, communication, and team collaboration skills

Preferred Qualifications (Strong Plus):
- Hands-on experience with LangChain, RAG pipelines, or agent frameworks
- Familiarity with AWS SageMaker, DynamoDB, Lambda, or other AI tooling in AWS
- Knowledge of Docker or Kubernetes for containerizing AI workloads
- Understanding of MLOps principles and tools (MLflow, SageMaker Pipelines, etc.)
- Contributions to open-source AI projects or published GenAI applications
- AI/ML certifications
- Familiarity with enterprise use cases in biotech, healthcare, or manufacturing

What We Offer:
- Remote work option
- Competitive salary package
- Exciting projects in the field of LLMs, AI, and cloud-native development
- Continuous learning opportunities in Generative AI and cloud AI tools
- A collaborative and fast-paced environment with a forward-thinking team
- Access to state-of-the-art tools and flexible project ownership

How to Apply:
Please submit your resume and a cover letter outlining your relevant experience and why you'd be a great fit for Arocom. We differentiate candidates based on professionalism and welcome modern formats such as a video resume; if you have one, please share the URL along with your resume. Your privacy is important to us, so we will not be contacting candidates by phone. If your application is selected, we will email you a link to schedule your interview at a time convenient to you. Please check your email regularly, including your spam folder.
Arocom encourages work from home and has a Bring Your Own Device (BYOD) policy. Employees / Consultants working from home are expected to use their personal laptops/desktops for work-related tasks.