Jobs
Interviews

64266 DevOps Jobs - Page 47

Set up a Job Alert
JobPe aggregates listings so they are easy to find, but you apply directly on the original job portal.

4.0 years

0 Lacs

India

Remote

This position is posted by Jobgether on behalf of Exusia. We are currently looking for a Software Quality Assurance Engineer / Lead in India.

We are seeking an experienced Software QA Engineer / Lead to join a dynamic, fully remote technology team. In this role, you will ensure high-quality software delivery through both manual and automated testing methodologies. You will collaborate closely with developers, architects, and product teams to design and implement test strategies that optimize efficiency and reliability across the product lifecycle. This position offers the opportunity to work on cutting-edge projects, drive quality improvement initiatives, and contribute to the success of high-priority software releases in a fast-paced, agile environment.

Accountabilities
• Develop and implement comprehensive test strategies, test plans, and test cases based on user stories and requirements
• Execute manual and automated testing to validate software functionality, performance, and security
• Design, develop, and maintain automated testing frameworks and scripts to ensure scalable and reusable solutions
• Perform data validation and reconciliation using SQL and scripting languages
• Conduct non-functional testing, including performance, usability, vulnerability, load, and compatibility testing
• Collaborate with DevOps teams to implement continuous integration and continuous delivery pipelines
• Track quality metrics, log defects, and ensure adherence to QA standards throughout the project lifecycle
• Work closely with stakeholders, including QA Leads, Developers, Architects, and Product Owners, to ensure software meets quality expectations
• Participate in agile ceremonies, including daily stand-ups, to manage issues, risks, and priorities

Requirements
• Bachelor's or Master's degree in Computer Science or a related field
• 4+ years of hands-on experience in software testing and quality assurance
• Strong knowledge of both manual and automated testing methodologies
• Expertise in scripting languages such as Python, Linux shell scripting, or equivalent
• Strong SQL skills to support data validation and reconciliation processes
• Experience with testing tools such as Selenium, Appium, RestAssured, Karate, JMeter, or Jira
• Familiarity with object-oriented programming languages (e.g., C#, Java)
• Experience with mobile test automation and SOAP and RESTful services testing
• Exposure to non-functional testing, including performance, usability, vulnerability, and load testing
• Understanding of Agile Scrum, DevOps methodologies, and CI/CD pipelines
• Nice-to-have: ISTQB or CSTE certification, cloud testing knowledge (Azure), experience in Banking or Lending domains
• Excellent analytical, problem-solving, and collaboration skills

Benefits
• Fully remote work with flexible scheduling to support work-life balance
• Opportunity to work on high-impact, cutting-edge software projects in a collaborative environment
• Exposure to modern QA tools, automation frameworks, and best practices
• Professional growth through mentorship and engagement with experienced technology leaders
• Competitive compensation package (discussed during the recruitment process)

Jobgether is a Talent Matching Platform that partners with companies worldwide to efficiently connect top talent with the right opportunities through AI-driven job matching. When you apply, your profile goes through our AI-powered screening process designed to identify top talent efficiently and fairly.
🔍 Our AI thoroughly evaluates your CV and LinkedIn profile, analyzing your skills, experience, and achievements. 📊 It compares your profile to the job's key requirements and historical success factors to calculate your match score. 🎯 The top three candidates with the highest match are automatically shortlisted for the role. 🧠 If needed, our human team may review applications to ensure no strong profile is overlooked. The process is transparent, skills-based, and free from bias—focused solely on your fit for the role. Once the shortlist is finalized, it is shared directly with the company managing the vacancy. Their internal hiring team makes the final decision and handles next steps such as interviews or additional assessments. Thank you for your interest!
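As a rough illustration of the kind of automated UI check this role describes, here is a minimal pytest + Selenium sketch in Python. The URL, element locators, and expected title are hypothetical placeholders, and the posting does not prescribe any particular framework — this is only one possible setup.

```python
# Minimal UI smoke test: pytest + Selenium (placeholder URL and locators).
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By


@pytest.fixture
def browser():
    # Requires a local Chrome/ChromeDriver; recent Selenium versions resolve the driver automatically.
    driver = webdriver.Chrome()
    yield driver
    driver.quit()


def test_login_page_loads(browser):
    browser.get("https://example.com/login")  # hypothetical application URL
    assert "Login" in browser.title           # hypothetical page title

    # Fill the form using placeholder element IDs.
    browser.find_element(By.ID, "username").send_keys("qa_user")
    browser.find_element(By.ID, "password").send_keys("not-a-real-password")
    browser.find_element(By.ID, "submit").click()

    # A real suite would assert on post-login state; here we only check that navigation happened.
    assert browser.current_url != "https://example.com/login"
```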

Posted 1 day ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Overview
Software Engineer-I will be involved in the development of software technologies for medical devices. The right candidate will be proactive, have great communication skills, demonstrate attention to detail, have a passion for technology, and be excited to produce great products. Software Engineer-I will be responsible for the development of software projects associated with Spacelabs' product development activities. Personal development skills in requirements definition, design, implementation, and testing/debugging are essential. The role involves participation in planning, requirements analysis, and coordination with leads; candidates must be comfortable in all phases of the software development life cycle (SDLC) and willing to contribute to integration and system testing as needed.

Responsibilities
• Adhere to the software development process and medical device standards (IEC 62304).
• Complete assigned tasks on time and in accordance with the appropriate process, including all QMS and regulatory requirements.
• Assist in defining and reviewing requirements and use cases.
• Find creative solutions to broadly defined problems or directives.
• Perform requirements analysis and generation.
• Configure, build, and test the application or technical architecture components.
• Fix any defects and performance problems discovered during testing.
• Cultivate and maintain knowledge of system integration, and participate in integration test activities to find and fix integration issues.
• Bring good hands-on experience with integration and system tests and a willingness to participate in them.
• Ensure that all project tasks and deliverables conform to the appropriate processes and procedures.
• Ensure all software components are unit and integration tested.
• Demonstrate ownership of and responsibility for assigned tasks.
• Proactively communicate inside and outside the development team.
• Uphold Spacelabs values of Customer Obsession, Ownership Mindset and Superior Results.
• Demonstrate behavior consistent with the Company's Code of Ethics and Conduct.
• It is the responsibility of every Spacelabs Healthcare employee to report to their manager or a member of senior management any quality problems or defects so that corrective action can be implemented and recurrence of the problem avoided.
• Duties may be modified or assigned at any time to meet the needs of the business.
• Good written and oral communication skills, documentation skills, and software process discipline.

Qualifications
• Total years of experience: 4+ years.
• Significant programming experience in C and C++11/14/17; experience in Qt and QML.
• Hands-on object-oriented software design and development experience with a solid grasp of C++, data structures, algorithms, and design/UI patterns.
• Hands-on experience with multithreading and the Boost C++ libraries.
• Hands-on experience with Linux.
• Experience with Azure DevOps and the bug life cycle.
• Exceptional debugging, analytical, and problem-solving skills.
• Ability to collaborate with design engineers and the clinical engineering team to translate product requirements into software design and create software specification documents.
• Experience in the medical device industry and good knowledge of FDA regulations are preferred.
• Scripting experience in Python and familiarity with working in a Linux environment are desired.
• Ability to ramp up quickly on complex software components and to learn and deliver new languages/frameworks as required.
• Demonstrated experience in design/implementation for end-to-end medical device product development.
• B.E/B.Tech (M.E/M.Tech preferred) in ECE, CS, or an MCA degree. Certified Qt and QML Developer and C++ certification are a plus.

Posted 1 day ago

Apply

5.0 years

0 Lacs

Cannanore, Kerala, India

On-site

Experience Level: 5+ Years

1. About the Role
At Summit Solutions, we engineer enterprise-grade platforms that prioritize scalability, performance, and security. We are seeking a Backend Engineer (Python) with a strong background in modular architecture and modern backend technologies such as GraphQL and gRPC. You will design and implement scalable backend solutions, establish architecture plans for large-scale applications, and ensure seamless integrations using APIs and microservices. This role also involves DevOps (Azure), security-first design, and mentoring junior engineers to build high-performing teams.

2. What You'll Do
• Architect and develop modular, maintainable backend systems for enterprise-scale applications.
• Build and maintain RESTful APIs, GraphQL endpoints, and gRPC services for high-performance communication.
• Design and implement microservices-based architectures with modular components for scalability.
• Work on cloud-native deployments using Azure DevOps pipelines, infrastructure automation, and containerization.
• Implement security best practices, including secure authentication (OAuth2/JWT), data encryption, and compliance standards.
• Collaborate with frontend, UI/UX, and DevOps teams to deliver seamless end-to-end solutions.
• Optimize services for high throughput, low latency, and fault tolerance.
• Conduct code reviews, lead technical discussions, and mentor junior developers.
• Explore and introduce emerging technologies (e.g., event-driven architecture, service mesh) to enhance system performance and developer productivity.

3. What You'll Need
• 5+ years of professional experience in backend development with Python (Django, FastAPI, Flask).
• Proficiency in GraphQL and gRPC, including schema design, resolvers, and service-to-service communication.
• Expertise in modular architecture design, microservices, and enterprise application scalability.
• Hands-on experience with Azure DevOps, CI/CD pipelines, Kubernetes (AKS), and Docker.
• Solid understanding of application security, encryption, and secure API design.
• Strong knowledge of SQL and NoSQL databases, caching strategies, and performance tuning.
• Proven experience in system design and technical leadership for enterprise-grade applications.
• Excellent problem-solving, communication, and mentoring skills.
• Bonus: Familiarity with event-driven architecture (Kafka, RabbitMQ) and domain-driven design (DDD).
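To make the "secure authentication (OAuth2/JWT)" requirement concrete, below is a minimal FastAPI sketch that protects an endpoint with a bearer token verified via PyJWT. The secret key, claims, and route names are illustrative assumptions, not part of the posting.

```python
# Minimal JWT-protected FastAPI endpoint (illustrative secret and claims only).
import jwt  # PyJWT
from fastapi import Depends, FastAPI, HTTPException
from fastapi.security import OAuth2PasswordBearer

app = FastAPI()
oauth2_scheme = OAuth2PasswordBearer(tokenUrl="token")

SECRET_KEY = "change-me"  # placeholder; load from a secrets manager in production
ALGORITHM = "HS256"


def get_current_user(token: str = Depends(oauth2_scheme)) -> str:
    try:
        payload = jwt.decode(token, SECRET_KEY, algorithms=[ALGORITHM])
    except jwt.PyJWTError:
        raise HTTPException(status_code=401, detail="Invalid or expired token")
    return payload.get("sub", "unknown")


@app.get("/orders/{order_id}")
def read_order(order_id: int, user: str = Depends(get_current_user)):
    # In a real service this would call a repository or a downstream gRPC/GraphQL service.
    return {"order_id": order_id, "requested_by": user}
```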

Posted 1 day ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description: Technical Business Analyst (COUPA)
Location: LTIM Pan India
Experience Required: 10+ Years
Notice Period: Immediate Joiner - 1 Month

Job Description: This role acts as a technical business analyst, leveraging in-depth technical understanding of the Coupa Platform (herein referred to as the "platform") and its integrations. The role acts as the technical SME for the platform and is accountable for scaling, change, and programme management by working with the necessary stakeholders. Focus areas include Technical Ownership, Integration & Middleware Oversight, Data Migration Execution, Delivery Management, Stakeholder Management & Governance, and DevOps Skills.

Essential Skills:
• Coupa certified
• 10+ years of overall IT experience with a strong background in Coupa technical delivery roles, with proven experience in large-scale data migration and middleware integration.
• Experience with integration technologies such as MuleSoft, Dell Boomi, Azure Integration Services, Kafka, or similar.
• Proficient in ETL tools and practices (e.g., Informatica, Talend, SQL-based scripting).
• Familiarity with cloud platforms (AWS, Azure, GCP) and hybrid integration strategies.
• Strong problem-solving and analytical skills in technical and data contexts.
• Ability to translate complex technical designs into business-aligned delivery outcomes.
• Leadership in cross-functional and cross-technology environments.
• Effective communicator capable of working with developers, data engineers, testers, and business stakeholders.
• Experienced with IT Service Management tools like ServiceNow and Jira.
• Experience in managing and developing third-party business relationships.

Educational Qualifications
• UG - B.Tech/B.E. or other equivalent technical qualifications

Personal Attributes:

Posted 1 day ago

Apply

0.0 - 6.0 years

1 - 1 Lacs

Bengaluru, Karnataka

On-site

Azure CI/CD Engineer (Linux)
Location: Bangalore/Hyderabad
Type: Contract / Full-Time
Experience: 6+ Years

We are looking for an Azure CI/CD Engineer with strong expertise in Linux-based environments to design, implement, and maintain CI/CD pipelines for cloud-based applications.

Key Responsibilities: Design, configure, and manage Azure DevOps CI/CD pipelines for Linux environments. Build, deploy, and automate application releases using Azure Pipelines, YAML, and Linux shell scripting. Integrate automated testing, security scans, and monitoring tools into CI/CD workflows. Manage infrastructure as code (IaC) using tools like Terraform or ARM templates. Collaborate with development teams to streamline builds, deployments, and environment provisioning. Troubleshoot and resolve build, deployment, and pipeline issues in production and staging environments.

Required Skills: 6+ years of experience in CI/CD, DevOps, and automation. Strong hands-on experience with Azure DevOps in Linux environments. Proficiency in shell scripting, Bash, and YAML. Knowledge of containerization (Docker, Kubernetes) and deployment best practices. Experience with monitoring tools (Prometheus, Grafana, or similar). Familiarity with Git and branching strategies.

Nice to Have: Azure certifications (AZ-400, AZ-104). Knowledge of security best practices in CI/CD pipelines.

Job Types: Full-time, Contractual / Temporary
Contract length: 6 months
Pay: ₹110,000.00 - ₹120,000.00 per month
Experience: Azure: 6 years (Required), CI/CD: 4 years (Required), Linux: 6 years (Preferred)
Location: Bangalore, Karnataka (Required)
Work Location: In person
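As one small example of the automation this role involves, the sketch below queues an Azure DevOps pipeline run from Python over the Pipelines REST API, authenticating with a personal access token. The organization, project, pipeline ID, and API version are placeholders and may need adjusting for a given Azure DevOps setup.

```python
# Queue an Azure DevOps pipeline run via the REST API (placeholders throughout).
import os
import requests

ORG = "my-org"                 # placeholder organization
PROJECT = "my-project"         # placeholder project
PIPELINE_ID = 42               # placeholder pipeline id
PAT = os.environ["AZDO_PAT"]   # personal access token supplied via environment

url = (
    f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/pipelines/"
    f"{PIPELINE_ID}/runs?api-version=7.1"
)

payload = {
    # Run against a specific branch; the body shape follows the Runs - Run Pipeline API.
    "resources": {"repositories": {"self": {"refName": "refs/heads/main"}}}
}

# Azure DevOps accepts a PAT as the password of HTTP basic auth with an empty username.
resp = requests.post(url, json=payload, auth=("", PAT), timeout=30)
resp.raise_for_status()
print("Queued run:", resp.json().get("id"), resp.json().get("state"))
```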

Posted 1 day ago

Apply

0.0 - 2.0 years

3 - 4 Lacs

Ahmedabad, Gujarat

On-site

Job Title: QA Engineer
Location: Ahmedabad
Experience: 2-3 years
Company: JigNect Technologies Pvt. Ltd.

About Us
JigNect Technologies Pvt. Ltd. is a leading provider of software quality assurance and testing services. We employ cutting-edge testing strategies and advanced tools to deliver customized QA solutions that exceed client expectations. Our expertise spans diverse technologies, and we are committed to upholding the highest quality standards. At JigNect, we firmly believe that "Quality is not an act; it is a habit. Quality assurance is not just a phase; it is a way of life."

Job Summary
We are looking for a passionate and detail-oriented QA Engineer who has hands-on experience in both manual testing and test automation. You will be responsible for ensuring product quality through exploratory and automated testing. If you're someone who enjoys breaking things to make them better — we'd love to talk to you!

Key Responsibilities
Design, develop, and execute test plans, test cases, and test scripts (manual & automated). Perform functional, regression, integration, system, and UI testing. Develop and maintain automation frameworks using tools like Selenium, TestNG, JUnit, etc. Identify, log, and track bugs using tools like JIRA or similar. Collaborate with developers, product managers, and designers to ensure product quality. Analyze test results and provide actionable insights to improve product reliability. Work closely with DevOps/CI-CD pipelines to integrate automated testing.

Required Skills & Qualifications
Bachelor's degree in Computer Science, Engineering, or a related field. 2-3 years of experience in software QA (both manual and automation). Strong understanding of QA methodologies, SDLC, and STLC. Experience with automation tools such as Selenium, Postman, RestAssured, TestNG, etc. Familiarity with scripting/programming languages (Java, Python, JavaScript preferred). Experience with API testing and debugging tools. Good knowledge of bug tracking and test management tools (JIRA, TestRail, etc.). Excellent problem-solving skills and attention to detail. Strong communication and collaboration abilities.

Why Join Us?
At JigNect Technologies, we offer a dynamic and collaborative work environment with opportunities for professional growth. Our benefits include:
✔ Career Development & Growth Opportunities
✔ Health Insurance & Wellness Benefits
✔ Flexible Work Arrangements
✔ Engaging and Supportive Work Culture
✔ Team Events, Outings, and Celebrations
Join us and be part of an organization where quality is a mindset, and innovation drives success!

Job Types: Full-time, Permanent
Pay: ₹300,000.00 - ₹400,000.00 per year
Benefits: Flexible schedule, Health insurance, Provident Fund
Ability to commute/relocate: Ahmedabad, Gujarat: Reliably commute or planning to relocate before starting work (Preferred)
Education: Bachelor's (Preferred)
Experience: Test automation: 1 year (Preferred), QA: 2 years (Preferred)
Work Location: In person
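For the API-testing part of this role, a Python analogue of a Postman/RestAssured check might look like the pytest sketch below. The endpoint, payload, and expected fields are hypothetical and only meant to show the shape of such a test.

```python
# Minimal API contract test with requests + pytest (hypothetical endpoint and schema).
import requests

BASE_URL = "https://api.example.com"  # placeholder service under test


def test_create_user_returns_201_and_id():
    payload = {"name": "Asha", "email": "asha@example.com"}
    resp = requests.post(f"{BASE_URL}/users", json=payload, timeout=10)

    assert resp.status_code == 201
    body = resp.json()
    # Basic response-contract checks; a real suite would validate against a schema.
    assert "id" in body
    assert body["email"] == payload["email"]


def test_get_missing_user_returns_404():
    resp = requests.get(f"{BASE_URL}/users/does-not-exist", timeout=10)
    assert resp.status_code == 404
```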

Posted 1 day ago

Apply

14.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Position: AWS Solution Architect Experience: 14+ years Location: Pune Role Summary: We are seeking an experienced and customer-facing AWS Solution Architect to lead end-to-end AWS and DevOps solution design, development, and delivery. This role will act as the primary interface with AWS (OEM) to co-create solution offerings, stay aligned with the AWS partner ecosystem, and ensure seamless project execution for customers. The ideal candidate will combine deep technical expertise with strong customer relationship skills and project delivery ownership . Key Responsibilities: 1. Solution Design & Architecture Lead the architecture, design, and implementation of scalable, secure, and high-performance AWS cloud solutions for diverse customer needs. Design DevOps-enabled architectures , incorporating CI/CD pipelines, infrastructure as code, automation, and monitoring solutions. Translate customer business requirements into detailed technical architectures, roadmaps, and implementation plans. Select optimal AWS services for use cases, balancing cost, performance, security, and scalability. 2. AWS OEM Engagement Act as the primary point of contact with AWS partner and OEM teams to co-create joint go-to-market solution offerings. Participate in AWS partner programs, competency assessments, and co-sell initiatives. Stay updated on AWS product releases, services, and partner incentives to incorporate into solution offerings. Collaborate with AWS Partner Development Managers and Solution Architects to align on technology roadmaps and certifications. 3. Customer Engagement & Technical Leadership Serve as the go-to technical advisor for AWS and DevOps engagements with customers. Lead technical discovery workshops, solution presentations, and proof-of-concepts (POCs). Build trusted advisor relationships with customer CXOs, architects, and technical leads. Provide thought leadership to customers on AWS best practices, cloud transformation strategies, and cost optimization. 4. Pre-sales and Delivery Ownership Interface with Customers for pre-sales activities, solutioning with conviction, own pre-contract engagement cycle and expert level interface with OEM/ Customers Take end-to-end ownership of AWS and DevOps project delivery — from requirements gathering and design to deployment and handover. Collaborate with delivery teams to ensure solutions are implemented as designed, on time, and within budget. Define and enforce delivery governance practices, ensuring quality, performance, and compliance with SLAs. Manage delivery escalations, risks, and change requests. 5. Best Practices, Standards & Enablement Define and evangelize AWS and DevOps best practices internally and for customers. Develop reusable solution blueprints, reference architectures, and technical assets. Mentor engineering teams, guiding them in implementing AWS and DevOps solutions effectively. Contribute to capability building, including certifications, training programs, and practice growth. Required Skills & Experience: 14+ years of IT experience, with at least 5+ years in AWS architecture & solution delivery . Proven experience in customer-facing solution architecture and project delivery leadership . Strong knowledge of AWS core services: EC2, S3, VPC, RDS, Lambda, ECS/EKS, CloudFormation/Terraform, IAM, CloudFront, and related services. Experience with DevOps tools & practices : Jenkins, GitLab CI/CD, Docker, Kubernetes, Ansible, Terraform, and monitoring tools like CloudWatch, Prometheus, Grafana. 
Demonstrated ability to interface with OEMs (preferably AWS) for joint solutioning, co-sell programs, and go-to-market strategies. Excellent communication, presentation, and stakeholder management skills. AWS Certification(s) required: AWS Certified Solutions Architect – Professional (mandatory) AWS Certified DevOps Engineer – Professional (preferred) Familiarity with multi-cloud and hybrid cloud strategies. Experience with security and compliance frameworks (ISO 27001, SOC 2, HIPAA, GDPR). Exposure to container orchestration, serverless architectures, and AI/ML services on AWS. Success Measures: Positive customer satisfaction (CSAT) scores and repeat business from AWS engagements. Increased AWS OEM alignment through co-branded solutions and partner program milestones. Successful end-to-end delivery of AWS and DevOps projects with minimal variances in cost, scope, and schedule. Team enablement and growth in AWS certifications and delivery capabilities.
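As a small, concrete slice of the AWS automation referenced in this role, the boto3 sketch below lists running EC2 instances and tags them for cost allocation. The region, tag keys, and values are illustrative assumptions; real architectures would typically drive this through IaC (Terraform/CloudFormation) rather than ad-hoc scripts.

```python
# List running EC2 instances and apply a cost-allocation tag (illustrative values).
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")  # placeholder region

reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]

instance_ids = [
    inst["InstanceId"]
    for res in reservations
    for inst in res["Instances"]
]
print(f"Running instances: {instance_ids}")

if instance_ids:
    # Tagging supports later cost reporting and cleanup policies.
    ec2.create_tags(
        Resources=instance_ids,
        Tags=[{"Key": "CostCenter", "Value": "cloud-platform"}],
    )
```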

Posted 1 day ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

We’re looking for a diligent and innovative Technical Lead to guide our projects’ technical direction and ensure the successful delivery of multiple high-quality software solutions. You’ll lead and mentor developers, collaborate with stakeholders, design scalable systems, and drive technical excellence. What you’ll do: Lead, mentor, and provide technical guidance to the development team. Collaborate with stakeholders to define requirements and technical solutions. Design and architect scalable, secure, and maintainable systems. Implement best practices, coding standards, and quality processes. Conduct code reviews and troubleshoot technical issues. Stay updated on emerging technologies and advocate for continuous improvement. What we’re looking for: Bachelor’s degree in Computer Science/Engineering (Master’s preferred). 5+ years in software development with strong technical expertise. Experience with Agile, cloud technologies (AWS/Azure/GCP), databases, and system architecture. Excellent leadership, communication, and problem-solving skills. Familiarity with DevOps, distributed teams, and CI/CD is a plus. Relevant certifications (e.g., AWS Solutions Architect, Scrum Master) preferred. Send your Resumes/CVs to: recruiter@velloni.com

Posted 1 day ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Role Summary
The Enterprise Cloud Analyst L2 will manage and optimize AWS and Azure cloud environments, ensuring high availability, security, and cost efficiency. The role involves provisioning, monitoring, and supporting both public and private cloud infrastructure, working closely with cross-functional teams to deliver reliable, scalable solutions. The position is part of the Infrastructure Services Team, supporting business objectives through world-class infrastructure operations.

Responsibilities
Manage AWS/Azure VM environments, applying best practices for deployment and maintenance. Provision, monitor, and automate using Terraform, CloudFormation, Docker, Puppet, and Python scripts. Support public/private cloud migrations and optimize system performance. Implement security policies, capacity planning, and cost optimization strategies. Use monitoring tools (Nagios, New Relic, AWS CloudWatch, and Grafana) for proactive issue resolution. Collaborate with internal teams to ensure timely project delivery. Maintain system documentation and participate in on-call rotations.

Skills & Experience
5+ years in cloud infrastructure management (AWS/Azure). Strong knowledge of Microsoft services (AD, DNS, DHCP, Azure AD) and Linux administration. Database experience (MSSQL, MySQL) with monitoring and maintenance skills. Proficiency in Python, Shell, and PowerShell scripting. Familiarity with DevOps tools, automation, and middleware technologies. Experience with on-premise to cloud migrations and data center infrastructure. Strong communication, teamwork, and problem-solving abilities.

Certifications
Required: RedHat Certification, AWS Certified Solutions Architect
Desirable: AZ-104 Microsoft Azure Administrator, MCSE Cloud Platform Infrastructure

This job is provided by Shine.com
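As a brief illustration of the proactive CloudWatch monitoring mentioned above, the sketch below creates a CPU-utilization alarm for an EC2 instance using boto3. The instance ID, thresholds, and SNS topic ARN are placeholder assumptions.

```python
# Create a CloudWatch CPU alarm for one EC2 instance (placeholder IDs and thresholds).
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")  # placeholder region

cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-app-server-01",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    Statistic="Average",
    Period=300,            # 5-minute datapoints
    EvaluationPeriods=3,   # alarm after 15 minutes above threshold
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:ops-alerts"],  # placeholder topic
)
print("Alarm created or updated.")
```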

Posted 1 day ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Company Description Intellect Design Arena Ltd is a leading financial technology company that provides advanced solutions for financial institutions in 57 countries. With three decades of domain expertise, Intellect offers a comprehensive range of banking and insurance technology products. Their innovative enterprise platform, eMACH.ai, supports over 325+ customers worldwide. Intellect has a strong focus on Design Thinking, underscored by their 8012 FinTech Design Center, and is committed to driving digital transformation in the financial sector. Role Description This is a full-time on-site role for a Senior DevOps Engineer based in Noida. The Senior DevOps Engineer will be responsible for the design, development, and maintenance of the company’s CI/CD pipelines, managing cloud infrastructure, automating deployment processes, monitoring systems, and ensuring security protocols are meticulously followed. They will collaborate closely with software development and IT operations teams to streamline operations and improve overall system reliability and performance. Qualifications Experience with CI/CD tools and processes, such as Jenkins, GitLab, or Travis CI Strong knowledge of cloud services and infrastructure, with a focus on AWS, Azure, or Google Cloud Platform Proficiency in scripting languages, such as Python, Bash, or PowerShell Experience with configuration management and infrastructure-as-code tools like Ansible, Terraform, or Chef Strong understanding of monitoring and logging tools, such as Prometheus, Grafana, or ELK Stack Excellent problem-solving skills and the ability to troubleshoot complex issues Strong communication skills and the ability to work collaboratively in a team environment Bachelor's degree in Computer Science, Engineering, or a related field Experience in the financial technology sector is a plus

Posted 1 day ago

Apply

5.0 years

0 Lacs

Greater Kolkata Area

On-site

CodelogicX is a forward-thinking tech company dedicated to pushing the boundaries of innovation and delivering cutting-edge solutions. We are seeking a Senior DevOps Engineer with at least 5 years of hands-on experience in building, managing, and optimizing scalable infrastructure and CI/CD pipelines. The ideal candidate will play a crucial role in automating deployment workflows, securing cloud environments and managing container orchestration platforms. You will leverage your expertise in AWS, Kubernetes, ArgoCD, and CI/CD to streamline our development processes, ensure the reliability and scalability of our systems, and drive the adoption of best practices across the team. Key Responsibilities Design, implement, and maintain CI/CD pipelines using GitHub Actions and Bitbucket Pipelines. Develop and manage Infrastructure as Code (IaC) using Terraform for AWS-based infrastructure. Setup and administer SFTP servers on cloud-based VMs using chroot configurations and automate file transfers to S3-backed Glacier. Manage SNS for alerting and notification integration. Ensure cost optimization of AWS services through billing reviews and usage audits. Implement and maintain secure secrets management using AWS KMS, Parameter Store, and Secrets Manager. Configure, deploy, and maintain a wide range of AWS services, including but not limited to: Compute Services Provision and manage compute resources using EC2, EKS, AWS Lambda, and EventBridge for compute-driven, serverless and event-driven architectures. Storage & Content Delivery Manage data storage and archival solutions using S3, Glacier, and content delivery through CloudFront. Networking & Connectivity Design and manage secure network architectures with VPCs, Load Balancers, Security Groups, VPNs, and Route 53 for DNS routing and failover. Ensure proper functioning of Network Services like TCP/IP, reverse proxies (e.g., NGINX). Monitoring & Observability Implement monitoring, logging, and tracing solutions using CloudWatch, Prometheus, Grafana, ArgoCD, and OpenTelemetry to ensure system health and performance visibility. Database Services Deploy and manage relational databases via RDS for MySQL, PostgreSQL, Aurora, and healthcare-specific FHIR database configurations. Security & Compliance Enforce security best practices using IAM (roles, policies), AWS WAF, Amazon Inspector, GuardDuty, Security Hub, and Trusted Advisor to monitor, detect, and mitigate risks. GitOps Apply excellent knowledge of GitOps practices, ensuring all infrastructure and application configuration changes are tracked and versioned through Git commits. Architect and manage Kubernetes environments (EKS), implementing Helm charts, ingress controllers, autoscaling (HPA/VPA), and service meshes (Istio), troubleshoot advanced issues related to pods, services, DNS, and kubelets. Apply best practices in Git workflows (trunk-based, feature branching) in both monorepo and multi-repo environments. Maintain, troubleshoot, and optimize Linux-based systems (Ubuntu, CentOS, Amazon Linux). Support the engineering and compliance teams by addressing requirements for HIPAA, GDPR, ISO 27001, SOC 2, and ensuring infrastructure readiness. Perform rollback and hotfix procedures with minimal downtime. Collaborate with developers to define release and deployment processes. Manage and standardize build environments across dev, staging, and production.Manage release and deployment processes across dev, staging, and production. Work cross-functionally with development and QA teams. 
Lead incident postmortems and drive continuous improvement. Perform root cause analysis and implement corrective/preventive actions for system incidents. Set up automated backups/snapshots, disaster recovery plans, and incident response strategies. Ensure on-time patching. Mentor junior DevOps engineers. Experience: 5-10 Years Working Mode: Hybrid Job Type: Full-Time Location: Kolkata Requirements Required Qualifications: Bachelor's degree in Computer Science, Engineering, or equivalent practical experience. 5+ years of proven DevOps engineering experience in cloud-based environments. Advanced knowledge of AWS, Terraform, CI/CD tools, and Kubernetes (EKS). Strong scripting and automation mindset. Solid experience with Linux system administration and networking. Excellent communication and documentation skills. Ability to collaborate across teams and lead DevOps initiatives independently. Preferred Qualifications Experience with infrastructure as code tools such as Terraform or CloudFormation. Experience with GitHub Actions is a plus. Certifications in AWS (e.g., AWS DevOps Engineer, AWS SysOps Administrator) or Kubernetes (CKA/CKAD). Experience working in regulated environments (e.g., healthcare or fintech). Exposure to container security tools and cloud compliance scanners. Benefits Perks and benefits: Health insurance Hybrid working mode Provident Fund Parental leave Yearly Bonus Gratuity
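To give one concrete flavour of the secrets-management work described in this posting, here is a small boto3 sketch that reads a KMS-encrypted SecureString from AWS Systems Manager Parameter Store and a secret from Secrets Manager. The parameter names, secret ID, and region are placeholders.

```python
# Fetch configuration secrets from SSM Parameter Store and Secrets Manager (placeholder names).
import json
import boto3

session = boto3.Session(region_name="ap-south-1")  # placeholder region
ssm = session.client("ssm")
secrets = session.client("secretsmanager")

# SecureString parameters are decrypted with the backing KMS key when WithDecryption=True.
db_password = ssm.get_parameter(
    Name="/myapp/prod/db_password",  # placeholder parameter name
    WithDecryption=True,
)["Parameter"]["Value"]

# Secrets Manager secrets are often stored as JSON blobs.
api_creds = json.loads(
    secrets.get_secret_value(SecretId="myapp/prod/api-credentials")["SecretString"]
)

print("Loaded secret keys:", sorted(api_creds.keys()))
# db_password would be injected into the application's runtime config, never logged.
```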

Posted 1 day ago

Apply

2.0 - 3.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Position: Staff - AI Engineer

Job Summary: As an AI Engineer, you will be responsible for designing, developing, and implementing AI models and algorithms that solve complex problems and enhance our products and services. You will work closely with software engineers, business users and product managers to create intelligent systems that leverage machine learning and artificial intelligence.

Responsibilities:
Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions. Proficiency in Generative AI: strong understanding of GPT architecture, prompt engineering, embeddings, efficient token usage, and agentic AI solutions (e.g., async, batching, caching, and piping solutions). Hands-on experience with model training: demonstrated ability to train and fine-tune AI/ML models, particularly in natural language processing (NLP). Expertise in deep learning frameworks: familiarity with popular deep learning libraries such as TensorFlow, PyTorch, or similar tools. NLP techniques: experience with various NLP techniques, particularly using AI to extract content from unstructured, complex PDF documents. Knowledge of deep learning techniques and neural networks. Strong communication skills to convey complex technical concepts to non-technical stakeholders.

Skills requirement:
2-3 years of hands-on experience developing AI solutions. Strong programming skills in languages such as Python. Strong experience with SQL, RESTful APIs, and JSON. Experience with Azure Cloud resources is preferable. Familiarity with DevOps practices and tools. Exposure to any NoSQL databases (e.g., MongoDB, Cosmos DB) is a plus.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
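As a rough sketch of the "extract content from unstructured PDF documents" requirement, the Python below pulls text with pypdf and asks an OpenAI chat model to structure it. The model name, prompt, and file path are assumptions; an Azure OpenAI deployment (which the posting hints at) would follow the same pattern with different client configuration.

```python
# Extract text from a PDF and structure key fields with an LLM (illustrative only).
from openai import OpenAI
from pypdf import PdfReader

reader = PdfReader("contract.pdf")  # placeholder document
raw_text = "\n".join(page.extract_text() or "" for page in reader.pages)

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "Extract parties, dates, and amounts as JSON."},
        {"role": "user", "content": raw_text[:12000]},  # naive truncation to control tokens
    ],
    temperature=0,
)
print(response.choices[0].message.content)
```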

Posted 1 day ago

Apply

4.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Job Title: Data Integration Engineer – IICS & PowerCenter
Location: Any

4+ years of hands-on experience with Informatica IICS and PowerCenter. Strong proficiency in SQL and data transformation logic. Experience integrating data across cloud platforms (AWS or Azure) and on-premise systems. Familiarity with data warehousing concepts and relational database technologies. Good understanding of scheduling tools and job orchestration. Exposure to DevOps practices such as CI/CD and automation is a plus.

Posted 1 day ago

Apply

0.0 - 4.0 years

0 - 0 Lacs

Gangtok, Sikkim

On-site

About Medhavi Skills University
Medhavi Skills University (MSU) is a government-notified private skills university established under a State Act in Sikkim, dedicated to promoting quality skill education and entrepreneurship integrated with higher education. As a pioneering institution in the convergence of the skilling ecosystem with higher education, MSU aligns with the National Education Policy, 2020 (NEP 2020). Recognized by the UGC and established in 2021, MSU collaborates with industries and Skill Development Institutes to offer work-integrated courses, embedding on-the-job internships and training within the curriculum. MSU is a recognized Awarding Body under the National Council for Vocational Education & Training (NCVET) and is empanelled with the Directorate General of Training (DGT). As an anchor university partner with the National Skill Development Corporation (NSDC) and the Project Management Unit (PMU) of NSDC International, MSU is committed to preparing youth for the future workspace by co-working with industry partners to design and implement demand-driven programs. For more information, visit https://msu.edu.in

Role Overview:
We are seeking dynamic and experienced professionals to join our CSE Department as Faculty Members. The ideal candidate will possess a strong academic background and industry exposure in computer applications and emerging technologies. You will be responsible for teaching, mentoring, curriculum development, and research in alignment with industry and academic standards.

Key Responsibilities:
Deliver engaging and effective lectures, lab sessions, and tutorials across BCA/MCA curriculum areas such as Programming, Data Structures, Algorithms, Databases, Web Technologies, AI, ML, and Software Engineering. Mentor students on academic projects, internships, research papers, and career development. Develop curriculum and content as per UGC/AICTE guidelines and industry requirements. Evaluate student performance through assessments, assignments, and examinations. Participate in academic planning, quality assurance processes, and departmental activities. Guide students in the development of real-time software applications and research innovations. Stay updated with the latest technologies, tools, and pedagogical strategies. Engage in scholarly activities including research publications, seminars, and conferences.

Qualifications & Skills Required:
Master's Degree in Computer Applications (MCA) / MSc (CS/IT) or equivalent. Minimum 2–4 years of teaching or relevant industry experience. Fresh postgraduates with exceptional skills and a passion for teaching may be considered. Proficiency in programming languages (C, C++, Java, Python, etc.). Knowledge of frameworks and tools (Spring, .NET, Node.js, Django). Hands-on experience with databases (MySQL, MongoDB, Oracle). Exposure to Cloud Platforms, DevOps tools, AI/ML, Cybersecurity, and Full-stack Development. Strong communication, mentoring, and classroom management skills. Ability to integrate real-world projects and case studies into teaching.

Desirable:
UGC-NET/SET qualified. Participation in MOOCs/NPTEL/FDPs related to Computer Applications. Experience in curriculum design and accreditation processes (NBA/NAAC).

What We Offer
Being a key player in something potentially massive and world-changing. Competitive salary and incentive structure, best in the industry. Opportunities for professional development and growth. A supportive and collaborative work environment. The chance to make a meaningful impact on the careers and lives of working professionals.

How to Apply
Interested candidates should submit a resume and cover letter detailing their qualifications and experience to careers@msu.edu.in at the earliest. Medhavi Skills University is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.

Job Types: Full-time, Permanent
Pay: ₹20,000.00 - ₹25,000.00 per month
Work Location: In person

Posted 1 day ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About DataNimbus At DataNimbus, we are on a mission to redefine how organizations leverage Data and AI to drive growth, innovation, and efficiency. Our pioneering products, such as DataNimbus Designer (a cloud-native ETL designer), datanimbus.io (a comprehensive data and integration platform), FinHub.ai (payment modernization platform) empower businesses to simplify complex workflows, adopt cutting-edge technology, and achieve sustainable scalability. With headquarters in the U.S. and offices in India and Canada, DataNimbus operates globally, fostering a culture of responsible innovation, adaptability, and customer-centricity . We pride ourselves on being a trusted partner for customers navigating the complexities of Data+AI and payment modernization. Why Join DataNimbus? At DataNimbus, we believe in shaping a sustainable, AI-driven future while offering an environment that prioritizes learning, innovation, and growth . Our core values—Customer-Centricity, Simplicity, Curiosity, Responsibility, and Adaptability—are the foundation of our workplace, ensuring every team member can make a meaningful impact. Joining DataNimbus means being part of a dynamic team where you can: Work with cutting-edge technologies and revolutionize workflows in Data+AI solutions. Contribute to solutions that are trusted by global businesses for their scalability, security, and efficiency. Grow personally and professionally in a culture that values curiosity and continuous learning. If you're passionate about innovation, ready to solve complex challenges with simplicity, and eager to make a difference, DataNimbus is the place for you. Key Responsibilities: Handle a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to’s and productionalizing customer use cases. Work with engagement managers to scope variety of professional services work with input from the customer. Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications. Consult on architecture and design, bootstrap or implement customer projects which leads to a customers’ successful understanding, evaluation and adoption of Databricks. Support customer operational issues with an escalated level of support. Ensure that the technical components of the engagement are delivered to meet customer’s needs by working with the Project Manager, Architect, and Customer teams. Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver high-quality data solutions. Mentor and provide guidance to junior data engineers and team members. Required Qualifications: 5+ years experience in data engineering, data architecture, data platforms & analytics. At least 4+ years experience with Databricks, PySpark, Python, and SQL. Consulting / customer facing experience, working with external clients across a variety of industry markets. Comfortable writing code in both Python and SQL. Proficiency in SQL and experience with data warehousing solutions. Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one. Strong understanding of data modeling, ETL processes, and data architecture principles. Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals. 
Familiarity with CI/CD for production deployments – GitHub, Azure DevOps, Azure Pipelines. Working knowledge of MLOps methodologies. Design and deployment of performant end-to-end data architectures. Experience with technical project delivery – managing scope and timelines. Documentation and white-boarding skills. Experience working with clients and managing conflicts. Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects. Good to have Databricks Certifications. Strong communication and collaboration skills. Excellent problem-solving skills. Interested? Send in your CV to careers@datanimbus.com ASAP!
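For a flavour of the day-to-day PySpark work mentioned in the qualifications, here is a minimal transformation sketch. The table names, columns, and paths are invented for illustration; on Databricks the SparkSession is provided for you, and Delta tables would normally replace the CSV source.

```python
# Minimal PySpark aggregation: raw orders -> daily revenue per country (invented schema).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-revenue").getOrCreate()

orders = (
    spark.read.option("header", True)
    .option("inferSchema", True)
    .csv("/mnt/raw/orders/")  # placeholder path
)

daily_revenue = (
    orders
    .withColumn("order_date", F.to_date("order_ts"))  # placeholder column names
    .groupBy("order_date", "country")
    .agg(F.sum("amount").alias("revenue"), F.count("*").alias("order_count"))
)

# On Databricks this would typically be written to a Delta table instead of Parquet files.
daily_revenue.write.mode("overwrite").parquet("/mnt/curated/daily_revenue/")
```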

Posted 1 day ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Key Responsibilities:
Design, develop, and maintain web applications using Python and React.js. Integrate the front end with backend services using Python and FastAPI. Gather functional requirements, develop technical specifications, and handle project and test planning. Design and develop software prototypes or proofs of concept. Act in a technical leadership capacity, applying technical expertise to challenging programming and design problems. Coordinate closely with ML Engineers to integrate machine learning models and ensure seamless functionality. Perform a DevOps role in managing the build-to-operate lifecycle of the solutions that we develop. Contribute to the design and architecture of the project.

Qualifications:
Design, develop and maintain the server side of web applications. Experience in Python development, with a strong understanding of Python web frameworks such as Django or Flask. Solid understanding of machine learning algorithms, techniques, and libraries, such as TensorFlow, PyTorch, scikit-learn, or Keras. Working knowledge of AWS services like S3 and Lambda. Experience with cloud platforms and services, such as AWS, Azure, or Google Cloud Platform, including cloud-native development, deployment, and monitoring. Experience with relational and NoSQL databases, such as MySQL, PostgreSQL, MongoDB, or Cassandra. Familiarity with DevOps practices and tools, such as Docker, Kubernetes, Jenkins, Git, and CI/CD pipelines.
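Since this role combines FastAPI backends with ML-model integration, a minimal serving sketch might look like the following. The model file, feature names, and route are hypothetical; the posting does not specify how models are packaged.

```python
# Serve a pre-trained scikit-learn model behind a FastAPI endpoint (hypothetical model and schema).
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # placeholder: a fitted scikit-learn estimator


class Features(BaseModel):
    # Placeholder feature schema; real models define their own inputs.
    tenure_months: float
    monthly_spend: float


@app.post("/predict")
def predict(features: Features):
    X = [[features.tenure_months, features.monthly_spend]]
    prediction = model.predict(X)[0]
    return {"prediction": float(prediction)}
```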

Posted 1 day ago

Apply

0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Key Responsibilities
• Design, deliver and maintain the appropriate data solution to provide the correct data for analytical development to address key issues within the organization.
• Collaborate with a cross-functional team (such as Tableau developers and analysts) to deliver high-quality results.
• Develop and operate modern data architecture approaches to create data assets for the organization, putting data quality at the focus of all data solutions.

Qualifications & Experience
• Bachelor's Degree or higher in Computer Science, Statistics, Business, Information Technology, or a related field (Master's degree preferred)
• 10+ years of experience in delivering and supporting data engineering capabilities and building enterprise data solutions and data assets
• Experience with cloud services within Azure, AWS, or GCP platforms (preferably Azure)
• Experience with analytical tools (preferably SQL, dbt, Snowflake, BigQuery, Tableau)
• Experience with design and software programming development (preferably Python, JavaScript and/or R) preferred
• Experience with DevOps and CI/CD processes is a plus
• Must have experience in Python development and a strong SQL foundation.

Posted 1 day ago

Apply

0 years

0 Lacs

India

On-site

https://goodspace.ai/jobs/Full-Stack-Developer?id=28889&applySource=LinkedIn_Jobs&source=campaign_LinkedIn_Jobs-Kritika_fullstackdeveloper-28889

Key Skills: JavaScript, Node.js, Python, React.js, SQL

Job Description
Key Responsibilities
Frontend Development
• Build and maintain responsive SPAs using React.js and Next.js.
• Collaborate with design and product teams on component libraries.
Backend Development
• Develop RESTful APIs using Node.js with secure, scalable microservices.
• Implement real-time communication (Socket.io, Kafka).
• Manage cloud deployment (AWS EC2, S3, ELB).
Mobile App Development
• Build cross-platform applications using Flutter (iOS/Android).
• Integrate camera-based QR code scanning and OCR for visitor cards.
Deployment & DevOps
• Configure CI/CD pipelines using Jenkins or AWS CodePipeline.
• Containerize services with Docker; deploy using Kubernetes.

Required Skills & Qualifications
• Strong command of JavaScript, React.js/Next.js, and Node.js.
• Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
• Proven experience with, or familiarity with, microservices architecture and cloud deployments (AWS).
• Experience with, or familiarity with, Kafka, Docker, Kubernetes, and CI/CD pipelines.
• Familiarity with PostgreSQL, MongoDB, and Firebase Realtime DB.
• Good understanding of secure coding practices, encryption, and SSL/TLS.
• A great attitude toward learning new skills and upskilling existing ones.
• Strong communication skills and experience working with cross-platform teams.
• Experience with SaaS products and admin dashboards.
• Knowledge of Figma UI/UX designs.
• Familiarity with Agile/Scrum workflows.

Posted 1 day ago

Apply

0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Company is growing its solution architecture team and is looking for an experienced Solution Architect to join the team.
• Produces (or ensures production of) all architectural deliverables under our development methodology.
• Identifies all areas of risk, including architectural, technical and security risks, and manages them with the cooperation of the appropriate stakeholders, defining appropriate work to address them or escalating as necessary.
• Ensures that the architecture, any divergences and risks are clearly communicated and agreed by all stakeholders.
• Provides technical leadership to other solution architects.
• Drives alignment of the Company landscape with the enterprise target picture, including system retirement.
• Provides design assurance for solution architecture to ensure solutions are implemented as designed, and to the correct standards and quality.

Skills & Experience:
• Experience defining, documenting, and architecting solutions across a hybrid and on-premise cloud estate, potentially requiring integration with legacy technologies and products for large projects/programmes.
• Experience providing architectural support and assurance to a DevOps team; application of Agile practices would be advantageous.
• Experience with technical selection and RFI/RFP processes and undertaking supplier assurance of third-party products.
• Experience leading architectural design authorities for projects and working within Enterprise Architectural Governance and Enterprise Portfolio Management frameworks.
• Stakeholder management and the ability to influence at varying levels (e.g. C-suite, developers, testers, project managers, etc.).
• Ability to convey architecture decisions through a variety of communication techniques to both technical and non-technical stakeholders.
• Broad knowledge of IT technologies and architectural styles/patterns (e.g. event-driven, serverless, and microservices architectures, as well as SOA and layered).
• Applied knowledge of the Open Agile Framework (Open.org) would be desirable but not necessary.
• Knowledge of Finance and Accounting business processes.
• Experience in SAP ERP implementations (S/4 HANA preferable) and Oracle E-Business Suite is an advantage.
• Experience leading a Finance Solution Architecture function in complex large-scale Finance Transformation programmes.

Posted 1 day ago

Apply

2.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job description 🚀 Job Title: AI Engineer Company: Darwix AI Location: Gurgaon (On-site) Type: Full-Time Experience: 2-6 Years Level: Senior Level 🌐 About Darwix AI Darwix AI is one of India’s fastest-growing GenAI startups, revolutionizing the future of enterprise sales and customer engagement with real-time conversational intelligence. We are building a GenAI-powered agent-assist and pitch intelligence suite that captures, analyzes, and enhances every customer interaction—across voice, video, and chat—in real time. We serve leading enterprise clients across India, the UAE, and Southeast Asia and are backed by global VCs, top operators from Google, Salesforce, and McKinsey, and CXOs from the industry. This is your opportunity to join a high-caliber founding tech team solving frontier problems in real-time voice AI, multilingual transcription, retrieval-augmented generation (RAG), and fine-tuned LLMs at scale. 🧠 Role Overview As the AI Engineer, you will drive the development, deployment, and optimization of AI systems that power Darwix AI's real-time conversation intelligence platform. This includes voice-to-text transcription, speaker diarization, GenAI summarization, prompt engineering, knowledge retrieval, and real-time nudge delivery. You will lead a team of AI engineers and work closely with product managers, software architects, and data teams to ensure technical excellence, scalable architecture, and rapid iteration cycles. This is a high-ownership, hands-on leadership role where you will code, architect, and lead simultaneously. 🔧 Key Responsibilities 1. AI Architecture & Model Development Architect end-to-end AI pipelines for transcription, real-time inference, LLM integration, and vector-based retrieval. Build, fine-tune, and deploy STT models (Whisper, Wav2Vec2.0) and diarization systems for speaker separation. Implement GenAI pipelines using OpenAI, Gemini, LLaMA, Mistral, and other LLM APIs or open-source models. 2. Real-Time Voice AI System Development Design low-latency pipelines for capturing and processing audio in real-time across multi-lingual environments. Work on WebSocket-based bi-directional audio streaming, chunked inference, and result caching. Develop asynchronous, event-driven architectures for voice processing and decision-making. 3. RAG & Knowledge Graph Pipelines Create retrieval-augmented generation (RAG) systems that pull from structured and unstructured knowledge bases. Build vector DB architectures (e.g., FAISS, Pinecone, Weaviate) and connect to LangChain/LlamaIndex workflows. Own chunking, indexing, and embedding strategies (OpenAI, Cohere, Hugging Face embeddings). 4. Fine-Tuning & Prompt Engineering Fine-tune LLMs and foundational models using RLHF, SFT, PEFT (e.g., LoRA) as needed. Optimize prompts for summarization, categorization, tone analysis, objection handling, etc. Perform few-shot and zero-shot evaluations for quality benchmarking. 5. Pipeline Optimization & MLOps Ensure high availability and robustness of AI pipelines using CI/CD tools, Docker, Kubernetes, and GitHub Actions. Work with data engineering to streamline data ingestion, labeling, augmentation, and evaluation. Build internal tools to benchmark latency, accuracy, and relevance for production-grade AI features. 6. Team Leadership & Cross-Functional Collaboration Lead, mentor, and grow a high-performing AI engineering team. Collaborate with backend, frontend, and product teams to build scalable production systems. 
Participate in architectural and design decisions across AI, backend, and data workflows. 🛠️ Key Technologies & Tools Languages & Frameworks: Python, FastAPI, Flask, LangChain, PyTorch, TensorFlow, HuggingFace Transformers Voice & Audio: Whisper, Wav2Vec2.0, DeepSpeech, pyannote.audio, AssemblyAI, Kaldi, Mozilla TTS Vector DBs & RAG: FAISS, Pinecone, Weaviate, ChromaDB, LlamaIndex, LangGraph LLMs & GenAI APIs: OpenAI GPT-4/3.5, Gemini, Claude, Mistral, Meta LLaMA 2/3 DevOps & Deployment: Docker, GitHub Actions, CI/CD, Redis, Kafka, Kubernetes, AWS (EC2, Lambda, S3) Databases: MongoDB, Postgres, MySQL, Pinecone, TimescaleDB Monitoring & Logging: Prometheus, Grafana, Sentry, Elastic Stack (ELK) 🎯 Requirements & Qualifications👨‍💻 Experience 2-6 years of experience in building and deploying AI/ML systems, with at least 2+ years in NLP or voice technologies. Proven track record of production deployment of ASR, STT, NLP, or GenAI models. Hands-on experience building systems involving vector databases, real-time pipelines, or LLM integrations. 📚 Educational Background Bachelor's or Master's in Computer Science, Artificial Intelligence, Machine Learning, or a related field. Tier 1 institute preferred (IITs, BITS, IIITs, NITs, or global top 100 universities). ⚙️ Technical Skills Strong coding experience in Python and familiarity with FastAPI/Django. Understanding of distributed architectures, memory management, and latency optimization. Familiarity with transformer-based model architectures, training techniques, and data pipeline design. 💡 Bonus Experience Worked on multilingual speech recognition and translation. Experience deploying AI models on edge devices or browsers. Built or contributed to open-source ML/NLP projects. Published papers or patents in voice, NLP, or deep learning domains. 🚀 What Success Looks Like in 6 Months Lead the deployment of a real-time STT + diarization system for at least 1 enterprise client. Deliver high-accuracy nudge generation pipeline using RAG and summarization models. Build an in-house knowledge indexing + vector DB framework integrated into the product. Mentor 2–3 AI engineers and own execution across multiple modules. Achieve <1 sec latency on real-time voice-to-nudge pipeline from capture to recommendation. 💼 What We Offer Compensation: Competitive fixed salary + equity + performance-based bonuses Impact: Ownership of key AI modules powering thousands of live enterprise conversations Learning: Access to high-compute GPUs, API credits, research tools, and conference sponsorships Culture: High-trust, outcome-first environment that celebrates execution and learning Mentorship: Work directly with founders, ex-Microsoft, IIT-IIM-BITS alums, and top AI engineers Scale: Opportunity to scale an AI product from 10 clients to 100+ globally within 12 months ⚠️ This Role is NOT for Everyone 🚫 If you're looking for a slow, abstract research role—this is NOT for you. 🚫 If you're used to months of ideation before shipping—you won't enjoy our speed. 🚫 If you're not comfortable being hands-on and diving into scrappy builds—you may struggle. ✅ But if you’re a builder, architect, and visionary—who loves solving hard technical problems and delivering real-time AI at scale, we want to talk to you. 🔍 Final Thought This is not just a job. This is your opportunity to build the world’s most scalable AI sales intelligence platform—from India, for the world.
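To ground the RAG portion of this role, below is a compact Python sketch of the retrieve step: embed a handful of knowledge snippets, index them in FAISS, and pull the nearest chunks for a query before they would be passed into an LLM prompt. The embedding model and snippets are placeholders, and production systems (as the posting notes) would typically use a managed vector DB such as Pinecone or Weaviate.

```python
# Minimal retrieval step of a RAG pipeline: embed, index with FAISS, search (toy data).
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder embedding model

chunks = [
    "Objection handling: acknowledge, clarify, then present pricing tiers.",
    "Always confirm the customer's preferred language before the pitch.",
    "Escalate compliance questions to the legal knowledge base.",
]  # placeholder knowledge-base chunks

embeddings = np.asarray(model.encode(chunks), dtype="float32")
index = faiss.IndexFlatL2(embeddings.shape[1])
index.add(embeddings)

query = "How should an agent respond to a pricing objection?"
query_vec = np.asarray(model.encode([query]), dtype="float32")
distances, ids = index.search(query_vec, k=2)

retrieved = [chunks[i] for i in ids[0]]
# These retrieved chunks would be inserted into the LLM prompt for grounded generation.
print(retrieved)
```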

Posted 1 day ago

Apply

0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Job Description: VMware/Windows/Cloud (AWS/Azure) Administrator
Hands-on experience in ESXi/vCenter installation and configuration.
Experience troubleshooting issues on ESXi hosts.
Experience troubleshooting issues on vCenter appliances.
Experience in datacenter operations, monitoring the health of servers, UCS, and vCenter.
Experience with Shell, PowerShell, or Python scripting.
Expert knowledge of VMware ESXi patching/upgrades, VUM/LCM, baselines, and host profile management.
Ability to set up resource pools and a VMware Distributed Resource Scheduler (DRS) cluster, resource management, and resource monitoring, along with a VMware High Availability (HA) cluster.
Proven experience in ESX server installation and configuration of virtual switches, network connections, port groups, and storage.
Validated understanding to support Level 3 OS monitoring: client software, backup and recovery client, automated security, and health-checking clients.
Windows 2003/2008/2012/2016/2019/2022 server administration.
Advanced OS troubleshooting skills.
Server builds using images.
Strong knowledge of installing, configuring, and managing Microsoft Windows OS and VMware.
Strong knowledge of server management tools, hardware troubleshooting, and firmware & BIOS upgrades.
Strong troubleshooting skills on Windows Server (blue screen errors, performance issues, permissions issues).
Knowledge of and troubleshooting experience with vRealize Automation.

Cloud (Azure)
Management of Windows servers in Azure cloud environments.
Provision and manage infrastructure resources in the cloud using tools like Azure Resource Manager (ARM) templates or Terraform.
Investigate and resolve issues related to applications, infrastructure, or deployment pipelines.
Design, implement, and manage CI/CD pipelines using Azure DevOps.
Automate infrastructure provisioning and deployments using Infrastructure as Code (IaC) tools like Terraform, ARM templates, or Azure CLI.
Monitor and optimize Azure environments to ensure high availability, performance, and security.
Troubleshoot and resolve issues related to build, deployment, and infrastructure.
Support all cloud resources in Azure; deploy and manage Azure virtual machines, provisioning VNets, VMs, and load balancers.
Configure and troubleshoot Azure services such as virtual networks, storage accounts, Key Vault, load balancers, and clusters.
Troubleshoot issues, perform root cause analysis, and implement corrective/preventive actions.
Hands-on experience migrating workloads from on-premises to Azure.
Azure implementation and troubleshooting experience is required, including security configuration, patching, documentation, and administration (Azure certifications appreciated).
Assist with infrastructure migration strategies such as large-scale application transfers to the cloud.

File Services:
DFS configuration; knowledge of and troubleshooting for DFSR configurations.
Experience in bulk data migration.
NTFS permissions administration.
SMB share configuration and permissions.
Windows file share configuration and troubleshooting of performance issues.
Knowledge of Active Directory.
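As a small illustration of the Azure scripting this listing calls for, the following Python sketch lists the virtual machines visible to a credential using the Azure SDK for Python (azure-identity and azure-mgmt-compute). The subscription ID, environment variable name, and authentication setup are assumptions for the example, not details from the posting.

import os

from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

def list_vms(subscription_id: str) -> None:
    """Print the name and location of every VM visible to the credential."""
    credential = DefaultAzureCredential()  # picks up env vars, managed identity, or az login
    compute = ComputeManagementClient(credential, subscription_id)
    for vm in compute.virtual_machines.list_all():
        print(f"{vm.name}\t{vm.location}")

if __name__ == "__main__":
    # Assumed convention: subscription ID supplied via an environment variable.
    list_vms(os.environ["AZURE_SUBSCRIPTION_ID"])

An equivalent inventory could be produced with PowerShell or Azure CLI; the point is simply that routine health and inventory checks in this role are usually scripted rather than done by hand.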

Posted 1 day ago

Apply

0.0 - 6.0 years

35 - 42 Lacs

Kochi, Kerala

On-site

Responsibilities
Hands-on maintenance of PostgreSQL databases, ensuring uptime, reliability, and performance.
Design and implement procedures for backup, failover, and recovery to deliver data integrity and business continuity.
Provide hands-on database expertise to development teams, advising on schema design, indexing, and query optimization.
Proactively monitor database health and performance; identify and resolve issues before they impact production.
Collaborate with internal teams and service providers (AWS, EnterpriseDB, etc.) in the resolution of issues.
Work closely with DevOps and Engineering to integrate safe database change processes into delivery pipelines.
Establish and document database standards, policies, and best practices.
Contribute to the broader data architecture strategy as the organization scales and evolves.
Recommend and implement best practices for data security, compliance, and scalability.
Define, agree, and maintain an improvement roadmap for the database estate.

Proven experience required
6+ years of hands-on experience working with PostgreSQL in complex, production environments.
Demonstrable expertise in hands-on operations, managing the ongoing hygiene of a PostgreSQL estate, including backup, point-in-time recovery, replication, and failover – ideally with Barman and a PostgreSQL load balancer.
Deep technical knowledge of PostgreSQL internals, with experience in query optimization, indexing strategies, and performance tuning of DB instance and host parameters.
Experience working with cloud-based and/or containerized infrastructure.
Proficient in scripting (e.g., Bash, Python) to automate database operations and maintenance tasks.
Solid understanding of Linux system administration as it relates to database performance and configuration.
Demonstrates the drive and ability to independently identify required work, negotiate priorities, and efficiently deliver on agreed objectives.
Strong communication skills, with the ability to explain database concepts and trade-offs to both technical and non-technical stakeholders.

Desirable:
Exposure to other database technologies (e.g., MySQL, MongoDB, Redis).
Experience with observability and monitoring tools (e.g., Prometheus, Grafana, pg_stat_statements).
Experience with infrastructure-as-code techniques (e.g., Terraform, Ansible).

Working Conditions
Office based: 5 days in the Kochi office; shift time will be from 11 am to 8 pm IST.
Job Types: Full-time, Permanent
Pay: ₹3,500,000.00 - ₹4,200,000.00 per year
Benefits:
Commuter assistance
Health insurance
Paid sick time
Provident Fund
Ability to commute/relocate:
Ernakulam, Kerala: Reliably commute or planning to relocate before starting work (Required)
Experience:
Database administration: 6 years (Required)
PostgreSQL DBA: 6 years (Required)
Work Location: In person
Expected Start Date: 30/08/2025
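As one concrete example of the proactive monitoring and query-optimization work described above, the following Python sketch pulls the heaviest statements from pg_stat_statements. The DSN is a placeholder, and the column names assume PostgreSQL 13+ with the pg_stat_statements extension installed; older versions expose total_time/mean_time instead.

import psycopg2

# Placeholder connection string; a real deployment would read this from config or env.
DSN = "host=localhost dbname=postgres user=postgres"

QUERY = """
    SELECT query, calls, total_exec_time, mean_exec_time
    FROM pg_stat_statements
    ORDER BY total_exec_time DESC
    LIMIT 5;
"""

def top_statements(dsn: str = DSN) -> None:
    """Print the five statements with the highest cumulative execution time."""
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            cur.execute(QUERY)
            for query, calls, total_ms, mean_ms in cur.fetchall():
                print(f"{calls:>8} calls  {total_ms:10.1f} ms total  {mean_ms:8.2f} ms avg  {query[:60]}")

if __name__ == "__main__":
    top_statements()

A report like this is typically the starting point for indexing or query-rewrite work, before anything is changed in production.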

Posted 1 day ago

Apply

5.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Job Title:- Microsoft Dynamics 365 Technical Consultant/Developer
Work Location:- LTIM Pan India
Experience Required:- 5+ Years
Notice Period:- Immediate Joiner

Job Description:- MS Dynamics 365 Technical Consultant/Developer
• The Dynamics 365 Technical Consultant/Developer needs to be well versed in Dynamics 365 Customer Service, Customer Service Hub, Knowledge Management, Self-Service Portal, Reports & Dashboards using Power BI/SSRS, Power Automate, and Power Apps.
• The selected candidate will work as part of the development team and report to the project manager and technical lead.

Essential Job Functions & Required Skills:-
• Proficient in developing, deploying, customizing, and integrating Dynamics 365 Customer Service.
• Proficiency in designing, developing, and implementing business processes, plugins, and workflows.
• Experience in designing and creating custom entities and relationships between those entities.
• Experience in handling multiple business units with data segregation and cross-BU access/roles.
• Strong hands-on experience in developing Dynamics self-service portals and custom portals integrating multiple Dynamics instances for case creation, case status updates, adding comments, uploading screenshots, and knowledge management integrated with SharePoint.
• Experience in software development using Microsoft .NET, C#, JSON, and jQuery.
• Experience with Customer Service Hub, Unified Client Interface, Client API form context, and model-driven apps.
• Strong hands-on experience with scheduled reports and custom SSRS reports using multiple data sources (external & internal).
• Experience in developing automations using Power Automate and Power Apps.
• Experience in developing applications that can be published to Azure.
• Experience in handling huge volumes of case and email records.
• Experience in using third-party tools to configure, customize, monitor, and troubleshoot the Dynamics application.
• Experience with Agile/Scrum-based process implementation using tools like Azure DevOps, with pipeline automation for CI/CD.
• Experience in designing and automating survey forms using Microsoft Forms Pro.
• Dynamics 365 Customer Service Insights, AI-powered dashboards, AI predictive analytics, and other AI-driven insights are a plus.
• Hands-on experience in developing Power BI dashboards & reports is a plus.
• Provide troubleshooting/technical support.
• Actively participate in design and analysis sessions to ensure sound team decision-making.
• Support system and user acceptance testing activities, including issue resolution.
• Complete technical documentation to ensure the system is fully documented.
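To illustrate the kind of portal-to-Dynamics integration the listing describes (case creation from a self-service portal), here is a hedged Python sketch against the Dataverse Web API. The organization URL, access token, and field choices are assumptions for the example rather than details from the posting; a real integration would obtain the token through Azure AD (OAuth 2.0) and map fields to the customer's case schema.

import requests

# Placeholders only: substitute a real org URL and a valid OAuth bearer token.
ORG_URL = "https://yourorg.api.crm.dynamics.com"
ACCESS_TOKEN = "<oauth-bearer-token>"

def create_case(title: str, description: str) -> str:
    """Create a Case (entity set "incidents") and return the OData-EntityId of the new record."""
    resp = requests.post(
        f"{ORG_URL}/api/data/v9.2/incidents",
        headers={
            "Authorization": f"Bearer {ACCESS_TOKEN}",
            "Content-Type": "application/json",
            "OData-Version": "4.0",
            "Accept": "application/json",
        },
        json={"title": title, "description": description},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.headers.get("OData-EntityId", "")

Status updates, comments, and attachment uploads from the portal would follow the same pattern with PATCH requests and related entities.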

Posted 1 day ago

Apply

6.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

About the Company
They balance innovation with an open, friendly culture and the backing of a long-established parent company, known for its ethical reputation. We guide customers from what’s now to what’s next by unlocking the value of their data and applications to solve their digital challenges, achieving outcomes that benefit both business and society.

About the Client
Our client is a global digital solutions and technology consulting company headquartered in Mumbai, India. The company generates annual revenue of over $4.29 billion (₹35,517 crore), reflecting a 4.4% year-over-year growth in USD terms. It has a workforce of around 86,000 professionals operating in more than 40 countries and serves a global client base of over 700 organizations. Our client operates across several major industry sectors, including Banking, Financial Services & Insurance (BFSI), Technology, Media & Telecommunications (TMT), Healthcare & Life Sciences, and Manufacturing & Consumer. In the past year, the company achieved a net profit of $553.4 million (₹4,584.6 crore), marking a 1.4% increase from the previous year. It also recorded a strong order inflow of $5.6 billion, up 15.7% year-over-year, highlighting growing demand across its service lines. Key focus areas include Digital Transformation, Enterprise AI, Data & Analytics, and Product Engineering—reflecting its strategic commitment to driving innovation and value for clients across industries.

Job Title : DevOps Engineer
Job Locations : Mumbai
Experience : 6+ Years
Required Skills: AWS + Kubernetes
Education Qualification : Any Graduation
Work Mode : Hybrid
Employment Type : Contractual
Notice Period : Immediate - 10 Days

Job description: DevOps Engineer
Essential Skills/Experience
Experience installing, managing, and maintaining on-premises Kubernetes clusters running in VMs.
Experience with AWS and with Kubernetes clusters deployed on EC2 instances in AWS.
Experience with Kubernetes security and compliance.
Experience monitoring and troubleshooting Kubernetes cluster performance and application issues.
Strong knowledge of Kubernetes networking concepts, service meshes (Istio, Linkerd, etc.), and ingress controllers.
Proficiency in Docker and container orchestration.
Proficiency in Kustomize and its template-free approach to defining Kubernetes configuration.
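As a brief illustration of the cluster monitoring and troubleshooting skills listed above, the following Python sketch uses the official Kubernetes client to report node readiness. It assumes a local kubeconfig (or in-cluster credentials when run inside a pod) and is illustrative only, not part of the posting.

from kubernetes import client, config

def report_node_readiness() -> None:
    """Print each node name and whether its Ready condition is True."""
    config.load_kube_config()  # inside a pod, use config.load_incluster_config() instead
    v1 = client.CoreV1Api()
    for node in v1.list_node().items:
        ready = next(
            (c.status for c in node.status.conditions if c.type == "Ready"),
            "Unknown",
        )
        print(f"{node.metadata.name}\tReady={ready}")

if __name__ == "__main__":
    report_node_readiness()

The same client can be extended to check pod restarts, pending pods, or pressure conditions, which is usually the first pass when diagnosing cluster performance issues.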

Posted 1 day ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies