3.0 - 8.0 years
0 Lacs
Delhi
On-site
As a Snowflake Solution Architect, you will own and drive the development of Snowflake solutions and products as part of the COE. You will work with and guide the team to build solutions using the latest innovations and features launched by Snowflake, conduct sessions on the latest and upcoming launches in the Snowflake ecosystem, and liaise with Snowflake Product and Engineering to stay ahead of new features, innovations, and updates. You will also publish articles and architectures that solve business problems, and work on accelerators that demonstrate how Snowflake solutions and tools integrate and compare with other platforms such as AWS, Azure Fabric, and Databricks.

In this role, you will lead the post-sales technical strategy and execution for high-priority Snowflake use cases across strategic customer accounts. You will triage and resolve advanced, long-running customer issues while ensuring timely and clear communication, and develop and maintain robust internal documentation, knowledge bases, and training materials to scale support efficiency. You will also support enterprise-scale RFPs focused on Snowflake.

To be successful in this role, you should have at least 8 years of industry experience, including a minimum of 3 years in a Snowflake consulting environment. You should have experience implementing and operating Snowflake-centric solutions, and proficiency in implementing data security measures, access controls, and design within the Snowflake platform. An understanding of the complete data analytics stack and workflow, from ETL to data platform design to BI and analytics tools, is essential. Strong skills in databases, data warehouses, and data processing, as well as extensive hands-on expertise with SQL and SQL analytics, are required. Familiarity with data science concepts and Python is a strong advantage. Knowledge of Snowflake components such as Snowpipe, query parsing and optimization, Snowpark, Snowflake ML, authorization and access control management, metadata management, infrastructure management and auto-scaling, the Snowflake Marketplace for datasets and applications, and DevOps and orchestration tools like Airflow, dbt, and Jenkins is necessary. Snowflake certifications are good to have. Strong communication and presentation skills are essential, as you will engage with both technical and executive audiences, and you should be skilled at working collaboratively across engineering, product, and customer success teams.

This position is open in all Xebia office locations, including Pune, Bangalore, Gurugram, Hyderabad, Chennai, Bhopal, and Jaipur. If you meet the above requirements and are excited about this opportunity, please share your details here: [Apply Now](https://forms.office.com/e/LNuc2P3RAf)
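For candidates gauging the hands-on bar, here is a minimal sketch of connecting to Snowflake from Python and running a query with the snowflake-connector-python package; the account identifier, credentials, and warehouse names are placeholder assumptions, not details from the posting.

```python
# pip install snowflake-connector-python
import snowflake.connector

# Placeholder credentials -- supply your own account locator and secrets.
conn = snowflake.connector.connect(
    account="xy12345.ap-south-1",  # hypothetical account locator
    user="COE_USER",
    password="***",
    warehouse="COE_WH",
    database="DEMO_DB",
    schema="PUBLIC",
)
try:
    cur = conn.cursor()
    # Server version is a safe smoke test for connectivity.
    cur.execute("SELECT CURRENT_VERSION()")
    print("Snowflake version:", cur.fetchone()[0])
finally:
    conn.close()
```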
Posted 6 days ago
7.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Description
66degrees is seeking a Senior Consultant with specialized expertise in AWS. The consultant will lead and scale cloud infrastructure, ensuring high availability, automation, and security across AWS, GCP, and Kubernetes environments. You will be responsible for designing and maintaining highly scalable, resilient, and cost-optimized infrastructure while implementing best-in-class DevOps practices, CI/CD pipelines, and observability solutions. As a key part of our client's platform engineering team, you will collaborate closely with developers, SREs, and security teams to automate workflows, optimize cloud performance, and build the backbone of their microservices architecture. Candidates should be able to overlap with US working hours, be open to occasional weekend work, and be local to offices in either Noida or Gurgaon, India, as this is an in-office opportunity.

Qualifications
- 7+ years of hands-on DevOps experience with proven expertise in AWS; involvement in SRE or Platform Engineering roles is desirable.
- Experience handling high-throughput workloads with occasional spikes.
- Prior industry experience with live sports and media streaming.
- Deep knowledge of Kubernetes architecture, managing workloads, networking, RBAC, and autoscaling is required.
- Expertise in the AWS platform with hands-on VPC, IAM, EC2, Lambda, RDS, EKS, and S3 experience is required; the ability to learn GCP with GKE is desired.
- Experience with Terraform for automated cloud provisioning; Helm is desired.
- Experience with FinOps principles for cost optimization in cloud environments is required (see the cost-report sketch below).
- Hands-on experience building highly automated CI/CD pipelines using Jenkins, ArgoCD, and GitHub Actions.
- Hands-on experience with service mesh technologies (Istio, Linkerd, Consul) is required.
- Knowledge of monitoring tools such as CloudWatch, Google Cloud Logging, and distributed tracing tools like Jaeger; experience with Prometheus and Grafana is desirable.
- Proficiency in Python and/or Go for automation, infrastructure tooling, and performance tuning is highly desirable.
- Strong knowledge of DNS, routing, load balancing, VPN, firewalls, WAF, TLS, and IAM.
- Experience managing MongoDB, Kafka, or Pulsar for large-scale data processing is desirable.
- Proven ability to troubleshoot production issues, optimize system performance, and prevent downtime.
- Knowledge of multi-region disaster recovery and high-availability architectures.

Desired
- Contributions to open-source DevOps projects or a strong technical blogging presence.
- Experience with KEDA-based autoscaling in Kubernetes.

(ref:hirist.tech)
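As an illustration of the FinOps side of this role, a minimal sketch that pulls last month's AWS spend per service with boto3's Cost Explorer client; the date range is an assumption for illustration, not part of the posting.

```python
# pip install boto3 -- requires AWS credentials with Cost Explorer access.
import boto3

ce = boto3.client("ce", region_name="us-east-1")  # Cost Explorer is a global API

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2025-06-01", "End": "2025-07-01"},  # example month
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

# Print per-service spend, highest first, to spot optimization targets.
rows = resp["ResultsByTime"][0]["Groups"]
rows.sort(key=lambda r: float(r["Metrics"]["UnblendedCost"]["Amount"]), reverse=True)
for r in rows:
    amount = float(r["Metrics"]["UnblendedCost"]["Amount"])
    print(f"{r['Keys'][0]:40s} ${amount:,.2f}")
```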
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Punjab
On-site
You have an exciting opportunity to join as a DevSecOps engineer in Sydney. You should have 3+ years of extensive Python proficiency and 3+ years of Java experience. The role also requires extensive exposure to technologies such as JavaScript, Jenkins, CodePipeline, CodeBuild, and the AWS ecosystem, including the AWS Well-Architected Framework, Trusted Advisor, GuardDuty, SCP, SSM, IAM, and WAF. It is essential to have a deep understanding of automation, quality engineering, architectural methodologies, principles, and solution design. Hands-on experience with Infrastructure-as-Code tools like CloudFormation and CDK for automating deployments in AWS is preferred. Familiarity with operational observability is crucial for this role, including log aggregation, application performance monitoring, deploying auto-scaling and load-balanced, highly available applications, and managing certificates (client-server, mutual TLS, etc.).

Your responsibilities will include improving the automation of security controls, working closely with the consumer showback team on defining processes and system requirements, and designing and implementing updates to the showback platform. You will collaborate with STO/account owners to uplift the security posture of consumer accounts, work with the onboarding team to ensure security standards and policies are correctly set up, and implement enterprise minimum security requirements from the Cloud Security LRP, including data masking, encryption monitoring, perimeter protections, ingress/egress uplift, and integration of SailPoint for SSO management.
Posted 1 week ago
2.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Responsibilities
Indicative years of experience: 4-6 years (at least 2 years of strong AWS hands-on experience)

Role Description
We are looking for a Senior Software Engineer who can work closely with the team to develop on-prem/cloud solutions using TypeScript, Java, and other scripting languages. The person should have good exposure to AWS managed services and be able to pair with leads on developing cloud-based solutions for customers. The role also requires a good understanding of extreme engineering practices such as TDD, unit test coverage, pair programming, and clean code.

Reporting Relationship
This role reports to a Delivery Manager / Senior Delivery Manager.

Key Responsibilities
- Work independently in developing solutions in AWS and on-prem environments.
- Work closely with tech leads to build strong design and engineering practices in the team.
- Pair effectively with team members and tech leads to build and maintain a strong code quality framework.
- Work closely with the Scrum Master to implement Agile best practices in the team.
- Work closely with Product Owners to define user stories.
- Work independently on production incidents reported by business partners to provide resolution within defined SLAs, coordinating with other teams as needed.
- Act as an interface between the business and technical teams and communicate effectively.
- Document problem resolutions and new learnings for future use; update SOPs.
- Monitor system availability and communicate system outages to business and technical teams.
- Provide support to resolve complex system problems; triage system issues beyond resolution to the appropriate technical teams.
- Assist in analyzing, maintaining, implementing, testing, and documenting system changes and fixes.
- Provide training to new team members and other teams on business processes and applications.
- Manage the overall software development workflow.
- Provide permanent resolutions for recurring issues and build automation for repetitive tasks.

Qualifications
Skills required:
- Good exposure to TypeScript and AWS cloud development.
- Core Java, Java 8 frameworks, Java scripting; expertise in Spring Boot and Spring MVC.
- Experience with the AWS database ecosystem, RDBMS or NoSQL databases, and good exposure to SQL.
- Good exposure to extreme engineering practices such as TDD, unit test coverage, clean code, pair programming, mobbing, and incremental value delivery.
- Understanding of and exposure to microservice architecture; Domain-Driven Design and federation exposure would be an addition.
- Good hands-on experience with the core AWS services (EC2, IAM, ECS, CloudFormation, VPC, Security Groups, NAT instances, Auto Scaling, Lambda, SNS/SQS, S3, event-driven services, etc.).
- Strong notions of security best practices (e.g., using IAM roles, KMS, etc.).
- Experience with monitoring solutions such as CloudWatch, Prometheus, and the ELK stack.
- Experience building or maintaining cloud-native applications.
- Past experience with serverless approaches using AWS Lambda is a plus (see the handler sketch below).
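Since the posting calls out serverless experience with AWS Lambda as a plus, here is a minimal sketch of an event-driven Lambda handler consuming SQS messages; the queue wiring and the 'orderId' payload field are assumptions for illustration.

```python
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def handler(event, context):
    """Entry point for a Lambda subscribed to an SQS queue.

    Each invocation receives a batch of records; 'orderId' is a
    hypothetical payload attribute used only for illustration.
    """
    processed = 0
    for record in event.get("Records", []):
        body = json.loads(record["body"])
        logger.info("Processing order %s", body.get("orderId"))
        processed += 1
    return {"processed": processed}
```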
Posted 1 week ago
2.0 - 4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Responsibilities
Indicative years of experience: 2-4 years (at least 1 year of strong AWS hands-on experience)

Role Description
We are looking for a Software Engineer who can work closely with the team to develop on-prem/cloud solutions using TypeScript, Java, and other scripting languages. The person should have good exposure to AWS managed services and be able to pair with leads on developing cloud-based solutions for customers. The role also requires exposure to extreme engineering practices such as TDD, unit test coverage, pair programming, and clean code.

Reporting Relationship
This role reports to a Delivery Manager / Senior Delivery Manager.

Key Responsibilities
- Work independently on medium and complex development stories in AWS and on-prem environments.
- Work closely with senior software engineers during development, making effective use of engineering practices.
- Pair effectively with team members and senior engineers to build and maintain a strong code quality framework.
- Work closely with the Scrum Master to implement Agile best practices in the team.
- Work closely with Product Owners while participating in grooming and developing user stories.
- Work independently on production incidents reported by business partners to provide resolution within defined SLAs, coordinating with other teams as needed.
- Document problem resolutions and new learnings for future use; update SOPs.
- Monitor system availability and communicate system outages to business and technical teams.
- Provide support to resolve complex system problems; triage system issues beyond resolution to the appropriate technical teams.
- Assist in analyzing, maintaining, implementing, testing, and documenting system changes and fixes.
- Provide training to new team members and other teams on business processes and applications.
- Manage the overall software development workflow.
- Provide permanent resolutions for recurring issues and build automation for repetitive tasks.

Qualifications
Skills required:
- Good exposure to TypeScript and AWS cloud development.
- Core Java, Java 8 frameworks, Java scripting; expertise in Spring Boot and Spring MVC.
- Experience with the AWS database ecosystem, RDBMS or NoSQL databases, and good exposure to SQL.
- Good exposure to extreme engineering practices such as TDD, unit test coverage, clean code, pair programming, mobbing, and incremental value delivery.
- Understanding of and exposure to microservice architecture; Domain-Driven Design and federation exposure would be an addition.
- Good hands-on experience with the core AWS services (EC2, IAM, ECS, CloudFormation, VPC, Security Groups, NAT instances, Auto Scaling, Lambda, SNS/SQS, S3, event-driven services, etc.).
- Understand and follow security best practices (e.g., using IAM roles, KMS, etc.).
- Experience with monitoring solutions such as CloudWatch, Prometheus, and the ELK stack.
- Past experience with serverless approaches using AWS Lambda is a plus.
Posted 1 week ago
0 years
0 Lacs
Hyderabad
On-site
Our Company
At Teradata, we believe that people thrive when empowered with better information. That’s why we built the most complete cloud analytics and data platform for AI. By delivering harmonized data, trusted AI, and faster innovation, we uplift and empower our customers—and our customers’ customers—to make better, more confident decisions. The world’s top companies across every major industry trust Teradata to improve business performance, enrich customer experiences, and fully integrate data across the enterprise.

Build the Future of Data, Analytics, and AI with Teradata
Are you driven by the challenge of shaping the future of database technologies, analytics, and AI—whether in the cloud, on-premises, or hybrid environments? At Teradata, we invite you to be part of a visionary team that’s redefining the landscape of intelligent data platforms. We are actively hiring talented engineers who are passionate about building world-class products and solving complex data and analytics challenges. Join our India Development and Excellence Centre and contribute to innovations that empower global enterprises.

Analytics and AI Team
Our Analytics and AI team leads the charge in delivering next-generation capabilities within Teradata’s unified data platform. We’re pioneering technologies such as:
- Enterprise-grade databases for RAG-based AI applications and intelligent agents
- In-database analytics functions and BYOM (Bring Your Own Model) deployment
- SQL-based generative AI and agentic AI systems that combine reasoning, retrieval, and orchestration at scale

We are building intelligent, production-ready analytics that allow users to move effortlessly between traditional data processing and advanced AI workflows—all within the trusted, governed environment of Teradata VantageCloud. At Teradata, we don’t just manage data—we unlock its full potential through the power of AI and ML. As a key contributor, you’ll help architect, build, and deploy transformative software solutions that are central to our strategic vision and global impact.

Data Platform & Query Optimization Team
Our Core Database Engineering team is focused on developing cloud-native database features that support distributed query processing, autoscaling, and high availability. We’re building systems that are:
- Highly performant, secure, and reliable for large-scale analytical workloads
- Powered by elastic compute clusters for high concurrency and scalability
- Designed with advanced storage and retrieval mechanisms across diverse data sources

The Query Optimizer team is the brain behind efficient data access. From cost-based plan selection and join reordering to adaptive execution and cardinality estimation, we fine-tune every aspect of query planning to ensure optimal performance at scale. If you’re passionate about performance, this is where the magic happens.

AI-Powered Test Automation
As we deliver AI to our customers, we also harness its power internally to enhance our development lifecycle. Our AI-based test automation initiatives focus on:
- Building autonomous and agentic systems that analyze billions of test events
- Identifying root causes of failures and accelerating release decisions
- Experimenting with LLMs, vector stores, and multi-agent orchestration frameworks to evolve testing from passive verification to proactive, self-healing quality assurance

Why Teradata?
At Teradata, you’ll work with some of the brightest minds in the industry, tackle meaningful challenges, and help shape the future of data and AI. Whether you're passionate about analytics, cloud engineering, AI systems, or performance optimization, there’s a place for you here.
Posted 1 week ago
2.0 - 3.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Title: AWS Cloud Engineer
Location: Noida

The ideal candidate must be self-motivated, with a proven track record as an AWS Cloud Engineer who can help with the implementation, adoption, and day-to-day support of an AWS cloud infrastructure environment distributed among multiple regions and business units. The individual in this role must be a technical expert on AWS who understands and practices the AWS Well-Architected Framework and is familiar with multi-account strategy deployment using a Control Tower / Landing Zone setup. The ideal candidate can manage day-to-day operations, troubleshoot problems, provide routine maintenance, and enhance system health monitoring on the cloud stack.

Technical Skills
- Strong experience with AWS IaaS architectures.
- Hands-on experience deploying and supporting AWS services such as EC2, Auto Scaling, AMI management, snapshots, ELB, S3, Route 53, VPC, RDS, SES, SNS, CloudFormation, CloudWatch, IAM, Security Groups, CloudTrail, Lambda, etc.
- Experience building and supporting AWS WorkSpaces.
- Experience deploying and troubleshooting either Windows or Linux operating systems.
- Experience with AWS SSO and RBAC.
- Understanding of DevOps tools such as Terraform, GitHub, and Jenkins.
- Experience working with ITSM processes and tools such as Remedy and ServiceNow.
- Ability to operate at all levels within the organization and cross-functionally within multiple client organizations.

Responsibilities
- Planning, automation, implementation, and maintenance of the AWS platform and its associated services.
- Provide SME / L2-and-above level technical support.
- Carry out deployment and migration activities.
- Mentor and provide technical guidance to L1 engineers.
- Monitor AWS infrastructure and perform routine maintenance and operational tasks.
- Work on ITSM tickets and ensure adherence to support SLAs.
- Work on change management processes.
- Excellent analytical and problem-solving skills; exhibits excellent service to others.

Qualifications
- At least 2 to 3 years of relevant experience on AWS.
- Overall 3-5 years of IT experience working for a global organization.
- Bachelor’s degree or higher in Information Systems, Computer Science, or equivalent experience.
- AWS Certified Cloud Practitioner certification is preferred.

Location: Noida - UI, Noida, Uttar Pradesh, India
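To give a flavor of the routine monitoring automation this role describes, a minimal sketch that creates a CPU-utilization alarm for an EC2 instance with boto3; the instance ID, threshold, and SNS topic ARN are placeholder assumptions.

```python
# pip install boto3 -- requires credentials with cloudwatch:PutMetricAlarm.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

cloudwatch.put_metric_alarm(
    AlarmName="demo-ec2-high-cpu",  # hypothetical alarm name
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,              # 5-minute datapoints
    EvaluationPeriods=3,     # alarm after 15 minutes above threshold
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:ops-alerts"],  # placeholder topic
)
print("Alarm created")
```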
Posted 1 week ago
8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About Team
Tarana Wireless India's QA team plays a crucial role in ensuring the performance, reliability, and quality of its cloud-based distributed system architecture products. With a strong focus on cloud automation, use of sophisticated tools, and detailed validation, the team works across cutting-edge technologies. Their work empowers product stability and innovation through advanced test automation, close collaboration with engineering, and a culture of continuous improvement.

Job Summary
We are looking for a passionate and skilled Cloud Performance QA Engineer to evaluate the scalability, responsiveness, and resilience of our large-scale distributed system — the Tarana Cloud Suite. This includes validating cloud microservices, databases, and real-time communication with intelligent radio devices. As a key member of the QA team, you will be responsible for performance, load, stress, and soak testing, along with conducting chaos testing and fault injection to ensure system robustness under real-world and failure conditions. You'll simulate production-like environments, analyze bottlenecks, and collaborate closely with development, DevOps, and SRE teams to proactively identify and address performance issues. This role requires a deep understanding of system internals, cloud infrastructure (AWS), and modern observability tools. Your work will directly influence the quality, reliability, and scalability of our next-gen wireless platform.

Key Responsibilities
- Understand the Tarana Cloud Suite architecture — microservices, UI, data/control flows, databases, and the AWS-hosted runtime.
- Design and implement robust load, performance, scalability, and soak tests using Locust, JMeter, or similar tools.
- Set up and manage scalable test environments on AWS to mimic production loads.
- Build and maintain performance dashboards using Grafana, Prometheus, or other observability tools.
- Analyze performance test results and infrastructure metrics to identify bottlenecks and optimization opportunities.
- Integrate performance testing into CI/CD pipelines for automated baselining and regression detection.
- Collaborate with cross-functional teams to define SLAs, set performance benchmarks, and resolve performance-related issues.
- Conduct resilience and chaos testing using fault injection tools to validate system behavior under stress and failures.
- Debug and root-cause performance degradations using logs, APM tools, and resource profiling.
- Tune infrastructure parameters (e.g., autoscaling policies, thread pools, database connections) for improved efficiency.

Required Skills & Experience
- Bachelor's or Master’s degree in Computer Science, Engineering, or a related field.
- 3-8 years of experience in performance testing/engineering.
- Hands-on expertise with Locust, JMeter, or equivalent load testing tools (see the short Locust sketch below).
- Strong experience with AWS services such as EC2, ALB/NLB, CloudWatch, EKS/ECS, S3, etc.
- Familiarity with Grafana, Prometheus, and APM tools like Datadog, New Relic, or similar.
- Strong understanding of system metrics: CPU, memory, disk I/O, network throughput, etc.
- Proficiency in scripting and automation (Python preferred) for custom test scenarios and analysis.
- Experience testing and profiling REST APIs, web services, and microservices-based architectures.
- Exposure to chaos engineering tools (e.g., Gremlin, Chaos Mesh, Litmus) or fault injection practices.
- Experience with CI/CD tools (e.g., Jenkins, GitLab CI) and integrating performance tests into build pipelines.
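For reference, a minimal sketch of the kind of Locust load test the responsibilities describe; the endpoint paths, task weights, and pacing are illustrative assumptions, not Tarana APIs.

```python
# pip install locust; run with: locust -f loadtest.py --host https://staging.example.com
from locust import HttpUser, task, between

class DashboardUser(HttpUser):
    """Simulates an operator polling device status; paths are hypothetical."""
    wait_time = between(1, 3)  # think time between requests, in seconds

    @task(3)
    def list_devices(self):
        # Weighted 3x: listing is assumed to dominate real traffic.
        self.client.get("/api/v1/devices", name="/api/v1/devices")

    @task(1)
    def device_metrics(self):
        self.client.get("/api/v1/devices/42/metrics", name="/api/v1/devices/{id}/metrics")
```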
Nice to Have
- Experience with Kubernetes-based environments and container orchestration.
- Knowledge of infrastructure-as-code tools (Terraform, CloudFormation).
- Background in network performance testing and traffic simulation.
- Experience in capacity planning and infrastructure cost optimization.

About Us
Tarana’s mission is to accelerate the deployment of fast, affordable internet access around the world. Through a decade of R&D and more than $400M of investment, the Tarana team has created a unique next-generation fixed wireless access (ngFWA) technology instantiated in its first commercial platform, Gigabit 1 (G1). It delivers a game-changing advance in broadband economics in both mainstream and underserved markets, using either licensed or unlicensed spectrum. G1 started production in mid-2021 and has since been embraced by more than 250 service providers in 19 countries and 41 US states. Tarana is headquartered in Milpitas, California, with additional research and development in Pune, India. Visit our website for more on G1.
Posted 1 week ago
0 years
0 Lacs
Hyderabad
On-site
Ready to build the future with AI?
At Genpact, we don’t just keep up with technology—we set the pace. AI and digital innovation are redefining industries, and we’re leading the charge. Genpact’s AI Gigafactory, our industry-first accelerator, is an example of how we’re scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies’ most complex challenges. If you thrive in a fast-moving, innovation-driven environment, love building and deploying cutting-edge AI solutions, and want to push the boundaries of what’s possible, this is your moment.

Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.

Inviting applications for the role of Lead Consultant - Oracle Cloud Infrastructure Administrator
This position is for a hands-on technical consultant who will assist with planning, designing, executing, and administering Oracle Cloud Infrastructure.

Responsibilities
- Perform administration of OCI virtualization systems using native Oracle Cloud Infrastructure services, performing analysis, tuning, and troubleshooting.
- Hands-on experience with Oracle Cloud technologies, i.e., IAM, VCN, peering, routing, FastConnect, load balancer setup, Compute, autoscaling, Block Volume, backup and restore, File and Object Storage, Oracle Databases on OCI, and Oracle Autonomous Database.
- Perform activities that include building and decommissioning systems as part of administration activities.
- Work with application, server, and database engineers on troubleshooting activities, and perform tuning and scaling activities.
- Assist with migration of environments as necessary across data centers and assist with cloud-based initiatives.
- Responsible for systems maintenance, system upgrades, infrastructure design and layout, DR design and implementation, and physical-to-virtual migrations.
- Work closely with product development teams and provide feedback to improve product quality.
- Develop and maintain standard operating procedures and documentation.
- Provide escalation support for database-related issues.
- Coordinate and manage OCI compute instances.
- Experience deploying and migrating software computing infrastructure (storage, networking, compute, applications, middleware, security) and migrating on-premises workloads to Oracle Cloud Infrastructure.
- Experience migrating virtual machines from on-premises NTT infrastructure to Oracle Cloud IaaS.
- Experience setting up cloud network firewalls, certificates, VLBR, VCN, IP addresses, and security rules.
- Management of OCI IAM: cloud user, policy, role, compartment, and access management; cloud security management.
- Manage service instances; maintain capacity, schedules, notifications, and alerts; patch and upgrade cloud instances.
- Knowledge of Oracle Cloud IaaS and PaaS products and solutions.
- Coordination with client infrastructure and networking teams.
- IDCS configuration for SSO and federation, password policy management, and MFA management.

Nice to have
- Expertise in managing ERP databases in a cloud environment, preferably RDS.
- Proficient in Oracle Apps, RAC, ASM, Data Guard, Oracle Clusterware, RMAN, and OEM.
- Proficiency in Unix shell scripting.
- Sound communication skills.
- Certification in Oracle Cloud Infrastructure is helpful.
- Experience in OCI administration alongside AWS and Azure.

Qualifications
Bachelor's degree in Computer Science, Information Systems, Engineering, related fields, or equivalent professional experience.

Preferred Qualifications
Very good written and presentation/verbal communication skills, with experience in a customer-facing role. In-depth requirement-understanding skills with good analytical and problem-solving ability, interpersonal efficiency, and a positive attitude.

Why join Genpact?
- Lead AI-first transformation: build and scale AI solutions that redefine industries.
- Make an impact: drive change for global enterprises and solve business challenges that matter.
- Accelerate your career: gain hands-on experience, world-class training, mentorship, and AI certifications to advance your skills.
- Grow with the best: learn from top engineers, data scientists, and AI experts in a dynamic, fast-moving workplace.
- Committed to ethical AI: work in an environment where governance, transparency, and security are at the core of everything we build.
- Thrive in a values-driven culture: our courage, curiosity, and incisiveness - built on a foundation of integrity and inclusion - allow your ideas to fuel progress.

Come join the 140,000+ coders, tech shapers, and growth makers at Genpact and take your career in the only direction that matters: up. Let’s build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way.
Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.

Job: Lead Consultant
Primary Location: India-Hyderabad
Schedule: Full-time
Education Level: Bachelor's / Graduation / Equivalent
Job Posting: Jul 21, 2025, 8:29:37 AM
Unposting Date: Ongoing
Master Skills List: Consulting
Job Category: Full Time
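As a taste of the scripted OCI administration this role involves, a minimal sketch using the oci Python SDK to list compute instances in a compartment; the config profile and compartment OCID are placeholder assumptions.

```python
# pip install oci -- reads credentials from ~/.oci/config by default.
import oci

config = oci.config.from_file()  # assumes a standard DEFAULT profile
compute = oci.core.ComputeClient(config)

# Placeholder compartment OCID; substitute your own.
compartment_id = "ocid1.compartment.oc1..exampleuniqueID"

instances = compute.list_instances(compartment_id=compartment_id).data
for inst in instances:
    # Useful for quick capacity / lifecycle audits.
    print(inst.display_name, inst.lifecycle_state, inst.shape)
```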
Posted 1 week ago
3.0 - 6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
We are seeking a highly skilled LLM Engineer with a minimum of 3 to 6 years of experience in software development and 1-2 years in LLM solution development. The ideal candidate will have strong experience working with Python, LLM solution patterns and tools (RAG, vector DBs, agentic workflows, LoRA, etc.), cloud platforms (AWS, Azure, GCP), and DevOps tools. They will be responsible for designing and developing scalable software solutions, leading architecture design, and ensuring the performance and reliability of our systems.

Responsibilities:
- Take ownership of architecture design and development of scalable and distributed software systems.
- Translate business requirements into technical requirements.
- Own technical execution, ensuring code quality, adherence to deadlines, and efficient resource allocation.
- Data-driven decision-making with a focus on achieving product goals.
- Design, develop, and deploy LLM-based pipelines involving patterns like RAG, agentic workflows, and PEFT (e.g., LoRA, QLoRA).
- Own the complete software development lifecycle, including requirements analysis, design, coding, testing, and deployment.
- Utilize AWS/Azure services such as IAM, monitoring, load balancing, autoscaling, databases, networking, storage, ECR, AKS, ACR, etc.
- Implement DevOps practices using tools like Docker and Kubernetes to ensure continuous integration and delivery; develop DevOps scripts for automation and monitoring.
- Collaborate with cross-functional teams, conduct code reviews, and provide guidance on software design and best practices.

Qualifications:
- Bachelor’s degree in Computer Science, Information Technology, or a related field (or equivalent work experience). At least 5 years of experience in software development, with relevant work experience in LLM app development.
- Strong coding skills with proficiency in Python and JavaScript.
- Experience with API frameworks, both stateless and stateful, such as FastAPI and Django.
- Well versed in implementing WebSockets, gRPC, and access management using JWT (Azure AD, IDM preferred).
- Proficient in cloud platforms, specifically AWS, Azure, or GCP.
- Knowledge of and hands-on experience with front-end development (React JS, Next JS, Tailwind CSS) preferred.
- Strong experience in LLM patterns like RAG, vector DBs, hybrid search, agent development, agentic workflows, prompt engineering, etc. (see the minimal RAG sketch below).
- Strong experience with LLM APIs (OpenAI, Anthropic, AWS Bedrock) and SDKs (LangChain, DSPy).
- Hands-on experience with DevOps tools including Docker, Kubernetes, and AWS services (Redshift, RDS, S3).
- Experience with production deployments involving thousands of users.
- Strong understanding of scalable application design principles and experience with security best practices and compliance with privacy regulations.
- Good knowledge of software engineering practices like version control (Git), DevOps (Azure DevOps preferred), and Agile or Scrum.
- Strong communication skills, with the ability to effectively convey complex technical concepts to a diverse audience.
- Experience with the SDLC and development best practices.
- Experience with Agile methodology for continuous product development and delivery.
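A minimal sketch of the RAG pattern this posting centers on: embed a query, retrieve the closest documents from an in-memory store, and ground the LLM's answer in them. The OpenAI client calls are real APIs, but the toy corpus and model choices are assumptions for illustration.

```python
# pip install openai numpy -- requires OPENAI_API_KEY in the environment.
import numpy as np
from openai import OpenAI

client = OpenAI()
docs = [
    "Invoices are billed hourly and exported nightly to S3.",  # toy corpus
    "Refunds require manager approval within 30 days.",
]

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vecs = embed(docs)

def answer(question, k=1):
    q = embed([question])[0]
    # Cosine similarity against every stored document vector.
    sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
    context = "\n".join(docs[i] for i in np.argsort(sims)[-k:])
    chat = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": f"Answer using only this context:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return chat.choices[0].message.content

print(answer("How are refunds handled?"))
```

In production this in-memory store would be replaced by a vector database and hybrid search, but the retrieve-then-generate shape stays the same.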
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
About VOIS
VOIS (Vodafone Intelligent Solutions) is a strategic arm of Vodafone Group Plc, creating value and enhancing quality and efficiency across 28 countries, and operating from 7 locations: Albania, Egypt, Hungary, India, Romania, Spain, and the UK. Over 29,000 highly skilled individuals are dedicated to being Vodafone Group’s partner of choice for talent, technology, and transformation. We deliver the best services across IT, Business Intelligence Services, Customer Operations, Business Operations, HR, Finance, Supply Chain, HR Operations, and many more. Established in 2006, VOIS has evolved into a global, multi-functional organisation, a Centre of Excellence for Intelligent Solutions focused on adding value and delivering business outcomes for Vodafone.

About VOIS India
In 2009, VOIS started operating in India and now has established global delivery centres in Pune, Bangalore, and Ahmedabad. With more than 14,500 employees, VOIS India supports global markets and group functions of Vodafone, and delivers best-in-class customer experience through multi-functional services in the areas of Information Technology, Networks, Business Intelligence and Analytics, Digital Business Solutions (Robotics & AI), Commercial Operations (Consumer & Business), Intelligent Operations, Finance Operations, Supply Chain Operations, HR Operations, and more.

Core Competencies
- Excellent knowledge of EKS, Kubernetes, and related AWS components.
- Kubernetes networking.
- Kubernetes DevOps, including deployment of Kubernetes EKS clusters using IaC (Terraform) and CI/CD pipelines.
- EKS secret management, autoscaling, and lifecycle management.
- EKS security using AWS native services.
- Excellent understanding of AWS cloud services like VPC, EC2, ECS, S3, EBS, ELB, Elastic IPs, Security Groups, etc.
- AWS component deployment using Terraform.
- Application onboarding on Kubernetes using ArgoCD.
- AWS CodePipeline, CodeBuild, CodeCommit.
- HashiCorp stack, HashiCorp Packer.
- Bitbucket and Git.
- Profound cloud technology, network, security, and platform expertise (AWS, Google Cloud, or Azure).
- Good documentation and communication skills.
- Good understanding of ELK, CloudWatch, Datadog.

Roles & Responsibilities
- Manage project-driven integration and day-to-day administration of cloud solutions.
- Develop prototypes, and design and build modules and solutions for cloud platforms in iterative agile cycles; develop, maintain, and optimize the business outcome.
- Conduct peer reviews and maintain coding standards.
- Drive automation using CI/CD with Jenkins or ArgoCD.
- Drive cloud solution automation and integration activity for the cloud provider (AWS) and tenant (project) workloads.
- Build and deploy AWS cloud infrastructure using CloudFormation and Terraform scripts.
- Use Ansible and Python to perform routine tasks like user management and security hardening.
- Provide professional technical consultancy to migrate and transform existing on-premises applications to the public cloud, and support all cloud-related programmes and existing environments.
- Design and deploy Direct Connect networking between AWS and the datacentre.
- Train and develop AWS expertise within the organisation.
- Proven troubleshooting skills to resolve issues related to cloud network, storage, and performance management.

VOIS Equal Opportunity Employer Commitment
VOIS is proud to be an Equal Employment Opportunity Employer. We celebrate differences and we welcome and value diverse people and insights.
We believe that being authentically human and inclusive powers our employees’ growth and enables them to create a positive impact on themselves and society. We do not discriminate based on age, colour, gender (including pregnancy, childbirth, or related medical conditions), gender identity, gender expression, national origin, race, religion, sexual orientation, status as an individual with a disability, or other applicable legally protected characteristics.

As a result of living and breathing our commitment, our employees have helped us get certified as a Great Place to Work in India for four years running. We have also been highlighted among the Top 5 Best Workplaces for Diversity, Equity, and Inclusion, Top 10 Best Workplaces for Women, Top 25 Best Workplaces in IT & IT-BPM, and 14th Overall Best Workplaces in India by the Great Place to Work Institute in 2023. These achievements position us among a select group of trustworthy and high-performing companies which put their employees at the heart of everything they do. By joining us, you are part of our commitment. We look forward to welcoming you into our family, which represents a variety of cultures, backgrounds, perspectives, and skills!

Apply now, and we’ll be in touch!
Posted 2 weeks ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
As a Lead Software Engineer – Performance Engineering, you will drive the strategy, design, and execution of performance engineering initiatives across highly distributed systems. You will lead technical efforts to ensure the reliability, scalability, and responsiveness of business-critical applications. This role requires deep technical expertise, hands-on performance testing experience, and the ability to mentor engineers while collaborating cross-functionally with architecture, SRE, and development teams.

Responsibilities:
- Define, implement, and enforce SLAs, SLOs, and performance benchmarks for large-scale systems.
- Lead performance testing initiatives including load, stress, soak, chaos, and scalability testing.
- Design and build performance testing frameworks integrated into CI/CD pipelines (see the gate-script sketch below).
- Analyze application, infrastructure, and database metrics to identify bottlenecks and recommend optimizations.
- Collaborate with cross-functional teams to influence system architecture and improve end-to-end performance.
- Guide the implementation of observability strategies using monitoring and APM tools.
- Optimize cloud infrastructure (e.g., autoscaling, caching, network tuning) for cost-efficiency and speed.
- Tune databases and messaging systems (e.g., PostgreSQL, Kafka, Redis) for high throughput and low latency.
- Mentor engineers and foster a performance-first culture across teams.
- Lead incident response and postmortem processes related to performance issues.
- Drive continuous improvement initiatives using data-driven insights and operational feedback.

Required Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
- 8+ years of experience in software/performance engineering, with 2+ years in a technical leadership role.
- Expertise in performance testing tools such as JMeter, k6, Gatling, or Locust.
- Strong knowledge of distributed systems, cloud-native architecture, and microservices.
- Proficiency in scripting and automation using Python, Go, or Shell.
- Experience with observability and APM tools (e.g., Datadog, Prometheus, New Relic, AppDynamics).
- Deep understanding of SQL performance, caching strategies, and tuning for systems like PostgreSQL and Redis.
- Familiarity with CI/CD pipelines, container orchestration, and IaC tools (e.g., Kubernetes, Terraform).
- Strong communication skills and experience mentoring and leading technical teams.
- Ability to work cross-functionally and make informed decisions in high-scale, production environments.
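To illustrate the kind of CI/CD-integrated benchmark enforcement described above, a small sketch that fails a pipeline stage when the p95 latency in a results file breaches an SLO; the one-value-per-line file format and the 300 ms budget are assumptions, not from the posting.

```python
"""Fail the build if p95 latency exceeds the SLO (assumed 300 ms budget).

Expects one latency value in milliseconds per line, e.g. exported from a
load-test run; a nonzero exit code fails the CI stage.
"""
import math
import sys

SLO_P95_MS = 300.0  # assumed budget

def p95(values):
    # Nearest-rank percentile: smallest value covering 95% of samples.
    ordered = sorted(values)
    return ordered[max(0, math.ceil(0.95 * len(ordered)) - 1)]

def main(path):
    with open(path) as f:
        samples = [float(line) for line in f if line.strip()]
    observed = p95(samples)
    print(f"p95 = {observed:.1f} ms (SLO {SLO_P95_MS:.0f} ms)")
    sys.exit(0 if observed <= SLO_P95_MS else 1)

if __name__ == "__main__":
    main(sys.argv[1])  # usage: python slo_gate.py latencies.txt
```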
Posted 2 weeks ago
0 years
0 Lacs
India
On-site
Job description

Company Description
Evallo is a leading provider of a comprehensive SaaS platform for tutors and tutoring businesses, revolutionizing education management. With features like advanced CRM, profile management, standardized test prep, automatic grading, and insightful dashboards, we empower educators to focus on teaching. We're dedicated to pushing the boundaries of ed-tech and redefining efficient education management.

Why this role matters
Evallo is scaling from a focused tutoring platform to a modular operating system for all service businesses that bill by the hour. As we add payroll, proposals, whiteboarding, and AI tooling, we need a Solution Architect who can translate product vision into a robust, extensible technical blueprint. You’ll be the critical bridge between product, engineering, and customers, owning architecture decisions that keep us reliable at 5k+ concurrent users and cost-efficient at 100k+ total users.

Outcomes we expect
- Map the current backend and frontend, flag structural debt, and publish an Architecture Gap Report.
- Define naming and layering conventions, linter/formatter rules, and a lightweight ADR process.
- Ship reference architectures for new modules.
- Lead cross-team design reviews; no major feature ships without architecture sign-off.

The eventual goal is to have Evallo run in a fully observable, autoscaling environment with less than 10% infra cost waste. Monitoring dashboards should trigger fewer than 5 false positives per month.

Day-to-day
- Solution Design: Break down product epics into service contracts, data flows, and sequence diagrams. Choose the right patterns—monolith vs. microservice, event vs. REST, cache vs. DB index—based on cost, team maturity, and scale targets.
- Platform-wide Standards: Codify review checklists (security, performance, observability) and enforce them via GitHub templates and CI gates. Champion a shift-left testing mindset; critical paths reach 80% automated coverage before QA touches them.
- Scalability & Cost Optimization: Design load-testing scenarios that mimic 5k concurrent tutoring sessions; guide DevOps on autoscaling policies and CDN strategy. Audit infra spend monthly; recommend serverless, queuing, or data-tier changes to cut waste.
- Release & Environment Strategy: Maintain clear promotion paths: local → sandbox → staging → prod with one-click rollback. Own schema-migration playbooks; zero-downtime releases are the default, not the exception.
- Technical Mentorship: Run fortnightly architecture clinics; level up engineers on domain-driven design and performance profiling. Act as a tie-breaker on competing technical proposals, keeping debates respectful and evidence-based.

Qualifications
- 3+ years of engineering experience, with 2+ years in a dedicated architecture or staff-level role on a high-traffic SaaS product.
- Proven track record designing multi-tenant systems that scaled beyond 50k users or 1k RPM.
- Deep knowledge of Node.js / TypeScript (our core stack) and MongoDB or similar NoSQL, plus comfort with event brokers (Kafka, NATS, or RabbitMQ).
- Fluency in AWS (preferred) or GCP primitives—EKS, Lambda, RDS, CloudFront, IAM.
- Hands-on with observability stacks (Datadog, New Relic, Sentry, or OpenTelemetry).
- Excellent written communication; you can distill technical trade-offs into one page for execs and one diagram for engineers.
Posted 2 weeks ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About Jivi
Jivi is transforming primary healthcare with an AI-powered clinical agentic platform designed for 8 billion people. Our flagship product, a super health app, combines an AI doctor and a longevity coach. It provides a full-stack solution covering sickness, prevention, and wellness. In just six months, 500,000 users from 170+ countries have already used Jivi.

The company was founded by Ankur Jain (BharatPe, WalmartLabs, Stanford), GV Sanjay Reddy (Reddy Ventures, Aragen), and Andrew Ng's AI Fund (Coursera, DeepLearning). Together, they bring deep expertise in AI, medicine, and scaling billion-dollar ventures.

Jivi is powered by groundbreaking research in Large Language Models (LLMs). Our MedX model is ranked #1 globally, surpassing OpenAI and Google Gemini in diagnostic accuracy. Additionally, our AudioX model boasts the lowest word error rate for Indic languages. Jivi’s health knowledge base, one of the largest in the world, plays a key role in training these models. In the spirit of fostering innovation, we’ve open-sourced these models on Hugging Face for the AI community. Jivi has been recognized for its innovation with awards such as NASSCOM’s Digital Adoption Pioneer Award and the IndiaAI Mission Award. We are proud to be a global leader in AI healthcare.

Job Overview
We are looking for a skilled DevOps Engineer to join our growing engineering team. You will be responsible for supporting and managing cloud infrastructure, CI/CD pipelines, and Kubernetes-based workloads, primarily on AWS. This is a hands-on role that requires solid experience in DevOps best practices, cloud troubleshooting, and automation.

Key Responsibilities
- Manage and support AWS infrastructure services such as EC2, EKS, RDS, S3, IAM, etc.
- Handle day-to-day operations of Kubernetes (EKS), including pods, services, volumes, autoscaling, and cluster maintenance.
- Design, implement, and maintain CI/CD pipelines using tools like GitHub Actions and ArgoCD.
- Implement Infrastructure as Code (IaC) using Terraform for reproducible and auditable infrastructure deployments.
- Support and troubleshoot Linux servers, Docker containers, Git workflows, and shell scripting tasks.
- Monitor and analyze logs and metrics for performance and incident troubleshooting.

Technical Skills
- Strong hands-on experience with Linux, Docker, Git, and basic shell scripting.
- Familiarity with cloud troubleshooting in AWS environments.
- Understanding of CI/CD principles and experience with Git-based workflows.
- Basic knowledge of logs, metrics, and monitoring tools.

Good to Have
- AWS certifications (e.g., AWS Certified Solutions Architect, DevOps Engineer).
- Experience with AI/ML workloads, including GPU/NVIDIA-based training/inferencing environments.
- Exposure to DevSecOps tools such as Bandit, SonarQube, etc.
- Familiarity with compliance frameworks like ISO 27001 or HIPAA, or experience working in regulated environments.

Experience and Qualifications
- Work experience: minimum 3+ years of experience in DevOps engineering.
- Education: Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent work experience).

Why Work with Jivi?
- Make a Global Impact: Shape AI-driven healthcare solutions that transform billions of lives and revolutionize global wellness.
- Accelerate Your Career: Enjoy competitive salaries, growth opportunities, and the chance to take on leadership roles as Jivi scales.
- Lead in a High-Growth Environment: Own key initiatives, influence company strategy, and drive impactful health projects in a dynamic, fast-paced setting.
- Collaborate with the Best: Work alongside top professionals in AI and healthcare, learn from experts, and contribute to breakthrough innovations.

Jivi’s Products
Jivi is available as a mobile app or as an AI assistant on WhatsApp. You can access Jivi via the iOS app, the Android app, or WhatsApp.

Jivi in Media
- Economic Times - https://tinyurl.com/m3kep5at
- Reuters - https://tinyurl.com/mpcs6dpx
- Inc42 - https://tinyurl.com/emsdas55
- Many more - https://www.jivi.ai/news
Posted 2 weeks ago
8.0 - 12.0 years
0 Lacs
Indore, Madhya Pradesh
On-site
You should have expert-level proficiency in Python and Python frameworks, or Java. You must have hands-on experience with AWS development, PySpark, Lambdas, CloudWatch (alerts), SNS, SQS, CloudFormation, Docker, ECS, Fargate, and ECR. Deep experience with key AWS services is required, including compute (PySpark, Lambda, ECS), storage (S3), databases (DynamoDB, Snowflake), networking (VPC, Route 53, CloudFront, API Gateway), DevOps/CI-CD (CloudFormation, CDK), security (IAM, KMS, Secrets Manager), monitoring (CloudWatch, X-Ray, CloudTrail), and NoSQL databases like Cassandra and PostgreSQL.

You should have very strong hands-on knowledge of using Python for integrations between systems through different data formats (a small example follows below). Expertise in deploying and maintaining applications in AWS, along with hands-on experience with Kinesis streams and auto-scaling, is essential. Designing and implementing distributed systems and microservices, and following best practices for scalability, high availability, and fault tolerance, are key responsibilities. Strong problem-solving and debugging skills are necessary for this role, along with the ability to lead technical discussions and mentor junior engineers. Excellent written and verbal communication skills are a must. You should be comfortable working in agile teams with modern development practices, and collaborating with business and other teams to understand business requirements and work on project deliverables. Participation in requirements gathering and designing solutions based on the available framework and code is expected, as is experience with data engineering tools or ML platforms (e.g., Pandas, Airflow, SageMaker). An AWS certification such as AWS Certified Solutions Architect or Developer is preferred.

This position is based in multiple locations in India, including Indore, Mumbai, Noida, Bangalore, and Chennai.

Qualifications:
- Bachelor's degree or foreign equivalent from an accredited institution required. Consideration will be given to three years of progressive experience in the specialty in lieu of every year of education.
- At least 8+ years of Information Technology experience.
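As a nod to the "integrations between systems through different data formats" requirement, a minimal sketch converting a JSON export to CSV; the field names and payload are made up for illustration.

```python
import csv
import json

# Hypothetical upstream export: one order per element.
raw = '[{"orderId": 1, "amount": 250.0}, {"orderId": 2, "amount": 99.5}]'
orders = json.loads(raw)

# Flatten to CSV for a downstream system that only speaks tabular data.
with open("orders.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["orderId", "amount"])
    writer.writeheader()
    writer.writerows(orders)

print(open("orders.csv").read())
```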
Posted 2 weeks ago
0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Fynd is India’s largest omnichannel platform and a multi-platform tech company specializing in retail technology and products in AI, ML, big data, image editing, and the learning space. It provides a unified platform for businesses to seamlessly manage online and offline sales, store operations, inventory, and customer engagement. Serving over 2,300 brands, Fynd is at the forefront of retail technology, transforming customer experiences and business processes across various industries.

What will you do at Fynd?
- Build scalable services to extend our platform.
- Build bulletproof API integrations with third-party APIs for various use cases.
- Evolve our infrastructure and add a few more nines to our overall availability.
- Have full autonomy: own your code, and decide on the technologies and tools to deliver as well as operate large-scale applications on AWS.
- Give back to the open-source community through contributions to code and blog posts.
- This is a startup, so everything can change as we experiment with more product improvements.

Some Specific Requirements
- You know how to take ownership of things and get them done end to end.
- You have prior experience developing and working on consumer-facing web/app products.
- Hands-on experience in Python, with in-depth knowledge of asyncio and generators and their use in event-driven scenarios (see the asyncio sketch below).
- Thorough knowledge of async programming using callbacks, promises, and async/await.
- Someone from an Android development background would be preferred.
- Good working knowledge of MongoDB, Redis, PostgreSQL.
- Good understanding of data structures, algorithms, and operating systems.
- You've worked with AWS services in the past and have experience with EC2, ELB, Auto Scaling, CloudFront, S3.
- Experience with a frontend stack would be an added advantage (HTML, CSS).
- You might not have experience with all the tools that we use, but you can learn them given guidance and resources.

What do we offer?

Growth
Growth knows no bounds, as we foster an environment that encourages creativity, embraces challenges, and cultivates a culture of continuous expansion. We are looking at new product lines, international markets, and brilliant people to grow even further. We teach, groom, and nurture our people to become leaders. You get to grow with a company that is growing exponentially.

Flex University
We help you upskill by organising in-house courses on important subjects.
Learning Wallet: you can also do an external course to upskill and grow; we reimburse it for you.

Culture
Community and team-building activities.
We host weekly, quarterly, and annual events/parties.

Wellness
Mediclaim policy for you + parents + spouse + kids.
Experienced therapist for better mental health, improved productivity, and work-life balance.

We work from the office 5 days a week to promote collaboration and teamwork. Join us to make an impact in an engaging, in-person environment!
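Given the posting's emphasis on asyncio and event-driven Python, a minimal sketch of a producer/consumer pipeline built on asyncio.Queue; the simulated event source and timings are illustrative assumptions.

```python
import asyncio
import random

async def producer(queue: asyncio.Queue) -> None:
    """Emit ten events, mimicking messages arriving from an upstream system."""
    for i in range(10):
        await asyncio.sleep(random.uniform(0.01, 0.05))  # simulated arrival jitter
        await queue.put(f"event-{i}")
    await queue.put(None)  # sentinel: no more events

async def consumer(queue: asyncio.Queue) -> None:
    """Drain events until the sentinel arrives, processing each one."""
    while (event := await queue.get()) is not None:
        print("handled", event)

async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue(maxsize=4)  # backpressure on the producer
    await asyncio.gather(producer(queue), consumer(queue))

asyncio.run(main())
```

The bounded queue is the key design choice here: it applies backpressure so a fast producer cannot outrun a slow consumer, which is the usual failure mode in event-driven services.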
Posted 2 weeks ago
14.0 years
3 - 5 Lacs
Hyderabad
On-site
Country: India
Working Schedule: Full-Time
Work Arrangement: Hybrid
Relocation Assistance Available: No
Posted Date: 16-Jul-2025
Job ID: 10136

Description and Requirements

Job Responsibilities

Database Management and Administration:
- Lead the administration, monitoring, and maintenance of IBM DB2 (UDB/LUW) databases, ensuring high availability and optimal performance.
- Perform regular database backups and restores, and disaster recovery planning.
- Monitor and troubleshoot database performance issues, optimizing queries and database structure.
- Design and implement database solutions, upgrades, and patches.

Backup and Recovery Management:
- Implement and manage comprehensive backup strategies for IBM DB2 (UDB/LUW) databases, both on-premises and in the cloud, using backup tools (Rubrik).
- Conduct disaster recovery exercises and ensure business continuity in the event of data loss or system failures.

Performance Tuning and Optimization:
- Conduct database performance assessments, identifying and resolving bottlenecks.
- Optimize queries, indexes, and other database objects to improve system efficiency.
- Monitor resource usage (CPU, memory, disk) and implement strategies to ensure resource optimization.

Cloud Integration:
- Manage IBM DB2 (UDB/LUW) database instances deployed in cloud environments such as AWS, Azure, or IBM Cloud.
- Ensure proper database configuration, migration, and optimization within the cloud infrastructure.
- Implement cloud-specific features such as autoscaling, disaster recovery, and security measures for cloud databases.
- Collaborate with cloud architects to design scalable, secure, and cost-effective cloud database architectures.

Database Programming Skills:
- Very good experience in database programming: designing, coding, optimizing, and tuning SQL and PL/SQL queries.
- Strong ability to debug code and provide appropriate solutions to developers and application teams.

Automation Implementation:
- Lead the design, implementation, and maintenance of automated infrastructure solutions using Infrastructure as Code tools like Ansible, Elastic, and Terraform.
- Lead the development and management of Azure DevOps CI/CD pipelines to automate infrastructure deployments using Ansible and Elastic.
- Automate database health checks, monitoring, and alerting systems to proactively address potential issues.

Security and Compliance:
- Implement robust database security policies to safeguard data, including access control, encryption, and auditing.
- Ensure compliance with data privacy laws and regulations such as GDPR, HIPAA, or SOC 2.
- Conduct periodic security audits and vulnerability assessments on IBM DB2 (UDB/LUW) databases.

Collaboration:
- Work closely with cross-functional teams, including developers, system administrators, network engineers, and security specialists.

Education, Technical Skills & Other Critical Requirements

Education: Bachelor’s degree in Computer Science, Information Systems, or another related field, with 14+ years of IT and infrastructure engineering work experience. IBM DB2 (UDB/LUW) Certified DBA and cloud certification (Azure, AWS, or OCI) are preferable.

Experience (in years): 14+ years of total IT experience and 10+ years of relevant experience with UDB databases.

Technical Skills
- 10+ years of strong work experience with database design, installation, configuration, and implementation; knowledge of all key IBM DB2/LUW utilities such as HADR, reorg, runstats, and load on Linux/Unix/Windows.
- Expertise in Unix and Linux operating systems and shell scripting.
Expertise in database migration, Upgradation and Patching. Strong experience in cloud computing (Azure, AWS RDS, IBM Cloud PAK). Experience administering IBM Informix databases is a Big Plus. Experience with backups, restores and recovery models and implementation of Backup strategies mandatory including RTO and RPO with tools (Rubrik,Networker,BCV). Very good experience in managing of database elements, including creation, alteration, deletion and copying of schemas, databases, tables, views, indexes, stored procedures, triggers, and declarative integrity constraints Experience in IBM db2 LUW replication (Db2 SQL replication and Q Replication, a Queue -based Replication) as well as Using Third party tools for Replications. Experience with Performance Tuning and Optimization (PTO), using native monitoring and troubleshooting tools (Explain plan, Db2 reorg, Db2 run stats). Strong knowledge of Clustering, High Availability (HA/HADR) and Disaster Recovery (DR) options for DB2 LUW. Strong Knowledge of data security (User Access, Groups and Roles) and Data encryption (at rest and in transit) for DB2 LUW. Should have ability to work closely with IBM-PMR to resolve any ongoing production issues. Experience in Cloud environment especially in IBM Cloud, Azure Cloud is Big Plus Good knowledge on ITIL Methodologies like Change, Incident, Problem, Service Management using ServiceNow tools. Strong database analytical skills to improve application and database performance. Automation tools and programming skills such as Ansible, Perl, and Shell scripting. Strong knowledge Database monitoring with Observability tools (Elastic) Understanding of modern IT infrastructure such as Cloud Architecture as well as Agile DevOps Framework. Participates in a 24X7 pager rotation, providing Subject Matter Expert support to the on call DBA as needed Strong Experience managing geographically distributed and culturally diverse workgroups with strong team management, leadership and coaching skills Excellent written and oral communication skills, including the ability to clearly communicate/articulate technical and functional issues with conclusions and recommendations to stakeholders. Ability to work 24*7 rotational shift to support for production, development, and test databases Other Critical Requirements Project management experience is required to follow the Agile methodology to improve and deliver the project or operational excellence. DB2/LUW database administration/advanced database administration for experts certification is preferred. Demonstrate ability to work independently and in a team environment Ability to work 24*7 rotational shift to support for production, development, and test databases About MetLife Recognized on Fortune magazine's list of the 2025 "World's Most Admired Companies" and Fortune World’s 25 Best Workplaces™ for 2024, MetLife , through its subsidiaries and affiliates, is one of the world’s leading financial services companies; providing insurance, annuities, employee benefits and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East. Our purpose is simple - to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we’re inspired to transform the next century in financial services. At MetLife, it’s #AllTogetherPossible. Join us!
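To make the health-check automation described above concrete, here is a minimal sketch, assuming the DB2 command-line processor (db2) is installed and configured on the host; the database, schema, and table names are placeholders:

```python
"""Minimal sketch of a DB2 LUW health-check job: refresh statistics and
surface REORG candidates. Assumes the `db2` CLP is installed and that
the database, schema, and table names below are placeholders."""
import subprocess
import tempfile

DATABASE = "SAMPLE"                 # placeholder database name
SCHEMA = "APP"                      # placeholder schema
TABLES = ["ORDERS", "CUSTOMERS"]    # illustrative table list

statements = [f"CONNECT TO {DATABASE};"]
for table in TABLES:
    # Refresh optimizer statistics so access plans stay accurate.
    statements.append(
        f"RUNSTATS ON TABLE {SCHEMA}.{table} WITH DISTRIBUTION AND INDEXES ALL;"
    )
    # REORGCHK flags fragmented tables/indexes with asterisks in its report.
    statements.append(f"REORGCHK UPDATE STATISTICS ON TABLE {SCHEMA}.{table};")
statements.append("CONNECT RESET;")

# Run the whole batch through one CLP invocation (-t: ';' terminates
# statements, -v: echo each statement, -f: read from file).
with tempfile.NamedTemporaryFile("w", suffix=".sql", delete=False) as script:
    script.write("\n".join(statements))
    path = script.name

result = subprocess.run(["db2", "-tvf", path], capture_output=True, text=True)
print(result.stdout)
if "*" in result.stdout:
    print("At least one REORG candidate flagged; review the REORGCHK output.")
```

A job like this would typically run from cron or a pipeline agent, with the output shipped to the alerting system mentioned in the posting.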
Posted 2 weeks ago
2.5 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Role Overview
We are seeking a highly skilled Full Stack Developer with at least 2.5 years of experience to join our product engineering team. In this role, you will be responsible for building and maintaining a microservices-based multi-tenant framework using the MERN stack, while also enabling seamless integrations with Python-based applications that leverage Retrieval-Augmented Generation (RAG), Large Language Models (LLMs), and the Azure AI Library.

Key Responsibilities
Design, develop, and maintain microservices using Node.js/Express.js in a multi-tenant architecture.
Build end-to-end solutions using the MERN stack (MongoDB, Express.js, React.js, Node.js).
Integrate front-end components with backend logic for seamless performance and UX.
Develop and integrate with Python-based services and APIs (Flask/FastAPI) that implement RAG workflows and LLMs (see the sketch after this posting).
Leverage the Azure AI Library for embedding AI-powered features into applications.
Implement and maintain RESTful APIs for internal services and external third-party integrations.
Optimize backend performance with efficient code, data structures, and caching mechanisms.
Use Azure (or AWS) cloud services for deployment, monitoring, autoscaling, and system health.
Work with both relational (MySQL/PostgreSQL) and NoSQL (MongoDB) databases to manage distributed data.
Follow DevOps practices, CI/CD pipelines, and containerization (Docker; Kubernetes is a plus).
Ensure data security, system availability, and compliance in a cloud-native environment.
Conduct code reviews, debug issues, and optimize applications for performance and scalability.
Collaborate closely with product managers, designers, and cross-functional engineering teams.

Required Qualifications
Bachelor's degree in Computer Science, Information Technology, or a related field.
Minimum of 2.5 years of experience as a Full Stack or Backend Developer.
Hands-on expertise in the MERN stack (MongoDB or MySQL, Express.js, React.js, Node.js).
Experience with microservices architecture and REST API development.
Practical exposure to Python web frameworks such as Flask or FastAPI.
Experience integrating or developing AI capabilities using RAG, LLMs, or Azure AI Services.
Familiarity with cloud platforms (Azure preferred; AWS/GCP acceptable).
Working knowledge of containerization tools (Docker; Kubernetes is a plus).
Proficiency in database design, performance tuning, and managing distributed data systems.
Strong understanding of multi-tenant architecture, application security, and scalability.
Version control experience (preferably Git).
Excellent communication, collaboration, and analytical skills. (ref:hirist.tech)
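To give the RAG integration above some shape, here is a minimal FastAPI sketch; the retrieve and generate helpers are hypothetical placeholders for a tenant-scoped vector search and an LLM call (for example, an Azure AI deployment), not a specific product API:

```python
"""Minimal FastAPI sketch of a RAG endpoint. The retrieval and generation
steps are hypothetical placeholders; a real service would call a vector
store and an LLM (e.g., an Azure AI deployment) here."""
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Query(BaseModel):
    tenant_id: str   # multi-tenant: scope every lookup to one tenant
    question: str

def retrieve(tenant_id: str, question: str) -> list[str]:
    # Placeholder: query a tenant-scoped vector index for relevant chunks.
    return [f"[doc snippet relevant to: {question!r}]"]

def generate(question: str, context: list[str]) -> str:
    # Placeholder: send the question plus retrieved context to an LLM.
    return f"Answer to {question!r} grounded in {len(context)} snippet(s)."

@app.post("/ask")
def ask(query: Query) -> dict:
    context = retrieve(query.tenant_id, query.question)
    answer = generate(query.question, context)
    # Returning the context makes the answer auditable by the caller.
    return {"answer": answer, "sources": context}
```

Keying retrieval on tenant_id is what keeps one tenant's documents out of another tenant's answers, which is the core multi-tenancy concern in this role.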
Posted 2 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Position Overview
Job Title: Cloud Engineer
Location: Bangalore, India
Corporate Title: AVP

Role Description
A Google Cloud Platform (GCP) Engineer is responsible for designing, implementing, and managing cloud infrastructure and services on Google Cloud. A detailed role description follows below.

What We'll Offer You
As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
Best-in-class leave policy
Gender-neutral parental leave
100% reimbursement under the childcare assistance benefit (gender neutral)
Sponsorship for industry-relevant certifications and education
Employee Assistance Program for you and your family members
Comprehensive hospitalization insurance for you and your dependents
Accident and term life insurance
Complimentary health screening for those 35 years and above

Your Key Responsibilities
Cloud Infrastructure Management – Design, deploy, and manage scalable, secure, and cost-effective cloud environments on GCP.
Automation & Scripting – Develop Infrastructure as Code (IaC) using Terraform, Deployment Manager, or other tools.
Security & Compliance – Implement security best practices and IAM policies, and ensure compliance with organizational and regulatory standards.
Networking & Connectivity – Configure and manage VPCs, subnets, firewalls, VPNs, and interconnects for secure cloud networking.
CI/CD & DevOps – Set up CI/CD pipelines using Cloud Build, Jenkins, GitHub Actions, or similar tools for automated deployments.
Monitoring & Logging – Implement monitoring and alerting using Stackdriver (Cloud Operations), Prometheus, or third-party tools.
Cost Optimization – Analyze and optimize cloud spending by leveraging committed-use discounts, autoscaling, and right-sizing resources (see the inventory sketch after this posting).
Disaster Recovery & Backup – Design backup, high-availability, and disaster recovery strategies using Cloud Storage, snapshots, and multi-region deployments.
Database Management – Deploy and manage GCP databases such as Cloud SQL, BigQuery, Firestore, and Spanner.
Containerization & Kubernetes – Deploy and manage containerized applications using GKE (Google Kubernetes Engine) and Cloud Run.

Team / Division Overview
The Platform Engineering Team is responsible for building and maintaining the foundational infrastructure, tooling, and automation that enable efficient, secure, and scalable software development and deployment. The team focuses on creating a self-service platform for developers and operational teams, ensuring reliability, security, and compliance while improving developer productivity.
Design and manage scalable, secure, and cost-effective cloud infrastructure (GCP, AWS, Azure).
Implement Infrastructure as Code (IaC) using Terraform.
Implement security best practices for IAM, networking, encryption, and secrets management.
Ensure regulatory compliance (SOC 2, ISO 27001, PCI-DSS) by automating security checks.
Manage API gateways, service meshes, and secure service-to-service communication.
Enable efficient workload orchestration using Kubernetes and serverless technologies.

Your Skills and Experience
Strong experience with GCP services such as Compute Engine, Cloud Storage, IAM, networking, Kubernetes, and serverless technologies.
Proficiency in scripting (Python, Bash) and Infrastructure as Code (Terraform, CloudFormation).
Knowledge of DevOps practices, CI/CD tools, and GitOps workflows.
Understanding of security, IAM, networking, and compliance in cloud environments.
Experience with monitoring tools such as Stackdriver, Prometheus, or Datadog.
Strong problem-solving skills and the ability to troubleshoot cloud-based infrastructure.
Google Cloud certifications (e.g., Associate Cloud Engineer, Professional Cloud Architect, or Professional DevOps Engineer) are a plus.

How We'll Support You
Training and development to help you excel in your career
Coaching and support from experts in your team
A culture of continuous learning to aid progression
A range of flexible benefits that you can tailor to suit your needs

About Us and Our Teams
Please visit our company website for further information: https://www.db.com/company/company.htm
We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group.
We welcome applications from all people and promote a positive, fair and inclusive work environment.
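As one concrete slice of the cost-optimization responsibility above, a right-sizing audit usually starts with an inventory of machine types. Here is a minimal sketch using the google-cloud-compute client, assuming Application Default Credentials are configured; the project ID is a placeholder:

```python
"""Minimal right-sizing inventory sketch: list every Compute Engine
instance in a project with its zone and machine type, as a starting
point for spotting over-provisioned VMs."""
from collections import Counter

from google.cloud import compute_v1

PROJECT_ID = "my-gcp-project"  # placeholder

client = compute_v1.InstancesClient()
request = compute_v1.AggregatedListInstancesRequest(project=PROJECT_ID)

machine_types = Counter()
for zone, scoped in client.aggregated_list(request=request):
    for instance in scoped.instances:  # empty for zones with no instances
        # machine_type is a full URL; keep only the trailing type name.
        mtype = instance.machine_type.rsplit("/", 1)[-1]
        machine_types[mtype] += 1
        print(f"{zone}\t{instance.name}\t{mtype}\t{instance.status}")

# A skewed distribution (e.g., many large, mostly idle instances) is a
# cue to check utilization metrics before resizing or rescheduling.
for mtype, count in machine_types.most_common():
    print(f"{count:4d} x {mtype}")
```

The inventory alone does not decide anything; it tells you which machine types to cross-check against Cloud Monitoring utilization data before resizing.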
Posted 2 weeks ago
0 years
0 Lacs
India
On-site
Job description
Company Description
Evallo is a leading provider of a comprehensive SaaS platform for tutors and tutoring businesses, revolutionizing education management. With features like advanced CRM, profile management, standardized test prep, automatic grading, and insightful dashboards, we empower educators to focus on teaching. We're dedicated to pushing the boundaries of ed-tech and redefining efficient education management.

Why this role matters
Evallo is scaling from a focused tutoring platform to a modular operating system for all service businesses that bill by the hour. As we add payroll, proposals, whiteboarding, and AI tooling, we need a Solution Architect who can translate product vision into a robust, extensible technical blueprint. You'll be the critical bridge between product, engineering, and customers, owning architecture decisions that keep us reliable at 5k+ concurrent users and cost-efficient at 100k+ total users.

Outcomes we expect
Map the current backend and frontend, flag structural debt, and publish an Architecture Gap Report.
Define naming and layering conventions, linter/formatter rules, and a lightweight ADR process.
Ship reference architectures for new modules.
Lead cross-team design reviews; no major feature ships without architecture sign-off.
The eventual goal is to have Evallo run in a fully observable, autoscaling environment with < 10% infra cost waste. Monitoring dashboards should trigger < 5 false positives per month.

Day-to-day
Solution Design: Break down product epics into service contracts, data flows, and sequence diagrams. Choose the right patterns (monolith vs. microservice, event vs. REST, cache vs. DB index) based on cost, team maturity, and scale targets.
Platform-wide Standards: Codify review checklists (security, performance, observability) and enforce them via GitHub templates and CI gates. Champion a shift-left testing mindset; critical paths reach 80% automated coverage before QA touches them.
Scalability & Cost Optimization: Design load-testing scenarios that mimic 5k concurrent tutoring sessions (see the sketch after this posting); guide DevOps on autoscaling policies and CDN strategy. Audit infra spend monthly; recommend serverless, queuing, or data-tier changes to cut waste.
Release & Environment Strategy: Maintain clear promotion paths: local → sandbox → staging → prod with one-click rollback. Own schema-migration playbooks; zero-downtime releases are the default, not the exception.
Technical Mentorship: Run fortnightly architecture clinics; level up engineers on domain-driven design and performance profiling. Act as tie-breaker on competing technical proposals, keeping debates respectful and evidence-based.

Qualifications
5+ years of engineering experience, including 2+ years in a dedicated architecture or staff-level role on a high-traffic SaaS product.
Proven track record designing multi-tenant systems that scaled beyond 50k users or 1k RPM.
Deep knowledge of Node.js/TypeScript (our core stack) and MongoDB or similar NoSQL, plus comfort with event brokers (Kafka, NATS, or RabbitMQ).
Fluency in AWS (preferred) or GCP primitives: EKS, Lambda, RDS, CloudFront, IAM.
Hands-on with observability stacks (Datadog, New Relic, Sentry, or OpenTelemetry).
Excellent written communication; you can distill technical trade-offs in one page for execs and in one diagram for engineers.
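For the load-testing scenario referenced above, here is a minimal Locust sketch; the endpoints and payloads are hypothetical, not Evallo's actual API:

```python
"""Minimal Locust sketch of a load scenario of the kind described above:
simulated tutoring sessions polling a whiteboard and posting events.
Endpoints and payloads are hypothetical placeholders."""
from locust import HttpUser, task, between

class TutoringSession(HttpUser):
    # Think time between actions keeps 5k users from being 5k tight loops.
    wait_time = between(1, 3)

    def on_start(self):
        # Hypothetical login establishing a session token/cookie.
        self.client.post("/api/login", json={"user": "demo", "pass": "demo"})

    @task(3)  # weight: polling happens ~3x as often as posting
    def poll_whiteboard(self):
        self.client.get("/api/session/123/whiteboard")

    @task(1)
    def post_event(self):
        self.client.post("/api/session/123/events", json={"type": "stroke"})

# Run headless at roughly 5k concurrent users, ramping 100 users/second:
#   locust -f loadtest.py --headless -u 5000 -r 100 --host https://staging.example.com
```

The task weights are where the scenario design lives: they encode the read/write mix a real tutoring session produces, which is what autoscaling policies actually respond to.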
Posted 2 weeks ago
8.0 - 12.0 years
0 Lacs
indore, madhya pradesh
On-site
You should possess expert-level proficiency in Python and Python frameworks or Java. Additionally, you must have hands-on experience with AWS development, PySpark, Lambdas, CloudWatch (alerts), SNS, SQS, CloudFormation, Docker, ECS, Fargate, and ECR.

Your experience should cover key AWS services in depth:
Compute (PySpark, Lambda, ECS)
Storage (S3)
Databases (DynamoDB, Snowflake)
Networking (VPC, Route 53, CloudFront, API Gateway)
DevOps/CI-CD (CloudFormation, CDK)
Security (IAM, KMS, Secrets Manager)
Monitoring (CloudWatch, X-Ray, CloudTrail)

You should also be proficient with NoSQL and relational databases such as Cassandra and PostgreSQL, and have strong hands-on knowledge of using Python for integrations between systems through different data formats. Your expertise should extend to deploying and maintaining applications in AWS, with hands-on experience in Kinesis streams and auto-scaling (a short producer sketch follows this posting). Designing and implementing distributed systems and microservices, along with best practices for scalability, high availability, and fault tolerance, are also key aspects of this role.

You should have strong problem-solving and debugging skills, with the ability to lead technical discussions and mentor junior engineers. Excellent communication skills, both written and verbal, are essential. You should be comfortable working in agile teams with modern development practices, collaborating with business and other teams to understand business requirements, and working on project deliverables. Participation in requirements gathering, designing solutions based on available frameworks and code, and experience with data engineering tools or ML platforms (e.g., Pandas, Airflow, SageMaker) are expected. An AWS certification (AWS Certified Solutions Architect or Developer) would be advantageous.

This position is based in multiple locations in India, including Indore, Mumbai, Noida, Bangalore, and Chennai. To qualify, you should hold a Bachelor's degree or a foreign equivalent from an accredited institution. Alternatively, three years of progressive experience in the specialty can be considered in lieu of each year of education. A minimum of 8 years of Information Technology experience is required for this role.
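As an illustration of the Kinesis experience called for above, here is a minimal boto3 producer sketch; the stream name and payload are placeholders, and credentials/region are assumed to come from the standard AWS environment:

```python
"""Minimal boto3 sketch of a Kinesis producer. The stream name and
payload are placeholders."""
import json

import boto3

kinesis = boto3.client("kinesis")

def publish_event(stream: str, device_id: str, payload: dict) -> str:
    """Put one record; the partition key controls shard distribution,
    so use a stable, well-spread key (here, the device ID)."""
    response = kinesis.put_record(
        StreamName=stream,
        Data=json.dumps(payload).encode("utf-8"),
        PartitionKey=device_id,
    )
    return response["SequenceNumber"]

if __name__ == "__main__":
    seq = publish_event(
        "example-events",                    # placeholder stream name
        "device-42",
        {"temperature": 21.5, "unit": "C"},
    )
    print(f"Wrote record with sequence number {seq}")
```

The partition-key choice is the part that matters operationally: a hot key funnels traffic to one shard and defeats the scaling that Kinesis shards provide.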
Posted 2 weeks ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
About VOIS
VOIS (Vodafone Intelligent Solutions) is a strategic arm of Vodafone Group Plc, creating value and enhancing quality and efficiency across 28 countries, and operating from 7 locations: Albania, Egypt, Hungary, India, Romania, Spain and the UK. Over 29,000 highly skilled individuals are dedicated to being Vodafone Group's partner of choice for talent, technology, and transformation. We deliver the best services across IT, Business Intelligence Services, Customer Operations, Business Operations, HR, Finance, Supply Chain, HR Operations, and many more. Established in 2006, VOIS has evolved into a global, multi-functional organisation, a Centre of Excellence for Intelligent Solutions focused on adding value and delivering business outcomes for Vodafone.

About VOIS India
In 2009, VOIS started operating in India and now has established global delivery centres in Pune, Bangalore and Ahmedabad. With more than 14,500 employees, VOIS India supports global markets and group functions of Vodafone, and delivers best-in-class customer experience through multi-functional services in the areas of Information Technology, Networks, Business Intelligence and Analytics, Digital Business Solutions (Robotics & AI), Commercial Operations (Consumer & Business), Intelligent Operations, Finance Operations, Supply Chain Operations, HR Operations and more.

Core Competencies
Excellent knowledge of EKS, Kubernetes and related AWS components.
Kubernetes networking.
Kubernetes DevOps, including deployment of EKS clusters using IaC (Terraform) and CI/CD pipelines.
EKS secret management, autoscaling and lifecycle management (see the sketch after this posting).
EKS security using AWS-native services.
Excellent understanding of AWS cloud services such as VPC, EC2, ECS, S3, EBS, ELB, Elastic IPs and Security Groups.
AWS component deployment using Terraform.
Application onboarding on Kubernetes using Argo CD.
AWS CodePipeline, CodeBuild and CodeCommit.
HashiCorp stack, including HashiCorp Packer.
Bitbucket and Git.
Profound cloud technology, network, security and platform expertise (AWS, Google Cloud or Azure).
Good documentation and communication skills.
Good understanding of ELK, CloudWatch and Datadog.

Roles & Responsibilities
Manage project-driven integration and day-to-day administration of cloud solutions.
Develop prototypes, design and build modules and solutions for cloud platforms in iterative agile cycles; develop, maintain, and optimize the business outcome.
Conduct peer reviews and maintain coding standards.
Drive automation of CI/CD using Jenkins or Argo CD.
Drive cloud solution automation and integration activity for the cloud provider (AWS) and tenant (project) workloads.
Build and deploy AWS cloud infrastructure using CloudFormation and Terraform scripts.
Use Ansible and Python to perform routine tasks such as user management and security hardening.
Provide professional technical consultancy to migrate and transform existing on-premises applications to the public cloud, and support all cloud-related programmes and existing environments.
Design and deploy a Direct Connect network between AWS and the datacentre.
Train and develop AWS expertise within the organisation.
Proven troubleshooting skills to resolve issues related to cloud network, storage and performance management.

VOIS Equal Opportunity Employer Commitment
VOIS is proud to be an Equal Employment Opportunity Employer. We celebrate differences and we welcome and value diverse people and insights.
We believe that being authentically human and inclusive powers our employees' growth and enables them to create a positive impact on themselves and society. We do not discriminate based on age, colour, gender (including pregnancy, childbirth, or related medical conditions), gender identity, gender expression, national origin, race, religion, sexual orientation, status as an individual with a disability, or other applicable legally protected characteristics.
As a result of living and breathing our commitment, our employees have helped us get certified as a Great Place to Work in India for four years running. We have also been highlighted among the Top 5 Best Workplaces for Diversity, Equity, and Inclusion, Top 10 Best Workplaces for Women, Top 25 Best Workplaces in IT & IT-BPM and 14th Overall Best Workplaces in India by the Great Place to Work Institute in 2023. These achievements position us among a select group of trustworthy and high-performing companies which put their employees at the heart of everything they do.
By joining us, you are part of our commitment. We look forward to welcoming you into our family, which represents a variety of cultures, backgrounds, perspectives, and skills!
Apply now, and we'll be in touch!
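Tying back to the EKS lifecycle-management competency listed above, here is a minimal health-check sketch using the official Python kubernetes client, assuming a kubeconfig already points at the target cluster:

```python
"""Minimal sketch of a Kubernetes health check: flag deployments whose
ready replica count lags the spec. Assumes a kubeconfig pointing at the
target (e.g., EKS) cluster."""
from kubernetes import client, config

config.load_kube_config()  # inside a pod, use config.load_incluster_config()
apps = client.AppsV1Api()

for dep in apps.list_deployment_for_all_namespaces().items:
    desired = dep.spec.replicas or 0
    ready = dep.status.ready_replicas or 0  # None until the first pod is ready
    if ready < desired:
        print(f"{dep.metadata.namespace}/{dep.metadata.name}: "
              f"{ready}/{desired} replicas ready")
```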
Posted 2 weeks ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Role: Python+DevOps
Experience: 5+ years
Location: Bangalore
Budget: 2 LPM

Job Description: You'll architect and scale document processing pipelines that handle thousands of financial documents daily, ensuring high availability and cost efficiency.

What You'll Do
⦁ Build scalable async processing pipelines for document classification, extraction, and validation (see the sketch after this posting)
⦁ Optimize cloud infrastructure costs while maintaining 99.9% uptime for document processing workflows
⦁ Design and implement APIs for document upload, processing status, and results retrieval
⦁ Manage Kubernetes deployments with autoscaling based on document processing load
⦁ Implement monitoring and observability for complex multi-stage document workflows
⦁ Optimize database performance for high-volume document metadata and processing results
⦁ Build CI/CD pipelines for safe deployment of processing algorithms and business rules

Technical Requirements
Must Have:
⦁ 5+ years of backend development (Python or Go)
⦁ Strong experience with async processing (Celery, Temporal, or similar)
⦁ Docker containerization and orchestration
⦁ Cloud platforms (AWS/GCP/Azure) with cost-optimization experience
⦁ API design and development (REST/GraphQL)
⦁ Database optimization (MongoDB, PostgreSQL)
⦁ Production monitoring and debugging

Nice to Have:
⦁ Kubernetes experience
⦁ Experience with document processing or ML pipelines
⦁ Infrastructure as Code (Terraform/CloudFormation)
⦁ Message queues (SQS, RabbitMQ, Kafka)
⦁ Performance optimization for high-throughput systems

Interested candidates can apply through - https://thexakal.com/share-job?jobId=686e09563a69611b52ad693f
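For the async pipeline requirement above, here is a minimal Celery sketch; the broker URL, task bodies, and document ID are placeholders:

```python
"""Minimal Celery sketch of an async document pipeline:
classify -> extract -> validate, wired as a chain."""
from celery import Celery, chain

app = Celery("docs", broker="redis://localhost:6379/0")  # placeholder broker

@app.task
def classify(doc_id: str) -> dict:
    # Placeholder: run a classifier; return the label with the doc ID.
    return {"doc_id": doc_id, "kind": "invoice"}

@app.task
def extract(meta: dict) -> dict:
    # Placeholder: pull the fields appropriate to the document kind.
    meta["fields"] = {"total": "123.45"}
    return meta

@app.task
def validate(meta: dict) -> dict:
    # Placeholder: apply business rules and mark the result.
    meta["valid"] = "total" in meta["fields"]
    return meta

def process(doc_id: str):
    # chain() pipes each task's return value into the next task, so the
    # three stages run asynchronously on separate worker processes.
    return chain(classify.s(doc_id), extract.s(), validate.s()).apply_async()

# Workers run separately, e.g. (if this module is pipeline.py):
#   celery -A pipeline worker --concurrency=8
```

Because each stage is an independent task, worker pools can be scaled per stage, which is exactly what load-based Kubernetes autoscaling in the posting would act on.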
Posted 2 weeks ago
3.0 - 7.0 years
0 - 0 Lacs
ahmedabad, gujarat
On-site
As a Cloud Administrator (AWS), your primary responsibility will be to install, support, and maintain cloud/on-premise server infrastructure while ensuring optimal performance and availability of services. You will need a solid working knowledge of Kubernetes to manage Kubernetes clusters of Linux hosts on AWS. Your role will also involve participating in calls, performing quality audits, building a knowledge base, engaging with clients, and providing training to the team. A combination of technical expertise and interpersonal skills is essential to excel in this position.

Your duties and responsibilities will include answering technical queries through various channels, logging all issues and resolutions, performing Linux server administration and configuration, maintaining system security, installing, configuring, and fine-tuning cloud infrastructure, monitoring performance, troubleshooting incidents and outages, and ensuring system security through access controls and backups. You will also be responsible for upgrading systems, monitoring backups, training staff on new technologies, maintaining technical documentation, providing 24/7 technical support, and contributing to IT team meetings.

To be successful in this role, you should have at least 2+ years of international experience in configuring, managing, and automating cloud environments (AWS/Azure), along with an additional 3+ years of Linux experience. You should be familiar with Elastic Load Balancers, auto-scaling, Virtual Private Cloud, routing, cloud databases, IAM, ACM, and SSM (a short auto-scaling sketch follows this posting). Strong knowledge of networking principles, virtualization administration, scripting, multi-tier system configurations, disaster recovery, and data integrity is crucial. Additionally, you must possess excellent analytical, problem-solving, communication, organizational, and time-management skills.

The ideal candidate will hold a Bachelor's degree in Computer Science, Information Technology, or a related field, along with relevant certifications such as AWS Cloud Practitioner, AWS Solutions Architect Associate, Red Hat Certified System Administrator/Engineer, and ITIL knowledge. A willingness to learn new technologies, follow established procedures, and take ownership of tasks is highly valued. With 3-5 years of experience, you can expect a salary ranging from 40,000 to 60,000 per month. If you meet the qualifications and possess the required skills, we encourage you to apply for this challenging and rewarding position in cloud administration.
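As a concrete example of the auto-scaling familiarity mentioned above, here is a minimal boto3 sketch that attaches a target-tracking policy to an Auto Scaling group; the group name is a placeholder, and credentials/region are assumed to come from the standard AWS environment:

```python
"""Minimal boto3 sketch: attach a target-tracking scaling policy that
holds average CPU near 50%. The Auto Scaling group name is a placeholder."""
import boto3

autoscaling = boto3.client("autoscaling")

response = autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",          # placeholder ASG name
    PolicyName="cpu-target-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        # Scale out when average CPU exceeds the target, in when below.
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
print("Policy ARN:", response["PolicyARN"])
```

Target tracking delegates the alarm management to AWS, which is usually simpler to operate than hand-tuned step-scaling rules.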
Posted 3 weeks ago
4.0 - 6.0 years
0 Lacs
Gurgaon, Haryana, India
Remote
Experience Required: 4-6 years
Location: Gurgaon
Department: Product and Engineering
Working Days: Alternate Saturdays working (1st and 3rd)

🔧 Key Responsibilities
Design, implement, and maintain highly available and scalable infrastructure using AWS cloud services.
Build and manage Kubernetes clusters (EKS, self-managed) to ensure reliable deployment and scaling of microservices.
Develop Infrastructure as Code using Terraform, ensuring modular, reusable, and secure provisioning.
Containerize applications and optimize Docker images for performance and security.
Ensure CI/CD pipelines (Jenkins, GitHub Actions, etc.) are optimized for fast and secure deployments.
Drive SRE principles including monitoring, alerting, SLIs/SLOs (see the error-budget sketch after this posting), and incident response.
Set up and manage observability tools (Prometheus, Grafana, ELK, Datadog, etc.).
Automate routine tasks with scripting languages (Python, Bash, etc.).
Lead capacity planning, auto-scaling, and cost-optimization efforts across cloud infrastructure.
Collaborate closely with development teams to enable DevSecOps best practices.
Participate in on-call rotations, handle outages calmly, and conduct postmortems.

🧰 Must-Have Technical Skills
Kubernetes (EKS, Helm, Operators)
Docker & Docker Compose
Terraform (modular, state management, remote backends)
AWS (EC2, VPC, S3, RDS, IAM, CloudWatch, ECS/EKS)
Linux system administration
CI/CD pipelines (Jenkins, GitLab CI, GitHub Actions)
Logging & monitoring tools: ELK, Prometheus, Grafana, CloudWatch
Site Reliability Engineering practices
Load balancing, autoscaling, and HA architectures

💡 Good-To-Have
GCP or Azure exposure
Service mesh (Istio, Linkerd)
Secrets management (Vault, AWS Secrets Manager)
Security hardening of containers and infrastructure
Chaos engineering exposure
Knowledge of networking (DNS, firewalls, VPNs)

👤 Soft Skills
Strong problem-solving attitude; calm under pressure
Good documentation and communication skills
Ownership mindset with a drive to automate everything
Collaborative and proactive with cross-functional teams
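For the SLI/SLO work above, error budgets are the usual currency. Here is a minimal sketch with illustrative numbers; the SLO target and request counts are assumptions, not data from any real system:

```python
"""Minimal SLO error-budget sketch. The SLO target, window, and
request counts below are illustrative."""

SLO_TARGET = 0.999    # 99.9% of requests succeed (illustrative)
WINDOW_DAYS = 30

def error_budget_report(total_requests: int, failed_requests: int) -> str:
    """Compare observed failures against the budget the SLO allows."""
    allowed_failures = total_requests * (1 - SLO_TARGET)
    consumed = failed_requests / allowed_failures if allowed_failures else 0.0
    remaining = max(0.0, 1.0 - consumed)
    return (
        f"{WINDOW_DAYS}d window: {failed_requests}/{total_requests} failed; "
        f"budget allows {allowed_failures:.0f} failures; "
        f"{consumed:.0%} consumed, {remaining:.0%} remaining"
    )

# Illustrative numbers: 50M requests, 30k failures against a 99.9% SLO.
print(error_budget_report(50_000_000, 30_000))
# Consumption above 100% means the SLO is breached; alerting on the burn
# *rate* (budget consumed per hour) catches that before it happens.
```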
Posted 3 weeks ago