5.0 - 10.0 years
7 - 12 Lacs
Pune
Hybrid
BMC is looking for a Senior QA Engineer to join a QE team working on complex, distributed software: developing test plans, executing tests, building automation, and assuring product quality. Here is how, through this exciting role, YOU will contribute to BMC's and your own success:
1. Define and execute comprehensive test strategies for service management platforms and observability pipelines.
2. Develop, maintain, and optimize automated tests covering incident, problem, and change management workflows, and observability data (metrics, logs, traces, events).
3. Collaborate with product, engineering, and SRE teams to embed quality throughout service delivery and monitoring processes.
4. Validate the accuracy, completeness, and reliability of telemetry data and alerts used in observability.
5. Drive continuous integration of quality checks into CI/CD pipelines for rapid feedback and deployment confidence.
6. Investigate production incidents using observability tools and testing outputs to support root cause analysis.
7. Mentor and guide junior engineers on quality best practices for the service management and observability domains.
8. Generate detailed quality metrics and reports to inform leadership and drive continuous improvement.
To ensure you're set up for success, you will bring the following skillset & experience:
1. 5+ years of experience in quality engineering or software testing with a focus on service management and observability.
2. Strong programming and scripting skills (Java, Python, JavaScript, or similar).
3. Hands-on experience with service management tools such as BMC Helix, ServiceNow, or Jira Service Management.
4. Proficiency with observability platforms and frameworks (Prometheus, Grafana, ELK Stack, OpenTelemetry, Jaeger).
5. Solid understanding of CI/CD processes and tools (Jenkins, GitHub Actions, Azure DevOps).
6. Experience with cloud environments (AWS, Azure, GCP) and container technologies (Docker, Kubernetes).
Whilst these are nice to have, our team can help you develop in the following skills: 1. Experience in Site Reliability Engineering (SRE) practices. 2. Knowledge of security and performance testing methodologies. 3. QA certifications such as ISTQB or equivalent.
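Validating telemetry completeness (responsibility 4 in this posting) can be expressed as a small automated check. A minimal Python sketch, assuming metric samples arrive as (timestamp, value) pairs at a fixed scrape interval; the function names and the 1.5x tolerance are illustrative assumptions, not any specific BMC API:

```python
def find_gaps(samples, interval_s, tolerance=1.5):
    """Return (start, end) timestamp pairs where consecutive samples are
    further apart than tolerance * interval_s, i.e. scrapes were missed."""
    gaps = []
    ordered = sorted(samples, key=lambda s: s[0])
    for (t_prev, _), (t_next, _) in zip(ordered, ordered[1:]):
        if t_next - t_prev > tolerance * interval_s:
            gaps.append((t_prev, t_next))
    return gaps


def assert_no_gaps(samples, interval_s):
    """Raise AssertionError if the series has missing scrapes; usable
    directly inside an automated test suite."""
    gaps = find_gaps(samples, interval_s)
    assert not gaps, f"telemetry gaps detected: {gaps}"
```

A check like this slots naturally into a CI/CD pipeline (responsibility 5): pull a window of samples, assert no gaps, and fail the build on missing data.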
Posted 1 month ago
2.0 - 3.0 years
7 - 10 Lacs
Hyderabad
Work from Office
AI Ops/Monitoring Specialist openings at Advantum Health Pvt Ltd, Hyderabad.
Overview: We're seeking an AI Ops/Monitoring Specialist to ensure the stability, transparency, and performance of AI systems in production. You will monitor, log, and troubleshoot AI and RPA models to ensure continuous reliability and compliance.
Key Responsibilities:
Monitor AI model health (drift, performance, latency, bias).
Build dashboards and alerts using tools like Prometheus, Grafana, or Datadog.
Establish SLAs and SLOs for AI/RPA models and pipelines.
Collaborate with AI teams to integrate observability into model lifecycles.
Document anomalies and assist in root cause analysis and mitigation.
Qualifications:
Bachelor's in Data Science, IT, or a related field.
2+ years in systems monitoring, SRE, or MLOps.
Experience with model monitoring tools (e.g., MLflow, Arize, WhyLabs).
Familiarity with AI/ML lifecycles and performance metrics.
Background in healthcare or compliance-heavy environments is ideal.
Ph: 9177078628 Email id: jobs@advantumhealth.com
Address: Advantum Health Private Limited, Cyber Gateway, Block C, 4th Floor, Hitech City, Hyderabad.
Do follow us on LinkedIn, Facebook, Instagram, YouTube and Threads:
Advantum Health LinkedIn Page: https://lnkd.in/gVcQAXK3
Advantum Health Facebook Page: https://lnkd.in/g7ARQ378
Advantum Health Instagram Page: https://lnkd.in/gtQnB_Gc
Advantum Health India YouTube link: https://lnkd.in/g_AxPaPp
Advantum Health Threads link: https://lnkd.in/gyq73iQ6
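Model drift, the first item under the responsibilities above, is often quantified with the Population Stability Index (PSI). A minimal pure-Python sketch, assuming model scores are plain floats; the bin count and the common "PSI < 0.1 means no significant drift" reading are general conventions, not Advantum specifics:

```python
import math


def population_stability_index(expected, actual, bins=10):
    """Compare two score distributions bucketed into equal-width bins.
    PSI near 0 means the distributions match; PSI < 0.1 is commonly
    read as 'no significant drift'."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def frac(values, i):
        # Fraction of values falling in bin i; the last bin also
        # captures the exact maximum. Floored to avoid log(0).
        count = sum(
            1 for v in values
            if lo + i * width <= v < lo + (i + 1) * width
            or (i == bins - 1 and v == hi)
        )
        return max(count / len(values), 1e-6)

    psi = 0.0
    for i in range(bins):
        e, a = frac(expected, i), frac(actual, i)
        psi += (a - e) * math.log(a / e)
    return psi
```

In a dashboarding setup of the kind described above, a value like this would be exported as a gauge metric and alerted on when it crosses the chosen drift threshold.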
Posted 1 month ago
3.0 - 7.0 years
5 - 9 Lacs
Chennai
Work from Office
Overview
We are looking for a Full-stack Developer and Automation Engineer with knowledge of cloud, DevOps tools, and automation, plus excellent analytical, problem-solving, and communication skills.
You'll need to have:
Bachelor's degree or two or more years of work experience.
Experience working with front-end and back-end technologies for building, enhancing, and managing applications.
Experience with back-end and related technologies like Python, Django, Java, ReactJS, NodeJS, Spring Boot.
Experience with client-side scripting technologies like JavaScript, jQuery, etc.
Experience with advanced SQL/procedures on MySQL/MongoDB/MariaDB/Oracle.
Experience using AWS cloud infrastructure services such as EC2, ALB, RDS, etc.
Experience with serverless technologies like AWS Lambda and Google/Azure Functions.
Knowledge of the SDLC with DevOps tools and agile development.
Even better if you have:
Experience with monitoring/alerting tools and platforms such as Prometheus, Grafana, Catchpoint, New Relic, etc.
Experience with agile practices and development tools (Jira, Confluence, Jenkins, etc.).
Experience in code review, quality, and performance tuning, with problem-solving and debugging skills.
Experience with unit testing frameworks like JUnit and Mockito.
Good communication and interpersonal skills to clearly articulate ideas and influence stakeholders.
Very good problem-solving skills.
Posted 1 month ago
2.0 - 7.0 years
8 - 18 Lacs
Bengaluru
Work from Office
Hi, Greetings from Sun Technology Integrators!!
This is regarding a job opening with Sun Technology Integrators, Bangalore. Please find below the job description for your reference. Kindly let me know your interest and share your updated CV to nandinis@suntechnologies.com ASAP.
Kindly share the below details: C.CTC, E.CTC, Notice Period, Current location, and whether you are serving notice period / immediately available.
Please let me know if any of your friends are looking for a job change; kindly share references.
Please Note: WFO - Work From Office (no hybrid or work from home).
Shift Details: IST rotational shift; two-way free cab facility (pickup + drop) + food.
Key Responsibilities of Grafana:
Build and maintain a unified Grafana dashboard for monitoring both on-premises and cloud infrastructure.
Integrate multiple data sources using mixed queries.
Use templating variables and regex filtering to switch between systems and refine data views.
Develop workarounds for limited plugin support in legacy systems to ensure complete data visibility.
Troubleshoot issues using Grafana Explore; validate data source connections and ensure time zone consistency.
Manage API limitations, including rate limits and paginated responses, to ensure reliable data ingestion.
Use SQL queries or REST APIs to gather incident data and integrate it into dashboards.
Resolve mapping issues by linking incidents to systems with inconsistent naming conventions.
Leverage annotations with dynamic tags to highlight the performance impact of incidents.
Troubleshoot and optimize Grafana queries for large datasets to minimize performance impact.
Key Responsibilities of Moogsoft:
Set up and manage secure authentication methods, including Basic Auth, OAuth, and token-based authentication, to ensure secure system integration.
Design and implement bi-directional synchronization between observability tools and Moogsoft Situations to maintain real-time data consistency.
Configure correlation logic in Moogsoft by defining correlation keys and rules to accurately match incidents with events, reducing duplication and improving synchronization.
Implement automated incident creation workflows based on alerts from observability tools, ensuring proper generation and assignment based on severity and impacted services.
Establish processes for incident updates, define change triggers, and ensure incidents remain aligned with the latest observability insights.
Fine-tune Moogsoft's AIOps capabilities to enhance noise reduction, minimize false positives, and support accurate root cause analysis.
Preferred Qualifications:
Familiarity with SRE principles (SLIs, SLOs, SLAs).
Experience with cloud platforms (AWS, GCP) and cloud-native observability tools.
Proficiency in Infrastructure-as-Code (e.g., Terraform) or configuration management tools.
Experience as an integration specialist, including configuring scripted REST APIs, customizing JSON payloads, and managing integrations using JSON, XML, and JavaScript.
Expertise with monitoring tools (e.g., SolarWinds, Nagios, Dynatrace) and log aggregators (e.g., ELK, Splunk, SentinelOne).
Strong experience with ServiceNow ITSM, including Business Rules, Scripted REST APIs, and Web Services.
Proficiency in scripting languages such as JavaScript, Python, or Groovy.
Thanks and Regards,
Nandini S | Sr. Technical Recruiter
Sun Technology Integrators Pvt. Ltd.
nandinis@suntechnologies.com
www.suntechnologies.com
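The correlation-key idea described in the Moogsoft responsibilities, including resolving inconsistent naming conventions, can be illustrated independently of the product. In this sketch the alert fields, the (host, check) key, and the normalisation rules are all illustrative assumptions, not Moogsoft's actual API:

```python
from collections import defaultdict


def normalize_host(host):
    """Collapse inconsistent host naming so that, e.g.,
    'WEB-01.prod.example.com' and 'web01' map to the same key.
    The rules here are purely illustrative."""
    return host.lower().split(".")[0].replace("-", "")


def correlate(alerts):
    """Group raw alerts into situations keyed by (normalized host, check),
    so duplicate alerts for the same underlying problem merge together."""
    situations = defaultdict(list)
    for alert in alerts:
        key = (normalize_host(alert["host"]), alert["check"])
        situations[key].append(alert)
    return dict(situations)
```

The point of the sketch is the design choice: pick a correlation key that survives naming inconsistencies, and deduplication follows from grouping on it.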
Posted 1 month ago
5.0 - 10.0 years
12 - 22 Lacs
Noida, Hyderabad
Hybrid
Strong footprint in the definition, design, and delivery of Java/J2EE applications, including knowledge of the software development life cycle (SDLC).
Knowledge of the core and advanced Java programming language, version 8/17 or higher.
Experience in developing and consuming REST web services with Java/J2EE using Spring Boot or Spring WebFlux; knowledge of / hands-on experience with Spring WebClient would be an added advantage.
Knowledge and hands-on experience of working in a microservices architecture.
Experience integrating with different systems of record: Oracle, SQL, Postgres, DynamoDB, REST APIs, SOAP APIs, etc.
Experience working on multithreaded applications; experience/knowledge of CompletableFuture and Mono & Flux would be an added advantage.
Experience working with Maven for build and project management.
Strong OOAD and design patterns understanding, with hands-on implementation experience.
Hands-on experience with version control tools like Git.
Knowledge of Oracle/IBM DB2 databases.
Debugging performance-related issues using tools like JConsole, VisualVM, etc.
Ability to independently code advanced and complex programs in a matrix organization.
Ability to test (TestNG/JUnit/Mockito) and debug advanced code independently.
Exposure to application lifecycle management tools and techniques is desired.
Knowledge of cloud infrastructure (preferably AWS) would be an added advantage.
Experience with data analytics tools like Sumo Logic, Kibana, and Grafana would be an advantage.
Experience delivering projects that meet quality, schedule, milestone, and budget commitments.
Ability to develop creative and innovative solutions and adjust quickly to shifting priorities, multiple demands, ambiguity, and rapid change.
Strong written and verbal communication skills, with the ability to communicate with various levels of management and experience translating detailed analysis into high-level business requirements.
Strong interpersonal skills with a history of maintaining good working relationships with business partner teams and other technology stakeholders.
Strong logical and analytical skills to solve complex problems.
Should be self-motivated and demonstrate a high level of commitment.
Posted 1 month ago
2.0 - 7.0 years
15 - 20 Lacs
Bengaluru
Work from Office
Job Area: Engineering Group, Engineering Group > Software Engineering
General Summary: As a leading technology innovator, Qualcomm pushes the boundaries of what's possible to enable next-generation experiences and drives digital transformation to help create a smarter, connected future for all. In the Autonomous Drive organization we target delivering an open, scalable, and flexible architecture solution running on the Qualcomm® Snapdragon Ride™ System on a Chip (SoC) platform.
As a Qualcomm Software Engineer for Release Automation, you will:
Enable the creation of release packages by developing and operating scripts automating all the release steps, from static code analysis, testing, and traceability reports to generating relevant process artifacts such as external risk lists, design documents, etc.
Use the available toolchain to create and distribute release packages for every release from your project.
Create and maintain visualization dashboards (Grafana, Kibana, or other visualization tools).
Work proactively with process engineering to analyze existing processes and identify potential for automation.
Contribute to defining a common release process.
Minimum Qualifications:
Bachelor's degree in Engineering, Information Systems, Computer Science, or a related field and 2+ years of software engineering or related work experience; OR Master's degree in Engineering, Information Systems, Computer Science, or a related field and 1+ year of software engineering or related work experience; OR PhD in Engineering, Information Systems, Computer Science, or a related field.
2+ years of academic or work experience with a programming language such as C, C++, Java, Python, etc.
Preferred Qualifications and Technical Experience - What you will bring:
Bachelor's degree in Computer Science or equivalent.
Minimum of 3+ years of relevant work experience.
Automation and programming skills in Python.
Development experience interfacing with REST APIs.
Working knowledge of Git, GitHub, and Jira.
Familiarity with the concept of SBOMs (Software Bill of Materials).
Team player with strong problem-solving skills and the ability to work independently with little direction.
Experience in an agile development methodology such as Scrum or Kanban.
Good communication skills for all levels of stakeholders, including technical presentations in English.
Good to have:
Experience with Elasticsearch and Grafana is a plus.
Experience with Bazel is a plus.
Working knowledge of software configuration management is a plus.
Excited about this role, but not sure if you meet 100% of the criteria? We would still like to hear from you and would welcome your application.
Posted 1 month ago
10.0 - 12.0 years
30 - 37 Lacs
Bengaluru
Work from Office
We need immediate joiners or candidates serving their notice period who can join within 10-15 days; candidates on the bench or with an official 2-3 month notice period will not be considered.
Strong working experience in the design and development of RESTful APIs using Java, Spring Boot, and Spring Cloud.
Technical hands-on experience to support development, automated testing, infrastructure, and operations.
Fluency with relational databases or, alternatively, NoSQL databases.
Excellent pull request review skills and attention to detail.
Experience with streaming platforms (real-time data at massive scale, e.g., Confluent Kafka).
Working experience with AWS services like EC2, ECS, RDS, S3, etc.
Understanding of DevOps as well as experience with CI/CD pipelines.
Industry experience in the retail domain is a plus.
Exposure to agile methodology and project tools: Jira, Confluence, SharePoint.
Working knowledge of Docker containers/Kubernetes.
Excellent team player, with the ability to work independently and as part of a team.
Experience mentoring junior developers and providing technical leadership.
Familiarity with monitoring & reporting tools (Prometheus, Grafana, PagerDuty, etc.).
Ability to learn, understand, and work quickly with new and emerging technologies, methodologies, and solutions in the cloud/IT technology space.
Knowledge of a front-end framework (React or Angular) and other programming languages like JavaScript/TypeScript or Python is a plus.
Posted 1 month ago
5.0 - 8.0 years
17 - 22 Lacs
Gurugram
Work from Office
We're looking for an experienced and driven DevOps Lead to join our tech team. This role requires both strong technical expertise and leadership skills. You will manage our AWS cloud infrastructure, build efficient CI/CD pipelines, improve system reliability, and guide best practices across DevOps processes.
Role & Responsibilities:
Cloud Infrastructure: Design and manage secure, scalable AWS infrastructure (ECS, EC2, EKS, RDS, S3, IAM, Lambda, etc.) using IaC tools like Terraform, CloudFormation, or AWS CDK.
CI/CD Pipelines: Build and maintain automated pipelines (Bitbucket Actions, Jenkins), integrating code analysis and testing.
Monitoring & Observability: Set up monitoring and alerts using Grafana, ELK, Prometheus, New Relic, etc.
Automation & Deployment: Automate deployments and maintenance with GitOps workflows and efficient scaling mechanisms.
Security & Compliance: Manage secrets with AWS Secrets Manager/SOPS; ensure compliance (ISO 27001, IT Act) with tools like Trivy, Aqua, Prisma Cloud, Terratest, or Checkov.
Serverless Architecture: Use Lambda, API Gateway, and frameworks like SAM for serverless deployments.
Cost Optimization: Optimize AWS costs using tools like CloudHealth, CloudKeeper, or AWS Cost Explorer.
Reliability & Incident Management: Lead incident response, RCA, and performance tuning via PagerDuty, Opsgenie.
Collaboration & Leadership: Partner with engineering, QA, product, and security teams. Mentor juniors and promote DevOps culture.
Documentation: Maintain clear documentation using Confluence, Notion, or internal wikis.
Must-Have Skills:
B.Tech with 5-8 years of experience.
Strong AWS experience with infrastructure design.
Proficient in IaC (Terraform, CloudFormation, AWS CDK).
Skilled in Docker, Kubernetes/EKS.
Scripting (Bash, Python, or Go).
Good grasp of networking, security, and Linux systems.
Proven CI/CD implementation experience and agile delivery exposure.
Proactive and collaborative mindset with strong decision-making skills and ownership.
Good-to-Have Skills:
AWS certifications (e.g., DevOps Engineer, SA).
Familiarity with compliance standards (ISO 27001, IT Act).
Experience in cloud cost optimization and performance tuning.
Posted 1 month ago
4.0 - 6.0 years
3 - 7 Lacs
Hyderabad
Work from Office
What you will do: You will play a key role on the Operations Generative AI (GenAI) Product team, delivering cutting-edge, innovative GenAI solutions across various Process Development functions (Drug Substance, Drug Product, Attribute Sciences & Combination Products) within Operations. The role involves developing, implementing, and sustaining GenAI solutions that help find relevant, actionable information quickly and accurately.
Role Description: The Specialist Software Engineer is responsible for designing, developing, and maintaining GenAI software applications and solutions that meet business needs, and for ensuring high availability and performance of critical systems and applications in Process Development under Operations. This role involves working closely with data scientists, business SMEs, and other engineers to create high-quality, scalable GenAI software solutions, monitoring system health, and responding to incidents to minimize downtime.
Roles & Responsibilities:
Take ownership of complex software projects from conception to deployment; manage software delivery scope, risk, and timeline.
Rapidly prototype concepts into working code.
Provide technical guidance and mentorship to junior developers.
Contribute to front-end and back-end development using cloud technology.
Develop innovative solutions using generative AI technologies.
Integrate with other systems and platforms to ensure seamless data flow and functionality.
Conduct code reviews to ensure code quality and adherence to best practices.
Create and maintain documentation on software architecture, design, deployment, disaster recovery, and operations.
Analyze and understand the functional and technical requirements of applications, solutions, and systems, and translate them into software architecture and design specifications.
Work closely with the product team, cross-functional teams, enterprise technology teams, and QA to deliver high-quality, compliant software on time. Ensure high-quality software deliverables, free of bugs and performance issues, through proper design and comprehensive testing strategies. Provide ongoing support and maintenance for applications, ensuring that they operate smoothly and efficiently.
Architect and lead the development of scalable, intelligent search systems leveraging NLP, embeddings, LLMs, and vector search.
Own the end-to-end lifecycle of search solutions, from ingestion and indexing to ranking, relevancy tuning, and UI integration.
Integrate AI models that improve search precision, query understanding, and result summarization (e.g., generative answers via LLMs).
Develop solutions for handling structured/unstructured data in AI pipelines.
Partner with platform teams to deploy search solutions on scalable infrastructure (e.g., Kubernetes, Databricks).
Experience integrating generative AI capabilities and vision models to enrich content quality and user engagement.
Basic Qualifications:
Master's degree with 4-6 years of experience in Computer Science, IT, or a related field; OR Bachelor's degree with 6-8 years of experience in Computer Science, IT, or a related field; OR Diploma with 10-12 years of experience in Computer Science, IT, or a related field.
Experience in Python, Java, and AI/ML Python libraries (e.g., PyTorch); experience with web frameworks like Flask, Django, FastAPI.
Experience with design patterns, data structures, data modeling, and data algorithms.
Familiarity with MLOps, CI/CD for ML, and monitoring of AI models in production.
Experience with the AWS/Azure platforms, building and deploying code.
Experience with PostgreSQL/MongoDB databases, vector databases for large language models, Databricks or RDS, and S3 buckets.
Experience with popular large language models.
Experience with the retrieval-augmented generation (RAG) framework, AI agents, vector stores, AI/ML platforms, and embedding models, e.g., OpenAI, LangChain, Redis, pgvector.
Experience with prompt engineering and model fine-tuning.
Experience with generative AI or retrieval-augmented generation (RAG) frameworks.
Experience in agile software development methodologies.
Experience in end-to-end testing as part of test-driven development.
Preferred Qualifications:
Strong understanding of cloud platforms (e.g., AWS, GCP, Azure) and containerization technologies (e.g., Docker, Kubernetes).
Experience with monitoring and logging tools (e.g., Prometheus, Grafana, Splunk).
Experience with data processing tools like Hadoop, Spark, or similar.
Experience with the LangChain or LlamaIndex frameworks for language models.
Experience working on full-stack applications.
Professional Certifications: AWS, Data Science certifications (preferred).
Soft Skills: Excellent analytical and troubleshooting skills. Strong verbal and written communication skills. Ability to work effectively with global, virtual teams. High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. Strong presentation and public speaking skills.
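The retrieval step of the RAG pipelines described above reduces to nearest-neighbour search over embeddings. A toy sketch with hand-made two-dimensional vectors standing in for real embedding-model output; a production system would use a vector store such as pgvector or Redis, as the posting notes:

```python
import math


def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0


def retrieve(query_vec, index, top_k=2):
    """index: list of (doc_id, embedding) pairs. Returns doc ids ranked by
    cosine similarity to the query embedding; this is the retrieval half
    of RAG, before the retrieved text is handed to the LLM for generation."""
    scored = sorted(index, key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in scored[:top_k]]
```

The brute-force sort here is fine for a sketch; real deployments swap it for an approximate nearest-neighbour index because exhaustive scoring does not scale.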
Posted 1 month ago
8.0 - 13.0 years
15 - 30 Lacs
Noida, Greater Noida
Work from Office
Site & Platform Reliability Engineer
Location: Noida/Greater Noida
Organization: Tetrahed Inc.
Experience: 8+ years
Work Mode: Onsite
Employment Type: Full-time
About Tetrahed Inc.: Tetrahed Inc. is a privately held IT services and consulting firm headquartered in Hyderabad with a strong global staffing presence. It specializes in end-to-end digital transformation, offering cloud computing, AI/ML, cybersecurity, data analytics, and recruitment/staffing solutions to diverse industries worldwide.
About the Role: As a Site & Platform Reliability Engineer at Tetrahed Inc., you'll be responsible for designing, automating, and operating cloud-native platforms using SRE/PRE best practices. You'll be a technical leader, engaging with clients, mentoring teams, and collaborating with major cloud and open-source ecosystems (e.g., Kubernetes, CNCF).
Key Responsibilities:
Technical & Architectural Leadership: Lead PoCs, architecture design, SRE kickstarts, observability, and platform modernization efforts. Engineer scalable, resilient cloud-native systems. Partner with cloud providers like Google, AWS, Microsoft, Red Hat, and VMware.
Service Delivery & Automation: Implement SRE principles, automation, infrastructure-as-code (Terraform, Ansible), and CI/CD pipelines (ArgoCD, Jenkins, Tekton). Define SLOs/SLIs, perform incident management, and ensure reliability. Coach internal and client delivery teams in reliability practices.
Innovation & Thought Leadership: Contribute to open-source communities or internal knowledge-sharing. Author whitepapers and blogs, or speak at industry events. Maintain hands-on technical excellence and mentor peers.
Client Engagement & Trust: Conduct workshops, briefings, and strategic discussions with stakeholders. Act as a trusted advisor during modernization journeys.
Mandatory Skills & Experience:
Proficiency in Kubernetes (OpenShift, Tanzu, or vanilla).
Strong SRE knowledge, infrastructure-as-code, and automation scripting (Python, Bash, YAML).
Experience with CI/CD pipeline tools (ArgoCD, Jenkins, Tekton).
Deep observability experience (Prometheus, ELK/EFK, Grafana, AppDynamics, Dynatrace).
Familiarity with cloud-native networking (DNS, load balancers, reverse proxies).
Expertise in microservices and container-based architectures.
Excellent communication and stakeholder management.
Preferred Qualifications:
Bachelor's/Master's in Computer Science or Engineering.
CKA certification or equivalent Kubernetes expertise.
8+ years in SI, consulting, or enterprise organizations.
Familiarity with Agile/Scrum/domain-driven design and the CNCF ecosystem.
Passion for innovation, lab environments, and open source.
Why Join Tetrahed? Engage with global clients and cloud hyperscalers. Drive open-source and SRE best practices. Contribute to a learning-rich, collaborative environment. Make an impact within a growing, innovative mid-size IT organization.
Interested candidates, let's connect! Please share your updated CV or reach out directly:
Email: manojkumar@tetrahed.com
Mobile: +91-6309124068
LinkedIn (Manoj Kumar): https://www.linkedin.com/in/manoj-kumar-54455024b/
Company Page: https://www.linkedin.com/company/tetrahedinc/
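Defining SLOs/SLIs, as the responsibilities above mention, starts with error-budget arithmetic: a 99.9% availability SLO over a 30-day window allows 43.2 minutes of downtime. A minimal sketch of that calculation (function names are illustrative):

```python
def error_budget_minutes(slo_target, window_days=30):
    """Minutes of allowed unavailability in the window for a given SLO,
    e.g. a 99.9% target over 30 days allows 43.2 minutes."""
    total_minutes = window_days * 24 * 60
    return (1 - slo_target) * total_minutes


def budget_remaining(slo_target, downtime_minutes, window_days=30):
    """Fraction of the error budget still unspent.
    A negative result means the SLO has already been blown."""
    budget = error_budget_minutes(slo_target, window_days)
    return (budget - downtime_minutes) / budget
```

In practice the same arithmetic, fed from SLI measurements in Prometheus, drives burn-rate alerts and the decision of when to freeze feature releases.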
Posted 1 month ago
7.0 - 9.0 years
11 - 12 Lacs
Hyderabad
Work from Office
We are seeking a highly skilled DevOps Engineer to join our dynamic development team. In this role, you will be responsible for designing, developing, and maintaining both front-end and back-end components of our applications using DevOps and associated technologies. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performing software solutions that meet our business needs. The ideal candidate will have a strong background in DevOps, experience with modern front-end frameworks, and a passion for full-stack development.
Requirements: Bachelor's degree in Computer Science, Engineering, or a related field. 7 to 9+ years of experience in full-stack development, with a strong focus on DevOps.
DevOps with AWS Data Engineer - Roles & Responsibilities:
Use AWS services like EC2, VPC, S3, IAM, RDS, and Route 53.
Automate infrastructure using Infrastructure as Code (IaC) tools like Terraform or AWS CloudFormation.
Build and maintain CI/CD pipelines using tools such as AWS CodePipeline, Jenkins, and GitLab CI/CD.
Automate build, test, and deployment processes for Java applications.
Use Ansible, Chef, or AWS Systems Manager for managing configurations across environments.
Containerize Java apps using Docker; deploy and manage containers using Amazon ECS, EKS (Kubernetes), or Fargate.
Monitoring & logging using Amazon CloudWatch, Prometheus + Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), and AWS X-Ray for distributed tracing.
Manage access with IAM roles/policies; use AWS Secrets Manager / Parameter Store for managing credentials; enforce security best practices, encryption, and audits.
Automate backups for databases and services using AWS Backup, RDS snapshots, and S3 lifecycle rules; implement disaster recovery (DR) strategies.
Work closely with development teams to integrate DevOps practices (cross-functional collaboration); document pipelines, architecture, and troubleshooting runbooks.
Monitor and optimize AWS resource usage; use AWS Cost Explorer, Budgets, and Savings Plans.
Must-Have Skills:
Experience working on Linux-based infrastructure.
Excellent understanding of Ruby, Python, Perl, and Java.
Configuring and managing databases such as MySQL and MongoDB.
Excellent troubleshooting skills.
Selecting and deploying appropriate CI/CD tools.
Working knowledge of various tools, open-source technologies, and cloud services.
Awareness of critical concepts in DevOps and agile principles.
Managing stakeholders and external interfaces.
Setting up tools and required infrastructure.
Defining and setting development, testing, release, update, and support processes for DevOps operations.
The technical skills to review, verify, and validate software code developed in the project.
Interview Mode: F2F for candidates residing in Hyderabad; Zoom for other states.
Location: 43/A, MLA Colony, Road No. 12, Banjara Hills, 500034.
Time: 2-4 PM.
Posted 1 month ago
5.0 - 8.0 years
7 - 10 Lacs
Hyderabad
Work from Office
Grade Level (for internal use): 10 Market Intelligence The Role: Senior Full Stack Developer Grade level: 10 The Team: You will work with a team of intelligent, ambitious, and hard-working software professionals. The team is responsible for the architecture, design, development, quality, and maintenance of the next-generation financial data web platform. Other responsibilities include transforming product requirements into technical design and implementation. You will be expected to participate in the design review process, write high-quality code, and work with a dedicated team of QA Analysts and Infrastructure Teams. The Impact: Market Intelligence is seeking a Software Developer to create software design, development, and maintenance for data processing applications. This person would be part of a development team that manages and supports the internal & external applications supporting the business portfolio. This role expects a candidate to handle any data processing or big data application development. We have teams made up of people who learn how to work effectively together while working with the larger group of developers on our platform. What's in it for you: Opportunity to contribute to the development of a world-class Platform Engineering team. Engage in a highly technical, hands-on role designed to elevate team capabilities and foster continuous skill enhancement. Be part of a fast-paced, agile environment that processes massive volumes of data, ideal for advancing your software development and data engineering expertise while working with a modern tech stack. Contribute to the development and support of Tier-1, business-critical applications that are central to operations. Gain exposure to and work with cutting-edge technologies including AWS Cloud, EMR, and Apache NiFi. Grow your career within a globally distributed team, with clear opportunities for advancement and skill development.
Responsibilities: Design and develop applications, components, and common services based on development models, languages, and tools, including unit testing, performance testing, monitoring, and implementation. Support business and technology teams as necessary during design, development, and delivery to ensure scalable and robust solutions. Build data-intensive applications and services to support and enhance fundamental financials in appropriate technologies (C#, .NET Core, Databricks, Spark, Python, Scala, NiFi, SQL). Build data models, achieve performance tuning, and apply data architecture concepts. Develop applications adhering to secure coding practices and industry-standard coding guidelines, ensuring compliance with security best practices (e.g., OWASP) and internal governance policies. Implement and maintain CI/CD pipelines to streamline build, test, and deployment processes; develop comprehensive unit test cases and ensure code quality. Provide operations support to resolve issues proactively and with utmost urgency. Effectively manage time and multiple tasks. Communicate effectively, especially in writing, with the business and other technical groups. What We're Looking For: Basic Qualifications: Bachelor's/Master's Degree in Computer Science, Information Systems, or equivalent. Minimum 5 to 8 years of strong hands-on development experience in C#, .NET Core, Cloud Native, and MS SQL Server backend development. Proficiency with Object Oriented Programming. Advanced SQL programming skills. Preferred experience or familiarity with tools and technologies such as OData, Grafana, Kibana, Big Data platforms, Apache Kafka, GitHub, AWS EMR, Terraform, and emerging areas like AI/ML and GitHub Copilot. Highly recommended skillset in Databricks, Spark, and Scala technologies.
Understanding of database performance tuning in large datasets. Ability to manage multiple priorities efficiently and effectively within specific timeframes. Excellent logical, analytical, and communication skills are essential, with strong verbal and writing proficiencies. Knowledge of Fundamentals or the financial industry highly preferred. Experience in conducting application design and code reviews. Proficiency with the following technologies: Object-oriented programming, Programming Languages (C#, .NET Core), Cloud Computing, Database systems (SQL, MS SQL). Nice to have: NoSQL (Databricks, Spark, Scala, Python), Scripting (Bash, Scala, Perl, PowerShell). Preferred Qualifications: Hands-on experience with cloud computing platforms including AWS, Azure, or Google Cloud Platform (GCP). Proficient in working with Snowflake and Databricks for cloud-based data analytics and processing. Benefits: Health & Wellness: Health care coverage designed for the mind and body. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.
Posted 1 month ago
1.0 - 3.0 years
3 - 5 Lacs
Hyderabad
Work from Office
What you will do In this vital role you will be responsible for developing and maintaining software applications, components, and solutions that meet business needs, and for ensuring the availability and performance of critical systems and applications. This role requires experience in and a deep understanding of both front-end and back-end development. The Full Stack Software Engineer will work closely with product managers, designers, and other engineers to create high-quality, scalable software solutions, automate operations, monitor system health, and respond to incidents to minimize downtime. The Full Stack Software Engineer will also contribute to design discussions and provide guidance on technical feasibility and best practices. Roles & Responsibilities: Develop complex software projects from conception to deployment, including delivery scope, risk, and timeline. Conduct code reviews to ensure code quality and adherence to best practices. Contribute to both front-end and back-end development using cloud technology. Provide ongoing support and maintenance for design systems and applications, ensuring reliability, reuse, and scalability while meeting accessibility and quality standards. Develop innovative solutions using generative AI technologies. Create and maintain documentation on software architecture, design, deployment, disaster recovery, and operations. Identify and resolve technical challenges, software bugs, and performance issues effectively. Stay updated with the latest trends and advancements. Analyze and understand the functional and technical requirements of applications, solutions, and systems and translate them into software architecture and design specifications. Develop and execute unit tests, integration tests, and other testing strategies to ensure the quality of the software. Work closely with cross-functional teams, including product management, stakeholders, design, and QA, to deliver high-quality software on time.
Maintain detailed documentation of software designs, code, and development processes. Work on integrating with other systems and platforms to ensure seamless data flow and functionality. What we expect of you We are all different, yet we all use our unique contributions to serve patients. Basic Qualifications: Master's degree and 1 to 3 years of experience in Computer Science, IT, or a related field OR Bachelor's degree and 3 to 5 years of experience in Computer Science, IT, or a related field OR Diploma and 7 to 9 years of experience in Computer Science, IT, or a related field. Must-Have Skills: Hands-on experience with various cloud services and an understanding of their pros and cons under well-architected cloud design principles. Experience with developing and maintaining design systems across teams. Hands-on experience with full-stack software development. Proficient in programming languages such as JavaScript, Python, and SQL/NoSQL. Familiarity with frameworks such as React JS and visualization libraries. Strong problem-solving and analytical skills; ability to learn quickly; excellent communication and interpersonal skills. Experience with API integration, serverless, and microservices architecture. Experience with SQL/NoSQL databases and vector databases for large language models. Experience with website development and an understanding of website localization processes, which involve adapting content to fit cultural and linguistic contexts. Preferred Qualifications: Good-to-Have Skills: Strong understanding of cloud platforms (e.g., AWS, GCP, Azure) and containerization technologies (e.g., Docker, Kubernetes). Experience with monitoring and logging tools (e.g., Prometheus, Grafana, Splunk). Experience with data processing tools like Hadoop, Spark, or similar. Experience with popular large language models. Experience with the LangChain or LlamaIndex frameworks for language models; experience with prompt engineering and model fine-tuning.
Professional Certifications: Relevant certifications such as CISSP, AWS Developer certification, CompTIA Network+, or MCSE (preferred). Any SAFe Agile certification (preferred) Soft Skills: Excellent analytical and troubleshooting skills. Strong verbal and written communication skills. Ability to work effectively with global, virtual teams. High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. Strong presentation and public speaking skills.
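The vector-database requirement above boils down to similarity search: rank stored snippets by how close their embeddings are to a query embedding. A minimal pure-Python sketch, with toy three-dimensional vectors invented for illustration (a real system would use model-generated embeddings and a vector store):

```python
# Minimal sketch of the retrieval step behind a vector-database lookup for an
# LLM application: rank stored snippets by cosine similarity to a query vector.
# The snippets and vectors below are toy data, not real embeddings.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

store = {
    "reset password": [0.9, 0.1, 0.0],
    "billing cycle":  [0.1, 0.9, 0.2],
    "api rate limit": [0.0, 0.2, 0.9],
}

def top_k(query_vec, k=2):
    """Return the k snippet keys most similar to the query vector."""
    ranked = sorted(store, key=lambda s: cosine(query_vec, store[s]), reverse=True)
    return ranked[:k]

print(top_k([0.8, 0.2, 0.1], k=1))  # ['reset password']
```

The retrieved snippets would then be prepended to the LLM prompt, which is the core of retrieval-augmented generation.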
Posted 1 month ago
6.0 - 11.0 years
20 - 25 Lacs
Hyderabad, Ahmedabad
Hybrid
Hi Aspirant, Greetings from TechBlocks - Global Digital Product Development - Hyderabad!!! About us: TechBlocks is a global digital product engineering company with 16+ years of experience helping Fortune 500 enterprises and high-growth brands accelerate innovation, modernize technology, and drive digital transformation. From cloud solutions and data engineering to experience design and platform modernization, we help businesses solve complex challenges and unlock new growth opportunities. Job Title: Senior DevOps Site Reliability Engineer (SRE) Location: Hyderabad & Ahmedabad Employment Type: Full-Time Work Model: 3 days from office Job Overview: Dynamic, motivated individuals deliver exceptional solutions for the production resiliency of our systems. The role incorporates aspects of software engineering, operations, and DevOps skills to come up with efficient ways of managing and operating applications. The role will require a high level of responsibility and accountability to deliver technical solutions. Summary: As a Senior SRE, you will ensure platform reliability, incident management, and performance optimization. You'll define SLIs/SLOs, contribute to robust observability practices, and drive proactive reliability engineering across services. Experience Required: 6-10 years of SRE or infrastructure engineering experience in cloud-native environments.
Mandatory: Cloud: GCP (GKE, Load Balancing, VPN, IAM) Observability: Prometheus, Grafana, ELK, Datadog Containers & Orchestration: Kubernetes, Docker Incident Management: On-call, RCA, SLIs/SLOs IaC: Terraform, Helm Incident Tools: PagerDuty, OpsGenie Nice to Have: GCP Monitoring, SkyWalking, Service Mesh, API Gateway, GCP Spanner Scope: Drive operational excellence and platform resilience Reduce MTTR, increase service availability Own incident and RCA processes Roles and Responsibilities: Define and measure Service Level Indicators (SLIs) and Service Level Objectives (SLOs), and manage error budgets across services. Lead incident management for critical production issues; drive Root Cause Analysis (RCA) and postmortems. Create and maintain runbooks and standard operating procedures for high-availability services. Design and implement observability frameworks using ELK, Prometheus, and Grafana; drive telemetry adoption. Coordinate cross-functional war-room sessions during major incidents and maintain response logs. Develop and improve automated system recovery, alert suppression, and escalation logic. Use GCP tools like GKE, Cloud Monitoring, and Cloud Armor to improve performance and security posture. Collaborate with DevOps and Infrastructure teams to build highly available and scalable systems. Analyze performance metrics and conduct regular reliability reviews with engineering leads. Participate in capacity planning, failover testing, and resilience architecture reviews. If you are interested, please share your updated resume with kranthikt@tblocks.com Warm Regards, Kranthi Kumar kranthikt@tblocks.com Contact: 8522804902 Senior Talent Acquisition Specialist Toronto | Ahmedabad | Hyderabad | Pune www.tblocks.com
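The SLO and error-budget bookkeeping described above follows directly from the availability target: a 99.9% SLO over a 30-day window allows 0.1% of the window as downtime. A minimal sketch of that arithmetic (the targets and downtime figures are example numbers):

```python
# Hedged sketch of SLO/error-budget arithmetic: given an availability target
# and a window, compute the allowed downtime and how much of it remains after
# recorded incidents. All numbers are illustrative.

def error_budget_minutes(slo_target: float, window_days: int) -> float:
    """Total allowed downtime for the window, in minutes."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1.0 - slo_target)

def budget_remaining(slo_target: float, window_days: int,
                     downtime_minutes: float) -> float:
    """Budget left after subtracting downtime already incurred."""
    return error_budget_minutes(slo_target, window_days) - downtime_minutes

# A 99.9% target over 30 days allows 43.2 minutes of downtime.
budget = error_budget_minutes(0.999, 30)
print(round(budget, 1))                              # 43.2
print(round(budget_remaining(0.999, 30, 30.0), 1))   # 13.2
```

When the remaining budget approaches zero, SRE practice typically shifts the team from feature work to reliability work.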
Posted 1 month ago
12.0 - 19.0 years
17 - 30 Lacs
Hyderabad, Ahmedabad
Hybrid
Job Title: Release Manager Tools & Infrastructure Location: Ahmedabad & Hyderabad Experience Level: 12+ years Department: Engineering / DevOps Reporting To: Head of DevOps / Engineering Director We're looking for a hands-on Release Manager with strong DevOps and infrastructure expertise to lead software release pipelines, tooling, and automation across distributed systems. This role ensures secure, stable, and timely delivery of applications while coordinating across engineering, QA, and SRE teams. Key Responsibilities: Release & Environment Management Plan and manage release schedules and cutovers Oversee environment readiness, rollback strategies, and post-deployment validations Ensure version control, CI/CD artifact management, and build integrity Toolchain Ownership Administer tools like Jenkins, GitHub Actions, Bitbucket, SonarQube, Argo CD, JFrog, and Terraform Manage Kubernetes and Helm for container orchestration Maintain secrets via Vault and related tools Infrastructure & Automation Work with Cloud & DevOps teams for secure, automated deployments Use GCP (GKE, VPC, IAM, Load Balancer, GCS) with IaC standards (Terraform, Helm) Monitoring & Stability Implement observability tools: Prometheus, Grafana, ELK, Datadog Monitor release health, manage incident responses, and improve via RCAs Compliance & Coordination Use Jira, Confluence, and ServiceNow for planning and documentation Apply OWASP/WAF/GCP Cloud Armor standards Align releases with Dev, QA, CloudOps, and Security teams If interested, share your resume with: sowmya.v@acesoftlabs.com
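Monitoring release health during a rollout usually reduces to a promotion gate: the new version serves a slice of traffic, and it is promoted only if its error rate stays close to the baseline's. A hedged sketch of such a gate (the thresholds and traffic minimums are invented example values, not a standard):

```python
# Illustrative canary promotion gate: promote the new release only if the
# canary's error rate stays within a tolerance of the baseline's.
# max_ratio, min_requests, and the absolute floor are made-up example values.

def should_promote(baseline_errors: int, baseline_total: int,
                   canary_errors: int, canary_total: int,
                   max_ratio: float = 1.5, min_requests: int = 100) -> bool:
    if canary_total < min_requests:
        return False  # not enough traffic yet to judge the canary
    baseline_rate = baseline_errors / baseline_total
    canary_rate = canary_errors / canary_total
    # A small absolute floor keeps one stray error from blocking a release
    # when the baseline error rate is near zero.
    return canary_rate <= max(baseline_rate * max_ratio, 0.001)

print(should_promote(50, 10_000, 5, 1_000))   # True: 0.5% canary vs 0.5% baseline
print(should_promote(50, 10_000, 9, 1_000))   # False: 0.9% exceeds the 0.75% gate
```

The same comparison drives automated rollback: if the gate fails after promotion starts, traffic shifts back to the previous version.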
Posted 1 month ago
6.0 - 10.0 years
8 - 12 Lacs
Pune
Remote
What You'll Do We are looking for experienced Machine Learning Engineers with a background in software development and a deep enthusiasm for solving complex problems. You will lead a dynamic team dedicated to designing and implementing a large language model framework to power diverse applications across Avalara. Your responsibilities will span the entire development lifecycle, including conceptualization, prototyping, and delivery of the LLM platform features. You will build core agent infrastructure (A2A orchestration and MCP-driven tool discovery) so teams can launch secure, scalable agent workflows. You will be reporting to the Senior Manager, Machine Learning. What Your Responsibilities Will Be We are looking for engineers who can think quickly and have a background in implementation. Your responsibilities will include: Build on top of the foundational framework for supporting Large Language Model applications at Avalara. Work with LLMs like GPT, Claude, Llama, and other Bedrock models. Leverage best practices in software development, including Continuous Integration/Continuous Deployment (CI/CD), with appropriate functional and unit testing in place. Promote innovation by researching and applying the latest technologies and methodologies in machine learning and software development. Write, review, and maintain high-quality code that meets industry standards, contributing to the project's success. Lead code review sessions, ensuring good code quality and documentation. Mentor junior engineers, encouraging a culture of collaboration. Proficiency in developing and debugging software, with a preference for Python, though familiarity with additional programming languages is valued and encouraged. What You'll Need to be Successful 6+ years of experience building Machine Learning models and deploying them in production environments as part of creating solutions to complex customer problems.
Proficiency working in cloud computing environments (AWS, Azure, GCP), Machine Learning frameworks, and software development best practices. Experience working with technological innovations in AI & ML (especially GenAI) and applying them. Experience with design patterns and data structures. Good analytical, design, and debugging skills. Technologies you will work with: Python, LLMs, Agents, A2A, MCP, MLflow, Docker, Kubernetes, Terraform, AWS, GitLab, Postgres, Prometheus, and Grafana.
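The tool-discovery idea behind the agent infrastructure mentioned above can be sketched loosely: tools register a name and description, an agent lists what is available before planning, and a dispatcher routes calls by name. This is only a toy sketch in the spirit of MCP-style workflows, not the MCP protocol itself; the tool and its data are invented.

```python
# Loose sketch of tool discovery/dispatch for an agent framework. Not the
# actual MCP protocol: just the registry-and-dispatch pattern it is built on.

TOOLS = {}

def tool(name, description):
    """Decorator that registers a function as a discoverable tool."""
    def register(fn):
        TOOLS[name] = {"fn": fn, "description": description}
        return fn
    return register

@tool("get_tax_rate", "Look up a flat tax rate for a region (toy data).")
def get_tax_rate(region: str) -> float:
    rates = {"CA": 0.0725, "NY": 0.04}  # invented example rates
    return rates.get(region, 0.0)

def list_tools():
    """What an agent would 'discover' before planning a call."""
    return {name: meta["description"] for name, meta in TOOLS.items()}

def dispatch(name, **kwargs):
    if name not in TOOLS:
        raise KeyError(f"unknown tool: {name}")
    return TOOLS[name]["fn"](**kwargs)

print(dispatch("get_tax_rate", region="CA"))  # 0.0725
```

In a real framework the registry entries would also carry JSON schemas for arguments, so the model can construct valid calls.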
Posted 1 month ago
5.0 - 8.0 years
6 - 9 Lacs
Pune
Remote
What You'll Do We are looking for experienced Machine Learning Engineers with a background in software development and a deep enthusiasm for solving complex problems. You will lead a dynamic team dedicated to designing and implementing a large language model framework to power diverse applications across Avalara. Your responsibilities will span the entire development lifecycle, including conceptualization, prototyping, and delivery of the LLM platform features. You will have a blend of technical skills in the fields of AI & Machine Learning, especially with LLMs, and a deep-seated understanding of software development practices, working with a team to ensure our systems are scalable, performant, and accurate. You will be reporting to the Senior Manager, AI/ML. What Your Responsibilities Will Be We are looking for engineers who can think quickly and have a background in implementation. Your responsibilities will include: Build on top of the foundational framework for supporting Large Language Model applications at Avalara. Work with LLMs like GPT, Claude, Llama, and other Bedrock models. Leverage best practices in software development, including Continuous Integration/Continuous Deployment (CI/CD), with appropriate functional and unit testing in place. Inspire creativity by researching and applying the latest technologies and methodologies in machine learning and software development. Write, review, and maintain high-quality code that meets industry standards. Lead code review sessions, ensuring good code quality and documentation. Mentor junior engineers, encouraging a culture of collaboration. Proficiency in developing and debugging software, with a preference for Python, though familiarity with additional programming languages is valued and encouraged.
What You'll Need to be Successful Bachelor's/Master's degree in computer science with 5+ years of industry experience in software development, along with experience building Machine Learning models and deploying them in production environments. Proficiency working in cloud computing environments (AWS, Azure, GCP), Machine Learning frameworks, and software development best practices. Work with technological innovations in AI & ML (especially GenAI). Experience with design patterns and data structures. Good analytical, design, and debugging skills. Technologies you will work with: Python, LLMs, MLflow, Docker, Kubernetes, Terraform, AWS, GitLab, Postgres, Prometheus, Grafana
Posted 1 month ago
5.0 - 8.0 years
0 Lacs
Noida
Work from Office
Senior Full Stack Engineer We are seeking a Senior Full Stack Engineer to design, build and scale a portfolio of cloud-native products including real-time speech-assessment tools, GenAI content services, and analytics dashboards used by customers worldwide. You will own end-to-end delivery across React/Next.js front-ends, Node/Python micro-services, and a MongoDB-centric data layer, all orchestrated in containers on Kubernetes, while championing multi-tenant SaaS best practices and modern MLOps. Role: Product & Architecture • Design multi-tenant SaaS services with isolated data planes, usage metering, and scalable tenancy patterns. • Lead MERN-driven feature work: SSR/ISR dashboards in Next.js, REST/GraphQL APIs in Node.js or FastAPI, and event-driven pipelines for AI services. • Build and integrate AI/ML & GenAI modules (speech scoring, LLM-based content generation, predictive analytics) into customer-facing workflows. DevOps & Scale • Containerise services with Docker, automate deployment via Helm/Kubernetes, and implement blue-green or canary roll-outs in CI/CD. • Establish observability for latency, throughput, model inference time, and cost-per-tenant across micro-services and ML workloads. Leadership & Collaboration • Conduct architecture reviews, mentor engineers, and promote a culture that pairs AI-generated code with rigorous human code review. • Partner with Product and Data teams to align technical designs with measurable business KPIs for AI-driven products. 
Required Skills & Experience • Front-End React 18, Next.js 14, TypeScript, modern CSS/Tailwind • Back-End Node 20 (Express/Nest) and Python 3.11 (FastAPI) • Databases MongoDB Atlas, aggregation pipelines, TTL/compound indexes • AI / GenAI Practical ML model integration, REST/streaming inference, prompt engineering, model fine-tuning workflows • Containerisation & Cloud Docker, Kubernetes, Helm, Terraform; production experience on AWS/GCP/Azure • SaaS at Scale Multi-tenant data isolation, per-tenant metering & rate-limits, SLA design • CI/CD & Quality GitHub Actions/GitLab CI, unit + integration testing (Jest, Pytest), E2E testing (Playwright/Cypress) Preferred Candidate Profile • Production experience with speech analytics or audio ML pipelines. • Familiarity with LLMOps (vector DBs, retrieval-augmented generation). • Terraform-driven multi-cloud deployments or FinOps optimization. • OSS contributions in MERN, Kubernetes, or AI libraries. Tech Stack & Tooling - React 18 • Next.js 14 • Node 20 • FastAPI • MongoDB Atlas • Redis • Docker • Kubernetes • Helm • Terraform • GitHub Actions • Prometheus + Grafana • OpenTelemetry • Python/Rust micro-services for ML inference
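The per-tenant metering and rate-limit requirement above is commonly implemented as one token bucket per tenant, refilled over time. A deterministic sketch with an injected clock (the capacity and refill rate are example values; production systems typically keep the buckets in Redis rather than process memory):

```python
# Sketch of per-tenant rate limiting with token buckets. Capacity and refill
# rate are illustrative; the clock is injected so the example is deterministic.

class TenantRateLimiter:
    def __init__(self, capacity: float, refill_per_sec: float, clock):
        self.capacity = capacity
        self.refill = refill_per_sec
        self.clock = clock
        self.buckets = {}  # tenant -> (tokens, last_timestamp)

    def allow(self, tenant: str) -> bool:
        now = self.clock()
        tokens, last = self.buckets.get(tenant, (self.capacity, now))
        # Refill proportionally to elapsed time, capped at capacity.
        tokens = min(self.capacity, tokens + (now - last) * self.refill)
        if tokens >= 1:
            self.buckets[tenant] = (tokens - 1, now)
            return True
        self.buckets[tenant] = (tokens, now)
        return False

t = [0.0]  # fake clock
limiter = TenantRateLimiter(capacity=2, refill_per_sec=1.0, clock=lambda: t[0])
print(limiter.allow("acme"), limiter.allow("acme"), limiter.allow("acme"))
# True True False
t[0] = 1.0  # one second later, one token has refilled
print(limiter.allow("acme"))  # True
```

Because each tenant has its own bucket, one noisy tenant exhausts only its own budget, which is the isolation property multi-tenant SaaS designs aim for.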
Posted 1 month ago
1.0 - 3.0 years
3 - 7 Lacs
Thane
Work from Office
Role & responsibilities : Deploy, configure, and manage infrastructure across cloud platforms like AWS, Azure, and GCP. Automate provisioning and configuration using tools such as Terraform. Design and maintain CI/CD pipelines using Jenkins, GitLab CI, or CircleCI to streamline deployments. Build, manage, and deploy containerized applications using Docker and Kubernetes. Set up and manage monitoring systems like Prometheus and Grafana to ensure performance and reliability. Write scripts in Bash or Python to automate routine tasks and improve system efficiency. Collaborate with development and operations teams to support deployments and troubleshoot issues. Investigate and resolve technical incidents, performing root cause analysis and implementing fixes. Apply security best practices across infrastructure and deployment workflows. Maintain documentation for systems, configurations, and processes to support team collaboration. Continuously explore and adopt new tools and practices to improve DevOps workflows.
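The scripting duty above ("write scripts in Bash or Python to automate routine tasks") often starts with log summarization. A small sketch that counts ERROR lines per service; the log format and service names are invented for the example:

```python
# Small automation sketch: summarize error counts per service from plain-text
# log lines. The "timestamp service LEVEL message" format is an invented example.
from collections import Counter

LOG = """\
2024-05-01T10:00:01 payments ERROR timeout talking to gateway
2024-05-01T10:00:02 payments INFO retry succeeded
2024-05-01T10:00:03 auth ERROR invalid signing key
2024-05-01T10:00:04 payments ERROR timeout talking to gateway
"""

def error_counts(log_text: str) -> dict:
    counts = Counter()
    for line in log_text.splitlines():
        parts = line.split(maxsplit=3)  # timestamp, service, level, message
        if len(parts) >= 3 and parts[2] == "ERROR":
            counts[parts[1]] += 1
    return dict(counts)

print(error_counts(LOG))  # {'payments': 2, 'auth': 1}
```

The same counts could feed an alerting rule or a Grafana panel once exported as metrics.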
Posted 1 month ago
3.0 - 8.0 years
12 - 16 Lacs
Noida, Uttarpradesh
Work from Office
About the Role: Grade Level (for internal use): 10 The Team The TechOps team is responsible for providing high-quality technical support across a wide suite of products within the PVR business segment. The TechOps team works closely with a highly competent Client Services team and the core project teams to resolve client issues and improve the platform. Our work helps ensure that all products are provided a high-quality service, maintaining client satisfaction. The team is responsible for owning and maintaining our cloud-hosted apps. The Impact The role is extremely critical in ensuring a positive client experience by maintaining high availability of business-critical services and applications. What's in it for you The role provides the successful candidate: Opportunity to interact and engage with senior technology and operations users Work on the latest in technology like AWS, Terraform, Splunk, Grafana, etc. Work in an environment which allows for complete ownership and scalability Exposure to industry-leading OTC Derivatives and Pricing products Responsibilities We are looking for a seasoned TechOps/App Support Engineer with experience delivering all aspects of production support for various applications along with some 3rd-party vendor products, including incident resolution or proactive mitigation, change & problem management; ensuring compliance with all agreed SLAs and requirements; managing and maintaining cloud infrastructure; performing a variety of tasks including coordinating all resources and stakeholders, planning and setting milestones, assigning responsibilities, and monitoring, summarizing, and communicating progress and status; mentoring and developing junior members of the team; and the ability to act as SME and independently handle client issues and incident escalations. What We're Looking For Someone with 7+ years (Grade 10) of Application Support/TechOps experience with team
management experience of over 3 years, having the below skills: Knowledge of cloud technologies like AWS, Containerization, CI/CD, GitHub/GitLab, Cloud networking - Must Oracle, PL/SQL Query, Linux/Unix Shell Scripting (or Python), Java - Must Keen problem solver with an analytical nature and excellent problem-solving skillset Open-minded, flexible, and willing to adapt to changing situations Be able to work flexible hours including some weekends and possibly public holidays to meet work demands and project deadlines Excellent communication skills, both written and verbal, in the English language, with the ability to represent complex technical issues/concepts to non-tech stakeholders
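The SLA-compliance duty above can be sketched as a simple check: did each incident's resolution time stay within the limit agreed for its priority? The priority-to-minutes table below is a made-up example, not the team's actual SLA matrix:

```python
# Illustrative SLA check for incident resolution times. The priority limits
# are invented example values.
from datetime import datetime, timedelta

SLA_MINUTES = {"P1": 60, "P2": 240, "P3": 1440}

def met_sla(priority: str, opened: datetime, resolved: datetime) -> bool:
    """True if the incident was resolved within its priority's SLA window."""
    limit = timedelta(minutes=SLA_MINUTES[priority])
    return (resolved - opened) <= limit

opened = datetime(2024, 5, 1, 9, 0)
print(met_sla("P1", opened, datetime(2024, 5, 1, 9, 45)))   # True: 45 min <= 60
print(met_sla("P1", opened, datetime(2024, 5, 1, 10, 30)))  # False: 90 min > 60
```

Aggregating these booleans per month gives the SLA-compliance percentage typically reported to clients.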
Posted 1 month ago
5.0 - 10.0 years
3 - 7 Lacs
Bengaluru
Work from Office
Title: AWS, SQL, Snowflake, Control-M, ServiceNow - Operational Engineer (Weekend on-call) Req ID: 325686 We are currently seeking an AWS, SQL, Snowflake, Control-M, ServiceNow - Operational Engineer (Weekend on-call) to join our team in Bangalore, Karnataka (IN-KA), India (IN). Minimum Experience on Key Skills - 5 to 10 years Skills: AWS, SQL, Snowflake, Control-M, ServiceNow - Operational Engineer (Weekend on-call) We are looking for an operational engineer who is ready to work on weekends for on-call as the primary criterion. Skills we look for: AWS cloud (SQS, SNS, DynamoDB, EKS), SQL (PostgreSQL, Cassandra), Snowflake, Control-M/AutoSys/Airflow, ServiceNow, Datadog, Splunk, Grafana, Python/shell scripting.
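The scheduler skills listed above (Control-M, AutoSys, Airflow) all revolve around one idea: jobs form a dependency graph, and the scheduler runs each job only after its upstreams finish. The stdlib `graphlib` module can sketch that ordering; the job graph below is a toy example:

```python
# Sketch of the dependency ordering a scheduler like Control-M or Airflow
# performs: topologically sort jobs so each runs after its upstreams.
# The job names and graph are invented for illustration.
from graphlib import TopologicalSorter

# job -> set of jobs it depends on
deps = {
    "extract_s3": set(),
    "load_snowflake": {"extract_s3"},
    "quality_check": {"load_snowflake"},
    "publish_report": {"load_snowflake", "quality_check"},
}

order = list(TopologicalSorter(deps).static_order())
print(order)  # a valid run order, e.g. extract -> load -> check -> publish
```

`TopologicalSorter` also raises `CycleError` on circular dependencies, which mirrors the validation schedulers perform before activating a job chain.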
Posted 1 month ago
5.0 - 10.0 years
13 - 17 Lacs
Pune
Work from Office
Req ID: 301172 We are currently seeking a Cloud Solution Delivery Lead Consultant to join our team in Pune, Maharashtra (IN-MH), India (IN). Location - Remote The AWS Lead Engineer will be required to design and build the cloud foundations platform: translating project-specific needs into a cloud structure, designing the cloud environment when required to cover all requirements with appropriate weight given to the security aspect, carrying out deployment and integration of applications in the designed cloud environment, and understanding the needs of the business/client and implementing cloud strategies that meet those needs. The candidate will also need good experience with software development principles, IaC, and GitHub as DevOps tooling. Provide the necessary design to the team for building cloud infrastructure solutions; train and guide the team in provisioning, using, and integrating the cloud services proposed in the design. Skills: Must-haves: 5+ years of proficient experience with AWS Cloud (AWS Core) 3+ years of relevant experience working on designing cloud infrastructure solutions and cloud account migration Proficient in cloud networking and network configuration Proficient in Terraform for managing Infrastructure as Code (module-based provisioning of infra, connectivity, provisioning of data services, monitoring services) Proficient in GitHub and implementing CI/CD for infrastructure using IaC with GitHub Actions AWS CLI Experience working with these AWS services: IAM Accounts, IAM Users & Groups, IAM Roles, Access Control (RBAC, ABAC), Compute (EC2 types and costing), Storage (EBS, EFS, S3, etc.), VPC, VPC Peering, Security Groups, Notification & Queue services, NACL, Auto Scaling Groups, CloudWatch, DNS, Application Load Balancer, Directory Services and Identity Federation, AWS Organizations and Control Tower, AWS tagging configuration, Certificate Management Monitoring tools such as Amazon CloudWatch, with hands-on experience with CloudWatch Logs.
Examples of daily activities: account provisioning support, policy provisioning, network support, resource deployment support, incident support on daily work, and security incident support. DevOps experience: GitHub and GitHub Actions, Terraform, Python, Go, Grafana, ArgoCD. Nice-to-haves: Docker, Kubernetes; able to work with both imperative and declarative ways to set up Kubernetes resources/services.
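The AWS tagging configuration mentioned above is usually enforced as a policy check: every resource must carry a required set of cost and ownership tags before provisioning is approved. A hedged sketch; the tag names and resources are invented examples, not an AWS-mandated set:

```python
# Illustrative tagging-standards check: report resources missing any of the
# required tags. REQUIRED_TAGS and the resources are made-up examples.

REQUIRED_TAGS = {"Owner", "CostCenter", "Environment"}

def missing_tags(resource_tags: dict) -> set:
    """Tags that the standard requires but the resource lacks."""
    return REQUIRED_TAGS - set(resource_tags)

resources = {
    "ec2-web-1": {"Owner": "platform", "CostCenter": "1001", "Environment": "prod"},
    "s3-logs":   {"Owner": "platform"},
}

violations = {name: sorted(missing_tags(tags))
              for name, tags in resources.items() if missing_tags(tags)}
print(violations)  # {'s3-logs': ['CostCenter', 'Environment']}
```

In practice the same rule would run as a Terraform validation or an AWS Config rule rather than a standalone script.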
Posted 1 month ago
2.0 - 5.0 years
1 - 6 Lacs
Noida, Hyderabad
Work from Office
We are currently seeking a GCP DevOps Engineer to join our team in Ban/Hyd/Chn/Gur/Noida, Karnataka (IN-KA), India (IN). Responsibilities Design, implement, and manage GCP infrastructure using Infrastructure as Code (IaC) tools. Develop and maintain CI/CD pipelines to improve development workflows. Monitor system performance and ensure high availability of cloud resources. Collaborate with development teams to streamline application deployments. Maintain security best practices and compliance across the cloud environment. Automate repetitive tasks to enhance operational efficiency. Troubleshoot and resolve infrastructure-related issues in a timely manner. Document procedures, policies, and configurations for the infrastructure. Skills Google Cloud Platform (GCP) Terraform Ansible CI/CD Kubernetes Docker Python Bash/Shell Scripting Monitoring tools (e.g., Prometheus, Grafana) Cloud Security Jenkins Git
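The IaC workflow above rests on one core operation, which Terraform calls a plan: diff the desired configuration against the actual deployed state and decide what to create, update, or delete. A toy version of that diff (resource names and fields are illustrative only):

```python
# Toy version of the "plan" step an IaC tool performs: compare desired vs
# actual resources and compute the change set. Names/fields are invented.

def plan(desired: dict, actual: dict) -> dict:
    create = sorted(set(desired) - set(actual))
    delete = sorted(set(actual) - set(desired))
    update = sorted(name for name in set(desired) & set(actual)
                    if desired[name] != actual[name])
    return {"create": create, "update": update, "delete": delete}

desired = {"vm-a": {"size": "e2-small"}, "vm-b": {"size": "e2-medium"}}
actual  = {"vm-a": {"size": "e2-micro"}, "vm-c": {"size": "e2-small"}}

print(plan(desired, actual))
# {'create': ['vm-b'], 'update': ['vm-a'], 'delete': ['vm-c']}
```

Real tools add dependency ordering and provider APIs on top, but reviewing this change set before applying it is exactly what makes IaC deployments auditable.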
Posted 1 month ago
4.0 - 7.0 years
5 - 9 Lacs
Noida
Work from Office
Proficiency in Go programming language (Golang). Solid understanding of RESTful API design and microservices architecture. Experience with SQL and NoSQL databases (e.g., PostgreSQL, MongoDB, Redis). Familiarity with container technologies (Docker, Kubernetes). Understanding of distributed systems and event-driven architecture. Version control with Git. Familiarity with CI/CD pipelines and cloud platforms (AWS, GCP, Azure). Experience with message brokers (Kafka, RabbitMQ). Knowledge of GraphQL. Exposure to performance tuning and profiling. Contributions to open-source projects or personal GitHub portfolio. Familiarity with monitoring tools (Prometheus, Grafana, ELK). Roles and Responsibilities Design, develop, and maintain backend services and APIs using Go (Golang). Write efficient, scalable, and reusable code. Collaborate with front-end developers, DevOps engineers, and product teams to deliver high-quality features. Optimize applications for performance and scalability. Develop unit and integration tests to ensure software quality. Implement security and data protection best practices. Troubleshoot and debug production issues. Participate in code reviews, architecture discussions, and continuous improvement processes.
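The event-driven architecture and message-broker items above share one core pattern: producers publish events to a topic, and the broker fans them out to subscribers. A minimal in-process sketch of that idea (written in Python for brevity, though the role itself is Go-focused; a broker like Kafka or RabbitMQ adds durability, partitioning, and delivery guarantees on top):

```python
# Minimal in-process publish/subscribe sketch of the event-driven pattern.
# Topic names and the event payload are invented for illustration.
from collections import defaultdict

class Bus:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        """Register a callable to receive every event on the topic."""
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        """Fan the event out to all handlers subscribed to the topic."""
        for handler in self.subscribers[topic]:
            handler(event)

bus = Bus()
received = []
bus.subscribe("orders.created", received.append)
bus.publish("orders.created", {"order_id": 42})
print(received)  # [{'order_id': 42}]
```

Decoupling producers from consumers this way is what lets services scale and fail independently in the microservices designs the posting describes.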
Posted 1 month ago
8.0 - 12.0 years
30 - 35 Lacs
Pune, Chennai
Work from Office
Mandatory Skills: SRE, DevOps, Scripting (Python/Bash/Perl), Automation Tools (Ansible/Terraform/Puppet), AWS Cloud, Docker, Kubernetes, Observability Tools (Prometheus/Grafana/ELK Stack/Splunk), CI/CD pipelines using GitLab, Jenkins, or similar tools. Please share your resume to thulasidharan.b@ltimindtree.com Note: Only 0-30 days notice period.
Posted 1 month ago