4.0 - 8.0 years
0 Lacs
Punjab
On-site
As a skilled and versatile NodeJS & Python Engineer, you will play a crucial role in designing, developing, and maintaining robust server-side logic and APIs to support a suite of applications. Collaborating closely with front-end developers, cross-functional engineering teams, and product stakeholders, you will ensure smooth integration of user-facing features with back-end functionality. Leveraging the Serverless framework within an AWS Lambda environment is central to your responsibilities. Your expertise in NodeJS and proficiency in Python are essential for supporting and extending services written in both languages. Your adaptability and experience across multiple technologies will be pivotal in building scalable, high-performance, and secure applications for a global user base. Escalon, a rapidly growing company offering essential back-office services worldwide, relies on its engineering team to develop tools and platforms that drive success and scalability for both Escalon and its clients.

Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field, or 4+ years of enterprise software development experience.
- Minimum 4 years of hands-on experience with NodeJS in the Serverless framework.
- Professional experience in Python (2+ years) for back-end scripting and service development.
- Solid grasp of object-oriented programming principles in JavaScript and Python.
- Experience with the AWS serverless environment, including Lambda, Fargate, S3, RDS, SQS, SNS, Kinesis, and Parameter Store.
- Understanding of asynchronous programming patterns and their challenges.
- Knowledge of front-end technologies such as HTML5 and templating systems.
- Proficiency in designing and developing loosely coupled serverless applications and REST APIs.
- Strong experience with SQL and database schema design.
- Familiarity with service-oriented architecture (SOA) principles and microservices best practices.
- Effective verbal and written communication skills.
- Experience with modern software engineering practices such as version control (Git), CI/CD, unit testing, and agile development.
- Strong analytical, problem-solving, and debugging skills.

Responsibilities:
- Write reusable, testable, and efficient code in both NodeJS and Python.
- Develop and maintain unit tests and automated testing coverage.
- Integrate front-end elements with server-side logic in a Serverless architecture.
- Design and implement low-latency, high-availability, and high-performance applications.
- Ensure security, data protection, and adherence to compliance standards.
- Build and consume RESTful APIs and microservices using AWS Lambda and related services.
- Actively participate in code reviews, design discussions, and architecture planning.
- Promote the use of quality open-source libraries, considering licensing and long-term support.
- Leverage and enhance the existing CI/CD DevOps pipeline for code integration and deployment.
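Services deployed with the Serverless framework ultimately expose plain Lambda handler functions. As a rough illustration of the API Gateway proxy pattern the posting describes (the greeting payload and field names are invented for the example, not taken from the posting), a minimal Python handler might look like:

```python
import json


def handler(event, context):
    """Minimal AWS Lambda handler behind an API Gateway proxy route.

    Parses the JSON request body and returns an API Gateway-style
    proxy response dict. Purely illustrative.
    """
    try:
        body = json.loads(event.get("body") or "{}")
    except json.JSONDecodeError:
        return {"statusCode": 400, "body": json.dumps({"error": "invalid JSON"})}
    name = body.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Because the handler is a plain function of `(event, context)`, it can be unit-tested by calling it directly with a fake event, which is what the posting's unit-testing requirement amounts to in practice.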
Posted 1 day ago
7.0 - 11.0 years
0 Lacs
Delhi
On-site
As a CBRE Software Senior Engineer, you will work under broad direction to supervise, develop, maintain, and enhance client systems. This role is part of the Software Engineering job function and requires successfully executing and monitoring system improvements to increase efficiency.

Responsibilities:
- Develop, maintain, enhance, and test client systems of moderate to high complexity.
- Execute the full software development life cycle (SDLC) to build high-quality, innovative, performant software.
- Conduct thorough code reviews to ensure high-quality code.
- Estimate technical effort for agile sprint stories.
- Implement performance-optimized solutions and improve the performance of existing systems.
- Serve as the primary technical point of contact on client engagements.
- Investigate and resolve complex data-system and software issues in the production environment.
- Design and implement strategic partner integrations.
- Participate in the specification and design of new features at client or business request.
- Evaluate new platforms, tools, and technologies.
- Coach others to develop in-depth knowledge and expertise in most or all areas within the function.
- Provide informal assistance such as technical guidance, code review, and training to coworkers.
- Apply advanced knowledge to seek and develop new, better methods for accomplishing individual and department objectives.
- Showcase expertise in your job discipline and in-depth knowledge of other job disciplines within the organization function.
- Lead by example and model behaviors consistent with CBRE RISE values.
- Anticipate potential objections and persuade others, often at senior levels and of divergent interests, to adopt a different point of view.
- Impact the achievement of customer operational project or service objectives across multidiscipline teams.
- Contribute to new products, processes, standards, and/or operational plans in support of achieving functional goals.
- Communicate difficult and complex ideas with the ability to influence.

Qualifications:
- Bachelor's degree preferred, with 7-9 years of relevant experience. In lieu of a degree, a combination of experience and education will be considered.
- Knowledge of Java, Spring Boot, VueJS, unit testing, AWS services (ECS, Fargate, Lambda, RDS, S3, Step Functions), Bootstrap/CSS/CSS3, Docker, DynamoDB, JavaScript/jQuery, microservices, SNS, and SQS.
- Optional knowledge of .NET, Python, Angular, SQL Server, AppD, and New Relic.
- Innovative mentality to develop methods that go beyond existing solutions.
- Ability to solve unique problems using standard and innovative solutions with a broad impact on the business.
- Expert organizational skills with an advanced inquisitive mindset.

Required Skills:
- Angular
- AWS API Gateway
- AWS CloudFormation
- AWS Lambda
- AWS RDS
- AWS S3
- AWS Step Functions
- Bootstrap/CSS/CSS3
- Docker
- DynamoDB
- Java
- JavaScript/jQuery
- Microservices
- SNS
- Spring Boot
- SQS
Posted 2 days ago
5.0 - 7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
It's fun to work in a company where people truly BELIEVE in what they're doing! We're committed to bringing passion and customer focus to the business.

Job Description
This role requires working from our local Hyderabad office 2-3x a week.

WHAT YOU'LL DO:
- Work with software development teams to enable reliable and stable software services
- Develop software solutions that enhance the reliability and performance of services
- Optimize software release and deployment of ABC systems and cloud infrastructure in AWS
- Be an advocate for availability, reliability, and scalability practices
- Help teams define and adhere to Service Level Objectives and standard processes
- Enable the product engineering teams by supporting the automated deployment pipelines
- Collaborate with product development as an advocate for scalable architectural approaches
- Advocate for infrastructure and application security practices in the development process
- Respond to production incidents in a balanced and compensated rotation with other SREs and Senior Engineers
- Lead a culture of learning and continuous improvement through incident postmortems and retrospectives

WHAT YOU'LL NEED:
- 5+ years of demonstrable experience in our tech stack
- Proficiency in one programming language: Go, PHP, NodeJS, or Java
- Infrastructure running 100% in AWS
- Service-oriented architecture deployed on ECS Fargate & Lambda
- Databases: MySQL, Postgres, MongoDB, DynamoDB, Redshift
- Infrastructure automation with Terraform
- Observability & monitoring: Honeycomb, New Relic, CloudWatch, Grafana
- CI/CD pipelines with GitHub, CircleCI, and Jenkins
- Willingness to be part of a rotating on-call schedule
- Openness to irregular work hours to support teams in different time zones

WHAT'S IN IT FOR YOU:
- Purpose-led company with a Values-focused culture: Best Life, One Team, Growth Mindset
- Time off: competitive PTO plans with 15 days of earned accrued leave, 12 days of sick leave, and 12 days of casual leave per year
- 11 holidays plus 4 Days of Disconnect: once a quarter, we take a collective breather and enjoy a day off together around the globe. #oneteam
- Group Mediclaim insurance coverage of INR 500,000 for employee + spouse, 2 kids, and parents or parents-in-law, including EAP counseling
- Life Insurance and Personal Accident Insurance
- Best Life Perk: we are committed to meeting you wherever you are in your fitness journey, with a quarterly reimbursement
- Premium Calm App: enjoy tranquility with a Calm App subscription for you and up to 4 dependents over the age of 16
- Support for working women, with financial aid towards a crèche facility, ensuring a safe and nurturing environment for their little ones while they focus on their careers

We're committed to diversity and passion, and encourage you to apply, even if you don't demonstrate all the listed skillsets!

ABC'S COMMITMENT TO DIVERSITY, EQUALITY, BELONGING AND INCLUSION:
ABC is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. We are intentional about creating an environment where employees, our clients, and other stakeholders feel valued and inspired to reach their full potential and make authentic connections. We foster a workplace culture that embraces each person's diversity, including the extent to which they are similar or different. ABC leaders believe that an equitable and inclusive culture is not only the right thing to do, it is a business imperative. Read more about our commitment to diversity, equality, belonging and inclusion at abcfitness.com

ABOUT ABC:
ABC Fitness (abcfitness.com) is the premier provider of software and related services for the fitness industry and has built a reputation for excellence in support for clubs and their members. ABC is the trusted provider to boost performance and create a total fitness experience for over 41 million members of clubs of all sizes, whether a multi-location chain, franchise, or an independent gym.
Founded in 1981, ABC helps over 31,000 gyms and health clubs globally perform better and more profitably, offering a comprehensive SaaS club management solution that enables club operators to achieve optimal performance. ABC Fitness is a Thoma Bravo portfolio company, a private equity firm focused on investing in software and technology companies (thomabravo.com). If you like wild growth and working with happy, enthusiastic over-achievers, you'll enjoy your career with us!
Posted 2 days ago
10.0 - 14.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Role: Cloud Architect - Analytics & Data Products

We're looking for a Cloud Architect / Lead to design, build, and manage scalable AWS infrastructure that powers our analytics and data product initiatives. This role focuses on automating infrastructure provisioning, application/API hosting, and enabling data and GenAI workloads through a modern, secure cloud environment.

Key Responsibilities:
- Design and provision AWS infrastructure using Terraform or AWS CloudFormation to support evolving data product needs.
- Develop and manage CI/CD pipelines using Jenkins, AWS CodePipeline, CodeBuild, or GitHub Actions.
- Deploy and host internal tools, APIs, and applications using ECS, EKS, Lambda, API Gateway, and ELB.
- Provision and support analytics and data platforms using S3, Glue, Redshift, Athena, Lake Formation, and orchestration tools like Step Functions or Apache Airflow (MWAA).
- Implement cloud security, networking, and compliance using IAM, VPC, KMS, CloudWatch, CloudTrail, and AWS Config.
- Collaborate with data engineers, ML engineers, and analytics teams to align infrastructure with application and data product requirements.
- Support GenAI infrastructure, including Amazon Bedrock, SageMaker, or integrations with APIs like OpenAI.

Requirements:
- 10-14 years of experience in cloud engineering, DevOps, or cloud architecture roles.
- Strong hands-on expertise with the AWS ecosystem and tools listed above.
- Proficiency in scripting (e.g., Python, Bash) and infrastructure automation.
- Experience deploying containerized workloads using Docker, ECS, EKS, or Fargate.
- Familiarity with data engineering and GenAI workflows is a plus.
- AWS certifications (e.g., Solutions Architect, DevOps Engineer) are preferred.
Posted 2 days ago
4.0 - 6.0 years
5 - 20 Lacs
Pune, Maharashtra, India
On-site
Job Summary
We are looking for an experienced AWS Cloud Engineer with a strong DevOps and scripting background to support, optimize, and automate cloud infrastructure in a dynamic fintech environment. The ideal candidate should have hands-on expertise in AWS services, containerization, monitoring tools, and deployment automation.

Key Responsibilities:
- Design and implement secure, scalable infrastructure solutions on AWS
- Deploy applications using EC2, Lambda, Fargate, ECS, and ECR
- Automate infrastructure with CloudFormation, Python/Bash scripting, and the AWS SDK
- Manage and optimize relational and NoSQL databases (PostgreSQL, SQL, MongoDB)
- Set up monitoring using Grafana, Prometheus, and AWS CloudWatch
- Implement cost-saving strategies and optimize AWS usage
- Ensure high availability, security, and disaster recovery compliance
- Manage containerization and orchestration using Kubernetes (EKS)
- Build and maintain CI/CD pipelines for efficient deployments
- Collaborate with product and development teams on DevOps planning and execution
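Automation "with Python scripting and the AWS SDK" of the kind this posting describes often reduces to small scripts over the SDK's response dicts. The sketch below is one illustrative example (the tag name and function names are assumptions): it finds running EC2 instances missing a required tag and stops them, with the client passed in so the logic can be tested without AWS credentials.

```python
def find_untagged_running(ec2, required_tag="Owner"):
    """Return IDs of running instances that lack `required_tag`.

    `ec2` is any object exposing the boto3 EC2 client's
    describe_instances interface, so the logic is testable offline.
    """
    resp = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )
    missing = []
    for reservation in resp.get("Reservations", []):
        for inst in reservation.get("Instances", []):
            tags = {t["Key"] for t in inst.get("Tags", [])}
            if required_tag not in tags:
                missing.append(inst["InstanceId"])
    return missing


def stop_untagged(ec2, required_tag="Owner"):
    """Stop every running instance missing the required tag."""
    ids = find_untagged_running(ec2, required_tag)
    if ids:
        ec2.stop_instances(InstanceIds=ids)
    return ids
```

In production `ec2` would be `boto3.client("ec2")`; the nested `Reservations`/`Instances` shape above matches the DescribeInstances response format.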
Posted 1 week ago
5.0 - 13.0 years
0 Lacs
Pune, Maharashtra
On-site
We are seeking talented and experienced individuals to join our engineering team in the roles of Staff Development Engineer and Senior Software Development Engineer (SDE 3). As a member of our team, you will take ownership of complex projects, designing and constructing high-performance, scalable systems. In the SDE 3 role, you will play a crucial part in ensuring that the solutions we develop are not only robust but also efficient. This is a hands-on position that requires you to lead projects from concept to deployment, ensuring the delivery of top-notch, production-ready code. Given the fast-paced environment, strong problem-solving skills and a dedication to crafting exceptional software are indispensable.

Your responsibilities will include:
- Developing high-quality, secure, and scalable enterprise-grade backend components in alignment with technical requirements, specifications, and design artifacts, within the expected time and budget.
- Demonstrating a proficient understanding of the choice of technology and its application, supported by thorough research.
- Identifying, troubleshooting, and ensuring the timely resolution of software defects.
- Participating in functional specification, design, and code reviews.
- Adhering to established practices for the development and upkeep of application code.
- Taking an active role in reducing technical debt across our various codebases.

We are looking for candidates with the following qualifications:
- Proficiency in Python programming and frameworks such as Flask/FastAPI.
- Prior experience in building REST API-based microservices.
- Excellent knowledge of and hands-on experience with RDBMS (e.g., MySQL, PostgreSQL), message brokers, caching, and queueing systems.
- Experience with NoSQL databases is preferred.
- An aptitude for research and development, exploring new topics and use cases.
- Hands-on experience with AWS services like EC2, SQS, Fargate, Lambda, and S3.
- Knowledge of Docker for application containerization.
- Cybersecurity knowledge is considered advantageous.
- A strong technical background with the ability to swiftly adapt to emerging technologies.
- Desired experience: 5-13 years in software engineering for Staff or SDE 3 roles.

Working Conditions: This role requires full-time office-based work; remote work arrangements are not available.

Company Culture: At Fortinet, we uphold a culture of innovation, collaboration, and continuous learning. We are dedicated to fostering an inclusive environment where every employee is valued and respected. We encourage applications from individuals of all backgrounds and identities. Our competitive Total Rewards package is designed to help you manage your overall health and financial well-being. We also offer flexible work arrangements and a supportive work environment. If you are looking for a challenging, fulfilling, and rewarding career journey, we invite you to consider joining us and contributing solutions that have a meaningful and enduring impact on our 660,000+ global customers.
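The caching systems named in the qualifications are normally external services (e.g., Redis), but the core idea is small enough to sketch in-process. The decorator below is a toy TTL cache, not any particular library's API; all names are illustrative, and `clock` is injectable purely so the expiry logic can be tested deterministically.

```python
import time
from functools import wraps


def ttl_cache(ttl_seconds=60, clock=time.monotonic):
    """Memoize a function's results for `ttl_seconds`.

    A toy, in-process stand-in for a Redis-style cache: results are
    keyed by positional arguments and recomputed once they expire.
    """
    def decorator(fn):
        store = {}  # args -> (value, timestamp)

        @wraps(fn)
        def wrapper(*args):
            now = clock()
            hit = store.get(args)
            if hit is not None and now - hit[1] < ttl_seconds:
                return hit[0]  # still fresh: serve cached value
            value = fn(*args)
            store[args] = (value, now)
            return value

        return wrapper
    return decorator
```

A real service would use a shared cache so that all instances of the microservice see the same entries; the decorator shape stays the same.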
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
Surat, Gujarat
On-site
At devx, we specialize in helping some of India's most innovative brands unlock growth through AI-powered and cloud-native solutions built in collaboration with AWS. As a rapidly growing consultancy, we are dedicated to addressing real-world business challenges with cutting-edge technology.

We are currently seeking a proactive and customer-focused AWS Solutions Architect to join our team. In this position, you will collaborate directly with clients to craft scalable, secure, and cost-effective cloud architectures that tackle significant business obstacles. Your role will involve bridging the gap between business requirements and technical implementation, establishing yourself as a trusted advisor to our clients.

Key Responsibilities:
- Engage with clients to understand their business goals and translate them into cloud-based architectural solutions.
- Design, deploy, and document AWS architectures, emphasizing scalability, security, and performance.
- Develop solution blueprints and work closely with engineering teams to ensure successful execution.
- Conduct workshops, presentations, and in-depth technical discussions with client teams.
- Stay informed about the latest AWS offerings and best practices, integrating them into solution designs.
- Collaborate with sales, product, and engineering teams to provide comprehensive solutions.

We are looking for individuals with the following qualifications:
- Minimum of 2 years of experience designing and implementing solutions on AWS.
- Proficiency in fundamental AWS services such as EC2, S3, Lambda, RDS, API Gateway, IAM, and VPC.
- Proficiency in AI/ML and data services such as Bedrock, SageMaker, Glue, Athena, and Kinesis.
- Proficiency in DevOps services such as ECS, EKS, CI/CD pipelines, Fargate, and Lambda.
- Excellent written and verbal communication skills in English.
- Comfort in client-facing capacities, with the ability to lead technical dialogues and establish credibility with stakeholders.
- The ability to balance technical detail with business context, effectively communicating value to decision-makers.

Location: Surat, Gujarat
Note: This is an on-site role in Surat, Gujarat. Please apply only if you are willing to relocate.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Haryana
On-site
You should have proven experience with Java and Spring Boot, along with strong proficiency in AWS, particularly Lambda and EKS. An understanding of microservices architecture is essential for this role, and hands-on experience with unit testing using JUnit and Mockito is required. Experience with AWS services such as RDS and Fargate, as well as familiarity with CI/CD practices and tooling such as GitHub Actions and automated testing pipelines, would also be beneficial.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Maharashtra
On-site
We have an exciting and rewarding opportunity for you to take your software engineering career to the next level. As a Software Engineer III at JPMorgan Chase within the Consumer & Community Banking team, you serve as a seasoned member of an agile team, designing and delivering trusted, market-leading technology products in a secure, stable, and scalable way. You are responsible for carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm's business objectives.

**Job Responsibilities:**
- Executes creative software solutions, design, development, and technical troubleshooting with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems.
- Creates secure and high-quality production code and maintains algorithms that run synchronously with appropriate systems.
- Produces architecture and design artifacts for complex applications while being accountable for ensuring design constraints are met by software code development.
- Gathers, analyzes, synthesizes, and develops visualizations and reporting from large, diverse data sets in service of continuous improvement of software applications and systems.
- Builds microservices that will run on the bank's internal cloud and the public cloud platform (AWS).
- Collaborates with teams in multiple regions and time zones.
- Participates in scrum team stand-ups, code reviews, and other ceremonies, contributing to task completion and blocker resolution within your team.

**Required qualifications, capabilities, and skills:**
- Formal training or certification on software engineering concepts and 3+ years of applied experience in Java, AWS, and Terraform.
- Experience with technologies such as Java 11/17, Spring/Spring Boot, Kafka, and relational/non-relational databases (Oracle, Cassandra, DynamoDB, Postgres).
- Minimum 3 years of hands-on experience on a public cloud platform (AWS) building secure microservices.
- Hands-on experience with AWS services such as EKS, Fargate, SQS/SNS/EventBridge, Lambda, S3, EBS, and DynamoDB/Aurora Postgres, to name a few.
- Hands-on experience with Terraform scripts.
- Experience with DevOps concepts for automated build and deployment.
- Overall knowledge of the Software Development Life Cycle.
- Bachelor's degree in Computer Science, Software Engineering, Computer Engineering, or a related field, or equivalent relevant work experience.

**Preferred qualifications, capabilities, and skills:**
- AWS Developer/Solutions Architect certification.
- Understanding of design patterns for building microservices and database designs.
- Knowledge of tools like JMeter, Dynatrace, Splunk, WireMock, Jenkins, and Spinnaker.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
As a Technical Lead, you will be responsible for mentoring and guiding a team, and you will support the Technical Architect in designing and exploring new services and modules. Your expertise should include hands-on experience in Java, Spring Boot, and various Spring modules such as Spring MVC, Spring Data JPA, and Spring Actuator. Furthermore, you must have practical experience with various AWS services including EC2, S3, Lambda, API Gateway, EKS, RDS, Fargate, and CloudFormation. Proficiency in microservices-based architecture and RESTful web services is essential for this role. The ideal candidate will have a notice period of 15 days or less, with preference given to immediate joiners.
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
Indore, Madhya Pradesh
On-site
You should have expert-level proficiency in Python and Python frameworks, or in Java. You must have hands-on experience with AWS development, PySpark, Lambdas, CloudWatch (alerts), SNS, SQS, CloudFormation, Docker, ECS, Fargate, and ECR. Deep experience with key AWS services is required, including Compute (PySpark, Lambda, ECS), Storage (S3), Databases (DynamoDB, Snowflake), Networking (VPC, Route 53, CloudFront, API Gateway), DevOps/CI-CD (CloudFormation, CDK), Security (IAM, KMS, Secrets Manager), and Monitoring (CloudWatch, X-Ray, CloudTrail), as well as NoSQL databases like Cassandra and PostgreSQL.

You should have very strong hands-on knowledge of using Python for integrations between systems through different data formats. Expertise in deploying and maintaining applications in AWS, along with hands-on experience with Kinesis streams and auto-scaling, is essential. Designing and implementing distributed systems and microservices, and following best practices for scalability, high availability, and fault tolerance, are key responsibilities. Strong problem-solving and debugging skills are necessary for this role, along with the ability to lead technical discussions and mentor junior engineers. Excellent written and verbal communication skills are a must. You should be comfortable working in agile teams with modern development practices, collaborating with business and other teams to understand business requirements and work on project deliverables. Participation in requirements gathering, design of solutions based on the available framework and code, and experience with data engineering tools or ML platforms (e.g., Pandas, Airflow, SageMaker) are expected. An AWS certification such as AWS Certified Solutions Architect or Developer is preferred.

This position is based in multiple locations in India, including Indore, Mumbai, Noida, Bangalore, and Chennai.

Qualifications:
- Bachelor's degree or foreign equivalent required from an accredited institution. Consideration will be given to three years of progressive experience in the specialty in lieu of every year of education.
- At least 8+ years of Information Technology experience.
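"Integrations between systems through different data formats" usually means glue code along these lines. The stdlib-only sketch below (the record shape and function names are invented for illustration) converts CSV payloads into JSON-ready records and back:

```python
import csv
import io
import json


def csv_to_records(csv_text):
    """Parse CSV text into a list of dicts, ready for json.dumps."""
    return list(csv.DictReader(io.StringIO(csv_text)))


def records_to_csv(records):
    """Serialize a list of homogeneous dicts back to CSV text with a header."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(records[0]))
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()
```

In a real integration the same record list would be handed to `json.dumps` for an HTTP payload or to a Kinesis/SQS producer; keeping the format conversion as pure functions makes it trivially unit-testable.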
Posted 2 weeks ago
7.0 - 10.0 years
11 - 12 Lacs
Hyderabad
Work from Office
We are seeking a highly skilled DevOps Engineer to join our dynamic development team. In this role, you will be responsible for designing, developing, and maintaining both frontend and backend components of our applications using DevOps practices and associated technologies. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performing software solutions that meet our business needs. The ideal candidate will have a strong background in DevOps, experience with modern frontend frameworks, and a passion for full-stack development.

Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 7 to 10+ years of experience in full-stack development, with a strong focus on DevOps.

DevOps with AWS Data Engineer - Roles & Responsibilities:
- Use AWS services like EC2, VPC, S3, IAM, RDS, and Route 53.
- Automate infrastructure using Infrastructure as Code (IaC) tools like Terraform or AWS CloudFormation.
- Build and maintain CI/CD pipelines using tools like AWS CodePipeline, Jenkins, and GitLab CI/CD.
- Automate build, test, and deployment processes for Java applications.
- Use Ansible, Chef, or AWS Systems Manager for managing configurations across environments.
- Containerize Java apps using Docker; deploy and manage containers using Amazon ECS, EKS (Kubernetes), or Fargate.
- Set up monitoring and logging using Amazon CloudWatch, Prometheus + Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), and AWS X-Ray for distributed tracing.
- Manage access with IAM roles/policies; use AWS Secrets Manager / Parameter Store for managing credentials.
- Enforce security best practices, encryption, and audits.
- Automate backups for databases and services using AWS Backup, RDS snapshots, and S3 lifecycle rules; implement Disaster Recovery (DR) strategies.
- Work closely with development teams to integrate DevOps practices; document pipelines, architecture, and troubleshooting runbooks.
- Monitor and optimize AWS resource usage using AWS Cost Explorer, Budgets, and Savings Plans.

Must-Have Skills:
- Experience working on Linux-based infrastructure.
- Excellent understanding of Ruby, Python, Perl, and Java.
- Experience configuring and managing databases such as MySQL and MongoDB.
- Excellent troubleshooting skills.
- Selecting and deploying appropriate CI/CD tools.
- Working knowledge of various tools, open-source technologies, and cloud services.
- Awareness of critical concepts in DevOps and Agile principles.
- Managing stakeholders and external interfaces.
- Setting up tools and required infrastructure.
- Defining and setting development, testing, release, update, and support processes for DevOps operation.
- The technical skills to review, verify, and validate the software code developed in the project.
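Backup automation like the RDS-snapshot duties above is mostly retention arithmetic plus one SDK call. The pure function below (the names and the `(snapshot_id, created_at)` tuple shape are assumptions for illustration) selects which snapshots fall outside a retention window, leaving the actual deletion to the AWS SDK:

```python
from datetime import datetime, timedelta, timezone


def snapshots_to_delete(snapshots, retain_days=7, now=None):
    """Given (snapshot_id, created_at) pairs, return IDs older than the window.

    Keeping the retention decision as a pure function lets it be unit-tested
    with a fixed `now`; a wrapper script would pass the result to the SDK's
    snapshot-deletion call.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=retain_days)
    return [sid for sid, created in snapshots if created < cutoff]
```

The same shape works for AMIs, EBS snapshots, or S3 object versions: list, filter by age, delete.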
Posted 2 weeks ago
8.0 - 12.0 years
0 Lacs
Indore, Madhya Pradesh
On-site
You should possess expert-level proficiency in Python and Python frameworks, or in Java. Additionally, you must have hands-on experience with AWS development, PySpark, Lambdas, CloudWatch (alerts), SNS, SQS, CloudFormation, Docker, ECS, Fargate, and ECR. Your deep experience should cover key AWS services such as Compute (PySpark, Lambda, ECS), Storage (S3), Databases (DynamoDB, Snowflake), Networking (VPC, Route 53, CloudFront, API Gateway), DevOps/CI-CD (CloudFormation, CDK), Security (IAM, KMS, Secrets Manager), and Monitoring (CloudWatch, X-Ray, CloudTrail). Moreover, you should be proficient in NoSQL databases like Cassandra and PostgreSQL, and have strong hands-on knowledge of using Python for integrations between systems through different data formats. Your expertise should extend to deploying and maintaining applications in AWS, with hands-on experience in Kinesis streams and auto-scaling. Designing and implementing distributed systems and microservices, along with best practices for scalability, high availability, and fault tolerance, are also key aspects of this role.

You should have strong problem-solving and debugging skills, with the ability to lead technical discussions and mentor junior engineers. Excellent communication skills, both written and verbal, are essential. You should be comfortable working in agile teams with modern development practices, collaborating with business and other teams to understand business requirements and work on project deliverables. Participation in requirements gathering and understanding, designing solutions based on available frameworks and code, and experience with data engineering tools or ML platforms (e.g., Pandas, Airflow, SageMaker) are expected. An AWS certification (AWS Certified Solutions Architect or Developer) would be advantageous.

This position is based in multiple locations in India, including Indore, Mumbai, Noida, Bangalore, and Chennai. To qualify, you should hold a Bachelor's degree or a foreign equivalent from an accredited institution. Alternatively, three years of progressive experience in the specialty can be considered in lieu of each year of education. A minimum of 8+ years of Information Technology experience is required for this role.
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
You are seeking a hands-on backend expert to elevate your FastAPI-based platform to the next level by developing production-grade model-inference services, agentic AI workflows, and seamless integration with third-party LLMs and NLP tooling. In this role, you will be responsible for various key areas: 1. Core Backend Enhancements: - Building APIs - Strengthening security with OAuth2/JWT, rate-limiting, SecretManager, and enhancing observability through structured logging and tracing - Adding CI/CD, test automation, health checks, and SLO dashboards 2. Awesome UI Interfaces: - Developing UI interfaces using React.js/Next.js, Redact/Context, and various CSS frameworks like Tailwind, MUI, Custom-CSS, and Shadcn 3. LLM & Agentic Services: - Designing micro/mini-services to host and route to platforms such as OpenAI, Anthropic, local HF models, embeddings & RAG pipelines - Implementing autonomous/recursive agents that orchestrate multi-step chains for Tools, Memory, and Planning 4. Model-Inference Infrastructure: - Setting up GPU/CPU inference servers behind an API gateway - Optimizing throughput with techniques like batching, streaming, quantization, and caching using tools like Redis and pgvector 5. NLP & Data Services: - Managing the NLP stack with Transformers for classification, extraction, and embedding generation - Building data pipelines to combine aggregated business metrics with model telemetry for analytics You will be working with a tech stack that includes Python, FastAPI, Starlette, Pydantic, Async SQLAlchemy, Postgres, Docker, Kubernetes, AWS/GCP, Redis, RabbitMQ, Celery, Prometheus, Grafana, OpenTelemetry, and more. Experience in building production Python REST APIs, SQL schema design in Postgres, async patterns & concurrency, UI application development, RAG, LLM/embedding workflows, cloud container orchestration, and CI/CD pipelines is essential for this role. 
Additionally, experience with streaming protocols, NGINX Ingress, SaaS security hardening, data privacy, event-sourced data models, and other related technologies would be advantageous. This role offers the opportunity to work on evolving products, tackle real challenges, and lead the scaling of AI services while working closely with the founder to shape the future of the platform. If you are looking for meaningful ownership and the chance to solve forward-looking problems, this role could be the right fit for you.
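The rate-limiting mentioned among the security enhancements can be prototyped independently of any framework; below is a minimal token-bucket sketch in plain Python (the `TokenBucket` class and its parameters are illustrative assumptions, not part of the posting's stack):

```python
import time

class TokenBucket:
    """Per-client token bucket: refills at `rate` tokens/sec up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# 12 back-to-back requests against a bucket of 10: the first 10 are admitted,
# then the bucket is empty (the slow refill rate keeps the demo deterministic).
bucket = TokenBucket(rate=0.001, capacity=10)
results = [bucket.allow() for _ in range(12)]
```

In a FastAPI service this check would typically live in a dependency or middleware keyed by client identity, returning HTTP 429 when `allow()` is false.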
Posted 2 weeks ago
6.0 - 10.0 years
11 - 12 Lacs
Hyderabad
Work from Office
We are seeking a highly skilled DevOps Engineer to join our dynamic development team. In this role, you will be responsible for designing, developing, and maintaining both frontend and backend components of our applications using DevOps practices and associated technologies. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performing software solutions that meet our business needs. The ideal candidate will have a strong background in DevOps, experience with modern frontend frameworks, and a passion for full-stack development.

Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 6 to 10+ years of experience in full-stack development, with a strong focus on DevOps.

DevOps with AWS Data Engineer - Roles & Responsibilities:
- Use AWS services such as EC2, VPC, S3, IAM, RDS, and Route 53.
- Automate infrastructure using Infrastructure as Code (IaC) tools such as Terraform or AWS CloudFormation.
- Build and maintain CI/CD pipelines using tools such as AWS CodePipeline, Jenkins, or GitLab CI/CD.
- Automate build, test, and deployment processes for Java applications.
- Use Ansible, Chef, or AWS Systems Manager to manage configurations across environments.
- Containerize Java apps using Docker; deploy and manage containers with Amazon ECS, EKS (Kubernetes), or Fargate.
- Monitoring and logging with Amazon CloudWatch, Prometheus + Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), and AWS X-Ray for distributed tracing.
- Manage access with IAM roles/policies; use AWS Secrets Manager / Parameter Store for managing credentials.
- Enforce security best practices, encryption, and audits.
- Automate backups for databases and services using AWS Backup, RDS snapshots, and S3 lifecycle rules; implement Disaster Recovery (DR) strategies.
- Work closely with development teams to integrate DevOps practices (cross-functional collaboration).
- Document pipelines, architecture, and troubleshooting runbooks.
- Monitor and optimize AWS resource usage.
Use AWS Cost Explorer, Budgets, and Savings Plans.

Must-Have Skills:
- Experience working on Linux-based infrastructure.
- Excellent understanding of Ruby, Python, Perl, and Java.
- Configuring and managing databases such as MySQL and MongoDB.
- Excellent troubleshooting skills.
- Selecting and deploying appropriate CI/CD tools.
- Working knowledge of various tools, open-source technologies, and cloud services.
- Awareness of critical concepts in DevOps and Agile principles.
- Managing stakeholders and external interfaces.
- Setting up tools and required infrastructure.
- Defining and setting development, testing, release, update, and support processes for DevOps operation.
- Technical skills to review, verify, and validate the software code developed in the project.
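The backup-automation duty above (S3 lifecycle rules) ultimately reduces to a policy document in the shape S3 accepts; a minimal Python sketch of such a policy (the `backups/` prefix and retention periods are hypothetical):

```python
import json

# Hypothetical lifecycle policy mirroring the backup automation described:
# move month-old backups to Glacier, expire them after a year.
lifecycle_policy = {
    "Rules": [
        {
            "ID": "backup-retention",
            "Filter": {"Prefix": "backups/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 365},
        }
    ]
}

# With boto3 this dict would be passed as the LifecycleConfiguration argument of
# s3.put_bucket_lifecycle_configuration; here we just serialize it for review.
print(json.dumps(lifecycle_policy, indent=2))
```

Keeping the policy as data like this makes it easy to version-control and validate in CI before it ever touches a bucket.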
Posted 2 weeks ago
7.0 - 10.0 years
17 - 27 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
- 5+ years of working experience in Python.
- 4+ years of hands-on experience with AWS development: PySpark, Lambda, CloudWatch (alerts), SNS, SQS, CloudFormation, Docker, ECS, Fargate, and ECR.
- Very strong hands-on knowledge of using Python for integrations between systems through different data formats.
- Expert in deploying and maintaining applications in AWS, with hands-on experience in Kinesis streams and auto-scaling.
- Team player with very good written and verbal communication skills.
- Strong problem-solving and decision-making skills; ability to solve complex software system issues.
- Collaborate with business and other teams to understand business requirements and work on project deliverables.
- Participate in requirements gathering and understanding.
- Design solutions based on the available framework and code.
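The Lambda-plus-SQS experience asked for above typically centers on a batch-consuming handler; here is a minimal sketch (the handler body and message shape are illustrative assumptions, not from the posting):

```python
import json

def handler(event, context):
    """Hypothetical AWS Lambda entry point consuming an SQS batch event.

    Returns per-record failures so SQS redrives only the messages that failed
    (requires ReportBatchItemFailures on the event source mapping).
    """
    failures = []
    for record in event.get("Records", []):
        try:
            body = json.loads(record["body"])
            # ... integration/business logic for the parsed message goes here ...
            print(f"processed message {record['messageId']}: {body}")
        except (KeyError, json.JSONDecodeError):
            failures.append({"itemIdentifier": record.get("messageId")})
    return {"batchItemFailures": failures}

# Local smoke test with a fake SQS event payload:
fake_event = {"Records": [
    {"messageId": "1", "body": json.dumps({"order": 42})},
    {"messageId": "2", "body": "not-json"},
]}
result = handler(fake_event, context=None)
```

The partial-batch response keeps one poison message from forcing the whole batch back onto the queue.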
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
maharashtra
On-site
We have an exciting and rewarding opportunity for you to take your software engineering career to the next level. As a Software Engineer III at JPMorgan Chase within the Consumer & Community Banking Team, you serve as a seasoned member of an agile team to design and deliver trusted market-leading technology products in a secure, stable, and scalable way. You are responsible for carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm's business objectives. You will execute creative software solutions, design, development, and technical troubleshooting with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems. Your role includes creating secure and high-quality production code and maintaining algorithms that run synchronously with appropriate systems. You will produce architecture and design artifacts for complex applications while being accountable for ensuring design constraints are met by software code development. Additionally, you will gather, analyze, synthesize, and develop visualizations and reporting from large, diverse data sets in service of continuous improvement of software applications and systems. You will also build microservices that will run on the bank's internal cloud and the public cloud platform (AWS) and collaborate with teams in multiple regions and time zones. Participation in scrum team stand-ups, code reviews, and other ceremonies, contributing to task completion and blocker resolution within your team, is expected. Required qualifications, capabilities, and skills include formal training or certification in software engineering concepts and 3+ years of applied experience in Java, AWS, and Terraform. You should have experience with technologies such as Java 11/17, Spring/Spring Boot, Kafka, and relational/non-relational databases such as Oracle, Cassandra, DynamoDB, and Postgres.
A minimum of 3 years of hands-on experience on the public cloud platform using AWS for building secure microservices is required. Hands-on experience with AWS services such as EKS, Fargate, SQS/SNS/EventBridge, Lambda, S3, EBS, DynamoDB/Aurora Postgres, and Terraform scripts is essential. Additionally, experience with DevOps concepts for automated build and deployment is crucial. Preferred qualifications, capabilities, and skills include familiarity with modern front-end technologies and exposure to cloud technologies.
Posted 2 weeks ago
5.0 - 10.0 years
11 - 12 Lacs
Hyderabad
Work from Office
We are seeking a highly skilled DevOps Engineer to join our dynamic development team. In this role, you will be responsible for designing, developing, and maintaining both frontend and backend components of our applications using DevOps practices and associated technologies. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performing software solutions that meet our business needs. The ideal candidate will have a strong background in DevOps, experience with modern frontend frameworks, and a passion for full-stack development.

Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 5 to 10+ years of experience in full-stack development, with a strong focus on DevOps.

DevOps with AWS Data Engineer - Roles & Responsibilities:
- Use AWS services such as EC2, VPC, S3, IAM, RDS, and Route 53.
- Automate infrastructure using Infrastructure as Code (IaC) tools such as Terraform or AWS CloudFormation.
- Build and maintain CI/CD pipelines using tools such as AWS CodePipeline, Jenkins, or GitLab CI/CD.
- Automate build, test, and deployment processes for Java applications.
- Use Ansible, Chef, or AWS Systems Manager to manage configurations across environments.
- Containerize Java apps using Docker; deploy and manage containers with Amazon ECS, EKS (Kubernetes), or Fargate.
- Monitoring and logging with Amazon CloudWatch, Prometheus + Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), and AWS X-Ray for distributed tracing.
- Manage access with IAM roles/policies; use AWS Secrets Manager / Parameter Store for managing credentials.
- Enforce security best practices, encryption, and audits.
- Automate backups for databases and services using AWS Backup, RDS snapshots, and S3 lifecycle rules; implement Disaster Recovery (DR) strategies.
- Work closely with development teams to integrate DevOps practices (cross-functional collaboration).
- Document pipelines, architecture, and troubleshooting runbooks.
- Monitor and optimize AWS resource usage.
Use AWS Cost Explorer, Budgets, and Savings Plans.

Must-Have Skills:
- Experience working on Linux-based infrastructure.
- Excellent understanding of Ruby, Python, Perl, and Java.
- Configuring and managing databases such as MySQL and MongoDB.
- Excellent troubleshooting skills.
- Selecting and deploying appropriate CI/CD tools.
- Working knowledge of various tools, open-source technologies, and cloud services.
- Awareness of critical concepts in DevOps and Agile principles.
- Managing stakeholders and external interfaces.
- Setting up tools and required infrastructure.
- Defining and setting development, testing, release, update, and support processes for DevOps operation.
- Technical skills to review, verify, and validate the software code developed in the project.
Posted 2 weeks ago
10.0 - 14.0 years
0 Lacs
pune, maharashtra
On-site
As a Senior Cloud Data Integration Consultant, you will be responsible for leading a complex data integration project that involves API frameworks, a data lakehouse architecture, and middleware solutions. The project focuses on technologies such as AWS, Snowflake, Oracle ERP, and Salesforce, with a high transaction volume POS system. Your role will involve building reusable and scalable API frameworks, optimizing middleware, and ensuring security and compliance in a multi-cloud environment. Your expertise in API development and integration will be crucial for this project. You should have deep experience in managing APIs across multiple systems, building reusable components, and ensuring bidirectional data flow for real-time data synchronization. Additionally, your skills in middleware solutions and custom API adapters will be essential for integrating various systems seamlessly. In terms of cloud infrastructure and data processing, your strong experience with AWS services like S3, Lambda, Fargate, and Glue will be required for data processing, storage, and integration. You should also have hands-on experience in optimizing Snowflake for querying and reporting, as well as knowledge of Terraform for automating the provisioning and management of AWS resources. Security and compliance are critical aspects of the project, and your deep understanding of cloud security protocols, API security, and compliance enforcement will be invaluable. You should be able to set up audit logs, ensure traceability, and enforce compliance across cloud services. Handling high-volume transaction systems and real-time data processing requirements will be part of your responsibilities. You should be familiar with optimizing AWS Lambda and Fargate for efficient data processing and be skilled in operational monitoring and error handling mechanisms. Collaboration and support are essential for the success of the project. 
You will need to provide post-go-live support, collaborate with internal teams and external stakeholders, and ensure seamless integration between systems. To qualify for this role, you should have at least 10 years of experience in enterprise API integration, cloud architecture, and data management. Deep expertise in AWS services, Snowflake, Oracle ERP, and Salesforce integrations is required, along with a proven track record of delivering scalable API frameworks and handling complex middleware systems. Strong problem-solving skills, familiarity with containerization technologies, and experience in retail or e-commerce industries are also desirable. Your key responsibilities will include leading the design and implementation of reusable API frameworks, optimizing data flow through middleware systems, building robust security frameworks, and collaborating with the in-house team for seamless integration between systems. Ongoing support, monitoring, and optimization post-go-live will also be part of your role.
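Integrations with third-party systems like those described (Salesforce, Oracle ERP) generally need retry logic around transient failures; a minimal exponential-backoff sketch in Python, with a simulated flaky endpoint standing in for a real API (all names here are illustrative):

```python
import time

def call_with_backoff(fn, max_attempts=5, base_delay=0.01):
    """Retry a flaky integration call with exponential backoff.

    `fn` stands in for any third-party API call; the delay doubles each
    attempt (base, 2*base, 4*base, ...) and the last failure is re-raised.
    """
    for attempt in range(max_attempts):
        try:
            return fn()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

# Simulated endpoint that fails twice before succeeding:
calls = {"n": 0}
def flaky_endpoint():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient upstream error")
    return {"status": "synced"}

result = call_with_backoff(flaky_endpoint)
```

Production variants usually add jitter to the delay and cap the total wait so a stuck upstream cannot stall the pipeline.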
Posted 2 weeks ago
6.0 - 10.0 years
0 Lacs
telangana
On-site
As a Senior Software Engineer II at Marriott Tech Accelerator in Hyderabad, India, you will play a crucial role in leading the design, solutioning, and delivery of large-scale enterprise applications. Your primary focus will be on product development and solving complex problems with innovative solutions. Your responsibilities will include providing technical leadership by training and mentoring team members, offering financial input on budgets, and identifying opportunities to enhance service delivery processes. You will be responsible for delivering technology by conducting quantitative and qualitative analyses, ensuring project completion within scope, and coordinating with IT and vendor relations teams. In terms of IT governance, you will adhere to defined standards and processes while maintaining a balance between business and operational risk. You will also be involved in service provider management, validating project plans, monitoring outcomes, and resolving service delivery problems promptly. To excel in this role, you should have 6-8 years of software development experience with a deep understanding of integration patterns, API management platforms, and various design architectures. Your expertise should cover a wide range of technologies including Java, GraalVM, NoSQL, Spring Boot, Docker, Kubernetes, AWS, and more. Additionally, you should have hands-on experience with DevOps, CI/CD pipelines, infrastructure components, and cloud-native design patterns. Your background should include leading integration solutions development, architecting distributed systems, and working with microservices and serverless technologies. Ideally, you should have a bachelor's degree or equivalent experience/certification. Your ability to work in a hybrid mode and collaborate effectively in an agile development environment with a mix of onshore and offshore teams will be crucial for success in this role. 
If you are a results-oriented individual with a passion for cutting-edge technology and a track record of technology leadership, we encourage you to apply for this exciting opportunity at Marriott Tech Accelerator.
Posted 2 weeks ago
3.0 - 8.0 years
7 - 17 Lacs
Coimbatore
Remote
Role Overview
As an AWS DevOps Engineer, you'll own the end-to-end infrastructure lifecycle, from design and provisioning through deployment, monitoring, and optimization. You'll collaborate closely with development teams to implement Infrastructure as Code, build robust CI/CD pipelines, enforce security and compliance guardrails, and integrate next-gen tools like Google Gemini for automated code-quality and security checks.

Summary
DevOps Engineer with 3+ years of experience in AWS infrastructure, CI/CD, and IaC, capable of designing secure, production-grade systems with zero-downtime deployments. The ideal candidate excels in automation, observability, and compliance within a collaborative engineering environment.

Top Preferred Technologies:
- Terraform – core IaC tool for modular infrastructure design
- Amazon ECS/EKS (Fargate) – container orchestration and deployment
- GitHub Actions / AWS CodePipeline + CodeBuild – modern CI/CD pipelines
- Amazon CloudWatch – observability, custom metrics, and centralized logging
- IAM, KMS & GuardDuty – access control, encryption, and threat detection
- SSM Parameter Store – secure config and secret management
- Python / Bash / Node.js – scripting, automation, and Lambda integration

Key Responsibilities
Infrastructure as Code (IaC): Design, build, and maintain Terraform (or CloudFormation) modules for VPCs, ECS/EKS clusters, RDS, ElastiCache, S3, IAM, KMS, and networking across multiple Availability Zones. Produce clear architecture diagrams (Mermaid or draw.io) and documentation.
CI/CD Pipeline Development: Implement GitHub Actions or AWS CodePipeline/CodeBuild workflows to run linting, unit tests, Terraform validation, Docker builds, and automated deployments (zero-downtime rolling updates) to ECS/EKS. Integrate unit tests (Jest, pytest) and configuration-driven services (SSM Parameter Store).
Monitoring & Alerting: Define custom CloudWatch metrics (latency, error rates), create dashboards, and centralize application logs in CloudWatch Logs with structured outputs and PII filtration. Implement CloudWatch Alarms with SNS notifications for key thresholds (CPU, replica lag, 5xx errors). Security & Compliance: Enable and configure GuardDuty and AWS Config rules (e.g., public-CIDR security groups, unencrypted S3 or RDS). Enforce least-privilege IAM policies, key-management with KMS, and secure secret storage in SSM Parameter Store. Innovative Tooling Integration: Integrate Google Gemini (or similar) into the CI pipeline for automated Terraform security scans and generation of actionable “security reports” as PR comments. Documentation & Collaboration: Maintain clear README files, module documentation, and step-by-step deployment guides. Participate in code reviews, design discussions, and post-mortems to continuously improve our DevOps practices. Required Qualifications Experience: 3+ years in AWS DevOps or Site Reliability Engineering roles, designing and operating production-grade cloud infrastructure. Technical Skills: Terraform (preferred) or CloudFormation for IaC. Container orchestration: ECS/Fargate or EKS with zero-downtime deployments. CI/CD: GitHub Actions, AWS CodePipeline, and CodeBuild (linting, testing, Docker, Terraform). Monitoring: CloudWatch Dashboards, custom metrics, log centralization, and alarm configurations. Security & Compliance: IAM policy design, KMS, GuardDuty, AWS Config, SSM Parameter Store. Scripting: Python, Bash, or Node.js for automation and Lambda functions. Soft Skills: Strong problem-solving mindset and attention to detail. Excellent written and verbal communication for documentation and cross-team collaboration. Ability to own projects end-to-end and deliver under tight timelines. 
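The alarm thresholds in the Monitoring & Alerting duties (CPU, replica lag, 5xx errors) follow CloudWatch's consecutive-evaluation-periods semantics, which can be sketched in a few lines of Python (the function name and sample series are illustrative, not CloudWatch API calls):

```python
def breaches_alarm(datapoints, threshold, periods):
    """Mimic a CloudWatch alarm that fires only after `periods`
    consecutive datapoints exceed `threshold`."""
    consecutive = 0
    for value in datapoints:
        # A single sub-threshold datapoint resets the streak,
        # which is what keeps brief spikes from paging anyone.
        consecutive = consecutive + 1 if value > threshold else 0
        if consecutive >= periods:
            return True
    return False

# e.g. a 5xx error-rate series (%), alarm set to 3 consecutive periods above 5%:
series = [1.0, 6.2, 7.1, 4.9, 8.0, 9.5, 10.2]
fired = breaches_alarm(series, threshold=5.0, periods=3)
```

Here the dip to 4.9 resets the streak, so the alarm fires only on the final run of three high datapoints.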
Will be required to attend the Coimbatore office on request (Hybrid). Preferred Qualifications: Hands-on experience integrating third-party security or code-analysis APIs (e.g., Google Gemini, Prisma Cloud). Familiarity with monitoring and observability best practices, including custom metric creation. Exposure to multi-cloud environments or hybrid cloud architectures. Certification: AWS Certified DevOps Engineer – Professional or AWS Certified Solutions Architect – Associate.
Posted 3 weeks ago
6.0 - 11.0 years
11 - 12 Lacs
Hyderabad
Work from Office
We are seeking a highly skilled DevOps Engineer to join our dynamic development team. In this role, you will be responsible for designing, developing, and maintaining both frontend and backend components of our applications using DevOps practices and associated technologies. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performing software solutions that meet our business needs. The ideal candidate will have a strong background in DevOps, experience with modern frontend frameworks, and a passion for full-stack development.

Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 6 to 11+ years of experience in full-stack development, with a strong focus on DevOps.

DevOps with AWS Data Engineer - Roles & Responsibilities:
- Use AWS services such as EC2, VPC, S3, IAM, RDS, and Route 53.
- Automate infrastructure using Infrastructure as Code (IaC) tools such as Terraform or AWS CloudFormation.
- Build and maintain CI/CD pipelines using tools such as AWS CodePipeline, Jenkins, or GitLab CI/CD.
- Automate build, test, and deployment processes for Java applications.
- Use Ansible, Chef, or AWS Systems Manager to manage configurations across environments.
- Containerize Java apps using Docker; deploy and manage containers with Amazon ECS, EKS (Kubernetes), or Fargate.
- Monitoring and logging with Amazon CloudWatch, Prometheus + Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), and AWS X-Ray for distributed tracing.
- Manage access with IAM roles/policies; use AWS Secrets Manager / Parameter Store for managing credentials.
- Enforce security best practices, encryption, and audits.
- Automate backups for databases and services using AWS Backup, RDS snapshots, and S3 lifecycle rules; implement Disaster Recovery (DR) strategies.
- Work closely with development teams to integrate DevOps practices (cross-functional collaboration).
- Document pipelines, architecture, and troubleshooting runbooks.
- Monitor and optimize AWS resource usage.
Use AWS Cost Explorer, Budgets, and Savings Plans.

Must-Have Skills:
- Experience working on Linux-based infrastructure.
- Excellent understanding of Ruby, Python, Perl, and Java.
- Configuring and managing databases such as MySQL and MongoDB.
- Excellent troubleshooting skills.
- Selecting and deploying appropriate CI/CD tools.
- Working knowledge of various tools, open-source technologies, and cloud services.
- Awareness of critical concepts in DevOps and Agile principles.
- Managing stakeholders and external interfaces.
- Setting up tools and required infrastructure.
- Defining and setting development, testing, release, update, and support processes for DevOps operation.
- Technical skills to review, verify, and validate the software code developed in the project.

Interview Mode: F2F for candidates residing in Hyderabad / Zoom for other states
Location: 43/A, MLA Colony, Road No. 12, Banjara Hills, 500034
Time: 2 - 4 pm
Posted 1 month ago
10.0 - 12.0 years
32 Lacs
Hyderabad, Telangana, India
On-site
Job Description Experian is seeking a seasoned Software Engineering Manager to lead a team of talented cloud-native Java and Node.js engineers supporting our enterprise-grade, consumer-permissioned data platform. This role is pivotal in driving the development and delivery of scalable, secure, and high-performance services in a cloud-native environment. You will collaborate closely with cross-functional teams based in the U.S., including Engineering, Quality Assurance, Product Management, and Project Management, to ensure alignment on requirements, timelines, and deliverables. This role's primary responsibility is managing the team, but the ideal candidate should also be capable of contributing to the codebase as time permits using Java, Spring, and Node.js in an AWS environment. Qualifications 10+ years of hands-on experience as a software engineer, with strong proficiency in Java and Node.js. Experience building and scaling enterprise data platforms. Diligently observe and help maintain Standards for Regulatory Compliance and Information Security. Familiarity with data privacy and security best practices preferred. 5+ years of experience managing software development teams. Lead, mentor, and grow a team of software engineers working on cloud-native applications. Oversee the delivery of well-tested, robust, and efficient software while following software development best practices. Ensure high-quality software development practices including code reviews, testing, and CI/CD. Collaborate with U.S.-based stakeholders to define technical requirements, project scope, and delivery timelines. Solid understanding of Agile/Scrum methodologies. Excellent communication, collaboration, and mentoring skills. Own deliverables from ideation to production operationalization. Experience working with distributed teams across time zones preferred. Proven experience working in cloud environments, preferably AWS. 
Strong understanding of AWS services including ECS Fargate, S3, RDS, Lambda, SQS, MSK (or Kafka). Experience with NATS.io is a plus. Proven experience integrating with third-party HTTP APIs, typically leveraging JSON payloads. Java engineers should have strong experience with Spring and Spring Cloud frameworks. Proficiency with development and monitoring tools such as GitHub, Splunk, DataDog, Jira. Contribute to the codebase as needed, providing hands-on support and technical guidance. Foster a culture of continuous improvement, innovation, and accountability. Drive adoption of best practices in cloud architecture, microservices, and DevOps. Troubleshoot system functionality and performance using tools like Splunk and DataDog. Additional Information Our uniqueness is that we celebrate yours. Experian's culture and people are important differentiators. We take our people agenda very seriously and focus on what matters: DEI, work/life balance, development, authenticity, collaboration, wellness, reward & recognition, volunteering... the list goes on. Experian's people-first approach is award-winning: World's Best Workplaces 2024 (Fortune Top 25), Great Place To Work in 24 countries, and Glassdoor Best Places to Work 2024, to name a few. Check out Experian Life on social or our Careers Site to understand why. Experian is proud to be an Equal Opportunity and Affirmative Action employer. Innovation is an important part of Experian's DNA and practices, and our diverse workforce drives our success. Everyone can succeed at Experian and bring their whole self to work, irrespective of their gender, ethnicity, religion, colour, sexuality, physical ability or age. If you have a disability or special need that requires accommodation, please let us know at the earliest opportunity.
Experian Careers - Creating a better tomorrow together
Posted 1 month ago
7.0 - 12.0 years
11 - 12 Lacs
Hyderabad
Work from Office
We are seeking a highly skilled DevOps Engineer to join our dynamic development team. In this role, you will be responsible for designing, developing, and maintaining both frontend and backend components of our applications using DevOps practices and associated technologies. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performing software solutions that meet our business needs. The ideal candidate will have a strong background in DevOps, experience with modern frontend frameworks, and a passion for full-stack development.

Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 7 to 12+ years of experience in full-stack development, with a strong focus on DevOps.

DevOps with AWS Data Engineer - Roles & Responsibilities:
- Use AWS services such as EC2, VPC, S3, IAM, RDS, and Route 53.
- Automate infrastructure using Infrastructure as Code (IaC) tools such as Terraform or AWS CloudFormation.
- Build and maintain CI/CD pipelines using tools such as AWS CodePipeline, Jenkins, or GitLab CI/CD.
- Automate build, test, and deployment processes for Java applications.
- Use Ansible, Chef, or AWS Systems Manager to manage configurations across environments.
- Containerize Java apps using Docker; deploy and manage containers with Amazon ECS, EKS (Kubernetes), or Fargate.
- Monitoring and logging with Amazon CloudWatch, Prometheus + Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), and AWS X-Ray for distributed tracing.
- Manage access with IAM roles/policies; use AWS Secrets Manager / Parameter Store for managing credentials.
- Enforce security best practices, encryption, and audits.
- Automate backups for databases and services using AWS Backup, RDS snapshots, and S3 lifecycle rules; implement Disaster Recovery (DR) strategies.
- Work closely with development teams to integrate DevOps practices (cross-functional collaboration).
- Document pipelines, architecture, and troubleshooting runbooks.
- Monitor and optimize AWS resource usage.
Use AWS Cost Explorer, Budgets, and Savings Plans.

Must-Have Skills:
- Experience working on Linux-based infrastructure.
- Excellent understanding of Ruby, Python, Perl, and Java.
- Configuring and managing databases such as MySQL and MongoDB.
- Excellent troubleshooting skills.
- Selecting and deploying appropriate CI/CD tools.
- Working knowledge of various tools, open-source technologies, and cloud services.
- Awareness of critical concepts in DevOps and Agile principles.
- Managing stakeholders and external interfaces.
- Setting up tools and required infrastructure.
- Defining and setting development, testing, release, update, and support processes for DevOps operation.
- Technical skills to review, verify, and validate the software code developed in the project.

Interview Mode: F2F for candidates residing in Hyderabad / Zoom for other states
Location: 43/A, MLA Colony, Road No. 12, Banjara Hills, 500034
Time: 2 - 4 pm
Posted 1 month ago
4.0 - 9.0 years
11 - 12 Lacs
Hyderabad
Work from Office
We are seeking a highly skilled DevOps Engineer to join our dynamic development team. In this role, you will be responsible for designing, developing, and maintaining both frontend and backend components of our applications using DevOps practices and associated technologies. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performing software solutions that meet our business needs. The ideal candidate will have a strong background in DevOps, experience with modern frontend frameworks, and a passion for full-stack development.

Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 4 to 9+ years of experience in full-stack development, with a strong focus on DevOps.

DevOps with AWS Data Engineer - Roles & Responsibilities:
- Use AWS services such as EC2, VPC, S3, IAM, RDS, and Route 53.
- Automate infrastructure using Infrastructure as Code (IaC) tools such as Terraform or AWS CloudFormation.
- Build and maintain CI/CD pipelines using tools such as AWS CodePipeline, Jenkins, or GitLab CI/CD.
- Automate build, test, and deployment processes for Java applications.
- Use Ansible, Chef, or AWS Systems Manager to manage configurations across environments.
- Containerize Java apps using Docker; deploy and manage containers with Amazon ECS, EKS (Kubernetes), or Fargate.
- Monitoring and logging with Amazon CloudWatch, Prometheus + Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), and AWS X-Ray for distributed tracing.
- Manage access with IAM roles/policies; use AWS Secrets Manager / Parameter Store for managing credentials.
- Enforce security best practices, encryption, and audits.
- Automate backups for databases and services using AWS Backup, RDS snapshots, and S3 lifecycle rules; implement Disaster Recovery (DR) strategies.
- Work closely with development teams to integrate DevOps practices (cross-functional collaboration).
- Document pipelines, architecture, and troubleshooting runbooks.
- Monitor and optimize AWS resource usage.
Use AWS Cost Explorer, Budgets, and Savings Plans.

Must-Have Skills:
- Experience working on Linux-based infrastructure.
- Excellent understanding of Ruby, Python, Perl, and Java.
- Configuring and managing databases such as MySQL and MongoDB.
- Excellent troubleshooting skills.
- Selecting and deploying appropriate CI/CD tools.
- Working knowledge of various tools, open-source technologies, and cloud services.
- Awareness of critical concepts in DevOps and Agile principles.
- Managing stakeholders and external interfaces.
- Setting up tools and required infrastructure.
- Defining and setting development, testing, release, update, and support processes for DevOps operation.
- Technical skills to review, verify, and validate the software code developed in the project.

Interview Mode: F2F for candidates residing in Hyderabad / Zoom for other states
Location: 43/A, MLA Colony, Road No. 12, Banjara Hills, 500034
Time: 2 - 4 pm
Posted 1 month ago