4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title: AWS Data Engineer

bluCognition is an AI/ML-based start-up specializing in developing data products that leverage alternative data sources and in providing servicing support to clients in the financial services sector. Founded in 2017 by well-known senior professionals from the financial services industry, the company is headquartered in the US, with its delivery centre based in Pune. We build all our solutions using the latest technology stack in AI, ML, and NLP, combined with decades of risk-management experience at some of the largest financial services firms in the world. Our clients are some of the biggest and most progressive names in the financial services industry. We are entering a significant growth phase and are looking for individuals with an entrepreneurial mindset who want to join us on this exciting journey. https://www.blucognition.com

The Role: We are seeking an experienced AWS Data Engineer to design, build, and manage scalable data pipelines and cloud-based solutions. In this role, you will work closely with data scientists, analysts, and software engineers to develop systems that support data-driven decision-making.

Key Responsibilities:
Design, implement, and maintain robust, scalable, and efficient data pipelines using AWS services.
Develop ETL/ELT processes and automate data workflows for real-time and batch data ingestion.
Optimize data storage solutions (e.g., S3, Redshift, RDS, DynamoDB) for performance and cost-efficiency.
Build and maintain data lakes and data warehouses following best practices for security, governance, and compliance.
Collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs.
Monitor, troubleshoot, and improve the reliability and quality of data systems.
Implement data quality checks, logging, and error handling in data pipelines.
Use Infrastructure as Code (IaC) tools like AWS CloudFormation or Terraform for environment management.
Stay up to date with the latest developments in AWS services and big data technologies.

Required Qualifications:
Bachelor's degree in Computer Science, Information Systems, Engineering, or a related field.
4+ years of experience as a data engineer or in a similar role.
Strong experience with AWS services such as AWS Glue, AWS Lambda, Amazon S3, Amazon Redshift, Amazon RDS, Amazon EMR, and AWS Step Functions.
Proficiency in SQL and Python.
Solid understanding of data modeling, ETL processes, and data warehouse architecture.
Experience with orchestration tools like Apache Airflow or Amazon Managed Workflows for Apache Airflow (MWAA).
Knowledge of security best practices for cloud environments (IAM, KMS, VPC, etc.).
Experience with monitoring and logging tools (CloudWatch, X-Ray, etc.).

Preferred Qualifications:
AWS Certified Data Analytics – Specialty or AWS Certified Solutions Architect certification.
Experience with real-time data streaming technologies like Kinesis or Kafka.
Familiarity with DevOps practices and CI/CD pipelines.
Knowledge of machine learning data preparation and MLOps workflows.

Soft Skills:
Excellent problem-solving and analytical skills.
Strong communication skills with both technical and non-technical stakeholders.
Ability to work independently and collaboratively in a team environment.
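The batch-ingestion and data-quality responsibilities above can be sketched minimally in Python. The bucket, key, and field names below are hypothetical, and the boto3 download step is defined but not exercised, since it would need AWS credentials:

```python
import csv
import io

def transform_rows(raw_csv: str) -> list[dict]:
    """Parse raw CSV, drop rows that fail a simple quality check
    (missing key), and normalize the amount field to float."""
    rows = []
    for row in csv.DictReader(io.StringIO(raw_csv)):
        if not row.get("account_id"):          # data-quality check: key must be present
            continue
        row["amount"] = float(row["amount"])   # normalize type for downstream loads
        rows.append(row)
    return rows

def run_pipeline(bucket: str = "raw-zone", key: str = "txns.csv") -> list[dict]:
    """Illustrative ingestion step: fetch from S3, then transform.
    Requires AWS credentials; names are placeholders."""
    import boto3
    s3 = boto3.client("s3")
    raw = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode()
    return transform_rows(raw)
```

In a real pipeline the transform would be registered as a Glue job or Lambda step; keeping it a pure function makes it unit-testable without any AWS access.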
Posted 3 weeks ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
We are looking for a skilled AWS DevOps Engineer with 5+ years of experience who is proficient in the AWS ecosystem. The person will be responsible for collaborating with software developers, system operators, and other IT staff to manage availability, scalability, and security. This role requires a strong understanding of client-server/peer-to-peer communication, development operations, cloud infrastructure management, and automation tools.

Job Location: Gurugram, Haryana (Work from Office)

Responsibilities
Design, implement, and manage continuous integration and deployment pipelines.
Collaborate with development and operations teams to streamline code deployment processes.
Monitor and optimise AWS infrastructure for performance, cost, and security.
Implement and maintain automated monitoring and alerting systems.
Troubleshoot and resolve issues related to infrastructure and application deployment.
Ensure compliance with best practices for security, scalability, and reliability.
Stay current with industry trends and best practices in DevOps and cloud technologies.

Education and Experience Required
Must have:
Bachelor's degree in Computer Science.
5+ years of experience in a DevOps role using AWS.
Strong proficiency in AWS domains such as IAM, compute, storage, scalability, and networking.
Strong proficiency in Linux flavours, utilities, and scripting.
Strong problem-solving abilities and attention to detail.
Excellent communication skills.

Good to have:
Hands-on experience in AWS security, IPv6, and Python.
Practical experience with AWS managed services like RDS, ElastiCache, and MSK.
Good knowledge of networking: VPC, subnets, subnet masking, Elastic IPs, route tables, routing, access control lists, NAT, and port address translation.
Experience with automation tools such as Jenkins and Ansible.
Familiarity with containerization technologies such as Docker and orchestration tools like Kubernetes.
Posted 3 weeks ago
1.0 - 3.0 years
3 - 7 Lacs
Mumbai
Work from Office
about the role
We are looking for Cloud Engineers with experience in managing, planning, architecting, monitoring, and automating large-scale deployments to the public cloud. You will be part of a team of talented engineers solving some of the most complex and exciting challenges in IT automation and hybrid cloud deployments.

key responsibilities
Consistently strive to acquire new skills in Cloud, DevOps, Big Data, AI, and ML technologies
Design, deploy, and maintain cloud infrastructure for clients, domestic and international
Develop tools and automation to make platform operations more efficient, reliable, and reproducible
Create container orchestration (Kubernetes, Docker), strive for fully automated solutions, and ensure the uptime and security of all cloud platform systems and infrastructure
Stay up to date on relevant technologies, plug into user groups, and ensure our clients are using the best techniques and tools
Provide business, application, and technology consulting in feasibility discussions with technology team members, customers, and business partners
Take the initiative to lead, drive, and solve during challenging scenarios

preferred qualifications
1-3 years of experience in Cloud Infrastructure and Operations domains
Experience with Linux systems and/or Windows servers
Specialization in one or two cloud platforms: AWS, GCP, Azure
Hands-on experience with AWS services (EKS, ECS, EC2, VPC, RDS, Lambda) or GCP equivalents (GKE, Compute Engine)
Experience with one or more programming languages (Python, JavaScript, Ruby, Java, .NET)
Good understanding of Apache Web Server, Nginx, MySQL, MongoDB, Nagios
Logging and monitoring tools (ELK, Stackdriver, CloudWatch)
DevOps technologies: knowledge of configuration management and IaC tools such as Ansible, Terraform, Puppet, Chef
Experience working with deployment and orchestration technologies (such as Docker, Kubernetes, Mesos)
Deep experience in customer-facing roles with a proven track record of effective verbal and written communication
Dependable and a good team player
Desire to learn and work with new technologies
Automation in your blood
Posted 3 weeks ago
0.0 - 3.0 years
0 Lacs
Sukhlia, Indore, Madhya Pradesh
Remote
Job Title: AWS & DevOps Engineer
Department: DevOps
Location: Indore
Job Type: Full-time
Experience: 3-5 years
Notice Period: 0-15 days (immediate joiners preferred)
Work Arrangement: On-site (Work from Office)

Advantal Technologies is looking for a skilled AWS & DevOps Engineer to help build and manage our cloud infrastructure. This role involves designing scalable infrastructure, automating deployments, enforcing security, and supporting a hybrid (AWS + open-source) deployment strategy.

Key Responsibilities:
AWS Cloud Infrastructure:
· Design, provision, and manage secure and scalable cloud architecture on AWS.
· Configure and manage core services: VPC, EC2, S3, RDS (PostgreSQL), Lambda, CloudFront, Cognito, and IAM.
· Deploy AI models using Amazon SageMaker for inference at scale.
· Manage API integrations via Amazon API Gateway and AWS WAF.
DevOps & Automation:
· Implement CI/CD pipelines using AWS CodePipeline, GitHub Actions, or GitLab CI.
· Containerize backend applications using Docker and orchestrate with AWS ECS/Fargate or Kubernetes (for on-prem/hybrid).
· Use Terraform or AWS CloudFormation for Infrastructure as Code (IaC).
· Monitor applications using CloudWatch, Security Hub, and CloudTrail.
Security & Compliance:
· Implement IAM policies and KMS key management, and enforce Zero Trust architecture.
· Configure S3 Object Lock, audit logs, and data classification controls.
· Support GDPR/HIPAA-ready compliance setup via AWS Config, GuardDuty, and Security Hub.

Required Skills & Experience:
Must-Have:
· 3-5 years of hands-on experience with AWS infrastructure and services.
· Proficiency with Terraform, CloudFormation, or other IaC tools.
· Experience with Docker, CI/CD pipelines, and cloud networking (VPC, NAT, Route 53).
· Strong understanding of DevSecOps principles and AWS security best practices.
· Experience supporting production-grade SaaS applications.
Nice-to-Have:
· Exposure to AI/ML model deployment (especially via SageMaker or containerized APIs).
· Knowledge of multi-tenant SaaS infrastructure patterns.
· Experience with Vault, Keycloak, or open-source IAM/security stacks for non-AWS environments.
· Familiarity with Kubernetes (EKS or self-hosted).

Tools & Stack You'll Use:
· AWS (Lambda, RDS, S3, SageMaker, Cognito, CloudFront, CloudWatch, API Gateway)
· Terraform, Docker, GitHub Actions
· CI/CD: GitHub, GitLab, AWS CodePipeline
· Monitoring: CloudWatch, GuardDuty, Prometheus (non-AWS)
· Security: KMS, IAM, Vault

Please share your resume with hr@advantal.net

Job Types: Full-time, Permanent
Pay: ₹261,624.08 - ₹1,126,628.25 per year
Benefits: Paid time off, Provident Fund, Work from home
Schedule: Day shift, Monday to Friday
Ability to commute/relocate: Sukhlia, Indore, Madhya Pradesh: Reliably commute or willing to relocate with an employer-provided relocation package (Preferred)
Experience: AWS DevOps: 3 years (Required)
Work Location: In person
Speak with the employer: +91 9131295441
Expected Start Date: 02/06/2025
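The "implement IAM policies" responsibility above amounts to generating least-privilege policy documents. A minimal sketch in Python, where the bucket name is a placeholder and the document follows the standard IAM policy grammar:

```python
def read_only_s3_policy(bucket: str) -> dict:
    """Build a least-privilege IAM policy document granting read-only
    access to a single S3 bucket; the bucket name is hypothetical."""
    return {
        "Version": "2012-10-17",  # current IAM policy language version
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:ListBucket"],
                "Resource": [
                    f"arn:aws:s3:::{bucket}",      # ListBucket applies to the bucket ARN
                    f"arn:aws:s3:::{bucket}/*",    # GetObject applies to object ARNs
                ],
            }
        ],
    }
```

In practice the same document would be emitted by Terraform or CloudFormation rather than hand-built, but generating it programmatically makes the grant auditable and testable.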
Posted 3 weeks ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Summary: We are looking for a Site Reliability Engineer for the SRE team in India who will be responsible for ensuring the reliability, availability, and performance of software systems by applying software engineering principles to operations.

Responsibilities
Ensure the reliability, scalability, and performance of our company's production environment, including a complex architecture with multiple servers, deployments, and various cloud technologies
Collaborate with cross-functional teams, work independently, and prioritize effectively in a fast-paced environment
Oversee and enhance monitoring capabilities for the production environment and ensure optimal performance and functionality across the technology stack
Support our 24/7 operations and participate in on-call rotations to ensure timely incident response and resolution
Address and resolve unexpected service issues while creating and implementing tools and automation to proactively mitigate the likelihood of future problems

Qualifications
Minimum 5 years of experience in an SRE/DevOps position for SaaS-based products
Experience managing mission-critical production environments
Experience with version control tools like Git, Bitbucket, etc.
Experience establishing CI/CD procedures with Jenkins
Working knowledge of databases
Experience effectively managing AWS infrastructure, with proficiency across multiple AWS services including networking, EC2, VPC, EKS, ELB/NLB, API Gateway, Cognito, and more
Experience with monitoring tools like Datadog, ELK, Prometheus, and Grafana
Experience understanding and managing Linux infrastructure
Experience in Bash or Python
Experience with IaC tools like CloudFormation, CDK, or Terraform
Experience with Kubernetes and container management
Excellent written and verbal communication skills in English

Advantage:
Additional cloud services knowledge (Azure, GCP, etc.)
Understanding of Java, Maven, and Node.js-based applications
Experience with serverless architecture
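The monitoring-and-alerting work described above is typically codified rather than clicked together in the console. A minimal sketch using boto3's `put_metric_alarm`; the alarm name, threshold, and evaluation window are illustrative choices, and the AWS call itself is defined but not exercised here:

```python
def cpu_alarm_spec(instance_id: str, threshold: float = 80.0) -> dict:
    """Build kwargs for CloudWatch put_metric_alarm; names and
    thresholds are illustrative, not a prescribed standard."""
    return {
        "AlarmName": f"high-cpu-{instance_id}",
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "Statistic": "Average",
        "Period": 300,                 # 5-minute datapoints
        "EvaluationPeriods": 3,        # alarm only after 15 minutes above threshold
        "Threshold": threshold,
        "ComparisonOperator": "GreaterThanThreshold",
    }

def create_alarm(instance_id: str) -> None:
    """Apply the spec; requires AWS credentials, so not called here."""
    import boto3
    boto3.client("cloudwatch").put_metric_alarm(**cpu_alarm_spec(instance_id))
```

Separating the spec builder from the API call keeps the alarm definition reviewable in code review and testable offline.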
Posted 3 weeks ago
10.0 - 15.0 years
25 - 35 Lacs
Noida
Work from Office
Cloud Security Lead/Architect (L3)
Experience architecting security in cloud platforms like AWS and Azure. Experience creating High-Level Designs (HLD) and Low-Level Designs (LLD) and reviewing the technical requirement document (TRD) for cloud security. Define data security policies through AIP, DLP, etc. Threat-hunting experience with XDR, EDR, and SIEM tools. Experience integrating cloud components with SIEM. Planning, implementing, designing, and reviewing security policies and other compliance requirements. Experience leading SecOps teams. Guide the team on appropriate prioritization of qualified incidents, notification through standard communication channels, and opening of corresponding incident tickets on the ticketing platform. Provide subject matter expertise on information security architecture and systems engineering to other IT and business teams. Lead incident response and escalations to closure. Responsible for automating security controls, data, and processes to provide improved metrics and operational support. Mandatory certifications on Azure and AWS platforms, CCSP, etc. A secondary skillset in Google Cloud is preferred.
Posted 3 weeks ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About TwoSD (2SD Technologies Limited)
TwoSD is the innovation engine of 2SD Technologies Limited, a global leader in product engineering, platform development, and advanced IT solutions. Backed by two decades of leadership in technology, our team brings together strategy, design, and data to craft transformative solutions for global clients. Our culture is built around cultivating talent, curiosity, and collaboration. Whether you're a career technologist, a self-taught coder, or a domain expert with a passion for real-world impact, TwoSD is where your journey accelerates. Join us and thrive. At 2SD Technologies, we push past the expected, with insight, integrity, and a passion for making things better.

Role Overview
We are seeking a DevOps / Cloud Engineer with strong experience in AWS (preferred) or Azure/GCP to build, deploy, and optimize cloud-native applications and infrastructure. This is a full-time position based in Gurugram, India, focused on accelerating deployment pipelines, improving reliability, and implementing security and cost-efficiency best practices.
Key Responsibilities
Provision, monitor, and maintain cloud infrastructure (primarily AWS) using IaC (Terraform, CloudFormation)
Design and manage scalable CI/CD pipelines using GitHub Actions, Jenkins, or similar tools
Automate deployment, scaling, and monitoring of containerized applications (ECS, EKS)
Implement logging, observability, and alerting tools for all environments
Collaborate with developers, architects, and security teams to streamline DevSecOps workflows
Perform regular security reviews, patching, and hardening of cloud and container infrastructure

Required Qualifications
Bachelor's degree in Computer Science, Engineering, or equivalent experience
3+ years of DevOps or Cloud Engineering experience
Hands-on with Docker, Kubernetes, Helm, and Infrastructure as Code
Proficiency in AWS services like EC2, S3, Lambda, RDS, VPC, IAM, CloudWatch
Experience with CI/CD tools (GitHub Actions, Jenkins, GitLab CI)
Strong scripting skills (Python, Bash, or Shell)

Preferred Qualifications
AWS certifications (e.g., Solutions Architect Associate, DevOps Engineer)
Experience with multi-cloud or hybrid cloud setups
Familiarity with GitOps workflows using ArgoCD or Flux
Experience with security tools like HashiCorp Vault and AWS Secrets Manager
Exposure to cost optimization tools and FinOps best practices

Core Competencies
Cloud Infrastructure Design & Monitoring
Automation & Infrastructure as Code
Continuous Integration / Delivery (CI/CD)
DevSecOps & Compliance Practices
Problem Solving & Debugging Under Pressure

Tools & Platforms
Cloud: AWS (ECS, EKS, Lambda, CloudFormation)
IaC: Terraform, AWS CDK
Containers: Docker, Kubernetes, Helm
CI/CD: GitHub Actions, Jenkins, GitLab CI
Monitoring: Prometheus, Grafana, ELK Stack, CloudWatch
Security: AWS IAM, Secrets Manager, Vault
Scripting: Bash, Python, Shell
Version Control & PM: Git, Jira, Notion, Slack

Why Join TwoSD?
At TwoSD, innovation isn't a department; it's a mindset.
Here, your voice matters, your expertise is valued, and your growth is supported by a collaborative culture that blends mentorship with autonomy. With access to cutting-edge tools, meaningful projects, and a global knowledge network, you'll do work that counts and evolve with every challenge.

DevOps / Cloud Engineer
Location: Gurugram, India (Onsite/Hybrid)
Company: TwoSD (2SD Technologies Limited)
Industry: Cloud Engineering / DevOps
Employment Type: Permanent
Date Posted: 26 May 2025

How to Apply
To apply, send your updated resume and relevant links (portfolio/GitHub) to hr@2sdtechnologies.com or visit our LinkedIn careers page.
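The "regular security reviews" responsibility in the posting above is often automated as a script that scans security-group rules for world-open ingress. A minimal Python sketch; the input mirrors the shape of the EC2 `DescribeSecurityGroups` response, and the sample group IDs are hypothetical:

```python
def open_ingress_rules(security_groups: list[dict]) -> list[tuple[str, int]]:
    """Flag ingress rules open to the world (0.0.0.0/0), returning
    (group id, from-port) pairs. Field names (IpPermissions, IpRanges,
    CidrIp, FromPort) follow the EC2 DescribeSecurityGroups response."""
    findings = []
    for sg in security_groups:
        for perm in sg.get("IpPermissions", []):
            for ip_range in perm.get("IpRanges", []):
                if ip_range.get("CidrIp") == "0.0.0.0/0":
                    # -1 marks rules with no FromPort (e.g. all-traffic rules)
                    findings.append((sg["GroupId"], perm.get("FromPort", -1)))
    return findings
```

Fed the live output of `boto3.client("ec2").describe_security_groups()["SecurityGroups"]`, this would produce a review-ready list of exposed ports.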
Posted 3 weeks ago
5.0 years
0 Lacs
Ahmedabad, Gujarat, India
Remote
Senior DevOps Engineer
Experience: 5-9 years
Salary: Competitive
Preferred Notice Period: Within 30 days
Shift: 10:00 AM to 7:00 PM IST
Opportunity Type: Onsite (Ahmedabad)
Placement Type: Permanent
(Note: This is a requirement for one of Uplers' clients.)
Must-have skills: Azure (Microsoft Azure), Docker/Terraform, TensorFlow, Python, AWS
Good-to-have skills: Kubeflow, MLflow

Attri (one of Uplers' clients) is looking for a Senior DevOps Engineer who is passionate about their work, eager to learn and grow, and committed to delivering exceptional results. If you are a team player with a positive attitude and a desire to make a difference, we want to hear from you.

Role Overview

About Attri
Attri is an AI organization that helps businesses initiate and accelerate their AI efforts. We offer the industry's first end-to-end enterprise machine learning platform, empowering teams to focus on ML development rather than infrastructure. From ideation to execution, our global team of AI experts supports organizations in building scalable, state-of-the-art ML solutions. Our mission is to redefine businesses by harnessing cutting-edge technology and a unique, value-driven approach. With team members across continents, we celebrate diversity, curiosity, and innovation. We're now looking for a Senior DevOps Engineer to join our fast-growing, remote-first team. If you're passionate about automation, scalable cloud systems, and supporting high-impact AI workloads, we'd love to connect.

What You'll Do (Responsibilities):
Design, implement, and manage scalable, secure, and high-performance cloud-native infrastructure on Azure.
Build and maintain Infrastructure as Code (IaC) using Terraform or CloudFormation.
Develop event-driven and serverless architectures using AWS Lambda, SQS, and SAM.
Architect and manage containerized applications using Docker, Kubernetes, ECR, ECS, or AKS.
Establish and optimize CI/CD pipelines using GitHub Actions, Jenkins, AWS CodeBuild, and CodePipeline.
Set up and manage monitoring, logging, and alerting using Prometheus + Grafana, Datadog, and centralized logging systems.
Collaborate with ML Engineers and Data Engineers to support MLOps pipelines (Airflow, ML pipelines) and Bedrock with TensorFlow or PyTorch.
Implement and optimize ETL/data streaming pipelines using Kafka, EventBridge, and Event Hubs.
Automate operations and system tasks using Python and Bash, along with cloud CLIs and SDKs.
Secure infrastructure using IAM/RBAC and follow best practices in secrets management and access control.
Manage DNS and networking configurations using Cloudflare, VPC, and PrivateLink.
Lead architecture implementation for scalable and secure systems, aligning with business and AI solution needs.
Conduct cost optimization through budgeting, alerts, tagging, right-sizing resources, and leveraging spot instances.
Contribute to backend development in Python (web frameworks), REST/socket and gRPC design, and testing (unit/integration).
Participate in incident response, performance tuning, and continuous system improvement.

Good to Have:
Hands-on experience with ML lifecycle tools like MLflow and Kubeflow
Previous involvement in production-grade AI/ML projects or data-intensive systems
Startup or high-growth tech company experience

Qualifications:
Bachelor's degree in Computer Science, Information Technology, or a related field.
5+ years of hands-on experience in a DevOps, SRE, or Cloud Infrastructure role.
Proven expertise in multi-cloud environments (AWS, Azure, GCP) and modern DevOps tooling.
Strong communication and collaboration skills to work across engineering, data science, and product teams.

Benefits:
Competitive salary 💸
Support for continual learning (free books and online courses) 📚
Leveling-up opportunities 🌱
Diverse team environment 🌍

How to apply for this opportunity:
Easy 3-step process:
1. Click on Apply and register or log in on our portal.
2. Upload your updated resume and complete the screening form.
3. Increase your chances of getting shortlisted and meeting the client for the interview!

About Our Client:
Attri, an AI organization, leads the way in enterprise AI, offering advanced solutions and services driven by AI agents and powered by foundation models. Our comprehensive suite of AI-enabled tools drives business impact, enhances quality, mitigates risk, and helps unlock growth opportunities.

About Uplers:
Our goal is to make hiring and getting hired reliable, simple, and fast. Our role is to help all our talents find and apply for relevant product and engineering job opportunities and progress in their careers. (Note: There are many more opportunities on the portal.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
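The event-driven serverless pattern mentioned above (Lambda consuming SQS) reduces to a small handler. A minimal sketch; the event shape (`Records`/`body`) follows the documented SQS-to-Lambda event, while the payload field (`order_id`) is a hypothetical example:

```python
import json

def handler(event, context=None):
    """Minimal AWS Lambda handler for an SQS trigger. Each SQS record
    carries a JSON body; here we just parse it and collect an id."""
    processed = []
    for record in event.get("Records", []):
        payload = json.loads(record["body"])
        # real work would go here, e.g. writing to a datastore
        processed.append(payload["order_id"])
    return {"processed": processed}
```

Because the handler is a plain function of a dict, it can be unit-tested locally with a fabricated event before being deployed behind an SQS trigger.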
Posted 3 weeks ago
10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About the Client:
Our client is a global technology consulting and digital solutions company that enables enterprises to reimagine business models and accelerate innovation through digital technologies. Powered by more than 84,000 entrepreneurial professionals across more than 30 countries, it serves over 700 clients. Its extensive domain and technology expertise helps drive superior competitive differentiation, customer experiences, and business outcomes.

Job Title: GCP DevOps Engineer
Key Skills: Kubernetes, CI/CD, Linux, Terraform
Job Locations: PAN India
Experience: 10+ years
Education Qualification: Any graduation
Work Mode: On-site
Employment Type: Contractual
Notice Period: Immediate to 10 days

Job Description - GCP DevOps Engineer
Primary Skills:
• Kubernetes (GKE, EKS, AKS)
• Logging and monitoring (Grafana, Splunk, Datadog)
• Networking (Service Mesh, Istio)
• Serverless architecture (GCP Cloud Functions, AWS Lambda)
Good to have:
• Monitoring tools (Grafana, Prometheus, etc.)
• Networking (VPC, DNS, Load Balancing)
Responsibilities:
• Design, develop, and maintain a scalable and highly available cloud infrastructure
• Automate and streamline operations and processes
• Monitor and troubleshoot system issues
• Create and maintain documentation
• Develop and maintain tools to automate operational tasks
• Collaborate with software engineers to develop and deploy software applications
• Develop and manage automated deployment pipelines
• Utilize Continuous Integration and Continuous Delivery (CI/CD) tools and practices
• Provision and maintain cloud-based databases
• Optimize resources to reduce costs
• Analyse and optimize system performance
• Work with the development team to ensure code quality and security
• Ensure compliance with security and other industry standards
• Keep up with the latest technologies and industry trends
• Proficiency in scripting languages such as Python, Bash, PowerShell, etc.
• Experience with configuration management tools such as Chef, Puppet, and Ansible
• Experience with CI/CD tools such as Jenkins, Travis CI, and CircleCI
• Experience with container-based technologies such as Docker, Kubernetes, and ECS
• Experience with version control systems such as Git
• Understanding of network protocols and technologies
• Ability to prioritize tasks and work independently
• Strong problem-solving and communication skills
• Should be able to implement and maintain a highly available, scalable, and secure cloud infrastructure
Posted 3 weeks ago
0.0 - 50.0 years
0 Lacs
Jalandhar, Punjab
On-site
Overview
PENNEP is looking for a DevOps Engineer to support our growing infrastructure and development operations. This role is ideal for someone who thrives in a dynamic environment, enjoys optimizing systems for performance and security, and collaborates closely with developers to streamline delivery processes. The candidate will work with cloud services, automation tools, and CI/CD pipelines to ensure our infrastructure is scalable, reliable, and secure.

Responsibilities
Design, implement, and maintain IT infrastructure with a focus on scalability, reliability, and security.
Support the administration of domain controllers and directory services to ensure seamless user authentication and access control.
Help deploy and manage virtualised servers and AWS cloud services such as EC2, S3, IAM, and VPC.
Collaborate with the development team to improve CI/CD pipelines using Bitbucket and Jenkins.
Monitor system performance, identify bottlenecks or issues, and assist in troubleshooting to minimise downtime.
Learn and apply best practices for configuration management, version control, and automated testing.
Maintain system documentation and operational procedures for supported environments.
Stay informed of emerging technologies and industry trends to contribute innovative and practical improvements.
Assist in implementing infrastructure as code (IaC) to improve deployment consistency and efficiency.
Support the team in automating repetitive tasks to reduce manual errors and save time.

Required Skills and Experience
1+ years of experience in a DevOps, Site Reliability Engineering (SRE), or Systems Engineering role.
Strong working knowledge of AWS services (EC2, S3, IAM, VPC).
Experience with CI/CD tools such as Jenkins, Bitbucket Pipelines, or similar.
Familiarity with version control systems (Git preferred).
Experience with infrastructure monitoring and alerting tools (e.g., CloudWatch, Prometheus, Nagios).
Understanding of networking concepts, security protocols, and access control.
Exposure to configuration management tools (Ansible, Terraform, etc.) is a plus.

About PENNEP
PENNEP works with national and multinational clients and strives to become one of the world's leading professional services companies, with a vision to transform clients' business, operating, and technology models for the digital era. Our leaders have 50 years of combined industry knowledge and a consultative approach that helps clients envision, build, and run more innovative and efficient businesses.

Applicants may be required to attend interviews in person or by video conference. In addition, candidates may be required to present their current state- or government-issued ID during each interview.

PENNEP is an equal-opportunity employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, colour, religion, national origin, disability, protected veteran status, age, or any other characteristic protected by law. We provide engineering-excellence practices training for all our employees.

Job Types: Full-time, Permanent
Pay: From ₹22,000.00 per month
Benefits: Paid sick time, Paid time off, Provident Fund
Schedule: Day shift, Monday to Friday
Supplemental Pay: Performance bonus, Yearly bonus

Application Question(s):
Have you worked with AWS services such as EC2, S3, IAM, or VPC?
Have you configured or maintained CI/CD pipelines using Bitbucket and/or Jenkins?
Are you familiar with configuration management tools such as Ansible, Terraform, or similar?
Do you have experience with version control systems like Git?
Are you from Punjab? We are looking for a local candidate from Punjab. At this point, we are not hiring PAN India.

Work Location: In person
Speak with the employer: +91 7508736637
Expected Start Date: 02/06/2025
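The monitoring-and-alerting responsibility in the posting above often starts as a simple health-check script before graduating to CloudWatch or Prometheus. A minimal sketch; the endpoint URL and the 99% SLO target are illustrative assumptions:

```python
import urllib.request
import urllib.error

def check_endpoint(url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint answers with an HTTP status below 400.
    This makes a real network call, so it is not exercised in tests."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 400
    except (urllib.error.URLError, OSError):
        return False

def availability(results: list[bool]) -> float:
    """Fraction of successful checks over a window; a toy uptime metric."""
    return sum(results) / len(results) if results else 0.0

def breaches_slo(results: list[bool], slo: float = 0.99) -> bool:
    """Flag when measured availability drops below the target SLO."""
    return availability(results) < slo
```

Run on a schedule (cron, or a small Lambda), `breaches_slo` becomes the condition that fires a pager or chat alert.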
Posted 3 weeks ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About the Company:
They balance innovation with an open, friendly culture and the backing of a long-established parent company known for its ethical reputation. We guide customers from what's now to what's next by unlocking the value of their data and applications to solve their digital challenges, achieving outcomes that benefit both business and society.

· Job Title: GCP DevOps Engineer
· Location: PAN India (Hybrid)
· Experience: 10+ years
· Job Type: Contract to hire
· Notice Period: Immediate joiners
Mandatory Skills: GCP DevOps Engineer

Job Description - GCP
Primary Skills:
• GCP
• Kubernetes (GKE, EKS, AKS)
• Logging and monitoring (Grafana, Splunk, Datadog)
• Networking (Service Mesh, Istio)
• Serverless architecture (GCP Cloud Functions, AWS Lambda)
Good to have:
• Monitoring tools (Grafana, Prometheus, etc.)
• Networking (VPC, DNS, Load Balancing)
Responsibilities:
• Design, develop, and maintain a scalable and highly available cloud infrastructure
• Automate and streamline operations and processes
• Monitor and troubleshoot system issues
• Create and maintain documentation
• Develop and maintain tools to automate operational tasks
• Collaborate with software engineers to develop and deploy software applications
• Develop and manage automated deployment pipelines
• Utilize Continuous Integration and Continuous Delivery (CI/CD) tools and practices
• Provision and maintain cloud-based databases
• Optimize resources to reduce costs
• Analyse and optimize system performance
• Work with the development team to ensure code quality and security
• Ensure compliance with security and other industry standards
• Keep up with the latest technologies and industry trends
• Proficiency in scripting languages such as Python, Bash, PowerShell, etc.
• Experience with configuration management tools such as Chef, Puppet, and Ansible
• Experience with CI/CD tools such as Jenkins, Travis CI, and CircleCI
• Experience with container-based technologies such as Docker, Kubernetes, and ECS
• Experience with version control systems such as Git
• Understanding of network protocols and technologies
• Ability to prioritize tasks and work independently
• Strong problem-solving and communication skills
• Should be able to implement and maintain a highly available, scalable, and secure cloud infrastructure

Seniority Level: Mid-Senior level
Industry: IT Services and IT Consulting
Employment Type: Contract
Job Functions: Business Development, Consulting
Skills: GCP DevOps, Terraform, Cloud Infrastructure
Posted 3 weeks ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Project Role: Software Development Engineer
Project Role Description: Analyze, design, code and test multiple components of application code across one or more clients. Perform maintenance, enhancements and/or development work.
Must-have skills: Python (Programming Language)
Good-to-have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years of full-time education

Summary: As a Software Engineer with Python expertise, you will develop data-driven applications on AWS. You will be responsible for creating scalable data pipelines and algorithms to process and deliver actionable vehicle data insights.

Roles & Responsibilities:
1. Lead the design and development of Python-based applications and services
2. Architect and implement cloud-native solutions using AWS services
3. Mentor and guide the Python development team, promoting best practices and code quality
4. Collaborate with data scientists and analysts to implement data processing pipelines
5. Participate in architecture discussions and contribute to technical decision-making
6. Ensure the scalability, reliability, and performance of Python applications on AWS
7. Stay current with Python ecosystem developments, AWS services, and industry best practices

Professional & Technical Skills:
1. Python programming
2. Web framework expertise (Django, Flask, or FastAPI)
3. Data processing and analysis
4. Database technologies (SQL and NoSQL)
5. API development
6. Significant experience working with AWS Lambda
7. AWS services (e.g., EC2, S3, RDS, Lambda, SageMaker, EMR); any AWS certification is a plus
8. Infrastructure as Code (e.g., AWS CloudFormation, Terraform)
9. Test-Driven Development (TDD)
10. DevOps practices
11. Agile methodologies
12. Experience with big data technologies and data warehousing solutions on AWS (e.g., Redshift, EMR, Athena)
13. Strong knowledge of the AWS platform and services (e.g., EC2, S3, RDS, Lambda, API Gateway, VPC, IAM)

Additional Information:
1. The candidate should have a minimum of 3 years of experience in Python programming.
2. This position is based at our Hyderabad office.
3. 15 years of full-time education is required (Bachelor of Computer Science or any related stream; master’s degree preferred).
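Since the role centres on Python services running in AWS Lambda, it may help to recall that a Lambda handler is just a Python function taking an event and a context. The sketch below is a hypothetical batch-processing handler in the spirit of the posting's "vehicle data insights"; the event shape and field names (`records`, `vehicle_id`, `speed_kmh`) are invented for illustration and are not from the job description:

```python
import json


def handler(event, context=None):
    """Lambda-style entry point: validate each record, transform it, count failures.

    Converts a hypothetical speed field from km/h to m/s; malformed
    records are collected rather than aborting the whole batch.
    """
    processed, failed = [], []
    for record in event.get("records", []):
        try:
            processed.append({
                "vehicle_id": record["vehicle_id"],
                "speed_ms": round(record["speed_kmh"] / 3.6, 2),
            })
        except (KeyError, TypeError):
            failed.append(record)
    # Lambda proxy integrations conventionally return a status code and a JSON body.
    return {
        "statusCode": 200,
        "body": json.dumps({"processed": len(processed), "failed": len(failed)}),
    }
```

Keeping the handler a pure function like this makes it trivially unit-testable without deploying to AWS.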
Posted 3 weeks ago
3.0 - 5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Summary
We are seeking a highly motivated and experienced DevOps Engineer to join our dynamic team. The ideal candidate will have 3-5 years of hands-on experience in building, maintaining, and optimizing scalable, secure, and reliable infrastructure and continuous delivery pipelines. You will play a crucial role in bridging the gap between development and operations, automating processes, and ensuring the smooth and efficient deployment of our applications.

Responsibilities:
CI/CD Pipeline Management: Design, implement, and maintain robust Continuous Integration and Continuous Delivery (CI/CD) pipelines using tools like Jenkins, GitLab CI/CD, Azure DevOps, or similar.
Infrastructure as Code (IaC): Develop and manage infrastructure using IaC principles and tools such as Terraform, CloudFormation, or Ansible to ensure consistency and repeatability across environments.
Cloud Platform Management: Administer and optimize cloud infrastructure (AWS, Azure, GCP) including compute, storage, networking, and security services.
Containerization & Orchestration: Implement and manage containerization technologies (Docker) and orchestration platforms (Kubernetes) for scalable application deployment.
Monitoring & Logging: Set up and maintain comprehensive monitoring, logging, and alerting systems (e.g., Prometheus, Grafana, ELK Stack, Datadog) to ensure system health and performance.
Automation: Automate repetitive tasks across the software development lifecycle, from build and test to deployment and operations.
Security Best Practices: Implement and enforce security best practices throughout the infrastructure and CI/CD pipelines, including vulnerability scanning, access control, and compliance.
Troubleshooting & Support: Provide operational support, troubleshoot issues, and perform root cause analysis for production and non-production environments.
Collaboration: Work closely with development, QA, and product teams to understand requirements, provide infrastructure solutions, and improve overall system reliability.
Documentation: Create and maintain detailed documentation for infrastructure, processes, and troubleshooting guides.
Performance Optimization: Identify and implement performance improvements and cost optimizations for cloud resources.

Qualifications:
Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent practical experience.
3-5 years of professional experience as a DevOps Engineer, SRE, or similar role.
Strong proficiency in at least one major cloud platform (AWS, Azure, or GCP):
AWS: EC2, S3, RDS, VPC, IAM, Lambda, CloudWatch, EKS/ECS.
Azure: Virtual Machines, Azure SQL Database, VNet, Azure AD, Azure Monitor, AKS.
GCP: Compute Engine, Cloud Storage, Cloud SQL, VPC, IAM, Cloud Monitoring, GKE.
Extensive experience with CI/CD tools: Jenkins, GitLab CI/CD, Azure DevOps, CircleCI, Travis CI.
Solid understanding and hands-on experience with Infrastructure as Code (IaC) tools: Terraform, CloudFormation, or Ansible.
Proficiency in containerization (Docker) and container orchestration (Kubernetes).
Scripting expertise: Strong skills in shell scripting (Bash), Python, or Go.
Experience with monitoring and logging tools: Prometheus, Grafana, ELK Stack (Elasticsearch, Logstash, Kibana), Datadog, Splunk.
Familiarity with version control systems: Git (GitHub, GitLab, Bitbucket).
Understanding of networking concepts: TCP/IP, DNS, Load Balancing, Firewalls.
Experience with Linux/Unix operating systems.
Excellent problem-solving, analytical, and communication skills.

Preferred Qualifications:
Experience with configuration management tools like Chef or Puppet.
Knowledge of database administration (SQL and NoSQL databases).
Experience with serverless architectures (AWS Lambda, Azure Functions, Google Cloud Functions).
Certifications in cloud platforms (e.g., AWS Certified DevOps Engineer, Azure DevOps Engineer Expert).
Familiarity with Agile/Scrum methodologies.
Experience with security tools and practices (e.g., SAST, DAST, WAF).
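The Infrastructure as Code tools named above (Terraform, CloudFormation, Ansible) share one core idea: diff the desired configuration against the current state and produce a plan of creates, updates, and deletes. A toy Python sketch of that idea, greatly simplified relative to any real IaC engine (the resource names and attributes are made up):

```python
def plan(current, desired):
    """Compute an execution plan that moves `current` state to `desired` state.

    Both arguments map resource name -> attribute dict, loosely mirroring
    how IaC tools diff recorded state against configuration.
    """
    actions = {"create": [], "update": [], "delete": []}
    for name, attrs in desired.items():
        if name not in current:
            actions["create"].append(name)       # in config, not yet provisioned
        elif current[name] != attrs:
            actions["update"].append(name)       # provisioned, but drifted
    actions["delete"] = [n for n in current if n not in desired]  # no longer in config
    return actions
```

The value of this pattern is repeatability: running the plan twice against an already-converged state yields an empty plan, which is what makes IaC deployments idempotent.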
Posted 3 weeks ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
DevOps Release Engineer - Customer Lifecycle Engineering, CMR

ABOUT US:
LSEG (London Stock Exchange Group) is more than a diversified global financial markets infrastructure and data business. We are dedicated, open-access partners with a commitment to excellence in delivering the services our customers expect from us. With extensive experience, deep knowledge and worldwide presence across financial markets, we enable businesses and economies around the world to fund innovation, manage risk and create jobs. It’s how we’ve contributed to supporting the financial stability and growth of communities and economies globally for more than 300 years. Through a comprehensive suite of trusted financial market infrastructure services – and our open-access model – we provide the flexibility, stability and trust that enable our customers to pursue their ambitions with confidence and clarity. LSEG is headquartered in the United Kingdom, with significant operations in 65 countries across EMEA, North America, Latin America and Asia Pacific. We employ 25,000 people globally, more than half of them located in Asia Pacific. LSEG’s ticker symbol is LSEG.

OUR PEOPLE:
People are at the heart of what we do and drive the success of our business. Our values of Integrity, Partnership, Excellence and Change shape how we think, how we do things and how we help our people fulfil their potential. We embrace diversity and actively seek to attract individuals with unique backgrounds and perspectives. We break down barriers and encourage teamwork, enabling innovation and rapid development of solutions that make a difference. Our workplace generates an enriching and rewarding experience for our people and customers alike. Our vision is to build an inclusive culture in which everyone feels encouraged to fulfil their potential.
We know that real personal growth cannot be achieved by simply climbing a career ladder – which is why we encourage and enable a wealth of avenues and interesting opportunities for everyone to broaden and deepen their skills and expertise. As a global organisation spanning 65 countries and one rooted in a culture of growth, opportunity, diversity and innovation, LSEG is a place where everyone can grow, develop and fulfil their potential with meaningful careers.

ROLE PROFILE
A DevOps engineer designs, implements, and maintains tools and processes for continuous integration, delivery, and deployment of software. They work closely with developers, testers, and system administrators to ensure the entire software development life cycle is smooth, efficient, and error-free. Their primary goal is to automate repetitive tasks, reduce manual intervention, and improve the overall user experience, quality, and reliability of software products. Overall, the daily activities and duties are defined in the software development contract.

TECH PROFILE/ESSENTIAL SKILLS
• Cloud platforms like AWS, e.g. EC2, VPC, CloudFront, WAF, ALB, Route53, S3, Lambda.
• Minimum 3 years of relevant experience in DevOps.
• Experience in DevSecOps.
• Knowledge of Agile and DevOps tooling, e.g. JIRA, Asana, Jenkins, Nexus, Maven, Git, GitLab Runner, SonarQube.
• Expertise and/or understanding in the following languages and technologies: Java (Spring Boot), microservices, JavaScript, services-based architecture, ReactJS, REST.
• Experience in one or more scripting languages such as Bash or Python.
• Testing frameworks, e.g. JUnit, TestNG.
• SSO understanding, e.g. SAML, OAuth2.
• Strong technical acumen.
PREFERRED SKILLS AND EXPERIENCE
• Strong background in Linux and/or Windows
• Strong coding/scripting experience (Bash/Shell)
• Experience with configuring and using a CI/CD tool such as Jenkins
• Experience with provisioning and configuration management using Bitbucket, Ansible, Terraform or equivalent
• Git source control using Bitbucket or equivalent
• Experience with AWS: AWS API, EC2, S3, RDS, Lambda, Route53, VPC, CloudFront, CloudWatch, IAM, ADFS, ElastiCache, microservices, Elastic Beanstalk
• Experience with security best practices for data at rest and data in transit using AWS tools such as SSE-KMS, security policies, cryptographic protocols
• Excellent verbal and written communication skills
• Pro-active approach to learning about and adapting to new technologies
• Knowledge of agile software development and DevOps philosophies
• Experience with JIRA and Confluence

EDUCATION AND PROFESSIONAL SKILLS
• BE/MS degree in Computer Science, Software Engineering or a STEM discipline (desirable)
• Solid English reading/writing capability required
• Good communication and articulation skills
• Curiosity about new technologies and tools, creative thinking and initiative-taking
• Agile-related certifications preferable
DETAILED RESPONSIBILITIES
• Contributes to and leads development of CI/CD pipelines in cooperation with Central DevOps to update existing and support new features
• Identifies, prioritises, and performs tasks in the software development lifecycle
• Assists multiple delivery teams by integrating source code and automation frameworks into release, integration and deployment pipelines
• Advocates for and implements engineering guidelines for excellence, including automation, code reviews, continuous integration/continuous delivery (CI/CD), and performance tuning
• Maintains environments, ensuring correct tagging, rightsizing, utilisation and documentation
• Documents platform and application deployment, recovery and support requirements
• Reviews and records cloud health and performance metrics, publishing to development leads on a regular basis
• Collaborates closely with business and engineering partners to deliver products, services, improvements and solutions that meet customer needs and align with the goals of the business and engineering lines
• Communicates with clarity, precision, and influence, presenting complex information in a clear and concise format appropriate for the audience
• Drives a culture of engineering excellence through mentorship, peer reviews, and promoting standard methodologies in software design and development
• Continuously optimises systems for performance, scalability, and security in a fast-paced production environment
• Supports modernisation of infrastructure and architecture of platforms

LSEG PURPOSE AND VALUES
Our purpose is driving financial stability, empowering economies and enabling customers to create sustainable growth. Underpinning our purpose, our values of Integrity, Partnership, Excellence and Change set the standard for everything we do, every day. They guide the way we interact with each other, the partners we work with and our customers. Delivering on our purpose and living up to our values is a responsibility that we all share.
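One of the detailed responsibilities above is maintaining environments with correct tagging. A small Python sketch of a tag-compliance audit; the required tag set here is an example policy for illustration, not LSEG's actual standard:

```python
# Example policy: every resource must carry these tag keys (assumption, adjust per org).
REQUIRED_TAGS = {"owner", "environment", "cost-centre"}


def untagged(resources, required=REQUIRED_TAGS):
    """Return {resource_name: sorted missing tag keys} for non-compliant resources.

    `resources` maps resource name -> tag dict, as a cloud inventory
    export might provide.
    """
    report = {}
    for name, tags in resources.items():
        missing = required - set(tags)
        if missing:
            report[name] = sorted(missing)
    return report
```

In practice a report like this would be generated from a cloud inventory API and published to the development leads alongside the health and performance metrics mentioned above.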
To achieve our ambitions through a strong culture, People Leaders need to role model our Values and create the culture for everyone at LSEG to be at their best. LSEG is a leading global financial markets infrastructure and data provider. Our purpose is driving financial stability, empowering economies and enabling customers to create sustainable growth. Our purpose is the foundation on which our culture is built. Our values of Integrity, Partnership , Excellence and Change underpin our purpose and set the standard for everything we do, every day. They go to the heart of who we are and guide our decision making and everyday actions. Working with us means that you will be part of a dynamic organisation of 25,000 people across 65 countries. However, we will value your individuality and enable you to bring your true self to work so you can help enrich our diverse workforce. You will be part of a collaborative and creative culture where we encourage new ideas and are committed to sustainability across our global business. You will experience the critical role we have in helping to re-engineer the financial ecosystem to support and drive sustainable economic growth. Together, we are aiming to achieve this growth by accelerating the just transition to net zero, enabling growth of the green economy and creating inclusive economic opportunity. LSEG offers a range of tailored benefits and support, including healthcare, retirement planning, paid volunteering days and wellbeing initiatives. We are proud to be an equal opportunities employer. This means that we do not discriminate on the basis of anyone’s race, religion, colour, national origin, gender, sexual orientation, gender identity, gender expression, age, marital status, veteran status, pregnancy or disability, or any other basis protected under applicable law. 
Conforming with applicable law, we can reasonably accommodate applicants' and employees' religious practices and beliefs, as well as mental health or physical disability needs. Please take a moment to read this privacy notice carefully, as it describes what personal information London Stock Exchange Group (LSEG) (we) may hold about you, what it’s used for, and how it’s obtained, your rights and how to contact us as a data subject. If you are submitting as a Recruitment Agency Partner, it is essential and your responsibility to ensure that candidates applying to LSEG are aware of this privacy notice.
Posted 3 weeks ago
7.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
DevOps Release Manager - Customer Lifecycle Engineering, CMR

ABOUT US:
LSEG (London Stock Exchange Group) is more than a diversified global financial markets infrastructure and data business. We are dedicated, open-access partners with a commitment to excellence in delivering the services our customers expect from us. With extensive experience, deep knowledge and worldwide presence across financial markets, we enable businesses and economies around the world to fund innovation, manage risk and create jobs. It’s how we’ve contributed to supporting the financial stability and growth of communities and economies globally for more than 300 years. Through a comprehensive suite of trusted financial market infrastructure services – and our open-access model – we provide the flexibility, stability and trust that enable our customers to pursue their ambitions with confidence and clarity. LSEG is headquartered in the United Kingdom, with significant operations in 65 countries across EMEA, North America, Latin America and Asia Pacific. We employ 25,000 people globally, more than half of them located in Asia Pacific. LSEG’s ticker symbol is LSEG.

OUR PEOPLE:
People are at the heart of what we do and drive the success of our business. Our values of Integrity, Partnership, Excellence and Change shape how we think, how we do things and how we help our people fulfil their potential. We embrace diversity and actively seek to attract individuals with unique backgrounds and perspectives. We break down barriers and encourage teamwork, enabling innovation and rapid development of solutions that make a difference. Our workplace generates an enriching and rewarding experience for our people and customers alike. Our vision is to build an inclusive culture in which everyone feels encouraged to fulfil their potential.
We know that real personal growth cannot be achieved by simply climbing a career ladder – which is why we encourage and enable a wealth of avenues and interesting opportunities for everyone to broaden and deepen their skills and expertise. As a global organisation spanning 65 countries and one rooted in a culture of growth, opportunity, diversity and innovation, LSEG is a place where everyone can grow, develop and fulfil their potential with meaningful careers.

ROLE PROFILE
As a Release Manager, you should have a strong background in modern automation tooling and processes as well as experience maintaining clustered applications. Your primary responsibility is to focus on the fine details of assembling different interrelated components and to streamline the release management process to satisfy a complex or solution-based release. You coordinate with different stakeholders on the requirements, testing, and release calendar of the necessary components, and ensure the synchronous running of day-to-day processes. Planning with several software development teams is part of the release manager’s duty: you govern and manage schedules to satisfy interdependencies. Your primary aim is to enable continuous delivery of a solution into the hands of customers as soon as possible, while ensuring quality benchmarks are met.

TECH PROFILE/ESSENTIAL SKILLS
• Cloud platforms like AWS, e.g. EC2, VPC, CloudFront, WAF, ALB, Route53, S3, Lambda.
• Minimum 7 years of relevant experience in DevOps.
• Experience in DevSecOps.
• Knowledge of Agile and DevOps tooling, e.g. JIRA, Asana, Jenkins, Nexus, Maven, Git, GitLab Runner, SonarQube.
• Expertise and/or understanding in the following languages and technologies: Java (Spring Boot), microservices, JavaScript, services-based architecture, ReactJS, REST.
• Experience in one or more scripting languages such as Bash or Python.
• Testing frameworks, e.g. JUnit, TestNG.
• SSO understanding, e.g. SAML, OAuth2.
• Strong technical acumen.
PREFERRED SKILLS AND EXPERIENCE
• Strong background in Linux and/or Windows
• Strong coding/scripting experience (Bash/Shell)
• Experience with configuring and using a CI/CD tool such as Jenkins
• Experience with provisioning and configuration management using Bitbucket, Ansible, Terraform or equivalent
• Git source control using Bitbucket or equivalent
• Experience with AWS: AWS API, EC2, S3, RDS, Lambda, Route53, VPC, CloudFront, CloudWatch, IAM, ADFS, ElastiCache, microservices, Elastic Beanstalk
• Experience with security best practices for data at rest and data in transit using AWS tools such as SSE-KMS, security policies, cryptographic protocols
• Excellent verbal and written communication skills
• Pro-active approach to learning about and adapting to new technologies
• Knowledge of agile software development and DevOps philosophies
• Experience with JIRA and Confluence

EDUCATION AND PROFESSIONAL SKILLS
• BE/MS degree in Computer Science, Software Engineering or a STEM discipline (desirable)
• Solid English reading/writing capability required
• Good communication and articulation skills
• Curiosity about new technologies and tools, creative thinking and initiative-taking
• Agile-related certifications preferable
DETAILED RESPONSIBILITIES
• Build code to provision the full stack, from infrastructure through to application
• Deploy, automate, maintain, and manage DevOps infrastructure to ensure its availability, performance, scalability, and security
• Build CI/CD pipelines to orchestrate the promotion of code
• Ensure that all operational responsibilities are performed, including monitoring systems and deploying releases
• Support development changes, providing accurate estimates for time and capital costs, attending project meetings, analysing the impact of proposed project work and meeting deadlines
• Work closely with the Agile team to ensure operational requirements are met and changes are impact-assessed and implemented
• Troubleshoot and problem-solve across the DevOps infrastructure, suggesting architecture improvements and recommending process improvements
• Evaluate new technology options and vendor products
• Work in cloud technologies, porting existing systems to the cloud and automating them
• Automate build, integration and test cycles related to the product
• Implement our technical roadmap as we scale our services and build new products
• Advocate and evangelise best-practice engineering
• Maintain familiarity with security and operational best practices
• Work in a modern, agile environment
• Identify, prioritise, and execute tasks in the software development life cycle
• Develop tools and applications by producing clean, efficient code
• Communicate with clarity, precision, and influence, presenting complex information in a concise format appropriate for the audience

LSEG PURPOSE AND VALUES
Our purpose is driving financial stability, empowering economies and enabling customers to create sustainable growth. Underpinning our purpose, our values of Integrity, Partnership, Excellence and Change set the standard for everything we do, every day. They guide the way we interact with each other, the partners we work with and our customers.
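Governing schedules "to satisfy interdependencies", as the role profile puts it, is in code terms a topological-ordering problem: each component can only be released after everything it depends on. A hedged Python sketch of such an ordering (the component names are hypothetical, and real release calendars add time windows, freezes, and approvals on top):

```python
def release_order(deps):
    """Order components so each releases only after its dependencies.

    `deps` maps component -> set of components it depends on.
    Ties are broken alphabetically so the plan is reproducible.
    Raises ValueError on a dependency cycle.
    """
    remaining = {c: set(d) for c, d in deps.items()}
    order = []
    while remaining:
        # Components whose dependencies are all already released.
        ready = sorted(c for c, d in remaining.items() if not d)
        if not ready:
            raise ValueError(f"dependency cycle among: {sorted(remaining)}")
        for c in ready:
            order.append(c)
            del remaining[c]
        for d in remaining.values():
            d.difference_update(ready)
    return order
```

(Python 3.9+ ships `graphlib.TopologicalSorter` for the same job; the hand-rolled version above just makes the tie-breaking explicit.)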
Delivering on our purpose and living up to our values is a responsibility that we all share. To achieve our ambitions through a strong culture, People Leaders need to role model our Values and create the culture for everyone at LSEG to be at their best. LSEG is a leading global financial markets infrastructure and data provider. Our purpose is driving financial stability, empowering economies and enabling customers to create sustainable growth. Our purpose is the foundation on which our culture is built. Our values of Integrity, Partnership , Excellence and Change underpin our purpose and set the standard for everything we do, every day. They go to the heart of who we are and guide our decision making and everyday actions. Working with us means that you will be part of a dynamic organisation of 25,000 people across 65 countries. However, we will value your individuality and enable you to bring your true self to work so you can help enrich our diverse workforce. You will be part of a collaborative and creative culture where we encourage new ideas and are committed to sustainability across our global business. You will experience the critical role we have in helping to re-engineer the financial ecosystem to support and drive sustainable economic growth. Together, we are aiming to achieve this growth by accelerating the just transition to net zero, enabling growth of the green economy and creating inclusive economic opportunity. LSEG offers a range of tailored benefits and support, including healthcare, retirement planning, paid volunteering days and wellbeing initiatives. We are proud to be an equal opportunities employer. This means that we do not discriminate on the basis of anyone’s race, religion, colour, national origin, gender, sexual orientation, gender identity, gender expression, age, marital status, veteran status, pregnancy or disability, or any other basis protected under applicable law. 
Conforming with applicable law, we can reasonably accommodate applicants' and employees' religious practices and beliefs, as well as mental health or physical disability needs. Please take a moment to read this privacy notice carefully, as it describes what personal information London Stock Exchange Group (LSEG) (we) may hold about you, what it’s used for, and how it’s obtained, your rights and how to contact us as a data subject. If you are submitting as a Recruitment Agency Partner, it is essential and your responsibility to ensure that candidates applying to LSEG are aware of this privacy notice.
Posted 3 weeks ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
HMH is a learning technology company committed to delivering connected solutions that engage learners, empower educators and improve student outcomes. As a leading provider of K-12 core curriculum, supplemental and intervention solutions, and professional learning services, HMH partners with educators and school districts to uncover solutions that unlock students’ potential and extend teachers’ capabilities. HMH serves more than 50 million students and 4 million educators in 150 countries. HMH Technology India Pvt. Ltd. is our technology and innovation arm in India, focused on developing novel products and solutions using cutting-edge technology to better serve our clients globally. HMH aims to help employees grow as people, and not just as professionals. For more information, visit www.hmhco.com

We are seeking a highly skilled Senior Infrastructure Engineer with expertise in Windows and Linux operating systems, Azure and AWS cloud platforms, and enterprise-grade infrastructure solutions. The ideal candidate will have a proven track record of designing, implementing, and managing complex infrastructure in a dynamic environment.

Key Responsibilities
Systems and Infrastructure Management: Design, deploy, and maintain Windows- and Linux-based systems in both on-premises and cloud environments, ensuring the reliability, security, and performance of those systems. Administer core services such as Active Directory, DHCP, DNS, and Group Policy.
Cloud Infrastructure: Architect, implement, and manage Azure infrastructure components, including virtual machines, virtual networks, storage accounts, Azure AD (Entra ID), and enterprise app registrations. Manage AWS cloud resources, including EC2 instances, S3 buckets, RDS databases, and IAM roles.
Virtualization and Storage Solutions: Utilize VMware and Windows hypervisor technologies to design, deploy, and manage virtualized environments for performance and scalability. Design and maintain enterprise backup and storage solutions, including Rubrik and Pure Storage.
Automation and Orchestration: Automate infrastructure provisioning, configuration, and deployment using tools like Terraform, Ansible, PowerShell, and Azure Automation.
Monitoring and Performance Optimization: Monitor system performance using tools such as Datadog and LogicMonitor. Implement performance tuning measures to enhance infrastructure reliability and efficiency.
Security and Compliance: Implement security best practices, access controls, and compliance measures in line with SOC 2 and SOX standards.
Collaboration and Innovation: Work with cross-functional teams to design scalable, secure, and highly available infrastructure solutions. Share expertise and mentor team members through documentation and training sessions.
Disaster Recovery: Develop and maintain backup strategies and disaster recovery plans for critical systems and data.
Continuous Learning: Stay current with emerging technologies and industry best practices related to infrastructure engineering, cloud computing, and virtualization.

Qualifications
• Bachelor’s degree in Computer Science, Information Technology, or a related field.
• 5+ years of experience in infrastructure engineering with a focus on Windows and Linux operating systems and cloud platforms (Azure and AWS).
• Strong proficiency in Windows server administration, Linux system administration, and performance tuning.
• Hands-on experience with Azure services such as VMs, Azure Networking, Azure Storage, and enterprise app registrations.
• Familiarity with AWS services and tools, including EC2, S3, RDS, VPC, and CloudFormation.
• Expertise in VMware vSphere, ESXi, and vCenter.
• Experience with scripting and automation tools like PowerShell, Python, and Bash.
• Excellent communication and collaboration skills.
• Strong problem-solving abilities and adaptability in a fast-paced environment.
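Backup strategy design, mentioned under Disaster Recovery, usually reduces to a retention policy: which snapshots to keep and which to expire. As a simplified Python sketch (the "last N daily plus monthly firsts" policy and its parameters are assumptions for illustration; products such as Rubrik express far richer SLA policies):

```python
from datetime import date, timedelta


def keep(backups, today, daily=7, monthlies=3):
    """Select backup dates to retain under a simple retention policy.

    Keeps everything from the last `daily` days, plus the first backup
    of each of the most recent `monthlies` months.
    """
    backups = sorted(backups)
    cutoff = today - timedelta(days=daily)
    retained = {b for b in backups if b >= cutoff}          # recent dailies
    firsts = {}
    for b in backups:                                       # earliest backup per month
        firsts.setdefault((b.year, b.month), b)
    for month in sorted(firsts)[-monthlies:]:               # most recent months
        retained.add(firsts[month])
    return sorted(retained)
```

Everything not returned by `keep` is a candidate for expiry, which is how such policies bound storage cost while preserving recovery points.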
Certifications (Preferred)
• Microsoft Certified: Azure Administrator Associate (AZ-104) or equivalent.
• AWS Certified Solutions Architect - Associate or equivalent.
• VMware Certified Professional (VCP).

If you are a motivated self-starter with a passion for infrastructure engineering and a desire to work with cutting-edge technologies, we encourage you to apply for this exciting opportunity. HMH Technology Private Limited is an Equal Opportunity Employer and considers applicants for all positions without regard to race, colour, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. We are committed to creating a dynamic work environment that values diversity and inclusion, respect and integrity, customer focus, and innovation. For more information, visit https://careers.hmhco.com/ . Follow us on Twitter, Facebook, LinkedIn, and YouTube.
Posted 3 weeks ago
0 years
0 Lacs
Bengaluru East, Karnataka, India
On-site
Technologies: MS SQL DBA; Data Center Networking with VXLAN, EVPN, FabricPath, BGP, vPC, Nexus; Storage: NetApp
A day in the life of an Infoscion
As part of the Infosys delivery team, your primary role would be to interface with the client for quality assurance, issue resolution and ensuring high customer satisfaction. You will understand requirements, create and review designs, validate the architecture and ensure high levels of service offerings to clients in the technology domain. You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, and perform code reviews and unit test plan reviews. You will lead and guide your teams towards developing optimized, high-quality code deliverables, continual knowledge management and adherence to organizational guidelines and processes. You would be a key contributor to building efficient programs/systems, and if you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!
Knowledge of more than one technology
Basics of architecture and design fundamentals
Knowledge of testing tools
Knowledge of agile methodologies
Understanding of project life cycle activities on development and maintenance projects
Understanding of one or more estimation methodologies; knowledge of quality processes
Basics of the business domain to understand business requirements
Analytical abilities, strong technical skills, good communication skills
Good understanding of the technology and domain
Ability to demonstrate a sound understanding of software quality assurance principles, SOLID design principles and modelling methods
Awareness of latest technologies and trends
Excellent problem-solving, analytical and debugging skills
Posted 3 weeks ago
2.0 - 5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Summary
We are seeking a skilled and motivated AWS Backup Specialist to design, implement, and manage AWS native backup solutions, including disaster recovery strategies, while supporting a range of AWS services such as EC2, S3, EFS, and RDS. The candidate will have hands-on experience in creating scalable, secure, and cost-efficient backup and DR strategies, along with expertise in AWS services. Experience with Azure and GCP is considered an added advantage.
Key Responsibilities
Backup Management and Disaster Recovery (Primary)
Design and implement AWS native backup and restore solutions using AWS Backup, Amazon S3, RDS, EFS, and associated services.
Develop and deploy Disaster Recovery (DR) strategies, ensuring compliance with RTOs (Recovery Time Objectives) and RPOs (Recovery Point Objectives).
Configure automated backup policies for EC2 instances, RDS databases, EFS, and EBS volumes to meet organizational requirements.
Implement cross-region and cross-account backup solutions for enhanced resilience and security.
Periodically perform data restoration tests to validate the effectiveness of backup and DR strategies.
Monitor and maintain the performance of backup solutions, addressing failures or inconsistencies proactively.
Leverage AWS infrastructure automation tools (e.g., AWS CLI, CloudFormation, Terraform) to streamline backup and DR processes.
Ensure backup and DR solutions adhere to compliance, governance, and security standards.
AWS Services (Secondary)
Manage and optimize AWS EC2 instances, including configuration, monitoring, and troubleshooting.
Design, configure, and secure scalable S3 buckets for storage, backups, and lifecycle management.
Handle RDS provisioning, backup configurations, scaling, and troubleshooting.
Work with VPC, IAM, and CloudWatch to maintain secure and well-monitored infrastructure.
Required Skills and Qualifications
2 to 5 years of experience in AWS environments with a focus on backup, recovery, and DR solutions.
Strong expertise in AWS Backup, S3, EC2, RDS, and EFS.
Proven experience in designing and implementing Disaster Recovery (DR) solutions in AWS.
Familiarity with cross-region and cross-account backup architectures.
Knowledge of infrastructure automation using AWS CLI, CloudFormation, or Terraform.
Proficiency in monitoring tools (e.g., CloudWatch, AWS Config).
Experience with scripting (e.g., Python, Bash, or PowerShell) for automation.
Multi-Cloud Platforms: Experience with Azure Backup and Recovery services, as well as GCP's backup and DR solutions.
Cross-Cloud DR Strategies: Knowledge of designing DR solutions across hybrid or multi-cloud environments.
Migration Experience: Experience in migrating backup and DR solutions from on-premises to cloud or between cloud providers.
Preferred Qualifications
Certifications: AWS Solutions Architect Associate, AWS Certified SysOps Administrator, or similar.
Additional Cloud Services: Familiarity with services like Lambda, DynamoDB, and EKS.
Compliance Knowledge: Understanding of data encryption, compliance, and security standards.
Soft Skills
Strong analytical and problem-solving abilities.
Excellent communication and teamwork skills.
Ability to handle and prioritize multiple tasks in a dynamic environment.
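The automated backup policies and cross-region copies described above are typically expressed as an AWS Backup plan. The sketch below builds the request payload that would be passed to boto3's `backup` client via `create_backup_plan`; the vault names, destination ARN, and retention periods are hypothetical values for illustration only.

```python
# Sketch: an AWS Backup plan payload with a daily schedule and a
# cross-region copy action. Vault names and the destination vault
# ARN are hypothetical; retention values are illustrative.

def build_backup_plan(plan_name: str, vault: str, dest_vault_arn: str) -> dict:
    """Return a create_backup_plan-style payload (boto3 'backup' client shape)."""
    return {
        "BackupPlanName": plan_name,
        "Rules": [
            {
                "RuleName": "daily-backups",
                "TargetBackupVaultName": vault,
                # Daily at 05:00 UTC; the schedule drives the effective RPO.
                "ScheduleExpression": "cron(0 5 * * ? *)",
                "Lifecycle": {"DeleteAfterDays": 35},
                # Cross-region resilience: copy each recovery point to a
                # vault in another region, with its own retention.
                "CopyActions": [
                    {
                        "DestinationBackupVaultArn": dest_vault_arn,
                        "Lifecycle": {"DeleteAfterDays": 90},
                    }
                ],
            }
        ],
    }

plan = build_backup_plan(
    "prod-daily",
    "prod-vault",
    "arn:aws:backup:us-west-2:123456789012:backup-vault:dr-vault",
)
print(plan["Rules"][0]["CopyActions"][0]["DestinationBackupVaultArn"])
```

With boto3 this dict would be passed as `client("backup").create_backup_plan(BackupPlan=plan)`; as the posting notes, restores from the copied recovery points should still be exercised periodically, since a plan alone does not prove recoverability.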
Posted 3 weeks ago
16.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About The Role
OSTTRA India - The Role: Enterprise Architect - Cloud & DevOps
The Team: The OSTTRA Technology team is composed of Capital Markets Technology professionals who build, support and protect the applications that operate our network. The technology landscape includes high-performance, high-volume applications as well as compute-intensive applications, leveraging contemporary microservices and cloud-based architectures.
The Impact: Together, we build, support, protect and manage high-performance, resilient platforms that process more than 100 million messages a day. Our services are vital to automated trade processing around the globe, managing peak volumes and working with our customers and regulators to ensure the efficient settlement of trades and effective operation of global capital markets.
What's in it for you: The current objective is to identify individuals with 16+ years of experience and deep expertise to join an existing team of experts who are spread across the world. This is your opportunity to start at the beginning and get the advantages of rapid early growth. This role is based in Gurgaon and is expected to work with different teams and colleagues across the globe.
Responsibilities
Contribute to the policy framework, design, and engineering of a "Cloud platform" architecture that enables efficient DevOps and infra ops: efficient organization, networking and security for workloads running across multiple accounts, cloud providers and on-premises environments.
This role requires a strongly collaborative, partnership-driven way of working across security, infra, networking, compliance/risk and delivery stakeholders and organizations.
This role would define the technology and architecture for the DevSecOps engineering lifecycle, including observability of client journeys and infra/application components across the stack.
The role shall provide direction and decision making for DevSecOps toolchains; the engineering processes and methods; common and custom code libraries related to DevSec and infra ops; and common logging, metrics and tracing standards, observability platforms, etc.
As a representative of the OSTTRA Architecture Council, this role shall provide the definition of the cloud architectural strategy and the technology directions towards the choice of cloud services appropriate for various workload patterns.
This role shall enable a decision-making process which is collaborative and participative, through suitable engagement with the DevSecOps practitioner forum and community. This role shall represent these significant strategy progressions and decisions to the OSTTRA Architecture Council and obtain the required buy-ins.
Though cloud is a significant focus area, the role shall have the corresponding responsibilities in the DevSecOps space, as mentioned above, for on-prem workloads as well.
What We're Looking For
Should have experience of articulating/contributing to the multi-cloud/hybrid strategy for an enterprise from a technical architecture, engineering, DevOps and infra ops point of view.
Should have experience of articulating directions/recommendations for workload placement strategies towards cloud services.
Should have experience of providing guidelines to delivery and infrastructure teams on cloud organisational constructs such as cloud organisations, policy management, accounts, VPCs, cross cloud/account/on-prem connectivity/gateways, security/IAM, etc.
Experience with a couple of cloud providers, preferably AWS and GCP, shall be required.
Candidates should have experience and expertise in DevSecOps and InfraOps engineering practices, methods, and toolchains.
Candidates should have expertise and experience in determining/architecting the stacks that provide robust alerting, logging, monitoring and observability practices and tools/platforms for an enterprise.
Should have hands-on experience with diverse toolchains in this space, such as source code management, test case/issue/quality management, requirements management, build/deployment, source code quality/coverage management, SAST/DAST tools, ELK/Splunk, etc., and should have played a decision-making role in the selection and design of such integrated toolchains.
Candidates should have experience of architecting distributed/microservices-based systems, or of having provided the platform/engineering components for such architectures. It will be a plus if a candidate has experience of integration architecture enablers and technologies such as API gateways, service meshes, service buses, messaging platforms, SSO platforms, etc.
A candidate with 17-20 years of overall experience in the IT industry, with a track record of engineering/architecting large-scale systems and at least 4-5 years of decision-making experience in the DevSecOps and cloud platform architecture/engineering space, is likely to be a suitable fit.
The Location: Gurgaon, India
About Company Statement
OSTTRA is a market leader in derivatives post-trade processing, bringing innovation, expertise, processes and networks together to solve the post-trade challenges of global financial markets. OSTTRA operates cross-asset post-trade processing networks, providing a proven suite of Credit Risk, Trade Workflow and Optimisation services. Together these solutions streamline post-trade workflows, enabling firms to connect to counterparties and utilities, manage credit risk, reduce operational risk and optimise processing to drive post-trade efficiencies.
OSTTRA was formed in 2021 through the combination of four businesses that have been at the heart of post-trade evolution and innovation for the last 20+ years: MarkitServ, Traiana, TriOptima and Reset. These businesses have an exemplary track record of developing and supporting critical market infrastructure and bring together an established community of market participants comprising all trading relationships and paradigms, connected using powerful integration and transformation capabilities.
About OSTTRA
Candidates should note that OSTTRA is an independent firm, a joint venture owned 50/50 by S&P Global and CME Group. As part of the joint venture, S&P Global provides recruitment services to OSTTRA; however, successful candidates will be interviewed and directly employed by OSTTRA, joining our global team of more than 1,200 post-trade experts. With an outstanding track record of developing and supporting critical market infrastructure, our combined network connects thousands of market participants to streamline end-to-end workflows - from trade capture at the point of execution, through portfolio optimization, to clearing and settlement. Joining the OSTTRA team is a unique opportunity to help build a bold new business with an outstanding heritage in financial technology, playing a central role in supporting global financial markets. Learn more at www.osttra.com.
What's In It For You?
Benefits
We take care of you, so you can take care of business. We care about our people. That's why we provide everything you - and your career - need to thrive at S&P Global.
Our Benefits Include
Health & Wellness: Health care coverage designed for the mind and body.
Flexible Downtime: Generous time off helps keep you energized for your time on.
Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
Beyond the Basics: From retail discounts to referral incentive awards - small perks can make a big difference.
For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries
Equal Opportunity Employer
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment.
If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.
US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf
20 - Professional (EEO-2 Job Categories-United States of America), BSMGMT203 - Entry Professional (EEO Job Group)
Job ID: 312307
Posted On: 2025-03-22
Location: Gurgaon, Haryana, India
Posted 3 weeks ago
4.0 years
0 - 0 Lacs
Ahmedabad, Gujarat, India
On-site
Experience: 4.00+ years
Salary: USD 1794-2297 / month (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Office (Ahmedabad)
Placement Type: Full Time, 6-month project-based employment (payroll and compliance to be managed by Uplers Solutions Pvt. Ltd.)
(Note: This is a requirement for one of Uplers' clients - a top US auto inspection company.)
What do you need for this opportunity?
Must-have skills: CloudWatch, Datadog, Serverless Architecture, Infrastructure as Code (IaC), Python/Bash/PowerShell, Terraform/Terragrunt, Agile, AWS, Azure DevOps, Docker, Kubernetes
The client is looking for:
Key Responsibilities:
Design, implement, and manage CI/CD pipelines using Azure DevOps and other tools.
Build, maintain, and scale infrastructure on AWS using Terraform and Terragrunt.
Automate infrastructure provisioning, configuration management, and application deployment.
Implement monitoring, alerting, and logging solutions to ensure system reliability and performance.
Collaborate with software engineers, QA, and security teams to improve release velocity and system stability.
Define and enforce best practices for infrastructure and deployment workflows.
Support cloud migration, cost optimization, and performance tuning initiatives.
Required Skills and Qualifications:
Technical Skills:
3+ years of experience in a DevOps role.
Strong hands-on experience with Azure DevOps (Pipelines, Repos, Artifacts).
Deep understanding of AWS services such as EC2, EBS, S3, IAM, VPC, RDS, and EKS.
Proven experience with Terraform and Terragrunt for IaC and managing multi-environment setups.
Proficiency with scripting (e.g., Bash, Python, PowerShell).
Experience with containerization and orchestration tools (e.g., Docker, Kubernetes).
Familiarity with observability tools (e.g., CloudWatch, ELK, Datadog).
Strong understanding of networking, security, and system architecture in the cloud.
Soft Skills
Excellent problem-solving and analytical skills to identify root causes of issues and recommend solutions.
Strong communication skills to collaborate with team members and articulate technical concepts to non-technical stakeholders.
Ability to interpret business requirements and translate them into effective test strategies.
Team player with a proactive attitude and the ability to work independently with minimal supervision.
Nice to have:
Certifications: AWS Certified DevOps Engineer, Kubernetes Administrator, etc.
Experience with serverless architectures
Exposure to Agile/Scrum development practices
How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!
About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.)
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
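The Terragrunt multi-environment setups mentioned in this role work by layering per-environment inputs over a shared base configuration. The Python sketch below is a rough, hypothetical illustration of that layering idea; the variable names, environments, and values are invented for the example and are not taken from any real configuration.

```python
# Rough illustration of Terragrunt-style configuration layering:
# per-environment inputs override a shared base, with nested maps
# merged recursively. All names and values here are hypothetical.

def deep_merge(base: dict, override: dict) -> dict:
    """Recursively merge `override` onto `base` without mutating either."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# Shared defaults, analogous to a root terragrunt.hcl's common inputs.
BASE = {
    "region": "ap-south-1",
    "tags": {"team": "devops"},
    "instance": {"type": "t3.micro", "monitoring": False},
}

# Per-environment overrides, analogous to env-specific terragrunt.hcl files.
ENVIRONMENTS = {
    "dev": {"instance": {"monitoring": False}},
    "prod": {
        "instance": {"type": "m5.large", "monitoring": True},
        "tags": {"env": "prod"},
    },
}

prod = deep_merge(BASE, ENVIRONMENTS["prod"])
print(prod["instance"])  # → {'type': 'm5.large', 'monitoring': True}
```

The payoff of this layering is that only the deltas live per environment, so dev/staging/prod stay structurally identical and drift is easy to spot in review.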
Posted 3 weeks ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Description
Primary Skills:
Hands-on experience with AWS cloud and its various services such as VPC, EC2, RDS, Security Groups, IAM, ECS, CodeDeploy, CloudFront, S3, Lambda, IoT, CloudFormation, etc.
Strong experience in Apache/Tomcat, Linux and open-source technology stacks.
Work with the team to design, build and support a robust internal PaaS (platform as a service) capability.
Establish new DevOps practices for cloud environments.
Manage production infrastructure with Terraform, CloudFormation, etc.
Good understanding of CI/CD tools, practices, processes and applications such as Ansible, Salt, Git, Jenkins, etc.
Provide critical system security by leveraging best practices and prolific cloud security solutions.
Effectively communicate issues, risks and dependencies with project stakeholders, escalating where appropriate.
Provide recommendations for architecture and process improvements.
Good to Have Skills
Knowledge of GCP and Azure cloud would be a plus.
AWS Solution Architect / DevOps Engineer certification will be preferred.
Create infrastructure architectural designs for IT cloud deployments.
Roles and Responsibilities
Understand the client's infrastructure and the various applications running on the servers.
Write, review, and approve automation and infrastructure code.
Work with project teams to build automation pipelines for application environment deployments.
Brainstorm new ideas and ways to improve development delivery.
Be able to present and communicate the architecture in a visual form.
Evaluate potential single points of failure within the cloud infrastructure; recommend risk-mitigation next steps to build out a high-availability solution.
Apply experience, technical knowledge and innovative techniques to resolve complex operations challenges.
(ref:hirist.tech)
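Managing infrastructure with CloudFormation, as this role requires, means describing resources declaratively rather than provisioning them by hand. Below is a minimal sketch that emits a CloudFormation template for a versioned S3 bucket as JSON; the logical resource name is hypothetical, and a real template would also cover access controls, encryption, and tagging.

```python
import json

# Minimal CloudFormation template for a versioned S3 bucket, built as a
# plain dict and emitted as JSON. The logical ID "ArtifactBucket" is a
# hypothetical name chosen for this sketch.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Sketch: versioned S3 bucket managed as code",
    "Resources": {
        "ArtifactBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {
                # Versioning protects against accidental overwrites/deletes.
                "VersioningConfiguration": {"Status": "Enabled"},
            },
        }
    },
    "Outputs": {
        # Ref on an S3 bucket resolves to the bucket name at deploy time.
        "BucketName": {"Value": {"Ref": "ArtifactBucket"}}
    },
}

print(json.dumps(template, indent=2))
```

A template like this would typically be deployed with `aws cloudformation deploy --template-file template.json --stack-name <stack>`; the same resource in Terraform would be an `aws_s3_bucket` with a versioning block, which is why teams standardize on one of the two tools per environment.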
Posted 3 weeks ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description
We have an exciting and rewarding opportunity for you to take your software engineering career to the next level. As a Software Engineer III at JPMorgan Chase within the Employee Platforms team, you serve as a seasoned member of an agile team to design and deliver trusted market-leading technology products in a secure, stable, and scalable way. You are responsible for carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm's business objectives.
Job Responsibilities
Executes software solutions, design, development, and technical troubleshooting with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems
Creates secure and high-quality production code and maintains algorithms that run synchronously with appropriate systems
Produces architecture and design artifacts for complex applications while being accountable for ensuring design constraints are met by software code development
Gathers, analyzes, synthesizes, and develops visualizations and reporting from large, diverse data sets in service of continuous improvement of software applications and systems
Proactively identifies hidden problems and patterns in data and uses these insights to drive improvements to coding hygiene and system architecture
Contributes to software engineering communities of practice and events that explore new and emerging technologies
Adds to team culture of diversity, equity, inclusion, and respect
Required Qualifications, Capabilities, and Skills
Formal training or certification on software engineering concepts and 3+ years applied experience
Hands-on practical experience in system design, application development, testing, and operational stability
Proficient in coding with Java, Spring Boot, React JS, RESTful API implementation and microservices architecture
Experience in front-end development, debugging, and maintaining code in a large corporate environment with Angular or React JS, Bootstrap or other UI frameworks
Exposure to cloud technologies in AWS such as VPC, Subnet, SG, Lambda, EC2, RDS, Route, Content Delivery, KMS, S3
Solid understanding of agile methodologies such as CI/CD, Application Resiliency, and Security
Use of APM tools, profiling tools and performance tuning; process management, scalability
Preferred Qualifications, Capabilities, and Skills
Familiarity with modern front-end technologies
Exposure to cloud technologies
Ability to work independently and be self-motivated
Fast learner with the ability to solve complex problems
Posted 4 weeks ago
4.0 - 7.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
TCS has been a great pioneer in feeding the fire of young techies like you. We are a global leader in the technology arena and there's nothing that can stop us from growing together.
What we are looking for
Role: Data Network
Experience Range: 4 - 7 years
Location: Indore
Interview Mode: Friday Virtual Drive
Must Have (Primary Skills):
Deployment knowledge of Cisco ACI is a must.
Strong working knowledge of Cisco routers (including ISR and ASR) and the Cisco Catalyst and Nexus platforms (Nexus 2k, 5k, 7k, 9k).
Expertise in various DC technologies such as vPC, VDC, FEX, OTV, LISP, VXLAN and FabricPath.
Must be good with core routing protocols (BGP, OSPF, IS-IS, MPLS), VRF, ACLs, NX-OS upgrades, and L2/L3 troubleshooting.
Experience with legacy-to-SDN ACI migration is preferred.
Datacenter networking with VXLAN configuration, VRF and multi-tenant network segmentation.
Understanding of ACI Multi-Pod, Multi-Site and PBR will be an advantage.
Knowledge of various DC architectures: DC-DR, Active-Active DC, etc.
Good to Have: Technical knowledge of R&S.
Essential:
Be the network SME for the network data and security infrastructure.
Maintain and monitor network performance.
Take ownership of network incidents and troubleshoot from the first alert to resolution, liaising with users, internal teams and vendors.
Minimum Qualification:
15 years of full-time education
Minimum percentile of 50% in 10th, 12th, UG & PG (if applicable)
Posted 4 weeks ago
10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
An Amazing Career Opportunity for a Principal Software Engineer
Location: Chennai, India (Hybrid)
Job ID: 38064
Profile Summary: Principal Engineer with experience in building enterprise-grade web applications. Candidates should be familiar with "The Twelve-Factor App", continuous delivery concepts and "Cloud Native Applications".
About HID Global
HID Global powers the trusted identities of the world's people, places and things. We make it possible for people to transact safely, work productively and travel freely. Our trusted identity solutions give people secure and convenient access to physical and digital places and connect things that can be accurately identified, verified and tracked digitally. Millions of people around the world use HID products and services to navigate their everyday lives, and over 2 billion things are connected through HID technology. We work with governments, educational institutions, hospitals, financial institutions, industrial businesses, and some of the most innovative companies on the planet. Headquartered in Austin, Texas, HID Global has over 3,000 employees worldwide and operates international offices that support more than 100 countries. HID Global® is an ASSA ABLOY Group brand. HID Global is the trusted source for secure identity solutions for millions of customers and users around the world.
In India, we have two Engineering Centres (Bangalore and Chennai) with over 200 engineering staff. The Global Engineering team is based in Chennai, and one of the Business Unit Engineering teams is based in Bangalore.
Check us out: www.hidglobal.com and https://youtu.be/23km5H4K9Eo
LinkedIn: www.linkedin.com/company/hidglobal/mycompany/
Qualifications
To perform this job successfully, an individual must be able to perform each essential duty satisfactorily. The requirements listed below are representative of the knowledge, skill, and/or ability required.
Reasonable accommodation may be made to enable individuals with disabilities to perform the essential functions.
Roles & Responsibilities include the following (other duties may be assigned):
Responsible for designing, developing, and maintaining complex web applications, ensuring high performance and responsiveness.
Involved in both front-end and back-end development, as well as mentoring junior engineers and collaborating with cross-functional teams.
Building new cloud applications.
Enhancing and supporting existing cloud-based products; designing and implementing new features; assessing and paying down technical debt.
Developing pure-cloud and/or hybrid-cloud solutions.
Write test-driven, maintainable code and follow industry standards and web development best practices.
Implement new features and maintain existing features of the production pipeline.
Work with application/system architects, engineering managers, product owners and other engineers to assure accurate timelines and deliverables.
Provide technical guidance and mentorship to a small group of highly skilled and motivated engineers.
Perform peer code reviews and deliver feedback.
Collaborate with fellow engineers to find elegant, long-term solutions as well as creative quick fixes to problems.
Follow best practices and methodologies to produce desired software on time with top-notch quality.
Develop and maintain microservices deployed to the Amazon AWS cloud with Docker.
Be comfortable working with source control branching strategies.
Architect and implement robust, high-quality code across the full stack.
Technical Requirements:
Strong back-end development skills (Node.js, Python, Java, SQL/NoSQL databases).
Experience in building high-performance UI applications on the AWS cloud platform.
Sound knowledge of design principles / architectural solutions across the full stack.
Experience in developing cloud-platform-based applications.
Experience with HTML, CSS, JavaScript, AngularJS, Angular, React.js, Bootstrap, CSS-in-JS, RxJS.
Experience with the MERN and MEAN stacks.
Hands-on experience with JavaScript development on both the client and server side.
Understanding of service-oriented architectures and RESTful web service best practices.
Must be comfortable working on the Unix/Linux shell command line.
Demonstrate the ability to reduce complex ideas and problems into clear concepts and solutions.
Repo hosting services such as Bitbucket and GitHub.
Testing tools such as Mocha, Chai and Jasmine.
Sound knowledge of web security.
Must have experience developing Software-as-a-Service (SaaS) applications.
Possess a passion for new technology and innovation.
Experience with agile ways of working.
Additionally, an understanding of the following would be an added advantage:
Experience with LLM frameworks (LangChain, AutoGen, CrewAI) or ML monitoring tools
AWS (IAM, RDS, DocumentDB, ECR, EC2, VPC, VPN, Subnets, ALB, ...)
Kubernetes cluster hosting in Azure, Google Cloud, DigitalOcean, etc.
Server-side design and programming, or building large-scale web applications, using Java
Python
PyTorch
Spark
Education and/or Experience:
An undergraduate degree from a recognized university in Computer Engineering, Computer Science, or equivalent; a master's degree in Computer Science or equivalent is desired.
Must have 10 to 15 years of hands-on software development experience in modern server-side platform architecture, design and implementation, or experience of creating the architecture, design, and implementation of a large web application.
Must demonstrate depth of understanding in various technical aspects of software product engineering and possess a track record of significant technical contribution to more than one medium-to-large-scale software product development effort.
Deep analytical and problem-solving skills.
Broad exposure/experience across mobile app development, SOA, contemporary web channel solutions, and large-scale distributed systems design and implementation.
Demonstrated experience in designing and coding for security, scale, speed, and reliability.
A desire for learning and understanding the security discipline.
Why apply?
Empowerment: You'll work as part of a global team in a flexible work environment, learning and enhancing your expertise. We welcome an opportunity to meet you and learn about your unique talents, skills, and experiences. You don't need to check all the boxes. If you have most of the skills and experience, we want you to apply.
Innovation: You embrace challenges and want to drive change. We are open to ideas, including flexible work arrangements, job sharing or part-time job seekers.
Integrity: You are results-orientated, reliable, and straightforward and value being treated accordingly. We want all our employees to be themselves, to feel appreciated and accepted.
This opportunity may be open to flexible working arrangements.
HID is an Equal Opportunity/Affirmative Action Employer - Minority/Female/Disability/Veteran/Gender Identity/Sexual Orientation.
We make it easier for people to get where they want to go! On an average day, think of how many times you tap, twist, tag, push or swipe to get access, find information, connect with others or track something. HID technology is behind billions of interactions, in more than 100 countries. We help you create a verified, trusted identity that can get you where you need to go - without having to think about it.
When you join our HID team, you'll also be part of the ASSA ABLOY Group, the global leader in access solutions. You'll have 63,000 colleagues in more than 70 different countries. We empower our people to build their career around their aspirations and our ambitions - supporting them with regular feedback, training, and development opportunities.
Our colleagues think broadly about where they can make the most impact, and we encourage them to grow their roles locally, regionally, or even internationally. As we welcome new people on board, it's important to us to have diverse, inclusive teams, and we value different perspectives and experiences.
Posted 4 weeks ago
India has seen a growing demand for professionals with expertise in Virtual Private Cloud (VPC) technology. As businesses continue to migrate to cloud-based solutions, the need for skilled individuals who can design, implement, and manage VPC environments has never been higher. Job seekers in India looking to pursue a career in VPC have a range of opportunities available to them.
The average salary range for VPC professionals in India varies based on experience levels. Entry-level positions can expect to earn between ₹4-6 lakhs per annum, while experienced professionals can earn upwards of ₹15 lakhs per annum.
A typical career path in VPC jobs in India may start as a Junior VPC Engineer, progressing to roles such as VPC Administrator, VPC Architect, and finally reaching positions like VPC Manager or VPC Consultant.
In addition to expertise in VPC technology, professionals in this field are often expected to have knowledge of networking, security, cloud computing platforms, and infrastructure design.
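The networking knowledge mentioned above often comes down to concrete tasks such as carving a VPC's address block into per-availability-zone subnets. As an illustrative sketch (the CIDR block and subnet sizing here are hypothetical, not taken from any posting), Python's standard-library `ipaddress` module can do the arithmetic:

```python
import ipaddress

# Hypothetical VPC address block, a common default for a /16 VPC.
vpc_cidr = ipaddress.ip_network("10.0.0.0/16")

# Split the /16 into four /18 subnets, e.g. one per availability zone.
subnets = list(vpc_cidr.subnets(new_prefix=18))

for net in subnets:
    # Each /18 holds 2**(32-18) = 16384 addresses (before any
    # addresses a cloud provider reserves per subnet).
    print(net, "-", net.num_addresses, "addresses")
```

Running this prints the four non-overlapping /18 blocks (10.0.0.0/18 through 10.0.192.0/18), the kind of plan a VPC engineer would hand to Terraform or CloudFormation.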
As you prepare for VPC job interviews in India, make sure to brush up on your technical skills, stay updated with the latest trends in cloud computing, and showcase your problem-solving abilities. With dedication and perseverance, you can land a rewarding career in the thriving VPC job market in India. Good luck!
Accenture – 16951 Jobs | Dublin
Wipro – 9154 Jobs | Bengaluru
EY – 7414 Jobs | London
Amazon – 5846 Jobs | Seattle, WA
Uplers – 5736 Jobs | Ahmedabad
IBM – 5617 Jobs | Armonk
Oracle – 5448 Jobs | Redwood City
Accenture in India – 5221 Jobs | Dublin 2
Capgemini – 3420 Jobs | Paris, France
Tata Consultancy Services – 3151 Jobs | Thane