
17543 Terraform Jobs - Page 37

JobPe aggregates listings for convenient access; you apply directly on the original job portal.

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

You should have knowledge of the Media Advertising domain and experience building DevOps solutions for it. Your role will involve facilitating the development process and operations, identifying design flaws and performance bottlenecks, and building suitable DevOps channels across the organization. You should be capable of establishing continuous build environments to speed up software development, and of designing and delivering best practices across the organization. You will guide the development team on end-to-end solutioning from a cloud point of view and should have experience in cloud cost optimization. Your technical skillset should include, but not be limited to, GCP/Azure/AWS, Terraform, CircleCI/GitLab CI, Jenkins, and monitoring tools such as Prometheus, Grafana, Dynatrace, New Relic, and AppDynamics.
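The Terraform/CI pairing in this skillset typically shows up in practice as reviewing plan output before apply. Below is a hedged sketch (not from the posting) that counts planned actions from the JSON that `terraform show -json <planfile>` emits; the inline `sample` is a hand-made stand-in for real plan output:

```python
import json
from collections import Counter

def summarize_plan(plan_json: str) -> Counter:
    """Count resource change actions in a `terraform show -json` plan dump."""
    plan = json.loads(plan_json)
    counts = Counter()
    for rc in plan.get("resource_changes", []):
        for action in rc["change"]["actions"]:
            if action != "no-op":  # skip resources that are unchanged
                counts[action] += 1
    return counts

# Hand-made sample mimicking the real plan JSON structure
sample = json.dumps({
    "resource_changes": [
        {"address": "aws_instance.web", "change": {"actions": ["create"]}},
        {"address": "aws_s3_bucket.logs", "change": {"actions": ["delete", "create"]}},
        {"address": "aws_vpc.main", "change": {"actions": ["no-op"]}},
    ]
})
print(dict(summarize_plan(sample)))  # {'create': 2, 'delete': 1}
```

A summary like this is handy as a CI gate, for example failing the pipeline when any `delete` is planned without an explicit approval label.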

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana

On-site

At Broadridge, we have created a culture that aims to empower individuals to achieve more. If you are enthusiastic about advancing your career while supporting others in their journey, we invite you to join the Broadridge team.

Your role involves providing expert guidance for the implementation and advancement of secure cloud and container architectures, controls, and best practices across cloud services such as IaaS, PaaS, SaaS, and hybrid configurations. You will collaborate closely with developers, system administrators, and IT management to drive proactive solutions. Responsibilities include:
- Identifying, suggesting, and assessing new technology options to improve process efficiency, automation, security, visibility, developer support, and operational streamlining in cloud and container environments.
- Enhancing continuous monitoring solutions to verify systems against security standards and address policy breaches.
- Analyzing the latest attacker tactics and implementing strategies to mitigate the associated risks.
- Providing insights into the design and implementation of automated security solutions, working closely with product and development teams to ensure alignment with company directives and objectives.

Technical skills:
- Demonstrated expertise in cloud-native architectures, microservices, and operational best practices for cloud and container orchestration.
- Experience integrating enterprise-scale security solutions in AWS and/or Azure, including user, security, and networking configurations.
- Proficiency in full-stack cloud automation using tools such as Git, Terraform, Ansible, and Jenkins.
- Programming experience, preferably including Python.

Qualifications:
- A Bachelor's degree or higher in Computer Science, Engineering, or a related field, or equivalent certifications and practical experience.
- At least 5 years of experience in network, application, or infrastructure security.
- A solid understanding of IT risk management, security policies and procedures, internal audit, and compliance standards, with familiarity with SOC, FFIEC, CSA, and FedRAMP.
- Experience aligning security programs with benchmarks and standards such as NIST, CIS, PCI DSS, HIPAA, and FIPS 140-2 is advantageous.

Regarding soft skills, excellent oral and written communication in English is crucial: you should be able to articulate complex ideas effectively to ensure clear direction and outcomes. Adaptability to changing technology landscapes and requirements is also a key attribute for this role.
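The emphasis on IAM policies and governance in this role can be illustrated with a toy evaluator. This is a deliberately simplified model, not AWS's actual engine: it uses exact string matching (no wildcards, principals, or conditions) but keeps the real evaluation rule that an explicit Deny overrides any Allow and the default is deny:

```python
def is_allowed(policy: dict, action: str, resource: str) -> bool:
    """Evaluate a simplified IAM-style policy: explicit Deny wins, default is deny."""
    decision = False
    for stmt in policy["Statement"]:
        if action in stmt["Action"] and resource in stmt["Resource"]:
            if stmt["Effect"] == "Deny":
                return False  # an explicit Deny always overrides Allow
            decision = True
    return decision

# Illustrative policy with invented resource names
policy = {
    "Statement": [
        {"Effect": "Allow", "Action": ["s3:GetObject"], "Resource": ["logs-bucket"]},
        {"Effect": "Deny",  "Action": ["s3:GetObject"], "Resource": ["secrets-bucket"]},
    ]
}
print(is_allowed(policy, "s3:GetObject", "logs-bucket"))     # True
print(is_allowed(policy, "s3:GetObject", "secrets-bucket"))  # False
```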

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

You will be responsible for maintaining and optimizing the Red Hat OpenShift Container Platform. Your expertise in Kubernetes and Red Hat Linux administration will be crucial to the smooth operation of the platform, and you will work with Docker or similar container technologies to support application deployment. Experience with GCP PaaS/IaaS technologies will be beneficial, as you will be working with cloud infrastructure, and familiarity with infrastructure-as-code tools such as Terraform and with the Go programming language is an advantage. Your responsibilities will include infrastructure provisioning, monitoring, and day-to-day operations to support the overall stability and performance of the systems. The ideal candidate will have 3-5 years of relevant experience and a notice period between immediate and a maximum of 30 days.
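Day-to-day OpenShift/Kubernetes operations of the kind described often begin with spotting unhealthy pods. A small illustrative parser, assuming the default `kubectl get pods --no-headers` column order (NAME READY STATUS RESTARTS AGE); the pod names are invented:

```python
def unready_pods(kubectl_output: str) -> list:
    """Return pods whose READY column or STATUS indicates trouble."""
    bad = []
    for line in kubectl_output.strip().splitlines():
        name, ready, status = line.split()[:3]
        up, total = ready.split("/")  # e.g. "1/1" -> ready vs. desired containers
        if up != total or status != "Running":
            bad.append(name)
    return bad

sample = """\
api-7f9c 1/1 Running 0 4d
worker-5d2a 0/1 CrashLoopBackOff 12 4d
cache-9b1e 2/2 Running 0 9d
"""
print(unready_pods(sample))  # ['worker-5d2a']
```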

Posted 1 week ago

Apply

3.0 years

0 Lacs

India

Remote

Company Description
BugRaid.AI harnesses advanced AIOps and AI bots to proactively manage and respond to incidents, revolutionizing the entire process. Our solution integrates comprehensive incident analysis with real-time response capabilities, distinguishing us within the industry. We expedite resolution by swiftly identifying and addressing issues to minimize downtime and improve efficiency. Our platform is engineered for scalability and flexibility, providing in-depth insights through comprehensive analytics to support informed decision-making.

Role Description
This is a full-time remote position for a Senior AI Engineer, responsible for designing, developing, and optimizing AI systems focused on AWS, Large Language Models (LLMs), Generative AI (GenAI), and AIOps. Responsibilities include building and deploying AI models, analyzing their performance, and integrating AI solutions with existing systems.

Responsibilities
- Design agentic systems: develop and enhance multi-step reasoning agents that operate on live infrastructure and observability data (logs, metrics, traces).
- Develop real-world GenAI applications: incorporate LLMs into production environments demanding low latency and high availability.
- Prompt engineering and orchestration: create tools for function calling, agent workflows, and dynamic prompt tuning.
- Experimentation and deployment: rapidly prototype, evaluate, and refine LLMs, agent workflows, and evaluation mechanisms.
- Monitoring and assessment: establish observability and continuous-evaluation pipelines to ensure the reliability of AI agents in practical scenarios.
- Collaboration: work closely with the backend, infrastructure, and product teams to embed AI agents within the core BugRaid system.

Qualifications
- 3+ years of experience in software engineering, preferably in AI/ML, GenAI, or data engineering roles.
- Proficiency in Python and its machine learning/AI ecosystem.
- Hands-on experience with AWS, LLMs, or agent frameworks such as LangChain or CrewAI.
- Familiarity with observability data and incident-troubleshooting workflows.
- Practical experience developing distributed systems, data pipelines, or real-time machine learning infrastructure.
- Demonstrated initiative and the ability to take ideas from prototype to production.

Additional Considerations
- Experience with model fine-tuning, RLHF, or contributions to open-source AI agents.
- Knowledge of AWS, Azure, Terraform, Kubernetes, or ML orchestration platforms.
- Contributions to, or development of, large-scale GenAI platforms or AI-reliability systems.

Perks & Benefits
- Remote-first culture (within India), supported by team hubs in Hyderabad and Bangalore.
- Competitive startup compensation complemented by generous Employee Stock Ownership Plans (ESOPs).
- Collaboration with passionate engineers, AI specialists, and DevOps leaders from leading organizations.
- A significant opportunity to create impactful AI products that transform global infrastructure operations.
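The agentic-workflow responsibilities described here can be sketched as a minimal function-calling loop. Everything below is hypothetical: the "LLM" is a stubbed planner and the tools are fakes, but the dispatch pattern (model proposes a tool call, runtime executes it, result is appended to history) is the core of such systems:

```python
# Fake tools standing in for real observability/remediation integrations
TOOLS = {
    "get_error_rate": lambda service: {"service": service, "error_rate": 0.12},
    "restart_service": lambda service: {"service": service, "restarted": True},
}

def fake_llm(history):
    """Stub planner: check the error rate, restart if it is high, then stop."""
    if len(history) == 1:
        return {"tool": "get_error_rate", "args": {"service": "checkout"}}
    last = history[-1]["result"]
    if last.get("error_rate", 0) > 0.05:
        return {"tool": "restart_service", "args": {"service": "checkout"}}
    return {"final": "healthy"}

def run_agent(incident: str, max_steps: int = 5):
    history = [{"incident": incident}]
    for _ in range(max_steps):
        decision = fake_llm(history)
        if "final" in decision:
            return decision["final"], history
        result = TOOLS[decision["tool"]](**decision["args"])
        history.append({"tool": decision["tool"], "result": result})
    return "max_steps_reached", history

outcome, history = run_agent("checkout error spike")
print(outcome)  # healthy
```

In a real system the stub would be replaced by an LLM client and the `max_steps` cap, history log, and tool registry become the observability and safety surface the posting alludes to.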

Posted 1 week ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

At Zerodha Fund House, you'll have the incredible opportunity to join a team of world-class engineers, designers, and finance professionals with diverse backgrounds and skills. As a DevOps team member, you'll play a vital role in building the next generation of investment products for millennials. We're looking for a passionate and proactive undergraduate, preferably graduating in 2024, with experience in DevOps. You'll work on cutting-edge technologies to deliver a delightful investing experience to our users. If you're eager to learn and grow, and excited to be part of a team that's changing the way people invest, we encourage you to apply.

Responsibilities
- Help the team research and evaluate new DevOps tools and technologies to improve our development and deployment process.
- Monitor and troubleshoot infrastructure and applications.
- Document and share DevOps knowledge with the team.
- Contribute to the DevOps community by writing blog posts, giving presentations, and participating in open-source projects.

Requirements
- Strong Linux and networking fundamentals.
- Web development concepts and server architecture (cloud computing).
- Basic knowledge of DevOps tools and technologies, such as GitOps and CI/CD.

Good to have
- Interest (and/or experience) in the financial/stock market space.
- Familiarity with tools like ArgoCD, Argo Workflows, AWS CDK, cdk8s, Terraform, Helm, and CloudFormation.
- Familiarity with AWS infrastructure (EC2, ELB, ALB, S3, VPC, CloudFront, Route 53).
- Familiarity with databases (MongoDB, Postgres, Redis) and scripting (Bash, Python).

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

About Dataction
Dataction is a new-age technology services firm that offers best-in-class information technology, analytics, and consulting services to renowned international companies. Established in 2010, Dataction has grown rapidly over the last decade and has built a reputation for providing differentiated and reliable services to a wide range of customers across multiple sectors. At Dataction we connect every dot and reimagine every business process. Our lean, agile, and consultative approach to problem solving and execution helps our clients achieve sustainable growth and secure a profitable business while safeguarding a viable future. Our people are committed, courageous, and unafraid of pushing boundaries.

Job Summary
We are seeking a highly skilled and proactive Sr. Engineer, Data Support to join our Data Engineering support team. This role is crucial in delivering enhancements and bug fixes and in ensuring the smooth operation, stability, and performance of our data platforms and pipelines. The ideal candidate will bring a strong mix of technical expertise, problem-solving ability, and cross-functional communication skills to manage live issues, automate workflows, and optimize data operations across cloud platforms. A strong technical background in Snowflake, DBT, SQL, AWS, and various data integration tools is required.

Responsibilities
- Work closely with Engineering and Business teams to fix live issues and implement enhancements.
- Analyze and investigate data-related issues, troubleshoot discrepancies, and provide timely resolutions.
- Automate manual processes and improve operational efficiency through scripting and tool integrations.
- Perform daily health checks and proactive monitoring of the platform to ensure system stability and performance.
- Manage incident response, including issue triage, root-cause analysis, and resolution coordination.
- Maintain business communications, providing updates on platform status, incidents, and resolutions.
- Oversee platform administration and maintenance across AWS, Snowflake, DBT, and related technologies.
- Work with data pipelines and tools like HVR, Stitch, Fivetran, and Terraform to ensure smooth data operations.
- Continuously improve monitoring, alerting, and operational workflows to minimize downtime and enhance performance.

Qualifications, Skills, and Experience
- 5+ years of relevant experience in data operations.
- Bachelor's degree in Computer Science, Information Systems, or a related field.
- Experience with AWS, Snowflake, DBT, HVR, Stitch, Fivetran, and Terraform for data platform management.
- Strong analytical and problem-solving skills to diagnose and resolve issues.
- Experience in incident management, system monitoring, and automation.
- Proficiency in scripting (SQL, Python) to support automation and data analysis.
- Ability to communicate effectively with technical and non-technical stakeholders.

Why should you join Dataction?
Fairness, meritocracy, empowerment, and opportunity are the pillars of our work culture. In addition to a competitive salary, you can look forward to:
- Great work-life balance through a hybrid work arrangement.
- Company-funded skill enhancement and training.
- An exciting reward and recognition programme.
- Opportunities to bond with colleagues through employee engagement initiatives.
- On-the-job learning through involvement in new product/ideation teams.
- 60 minutes with the CEO each quarter to pick his brains on any topic of your choice.

(ref:hirist.tech)
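The daily health checks and proactive monitoring described in roles like this often start with a data-freshness check: flag any pipeline whose last successful load is older than its allowed lag. A minimal sketch, with invented pipeline names and timestamps:

```python
from datetime import datetime, timedelta

def stale_pipelines(last_loaded: dict, now: datetime, max_lag: timedelta) -> list:
    """Return pipelines whose last successful load exceeds the allowed lag."""
    return sorted(name for name, ts in last_loaded.items() if now - ts > max_lag)

# Invented example data: pipeline name -> last successful load time
now = datetime(2024, 5, 1, 12, 0)
loads = {
    "orders": datetime(2024, 5, 1, 11, 30),
    "customers": datetime(2024, 4, 30, 9, 0),
    "inventory": datetime(2024, 5, 1, 6, 0),
}
print(stale_pipelines(loads, now, timedelta(hours=4)))  # ['customers', 'inventory']
```

In practice `last_loaded` would come from a warehouse metadata query and the result would feed an alerting channel rather than a print.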

Posted 1 week ago

Apply

10.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Job Title: Cloud Architect

Company Overview
At Codvo, software and people transformations go hand-in-hand. We are a global empathy-led technology services company. Product innovation and mature software engineering are part of our core DNA. Respect, Fairness, Growth, Agility, and Inclusiveness are the core values that we aspire to live by each day. We continue to expand our digital strategy, design, architecture, and product management capabilities to offer expertise, outside-the-box thinking, and measurable results.

The Role
As a Cloud Architect at Codvo, you will be a key leader in guiding our enterprise clients through their cloud transformation journeys. You will be responsible for designing and implementing secure, scalable, and resilient cloud infrastructure and application architectures, leveraging your deep expertise in AWS, Azure, or GCP to solve complex technical challenges and enable our clients to achieve their business objectives through the power of the cloud.

Key Responsibilities
- Architectural design: design and present high-level and low-level cloud architecture diagrams and documentation for infrastructure, networking, security, and application deployments.
- Cloud strategy and advisory: act as a trusted advisor to clients on cloud strategy, including workload migration approaches (e.g., re-host, re-platform, re-factor), cloud-native development best practices, and cost optimization (FinOps).
- Infrastructure as Code (IaC): lead the implementation of cloud environments using IaC tools like Terraform or CloudFormation, promoting automation and repeatability.
- Security and compliance: design and implement robust security controls, identity and access management (IAM) policies, and governance frameworks to meet enterprise security and compliance requirements.
- DevOps and automation: architect CI/CD pipelines and promote a culture of automation for application deployment and infrastructure management, enabling developer velocity.
- Technical leadership: act as the senior technical expert on cloud technologies for both the client and internal Codvo teams, providing mentorship and hands-on guidance.

Required Qualifications & Skills
- Experience: 10+ years in IT/software engineering, with at least 4 years in a dedicated, hands-on Cloud Architect role.
- Deep cloud expertise: expert-level knowledge and implementation experience with one or more major cloud platforms: AWS, Microsoft Azure, or Google Cloud Platform.
- Infrastructure as Code: strong hands-on proficiency with Terraform, CloudFormation, or Bicep.
- Networking and security: solid understanding of cloud networking concepts (VPCs/VNets, subnets, routing, firewalls, load balancers) and security best practices (e.g., defense in depth, zero trust).
- Containerization and orchestration: deep proficiency with Docker and container orchestration platforms like Kubernetes (EKS, AKS, GKE).
- AI/ML acumen: good understanding of modern infrastructure for AI applications, including Retrieval-Augmented Generation (RAG) and agentic solutions; experience designing infrastructure that incorporates AI/ML systems is crucial.
- Communication and consulting: strong ability to communicate technical designs and strategies effectively to both technical teams and business stakeholders in a consultative manner.

Location: Remote (India)
Time: 2:30-11:30 PM

Posted 1 week ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Description
Your day-to-day job will involve writing Python scripts and making infrastructure changes via Terraform. We breathe on Linux, so a strong understanding of it is a must, along with a knack for automating manual effort: if you have to do something manually more than three times, you should be the kind of person who hates that. You will engage with cross-functional teams in the design, development, and implementation of DevOps capabilities that enable higher developer productivity, environment monitoring, and self-healing.

- Good knowledge of AWS.
- Excellent troubleshooting skills, as troubleshooting is part of the day-to-day work.
- Working knowledge of Kubernetes and Docker (or any container technology) in production.
- Understanding of CI/CD pipelines: how they work and how they can be implemented.
- A knack for identifying performance bottlenecks and maturing the monitoring and alerting systems.
- Good knowledge of monitoring and logging tools like Grafana, Prometheus, ELK, Sumo Logic, and New Relic.
- Ability to work on-call and respond to production failures.
- Self-motivation: much of the time you will drive a project, chase performance issues, or run POCs independently.
- Willingness to write articles about your learnings and share them within the company and the community.
- Readiness to challenge the architecture for long-term performance gains.
- Knowledge of how SSL, TCP/IP, VPNs, CDNs, DNS, and load balancing work.

Essential Skills
- B.E./B.Tech in CS/IT or equivalent technical qualifications.
- Knowledge of Amazon Web Services (AWS) is a big plus.
- Experience administering and managing Windows or Linux systems.
- Hands-on experience with AWS, Jenkins, Git, and Chef.
- Experience with various application servers (Apache, Nginx, Varnish, etc.).
- Experience with Python, Chef, Terraform, Kubernetes, and Docker.
- Experience installing, upgrading, and maintaining application servers on the Linux platform.

(ref:hirist.tech)
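The self-healing and automation mindset such roles ask for is often expressed as a retry-with-exponential-backoff primitive. A small illustrative version; the `flaky` function below simulates a transient failure, and the demo swaps out real sleeping:

```python
import time

def retry(fn, attempts=4, base_delay=0.01, sleep=time.sleep):
    """Call fn, retrying on any exception with exponential backoff."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise  # out of attempts: surface the failure
            sleep(base_delay * (2 ** i))  # wait 1x, 2x, 4x, ... the base delay

calls = {"n": 0}
def flaky():
    """Fails twice, then succeeds, simulating a transient network error."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

result = retry(flaky, sleep=lambda s: None)  # skip real sleeping in the demo
print(result)  # ok
```

Production versions usually add jitter to the delay and retry only on specific exception types, but the loop structure is the same.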

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

Hyderabad, Telangana

On-site

Founded in 2004, NetBrain is the leader in no-code network automation. Its ground-breaking Next-Gen platform provides IT operations teams with the ability to scale their hybrid multi-cloud connected networks by automating the processes associated with diagnostic troubleshooting, outage prevention, and protected change management. Today, over 2,500 of the world's largest enterprises and managed services providers leverage NetBrain's platform.

We are seeking a Senior System Engineer with experience in IT infrastructure to join our team. The ideal candidate will have at least 8 years of experience with IaC tools such as Terraform and Ansible and be able to centrally manage servers and IT workflows. As a System Engineer, you will play a key role in driving the adoption of our IT infrastructure and ensuring its smooth operation. You will work closely with other members of the IT team to design, implement, and manage the installation, configuration, and troubleshooting of applications running on Windows and Linux servers.

Responsibilities:
- Centrally manage a broad fleet of both on-prem and cloud-hosted Windows and Linux servers of varying versions and flavors using IaC tools such as Ansible or Terraform.
- Centrally manage virtualization infrastructure using VMware vCenter, ESXi, NSX-T, vRealize, Site Recovery Manager, Lifecycle Manager, etc., providing creative solutions to the most complex problems.
- Apply wide-ranging experience to assess the most complex situations and provide IT server/system support to engineering groups in the US, Canada, and China.
- Generate or build complex tools to streamline and simplify the creation, configuration, and decommissioning of server assets across NetBrain.
- Oversee the application of existing processes, and develop new processes, across IT systems for backup and restore, patching, logging, and identity and access management.
- Work with the security team to identify and remediate the most complex system vulnerabilities.

Requirements:
- Bachelor's or Master's degree in computer science, software engineering, or a related field.
- 8+ years of experience in IT systems management.
- 8+ years of experience with server administration of VMware ESXi, Hyper-V, Windows servers, and Linux servers, plus hands-on experience deploying rack-mount servers.
- 8+ years of experience with patch management tools for OS and third-party apps.
- 8+ years of experience with server vulnerability management and remediation processes.
- 8+ years of experience with networking principles and protocols, such as DNS, TCP/IP, HTTP, DHCP, server virtualization, and Active Directory integration.
- Experience with Azure Active Directory/Microsoft Entra ID is a plus, including directory synchronization, single sign-on (SSO), and identity governance.
- Experience with public cloud technologies such as GCP, AWS, and/or Azure is a plus.
- Ability to work on-site at least 3 days a week.

At NetBrain, we value innovation, collaboration, and customer focus, and we can only live those values through a culture that encourages diversity, equity, inclusion, and belonging. NetBrain is proud to be an Equal Opportunity Employer. We hope that you will apply to a job here that excites you, no matter how you identify. NetBrain invites all interested and qualified candidates to apply for employment opportunities. If you have a disability that prevents or limits your ability to use or access the site, or if you require any other accommodation in the application process due to a disability, you may request a reasonable accommodation.
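Fleet-wide patch and vulnerability management of the sort described can be reduced to comparing installed package versions against a patch baseline. A toy sketch with invented hostnames and versions; the version comparison is a naive dotted-number split (no epochs or pre-release tags, unlike real package managers):

```python
def parse_ver(v: str) -> tuple:
    """Naive dotted-version parse: '3.0.13' -> (3, 0, 13)."""
    return tuple(int(x) for x in v.split("."))

def out_of_date(installed: dict, baseline: dict) -> list:
    """Return hosts running any package older than the patch baseline."""
    return sorted(
        host for host, pkgs in installed.items()
        if any(parse_ver(pkgs.get(p, "0")) < parse_ver(v) for p, v in baseline.items())
    )

# Invented baseline and fleet inventory
baseline = {"openssl": "3.0.13", "kernel": "5.14.0"}
fleet = {
    "web-01": {"openssl": "3.0.13", "kernel": "5.14.0"},
    "db-01": {"openssl": "3.0.11", "kernel": "5.14.0"},
    "app-01": {"openssl": "3.0.13", "kernel": "5.10.2"},
}
print(out_of_date(fleet, baseline))  # ['app-01', 'db-01']
```

In a real setup the inventory would be gathered by Ansible facts or an agent, and the report would feed a remediation playbook rather than a print.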

Posted 1 week ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Description
Bachelor's/Master's degree in Computer Science, Information Technology, or a related field, and 5-7 years of experience in a DevOps role.

- Strong understanding of the SDLC and experience working on fully Agile teams.
- Proven coding and scripting experience in DevOps: Ant/Maven, Groovy, Terraform, shell scripting, and Helm charts.
- Working experience with IaC tools like Terraform, CloudFormation, or ARM templates.
- Strong experience with cloud computing platforms (e.g., Oracle Cloud (OCI), AWS, Azure, Google Cloud).
- Experience with containerization technologies (e.g., Docker, Kubernetes/EKS/AKS).
- Experience with continuous integration and delivery tools (e.g., Jenkins, GitLab CI/CD).
- Kubernetes: experience managing Kubernetes clusters and using kubectl for Helm chart deployments, ingress services, and troubleshooting pods.
- OS services: basic knowledge of managing, configuring, and troubleshooting Linux operating system issues, storage (block and object), and networking (VPCs, proxies, and CDNs).
- Monitoring and instrumentation: implement metrics in Prometheus, Grafana, Elastic, log management and related systems, and Slack/PagerDuty/Sentry integrations.
- Strong know-how of modern distributed version control systems (e.g., Git, GitHub, GitLab).
- Strong troubleshooting and problem-solving skills, and the ability to work well under pressure.
- Excellent communication and collaboration skills, and the ability to lead and mentor junior team members.

Career Level - IC3

Responsibilities
- Design, implement, and maintain automated build, deployment, and testing systems.
- Take application code and third-party products and build fully automated pipelines for Java applications to build, test, and deploy complex systems for delivery in the cloud.
- Containerize applications, i.e., create Docker containers and push them to an artifact repository for deployment on containerization solutions such as OKE (Oracle Container Engine for Kubernetes) using Helm charts.
- Lead efforts to optimize the build and deployment processes for high-volume, high-availability systems.
- Monitor production systems to ensure high availability and performance, and proactively identify and resolve issues.
- Support and troubleshoot cloud deployment and environment issues.
- Create and maintain CI/CD pipelines using tools such as Jenkins and GitLab CI/CD.
- Continuously improve the scalability and security of our systems, and lead efforts to implement best practices.
- Participate in the design and implementation of new features and applications, and provide guidance on best practices for deployment and operations.
- Work with the security team to ensure compliance with industry and company standards, and implement security measures to protect against threats.
- Keep up to date with emerging trends and technologies in DevOps, and make recommendations for improvement.
- Lead and mentor junior DevOps engineers, and collaborate with cross-functional teams to ensure successful delivery of projects.
- Analyze, design, develop, troubleshoot, and debug software programs for commercial or end-user applications; write code, complete programming, and perform testing and debugging of applications.
- As a member of the software engineering division, analyze and integrate external customer specifications; specify, design, and implement modest changes to existing software architecture; build new products and development tools.
- Build and execute unit tests and unit test plans; review integration and regression test plans created by QA; communicate with QA and porting engineering to discuss major changes to functionality.

Work is non-routine and very complex, involving the application of advanced technical/business skills in the area of specialization. Leading contributor individually and as a team member, providing direction and mentoring to others. BS or MS degree or equivalent experience relevant to the functional area; 6+ years of software engineering or related experience.

Qualifications
Career Level - IC3

About Us
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all.

Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.

We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.

Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
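CI/CD pipelines like those described are, at bottom, dependency graphs of stages. Python's standard-library `graphlib` can compute a valid execution order; the stage names below are illustrative, not from any particular pipeline:

```python
from graphlib import TopologicalSorter

# Each stage maps to the set of stages it depends on.
stages = {
    "build": set(),
    "unit-test": {"build"},
    "lint": {"build"},
    "package": {"unit-test", "lint"},
    "deploy": {"package"},
}

# static_order() yields stages in an order that respects every dependency,
# and raises CycleError if the pipeline definition is circular.
order = list(TopologicalSorter(stages).static_order())
print(order)
```

A CI engine would additionally run independent stages (here `unit-test` and `lint`) in parallel; `TopologicalSorter`'s incremental `get_ready()`/`done()` API supports exactly that.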

Posted 1 week ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

We are seeking an experienced and highly skilled PostgreSQL Database Administrator (DBA) to manage, maintain, and optimize our PostgreSQL database systems. The ideal candidate will be responsible for ensuring database availability, security, performance, and scalability, and will work closely with application developers, system engineers, and DevOps teams to provide high-quality data solutions and troubleshoot complex issues in a mission-critical environment.

Responsibilities
- Install, configure, and upgrade PostgreSQL databases in high-availability environments.
- Design and implement database architecture, including replication, partitioning, and sharding.
- Perform daily database administration tasks including backups, restores, monitoring, and tuning.
- Optimize queries, indexes, and the overall performance of PostgreSQL systems.
- Ensure high availability and disaster recovery by configuring replication (streaming, logical) and backup solutions (pgBackRest, Barman, WAL archiving).
- Implement and maintain security policies, user access control, and encryption.
- Monitor database health using tools such as pgAdmin, Nagios, Zabbix, or other monitoring tools.
- Troubleshoot database-related issues in development, test, and production environments.
- Automate routine tasks using shell scripting, Python, or Ansible.
- Work with DevOps/SRE teams to integrate PostgreSQL into CI/CD pipelines and cloud platforms.

Technical Expertise
- Proven experience with PostgreSQL 11+ (latest-version experience preferred).
- Strong knowledge of SQL, PL/pgSQL, database objects, and data types.
- Experience with PostgreSQL replication: streaming, logical, and hot standby.
- Deep understanding of VACUUM, ANALYZE, and autovacuum configuration and tuning.
- Knowledge of PostGIS, PgBouncer, and pg_stat_statements is a plus.

Tuning & Monitoring
- Query optimization and slow-query analysis using EXPLAIN and EXPLAIN ANALYZE.
- Experience with database performance monitoring tools (e.g., pg_stat_activity, pgBadger).
- Strong debugging and troubleshooting of locking, deadlocks, and resource contention issues.

DevOps Integration
- PostgreSQL experience on AWS RDS, Azure Database for PostgreSQL, or GCP Cloud SQL.
- Familiarity with IaC tools like Terraform or CloudFormation is a plus.
- Experience with CI/CD integration and containerization tools (Docker, Kubernetes) for DB deployment.

Compliance
- Implement role-based access control, data masking, and audit logging.
- Ensure compliance with standards like GDPR, ISO 27001, or SOC 2 where applicable.

Education
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.

Experience
- Minimum 4+ years of experience in PostgreSQL database administration.
- PostgreSQL certification (e.g., EDB Certified Associate/Professional) is a plus.
- Experience in 24x7 production environments supporting high-volume transactions.

Desirable Skills
- Exposure to multi-tenant architectures.
- Experience migrating from Oracle/MySQL to PostgreSQL.
- Knowledge of NoSQL systems (MongoDB, Redis) is a plus.
- Understanding of data warehousing and ETL processes.

(ref:hirist.tech)
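One concrete piece of the autovacuum tuning mentioned in this role: PostgreSQL triggers an autovacuum on a table when its dead tuples exceed `autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples` (defaults 50 and 0.2). A quick calculator for reasoning about per-table settings:

```python
def autovacuum_due(n_dead_tup: int, reltuples: int,
                   threshold: int = 50, scale_factor: float = 0.2) -> bool:
    """True when PostgreSQL's autovacuum trigger condition is met:
    n_dead_tup > autovacuum_vacuum_threshold
                 + autovacuum_vacuum_scale_factor * reltuples
    (defaults shown match stock postgresql.conf)."""
    return n_dead_tup > threshold + scale_factor * reltuples

# A 1,000-row table needs > 50 + 0.2 * 1000 = 250 dead tuples to trigger
print(autovacuum_due(n_dead_tup=150, reltuples=1000))  # False
print(autovacuum_due(n_dead_tup=251, reltuples=1000))  # True
```

The same arithmetic explains why very large tables often get a much smaller per-table `autovacuum_vacuum_scale_factor`: with the default 0.2, a billion-row table accumulates 200 million dead tuples before autovacuum fires.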

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Hyderabad, Telangana

On-site

The ideal candidate for this role should have strong skills in AWS EMR, EC2, S3, CloudFormation templates, batch data, and AWS CodePipeline services; experience with EKS is an added advantage. As this is a hands-on role, the candidate is expected to have good administrative knowledge of EMR, EC2, S3, CloudFormation templates, and batch data. Responsibilities include managing and deploying EMR clusters, with a solid understanding of AWS accounts and IAM, and administrative experience with both EMR persistent and transient clusters. A good understanding of AWS CloudFormation, cluster setup, and AWS networking is essential, hands-on experience with infrastructure-as-code deployment tools like Terraform is highly desirable, and experience in AWS health monitoring and optimization is required. Knowledge of Hadoop and Big Data will be considered an added advantage for this position.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

hyderabad, telangana

On-site

We have an exciting and rewarding opportunity for you to take your software engineering career to the next level. As a Software Engineer III at JPMorgan Chase within the Corporate Technology, you will serve as a seasoned member of an agile team to design and deliver trusted market-leading technology products in a secure, stable, and scalable way. Your responsibilities include executing software solutions, design, development, and technical troubleshooting with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems. You will be creating secure and high-quality production code, maintaining algorithms that run synchronously with appropriate systems, producing architecture and design artifacts for complex applications, and ensuring design constraints are met by software code development. In this role, you will gather, analyze, synthesize, and develop visualizations and reporting from large, diverse data sets in service of continuous improvement of software applications and systems. You will proactively identify hidden problems and patterns in data and use these insights to drive improvements to coding hygiene and system architecture. Additionally, you will contribute to software engineering communities of practice and events that explore new and emerging technologies, while fostering a team culture of diversity, equity, inclusion, and respect. Required qualifications, capabilities, and skills include formal training or certification on software engineering concepts and 3+ years of applied experience. You should have hands-on practical experience in system design, application development, testing, and operational stability, as well as proficiency in coding in Java, Microservices, Containers/Kubernetes, AWS/Pivotal Cloud Foundry, Terraform, Kafka, and SQL. 
Experience in developing, debugging, and maintaining code in a large corporate environment with one or more modern programming languages and database querying languages is essential. A solid understanding of the Software Development Life Cycle, agile methodologies such as CI/CD, Application Resiliency, and Security, and demonstrated knowledge of software applications and technical processes within a technical discipline are also required. Preferred qualifications, capabilities, and skills include familiarity with modern front-end technologies, micro-services with DDD (domain-driven design), and exposure to cloud technologies.

Posted 1 week ago

Apply

0 years

0 Lacs

Patna, Bihar, India

Remote

Are you ready to transform chaos into order and take control of multi-stack infrastructures?

What You Will Be Doing
- Strategically plan and execute complex infrastructure migrations from legacy systems to a streamlined AWS cloud environment.
- Innovate monitoring and automation processes to ensure seamless software deployments and operational workflows.
- Engage in system monitoring, incident response, and database configurations to enhance performance and cost-efficiency.

What You Won't Be Doing
- Endlessly updating Jira or attending status meetings - your focus is on driving solutions.
- Stagnating with outdated systems - you have the power to enhance and optimize.
- Waiting for bureaucratic approvals - your expertise grants you the authority for immediate action.
- Confined to narrow technical expertise - utilize your broad skillset across multiple technologies.
- Struggling for budget on critical improvements - your role supports infrastructure investment.

Senior DevOps Engineer Key Responsibilities
- Ensure the reliability and standardization of cloud infrastructure, driving efficiencies and optimizations across our diverse product portfolio.

Basic Requirements
- Extensive AWS infrastructure expertise, as it is our core platform.
- Proficient programming skills in Python or JavaScript for developing automation tools.
- Proven experience in managing and migrating production databases across various engines (MySQL, Postgres, Oracle, MS-SQL).
- Advanced proficiency in Docker/Kubernetes.
- Skilled in infrastructure automation using tools like Terraform, Ansible, or CloudFormation.
- Expertise in Linux systems administration.

About Trilogy
Hundreds of software businesses run on the Trilogy Business Platform. For three decades, Trilogy has been known for 3 things: relentlessly seeking top talent, innovating new technology, and incubating new businesses. Our technological innovation is spearheaded by a passion for simple customer-facing designs. Our incubation of new businesses ranges from entirely new moon-shot ideas to rearchitecting existing projects for today's modern cloud-based stack. Trilogy is a place where you can be surrounded with great people, be proud of doing great work, and grow your career by leaps and bounds.

There is so much to cover for this exciting role, and space here is limited. Hit the Apply button if you found this interesting and want to learn more. We look forward to meeting you!

Working with us
This is a full-time (40 hours per week), long-term position. The position is immediately available and requires entering into an independent contractor agreement with Crossover as a Contractor of Record. The compensation level for this role is $50 USD/hour, which equates to $100,000 USD/year assuming 40 hours per week and 50 weeks per year. The payment period is weekly. Consult www.crossover.com/help-and-faqs for more details on this topic.

Crossover Job Code: LJ-5236-IN-Patna-SeniorDevOpsEn.003
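The compensation figure in the listing is simple rate arithmetic and easy to verify:

```python
# Annualizing an hourly contract rate, as in the listing:
# $50/hour * 40 hours/week * 50 weeks/year = $100,000/year.

def annualized(rate_per_hour: float, hours_per_week: int = 40, weeks_per_year: int = 50) -> float:
    return rate_per_hour * hours_per_week * weeks_per_year

annual_pay = annualized(50)  # 100000
```

With 52 paid weeks the same rate would come to $104,000, so the 50-week convention effectively prices in two unpaid weeks per year.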

Posted 1 week ago

Apply

3.0 - 9.0 years

0 Lacs

karnataka

On-site

You will be joining the Oracle Cloud Infrastructure (OCI) Reliability team, a group of passionate engineers dedicated to ensuring the highest level of availability and reliability for OCI services and customers. Your role will involve designing and building complex, highly technical products and services in the Cloud from the ground up. As a technical leader, you will play a crucial role in defining the future of services used daily by both customers and internal teams. Your responsibilities will include defining the vision and technical strategy for the products, translating business requirements into technical specifications, architecting, designing, developing, and troubleshooting customer-facing cloud services, and automating tasks to ensure continuous delivery and availability with minimal human intervention. You will drive the development of performant, scalable solutions and maintain development and production infrastructure to uphold operational excellence. To excel in this role, you should have at least 9 years of experience in the software industry, with a minimum of 3 years as a senior developer/technical leader. A Bachelor's Degree or Master's in Computer Science or equivalent education is required. You should have a deep understanding of the product development lifecycle, including working closely with product management, writing technical specifications, architecting and designing services, developing code, managing releases, and operations. Strong communication, organization, and interpersonal skills are essential, along with the ability to work in complex, rapidly evolving software development environments. You should have expertise in Java, Python, or similar modern programming languages, microservice-based architectures, distributed systems, SQL and NoSQL databases, REST APIs, and Cloud technologies. 
Experience with CI/CD processes and tools, cloud computing, web development, system monitoring, automation, and incident management tools is highly desirable. In this role, you will lead, mentor, and coach junior team members, provide technical guidance and feedback to stakeholders, and contribute to product roadmaps by identifying areas for improvement. You will be expected to maintain a high standard of engineering quality and best practices, driving continuous innovation and improvements in technology and operations to ensure a solid security posture.

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

indore, madhya pradesh

On-site

The AM3 Group is looking for a highly skilled Senior Java Developer with a strong background in AWS cloud services to be a part of our dynamic team. In this role, you will have the opportunity to create and manage modern, scalable, and cloud-native applications using Java (up to Java 17), Spring Boot, Angular, and a comprehensive range of AWS tools. As a Senior Java Developer at AM3 Group, your responsibilities will include developing full-stack applications utilizing Java, Spring Boot, Angular, and RESTful APIs. You will be involved in building and deploying cloud-native solutions with AWS services such as EC2, S3, Lambda, RDS, DynamoDB, and API Gateway. Additionally, you will be tasked with designing and implementing microservices architectures for enhanced scalability and resilience. Your role will also entail creating and maintaining CI/CD pipelines using tools like Jenkins, GitHub Actions, AWS CodePipeline, and Terraform, as well as containerizing applications with Docker and managing them through Kubernetes (EKS). Monitoring and optimizing performance using AWS CloudWatch, X-Ray, and the ELK Stack, working with Apache Kafka and Redis for real-time event-driven systems, and conducting unit/integration testing with JUnit, Mockito, Jasmine, and API testing via Postman are also key aspects of the role. Collaboration within Agile/Scrum teams to deliver features in iterative sprints is an essential part of your responsibilities. The ideal candidate should possess a minimum of 8 years of Java development experience with a strong understanding of Java 8/11/17, expertise in Spring Boot, Hibernate, and microservices, as well as solid experience with AWS including infrastructure and serverless (Lambda, EC2, S3, etc.). 
Frontend development exposure with Angular (v2-12), JavaScript, and Bootstrap, hands-on experience with CI/CD, GitHub Actions, Jenkins, and Terraform, familiarity with SQL (MySQL, Oracle) and NoSQL (DynamoDB, MongoDB), and knowledge of SQS, JMS, and event-driven architecture are required skills. Additionally, familiarity with DevSecOps and cloud security best practices is essential. Preferred qualifications include experience with serverless frameworks (AWS Lambda), familiarity with React.js, Node.js, or Kotlin, and exposure to Big Data, Apache Spark, or machine learning pipelines. Join our team at AM3 Group to work on challenging and high-impact cloud projects, benefit from competitive compensation and benefits, enjoy a flexible work environment, be part of a culture of innovation and continuous learning, and gain global exposure through cross-functional collaboration. Apply now to be a part of a future-ready team that is shaping cloud-native enterprise solutions! For any questions or referrals, please contact us at careers@am3group.com. To learn more about us, visit our website at https://am3group.com/.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

indore, madhya pradesh

On-site

As a Software Development Engineer with expertise in Java and strong AWS experience, you will be responsible for designing, developing, and deploying robust cloud-native applications. With a minimum of 5+ years of Java development experience, including Java 8/11/17 features, you will utilize your skills in Spring Boot, Angular, and RESTful APIs to build full-stack Java applications. Additionally, you will work on AWS-based cloud solutions, leveraging services such as EC2, S3, Lambda, DynamoDB, API Gateway, and CloudFormation. Your key responsibilities will include developing and optimizing microservices architectures for high availability and scalability, implementing CI/CD pipelines using Jenkins, AWS CodePipeline, and Terraform, and collaborating in an Agile/Scrum environment to drive innovation and efficiency. You will also work with Docker & Kubernetes (EKS) for containerized applications, optimize system performance using monitoring tools like AWS CloudWatch, X-Ray, and ELK Stack, and ensure robust testing and quality assurance using tools like JUnit, Mockito, Jasmine, and Postman. You should have a strong understanding of AWS cloud services and infrastructure, experience with CI/CD pipelines, GitHub Actions, Terraform, and Jenkins, and proficiency in front-end development using Angular, JavaScript, and Bootstrap. Hands-on experience with SQL (Oracle, MySQL) and NoSQL (MongoDB, DynamoDB) databases, as well as knowledge of APIs, messaging systems (SQS, JMS), and event-driven architectures, will be essential for this role. Preferred qualifications include experience in serverless architecture using AWS Lambda, familiarity with React.js, Node.js, and Kotlin, and a background in machine learning, data analytics, or big data processing (Apache Spark, Storm). 
If you are looking to work in a dynamic environment where you can showcase your Java development skills and AWS expertise, this position offers an exciting opportunity to contribute to the development of cutting-edge cloud-native applications.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

karnataka

On-site

As a Site Reliability Engineering Manager at o9 Solutions, you will have the opportunity to work for an AI-based Unicorn that is recognized as one of the fastest-growing companies on the Inc. 5000 list. Your role will involve leading a team of talented SRE professionals to maintain and execute organizational policies and procedures for change management, configuration management, release and deployment management, service monitoring, problem management, and support the o9 Digital Brain Platform across major cloud providers like AWS, GCP, Azure, and Samsung Cloud. In this role, you will be empowered to continuously challenge the status quo and implement innovative ideas to create value for o9 clients. Your responsibilities will include deploying, maintaining, and supporting o9 digital Brain SaaS environments on all major clouds, managing the SRE team, hiring and growing SRE talent, leading, planning, building, configuring, testing, and deploying software and systems to manage platform infrastructure and applications. You will collaborate with internal and external customers to manage o9 platform deployment, maintenance, and support needs, improve reliability, quality, cost, time-to-deploy, and time-to-upgrade, monitor, measure, and optimize system performance, provide on-call support on rotation/shift basis, analyze and approve code and configuration changes, and work with teams globally across different time zones. To qualify for this role, you should have a Bachelor's degree in computer science, Software Engineering, Information Technology, Industrial Engineering, or Engineering Management, along with at least 9 years of experience building and leading high-performing diverse teams as an SRE manager or DevOps Manager. You should also have experience in cloud administration and Kubernetes certification. 
Additionally, you should have 5+ years of experience in an SRE role, deploying and maintaining applications, performance tuning, conducting application upgrades and patches, supporting continuous integration and deployment tooling, and experience with cloud platforms like AWS, Azure, or GCP, as well as Docker, Kubernetes, and supporting big data platforms. You should possess strong skills in operating system concepts, Linux, troubleshooting, automation, cloud, Jenkins, Ansible, Terraform, ArgoCD, and administration of databases. At o9 Solutions, we value team spirit, transparency, and frequent communication, and offer a flat organizational structure with an entrepreneurial culture. We provide a supportive network, a diverse international working environment, and work-life balance. If you are passionate about learning, adapting to new technology, and making a difference in a scale-up environment, we encourage you to apply and be part of our mission to digitally transform planning and decision-making for enterprises and the planet.
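The rotation/shift-based on-call support mentioned in the responsibilities can be sketched as a simple weekly round-robin. This is a stdlib-only illustration; the engineer names and start date are hypothetical, and real rotations are usually managed in a paging tool rather than hand-rolled:

```python
# A minimal weekly on-call round-robin of the kind an SRE team runs.
# Engineer names and the rotation start date are hypothetical.
from datetime import date

ENGINEERS = ["asha", "bo", "carlos", "dina"]
ROTATION_START = date(2024, 1, 1)  # a Monday

def on_call(engineers, start, day):
    """Each engineer covers seven consecutive days, then the next takes over."""
    weeks_elapsed = (day - start).days // 7
    return engineers[weeks_elapsed % len(engineers)]

today_engineer = on_call(ENGINEERS, ROTATION_START, date(2024, 1, 10))
```

The modulo wraps the schedule around, so a four-person team repeats every 28 days.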

Posted 1 week ago

Apply

7.0 - 11.0 years

0 Lacs

pune, maharashtra

On-site

Working at Tech Holding provides you with an opportunity to be part of a full-service consulting firm dedicated to delivering high-quality solutions and predictable outcomes to clients. Our team, comprising industry veterans with experience in both emerging startups and Fortune 50 firms, has developed a unique approach based on deep expertise, integrity, transparency, and dependability. We are currently seeking a Cloud Architect with at least 9 years of experience to assist in building functional systems that enhance customer experience. Your responsibilities will include:

Monitoring & Observability:
- Setting up and configuring Datadog and Grafana for comprehensive system metric monitoring and visualization.
- Developing alerting systems to proactively identify and resolve potential issues.
- Integrating monitoring tools with applications and infrastructure to ensure high observability.

CI/CD:
- Implementing and managing CI/CD pipelines using GitHub Actions, EKS, and Helm to automate build, test, and deployment processes.
- Optimizing build times and deployment frequency to expedite development cycles.
- Ensuring adherence to best practices for code quality, security, and compliance.

Cloud Infrastructure:
- Designing and overseeing the migration of Azure infrastructure to AWS with a focus on leveraging best practices and cloud-native technologies.
- Managing and optimizing AWS and Azure environments, including cost management, resource allocation, and security.
- Implementing and maintaining infrastructure as code (IaC) using tools like Terraform or AWS CloudFormation.

Incident Management:
- Implementing and managing incident response processes for efficient detection, response, and resolution of incidents.
- Collaborating with development, operations, and security teams to identify root causes and implement preventative measures.
- Maintaining incident response documentation and conducting regular drills to enhance readiness.

Migration:
- Leading the migration of ECS services to EKS while ensuring minimal downtime and data integrity.
- Optimizing EKS clusters for performance and scalability.
- Implementing best practices for container security and management.

CDN Management:
- Managing and optimizing the Akamai CDN solution to efficiently deliver content.
- Configuring CDN settings for caching, compression, and security.
- Monitoring CDN performance and troubleshooting issues.

Technology Stack:
- Proficiency in Python or Go for scripting and automation.
- Experience with Mux Enterprise for reporting, monitoring, and alerting.
- Familiarity with relevant technologies and tools such as Kubernetes, Docker, Ansible, and Jenkins.

Qualifications:
- Bachelor's degree in computer science, engineering, or a related field.
- Minimum of 7 years of experience in DevOps or a similar role.
- Strong understanding of cloud platforms (AWS and Azure) and their services.
- Expertise in Python or Go Lang and monitoring/observability tools (Datadog, Grafana).
- Proficiency in CI/CD pipelines and tools (GitHub Actions, EKS, Helm).
- Experience with infrastructure as code (IaC) tools (Terraform, AWS CloudFormation).
- Knowledge of containerization technologies (Docker, Kubernetes).
- Excellent problem-solving, troubleshooting, and communication skills.
- Ability to work independently and collaboratively within a team.

Employee Benefits include flexible work timings, work from home options as needed, family insurance policy, various leave benefits, and opportunities for learning and development.
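The "developing alerting systems" responsibility typically reduces to rules of this shape: fire when a windowed aggregate of a metric crosses a threshold. A minimal stdlib-only Python sketch (the threshold, window, and readings are hypothetical; in practice this is configured in Datadog or Grafana rather than hand-coded):

```python
# Sliding-window threshold alert: fire when the average of the last N
# samples exceeds the threshold. All numbers here are hypothetical.
from collections import deque

class ThresholdAlert:
    def __init__(self, threshold: float, window: int):
        self.threshold = threshold
        self.samples = deque(maxlen=window)

    def observe(self, value: float) -> bool:
        """Record a sample; return True once the windowed average breaches the threshold."""
        self.samples.append(value)
        if len(self.samples) < self.samples.maxlen:
            return False  # not enough data yet
        return sum(self.samples) / len(self.samples) > self.threshold

alert = ThresholdAlert(threshold=500.0, window=3)  # e.g. p95 latency in ms
fired = [alert.observe(v) for v in [420, 480, 510, 590, 630]]
```

Only the fourth and fifth readings fire: a single 510 ms spike does not move the three-sample average past 500 ms, which is exactly the noise suppression windowed rules are for.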

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

haryana

On-site

As a Technical Leader in our organization, you will play a pivotal role in architecting, designing, developing, and supporting a diverse suite of applications. Your expertise will be crucial in leading the design of secure and scalable software architectures, as well as developing data processing engines capable of handling large volumes of data. You will be driving the development of web-based platforms dedicated to healthcare analytics, and championing the adoption of best practices in software development, database management, and analytics implementation. In addition to your technical leadership responsibilities, you will also be instrumental in team building and mentorship. Your role will involve recruiting, onboarding, and cultivating a high-performing technical team. By mentoring team members and fostering skill development in coding, software architecture, and data visualization tools, you will contribute to promoting a collaborative and innovative team culture within our organization. Furthermore, your engagement with clients will be essential as you provide technical insights during discussions and presentations. Working closely with stakeholders, you will refine technical solutions to align them with business goals, ensuring a seamless integration of technology and strategic objectives. Key qualifications for this role include proficiency in leading the design of secure and scalable software architectures utilizing a wide range of technologies such as AWS/Azure, Docker, React.js, Node.js, Hasura, APIs, Microservices, Django, Flask, terraform, and more. Your hands-on experience in data analysis and software engineering, along with expertise in Python and other coding languages like SQL, R, or JavaScript, will be highly valued. Experience in web-based dashboard development using tools like Tableau, Power BI, or custom frameworks, as well as familiarity with US-based pharma clients and datasets, would be advantageous. 
Leadership and management skills are essential, with a strong emphasis on building and managing technical teams, problem-solving abilities, and a strategic mindset. 8+ years of experience in technical roles such as Solutions Architect or a similar position is preferred. Joining our team offers you the opportunity to work on transformative healthcare projects in a collaborative and inclusive work environment. We provide a competitive salary, performance-based bonuses, and professional development opportunities to support your growth and success in this dynamic role.

Posted 1 week ago

Apply

0 years

0 Lacs

Jaipur, Rajasthan, India

Remote

Are you the DevOps engineer who believes in building systems that are not only resilient but also future-proof? If you relish the challenge of transforming chaotic legacy infrastructures into sleek, automated ecosystems, we want you. We're on the hunt for an AWS infrastructure virtuoso who thrives under pressure, ensuring our acquired products run like clockwork with 99.9% uptime. Your mission? To migrate diverse product stacks into a unified, scalable AWS environment, complete with robust monitoring and automation to minimize incidents. While most roles focus on maintaining a singular tech stack, we're looking for someone eager to consolidate and refine multiple stacks, enhancing efficiency without missing a beat. If you're not interested in merely sustaining existing systems but are passionate about redesigning fragmented environments into cohesive ones, this is your calling. You'll be at the helm of infrastructure transformations, from AI-driven automation and performance tuning to database migrations and cost optimization. This includes troubleshooting, executing seamless cloud migrations with minimal downtime, and automating half your tasks using AI/ML workflows. You'll wield genuine decision-making power, free from bureaucratic delays. If you're a proactive problem-solver who thrives on refining complex systems to achieve impeccable performance, this role offers the autonomy and challenge you crave. But if you prefer predictable projects or require constant guidance, this might not be the right fit. Ready to own a high-impact infrastructure role with opportunities for large-scale optimization and automation? Apply today! 
What You Will Be Doing
- Orchestrating intricate infrastructure transformations, including migrating legacy systems to AWS cloud and executing lift-and-shift migrations
- Crafting comprehensive monitoring strategies and automating deployments and operational workflows
- Engaging in system monitoring, backups, incident response, database migrations, configurations, and optimizing costs

What You Won't Be Doing
- Being bogged down by Jira or endless status meetings - we prioritize solution-driven individuals over mere problem trackers
- Prolonging the life of obsolete systems - you'll have the mandate to enact substantial enhancements
- Getting tangled in bureaucratic approval processes - you'll have the autonomy to implement immediate fixes
- Limiting yourself to narrow technical specialties - this role demands wide-ranging expertise
- Struggling for budget for essential upgrades - we recognize the critical nature of infrastructure investments

DevOps Engineer Key Responsibilities
- Enhance the reliability and standardization of our cloud infrastructure across a diverse product portfolio by implementing effective monitoring, automation, and adhering to AWS best practices.

Basic Requirements
- Extensive expertise in AWS infrastructure (our primary platform - experience in other clouds won't suffice)
- Proficient programming skills in Python or JavaScript for automation and tool development
- Proven experience in managing and migrating production databases with various engines (including MySQL, Postgres, Oracle, MS-SQL)
- Advanced skills in Docker/Kubernetes
- Proficiency in infrastructure automation tools (Terraform, Ansible, or CloudFormation)
- Expertise in Linux systems administration

About Trilogy
Hundreds of software businesses run on the Trilogy Business Platform. For three decades, Trilogy has been known for 3 things: relentlessly seeking top talent, innovating new technology, and incubating new businesses. Our technological innovation is spearheaded by a passion for simple customer-facing designs. Our incubation of new businesses ranges from entirely new moon-shot ideas to rearchitecting existing projects for today's modern cloud-based stack. Trilogy is a place where you can be surrounded with great people, be proud of doing great work, and grow your career by leaps and bounds.

There is so much to cover for this exciting role, and space here is limited. Hit the Apply button if you found this interesting and want to learn more. We look forward to meeting you!

Working with us
This is a full-time (40 hours per week), long-term position. The position is immediately available and requires entering into an independent contractor agreement with Crossover as a Contractor of Record. The compensation level for this role is $50 USD/hour, which equates to $100,000 USD/year assuming 40 hours per week and 50 weeks per year. The payment period is weekly. Consult www.crossover.com/help-and-faqs for more details on this topic.

Crossover Job Code: LJ-5236-IN-Jaipur-DevOpsEngineer.004
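The 99.9% uptime target quoted in this listing translates into a concrete error budget, which is worth keeping in mind when scoping monitoring and incident response:

```python
# Converting an uptime percentage into allowed downtime.
# 99.9% over a 30-day month leaves 43.2 minutes; over a year, about 8.8 hours.

def downtime_budget_minutes(uptime_target: float, days: int) -> float:
    """Minutes of permissible downtime over a window of `days` days."""
    return (1.0 - uptime_target) * days * 24 * 60

monthly_budget = downtime_budget_minutes(0.999, 30)
yearly_budget = downtime_budget_minutes(0.999, 365)
```

At roughly 43 minutes per month, a single slow incident can consume the entire budget, which is why the role pairs the uptime target with monitoring and automation to minimize incidents.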

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

chennai, tamil nadu

On-site

You are a skilled Software Engineer with expertise in Node.js and Java, particularly in migrating applications from Node.js to Java Microservices. Your background includes 5 years of experience in software development, strong proficiency in Node.js, and a solid understanding of Java and Microservices architecture. You have hands-on experience with Spring Boot and RESTful APIs, and a good grasp of database technologies (SQL/NoSQL). Additionally, you are familiar with cloud platforms such as AWS, Azure, or GCP, and have knowledge of containerization and orchestration tools like Docker and Kubernetes. Experience with DevOps tools like CI/CD, Jenkins, and Terraform would be advantageous. Exposure to event-driven architecture using Kafka or RabbitMQ, along with an understanding of GraphQL or gRPC, is considered a good-to-have skill. You also have familiarity with performance tuning and optimization techniques. If you are passionate about backend development and enjoy transitioning applications from Node.js to Java Microservices, we are interested in hearing from you!

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

karnataka

On-site

As a Lead / Staff Software Engineer on the Black Duck SRE team, you will play a key role in transforming our R&D products through the adoption of advanced cloud, containerization, microservices, modern software delivery, and other cutting-edge technologies. You will be a key member of the team, working independently to develop tools and scripts for automated provisioning, deployment, and monitoring. The position is based in Bangalore (near Dairy Circle Flyover) with a hybrid work mode.

Key Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Minimum of 5-7 years of experience in Site Reliability Engineering / DevOps Engineering.
- Strong hands-on experience with containerization & orchestration using Docker, Kubernetes (K8s), and Helm to secure, optimize, and scale K8s.
- Deep understanding of cloud platforms & services in AWS / GCP / Azure (preferably GCP) to optimize cost, security, and performance.
- Solid experience with Infrastructure as Code (IaC) using Terraform / CloudFormation / Pulumi (preferably Terraform) - write modules, manage state.
- Proficient in scripting & automation using Bash, Python / Golang - automate tasks, error handling.
- Experienced in CI/CD pipelines & GitOps using Git / GitHub / GitLab / Bitbucket / ArgoCD, Harness.io - implement GitOps for deployments.
- Strong background in monitoring & observability using Prometheus / Grafana / ELK Stack / Datadog / New Relic - configure alerts, analyze trends.
- Good understanding of networking & security using firewalls, VPN, IAM, RBAC, TLS, SSO, Zero Trust - implement IAM, TLS, logging.
- Experience with backup & disaster recovery using Velero, snapshots, DR planning - implement backup solutions.
- Basic understanding of messaging concepts using RabbitMQ / Kafka / Pub/Sub / SQS.
- Familiarity with configuration management using Ansible / Chef / Puppet / SaltStack - run existing playbooks.

Key Responsibilities:
- Design and develop scalable, modular solutions that promote reuse and are easily integrated into our diverse product suite.
- Collaborate with cross-functional teams to understand their needs and incorporate user feedback into the development.
- Establish best practices for modern software architecture, including microservices, serverless computing, and API-first strategies.
- Drive the strategy for containerization and orchestration using Docker, Kubernetes, or equivalent technologies.
- Ensure the platform's infrastructure is robust, secure, and compliant with industry standards.

What We Offer:
- An opportunity to be part of a dynamic and innovative team committed to making a difference in the technology landscape.
- Competitive compensation package, including benefits and flexible work arrangements.
- A collaborative, inclusive, and diverse work environment where creativity and innovation are valued.
- Continuous learning and professional development opportunities to grow your expertise within the industry.
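The "automate tasks, error handling" qualification usually means exactly this pattern: wrapping flaky operations in retries with exponential backoff. A minimal stdlib-only sketch (the `flaky` function is a hypothetical stand-in for any remote call or cloud API request):

```python
# Retry with exponential backoff, a staple error-handling pattern in
# SRE automation scripts. The "flaky" function simulates a transient failure.
import time

def retry(func, attempts: int = 4, base_delay: float = 0.01):
    """Call func(); on failure, wait base_delay * 2**attempt and try again."""
    for attempt in range(attempts):
        try:
            return func()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the error
            time.sleep(base_delay * (2 ** attempt))

calls = {"count": 0}

def flaky():
    calls["count"] += 1
    if calls["count"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

result = retry(flaky)  # succeeds on the third attempt
```

Production variants usually add jitter to the delay and retry only on error types known to be transient.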

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

karnataka

On-site

As a DevOps Operational Support engineer on this project, you will contribute to the data management architecture of industry-leading software. You will work closely with cross-functional teams and regional experts to design, implement, and support solutions with a focus on data security and global availability, enabling data-driven decisions for our customers. This is your chance to work on a stable, long-term project with a global client, focused on digital transformation and change management.

Exciting opportunities await as you work in squads under our customer's direction, using Agile methodologies and Scrum. You will contribute to an innovative application that guides and documents the sales order process, aids in market analysis, and ensures competitive pricing. Be part of a team that integrates digital and human approvals, ensuring seamless integration with a broader ecosystem of applications. Collaborate with reputed global clients, delivering top-notch solutions, and join high-caliber project teams of front-end, back-end, and database developers that offer ample opportunities to learn, grow, and advance your career. If you possess strong technical skills, effective communication abilities, and a commitment to security, we want you on our team! Ready to make an impact? Apply now and be part of our journey to success!

Responsibilities:
- Solve operational challenges: work with global teams to find creative solutions for customers across our software catalog.
- Customer deployments: plan, provision, and configure enterprise-level solutions for customers on a global scale.
- Monitoring and troubleshooting: monitor customer environments to proactively identify and resolve issues, and provide support for incidents.
- Automation: leverage and maintain automation pipelines that handle all stages of the software lifecycle.
- Documentation: write and maintain documentation for processes, configurations, and procedures.
- SRE and MTTR goals: lead the team in troubleshooting environment failures within SRE MTTR targets.
- Collaborate with stakeholders: work closely with stakeholders to define project requirements and deliverables and to understand their needs and challenges.
- Implement best practices: ensure the highest standards in coding and security, with a strong emphasis on protecting systems and data.
- Strategize and plan: take an active role in defect triage, strategy, and architecture planning.
- Maintain performance: ensure database performance and resolve development problems.
- Deliver quality: translate requirements into high-quality solutions, adhering to Agile methodologies.
- Review and validate: conduct detailed design reviews to ensure alignment with the approved architecture.
- Collaborate: work with application development teams throughout the development, deployment, and support phases.
Mandatory Skills:

Technical Skills:
- Database technologies: RDBMS (Postgres preferred), NoSQL (Cassandra preferred)
- Software languages: Java, Python, NodeJS, Angular
- Cloud platforms: AWS
- Cloud managed services: messaging, serverless computing, blob storage
- Provisioning: Terraform, Helm
- Containerization: Docker, Kubernetes preferred
- Version control: Git

Qualifications and Soft Skills:
- Bachelor's degree in Computer Science, Software Engineering, or a related field
- Customer-driven and results-oriented focus
- Excellent problem-solving and troubleshooting skills
- Ability to work independently and as part of a team
- Strong communication and collaboration skills
- Strong desire to stay up to date with the latest trends and technologies in the field

Nice-to-Have Skills:
- Cloud technologies: RDS, Azure
- Knowledge of the E&P domain (Geology, Geophysics, Well, Seismic, or Production data types)
- GIS experience is desirable

Languages:
- English: C2 Proficient
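As a rough illustration of the provisioning skills listed above (Terraform targeting AWS), a minimal configuration looks something like the following sketch. The bucket name, region, and tags are hypothetical placeholders, not details of this project:

```hcl
# Minimal Terraform sketch: provision one AWS resource.
# All names and the region are illustrative placeholders.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-east-1" # placeholder region
}

resource "aws_s3_bucket" "artifacts" {
  bucket = "example-team-artifacts" # hypothetical bucket name

  tags = {
    Environment = "dev"
    ManagedBy   = "terraform"
  }
}
```

Run with `terraform init` followed by `terraform plan` and `terraform apply`; in a real deployment, state would live in a remote backend and values would come from per-environment variables.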

Posted 1 week ago

Apply

7.0 - 11.0 years

0 Lacs

haryana

On-site

As a Cloud DevOps Engineer on Oracle's CGIU Enterprise Communications Platform engineering team, you will play a crucial role in supporting the development team throughout the DevOps life cycle. Your expertise in cloud technologies, DevOps practices, and best methodologies will be essential in helping the team achieve its business goals and maintain a competitive edge.

Your responsibilities will include designing, implementing, and managing automated build, deployment, and testing systems. You will lead initiatives to enhance build and deployment processes for high-volume, high-availability systems while monitoring production systems for performance and availability and proactively resolving any issues that arise. Developing and maintaining infrastructure as code using tools like Terraform and creating CI/CD pipelines with GitLab CI/CD will be key tasks. Continuous improvement of system scalability and security, adherence to standard methodologies, and collaboration with multi-functional teams for successful project delivery will be part of your daily activities. You will also work closely with the security team to ensure compliance with industry standards and implement security measures to safeguard against threats.
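The CI/CD work described above might be sketched as a GitLab pipeline like the one below. This is a hypothetical illustration, not Oracle's actual configuration; the stage and job names, images, and the assumption that deployment happens via Terraform are all placeholders. The `CI_REGISTRY_IMAGE` and `CI_COMMIT_SHORT_SHA` variables are standard GitLab predefined variables:

```yaml
# .gitlab-ci.yml — minimal illustrative pipeline; job names,
# images, and the deploy step are placeholders, not a real project.
stages:
  - build
  - test
  - deploy

build-image:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  script:
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"

unit-tests:
  stage: test
  image: python:3.12
  script:
    - pip install -r requirements.txt
    - pytest

deploy-dev:
  stage: deploy
  image: hashicorp/terraform:1.8
  script:
    - terraform init
    - terraform apply -auto-approve
  environment: dev
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
```

The rule on `deploy-dev` restricts automatic deployment to the main branch, a common pattern for keeping feature-branch pipelines build-and-test only.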
Mandatory Skills:
- 7+ years of experience as a DevOps Engineer
- Bachelor's degree in Engineering or Computer Science
- Proficiency in Java/Python programming and experience with AWS or other public cloud platforms
- Hands-on experience with Terraform, GitLab CI, Jenkins, Docker, and Kubernetes, including troubleshooting within Kubernetes environments
- Scripting skills in Bash/Python, familiarity with REST APIs, and a strong background in Linux
- Expertise in developing and maintaining CI/CD pipelines and a solid understanding of DevOps culture and Agile methodology

Good to Have:
- Experience in SaaS and multi-tenant development
- Knowledge of cloud security and cybersecurity in a cloud context
- Familiarity with Java and the ELK stack, and prior experience in telecom and networking

Soft Skills:
- Excellent command of spoken and written English
- Ability to multitask and adapt to changing priorities
- Strong team skills, a proactive attitude, a focus on quality, and the drive to make a difference in a fast-paced environment

Joining Oracle's dynamic engineering division will involve active participation in defining and evolving standard practices and procedures. You will be responsible for software development tasks associated with designing, developing, and debugging software applications or operating systems. Oracle, a global leader in cloud solutions, thrives on innovation and inclusivity. With a commitment to fostering an inclusive workforce that empowers everyone to contribute, Oracle offers a diverse range of global opportunities with a focus on work-life balance, competitive benefits, and support for employee well-being.

At Oracle, we value diversity and inclusion, supporting employees with disabilities throughout the employment process. If you require accessibility assistance or accommodation due to a disability, please reach out to us at accommodation-request_mb@oracle.com or call +1 888 404 2494 in the United States.

Posted 1 week ago

Apply