4.0 - 9.0 years
7 - 12 Lacs
Mumbai
Work from Office
Site Reliability Engineer (SRE) - strong background in Google Cloud Platform (GCP) and Red Hat OpenShift administration.

Responsibilities:
- System Reliability: Ensure the reliability and uptime of critical services and infrastructure.
- Google Cloud Expertise: Design, implement, and manage cloud infrastructure using Google Cloud services.
- Automation: Develop and maintain automation scripts and tools to improve system efficiency and reduce manual intervention.
- Monitoring and Incident Response: Implement monitoring solutions and respond to incidents to minimize downtime and ensure quick recovery.
- Collaboration: Work closely with development and operations teams to improve system reliability and performance.
- Capacity Planning: Conduct capacity planning and performance tuning to ensure systems can handle future growth.
- Documentation: Create and maintain comprehensive documentation for system configurations, processes, and procedures.

Qualifications:
- Education: Bachelor's degree in Computer Science, Engineering, or a related field.
- Experience: 4+ years of experience in site reliability engineering or a similar role.

Skills:
- Proficiency in Google Cloud services (Compute Engine, Kubernetes Engine, Cloud Storage, BigQuery, Pub/Sub, etc.).
- Familiarity with Google BI and AI/ML tools (Looker, BigQuery ML, Vertex AI, etc.).
- Experience with automation tools (Terraform, Ansible, Puppet).
- Familiarity with CI/CD pipelines and tools (Azure Pipelines, Jenkins, GitLab CI, etc.).
- Strong scripting skills (Python, Bash, etc.).
- Knowledge of networking concepts and protocols.
- Experience with monitoring tools (Prometheus, Grafana, etc.).

Preferred Certifications:
- Google Cloud Professional DevOps Engineer
- Google Cloud Professional Cloud Architect
- Red Hat Certified Engineer (RHCE) or similar Linux certification
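The "Automation" and "Monitoring and Incident Response" duties above are typically exercised with small scripts like the following: a minimal, stdlib-only sketch of an HTTP health check with exponential backoff. The URL, timeout, and retry budget are illustrative assumptions, not details from the posting.

```python
"""Minimal SRE-style health check with exponential backoff (stdlib only).
Endpoint URL and thresholds are hypothetical examples."""
import time
import urllib.error
import urllib.request


def backoff_schedule(retries: int, base: float = 1.0, cap: float = 30.0) -> list:
    """Delays that double from `base` seconds, capped at `cap` seconds."""
    return [min(base * (2 ** i), cap) for i in range(retries)]


def check_health(url: str, retries: int = 4) -> bool:
    """Return True if the endpoint answers HTTP 200 within the retry budget."""
    for delay in backoff_schedule(retries):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # transient failure: wait, then retry
        time.sleep(delay)
    return False
```

In practice a check like this would feed an alerting pipeline (e.g., Prometheus Alertmanager) rather than run standalone; capping the backoff keeps recovery probes frequent enough during long outages.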
Posted 1 day ago
4.0 years
0 Lacs
Kerala, India
Remote
About FriskaAi
FriskaAi is a powerful AI-enabled, EHR-agnostic platform designed to help healthcare providers adopt an evidence-based approach to care. Our technology addresses up to 80% of chronic diseases, including obesity and type 2 diabetes, enabling better patient outcomes.

Location: Remote
Job Type: Full-Time

Job Description
We are seeking a highly skilled Backend Developer to join our team. The ideal candidate will have expertise in Python and Django, with experience in SQL and working in a cloud-based environment on Microsoft Azure. You will be responsible for designing, developing, and optimizing backend systems that drive our healthcare platform and ensure seamless data flow and integration.

Key Responsibilities
Backend Development
- Develop and maintain scalable backend services using Python and Django.
- Build and optimize RESTful APIs for seamless integration with frontend and third-party services.
- Implement efficient data processing and business logic to support platform functionality.
Database Management
- Design and manage database schemas using Azure SQL or PostgreSQL.
- Write and optimize SQL queries, stored procedures, and functions.
- Ensure data integrity and security through proper indexing and constraints.
API Development & Integration
- Develop secure and efficient RESTful APIs for frontend and external integrations.
- Ensure consistent and reliable data exchange between systems.
- Optimize API performance and scalability.
Cloud & Infrastructure
- Deploy and manage backend applications on Azure App Service and Azure Functions.
- Set up and maintain CI/CD pipelines using Azure DevOps.
- Implement monitoring and logging using Azure Application Insights.
Microservices Architecture
- Design and implement microservices to modularize backend components.
- Ensure smooth communication between services using messaging queues or REST APIs.
- Optimize microservices for scalability and fault tolerance.
Testing & Debugging
- Write unit and integration tests using Pytest.
- Debug and resolve production issues quickly and efficiently.
- Ensure code quality and reliability through regular code reviews.
Collaboration & Optimization
- Work closely with frontend developers, product managers, and stakeholders.
- Conduct code reviews to maintain high-quality standards.
- Optimize database queries, API responses, and backend processes for maximum performance.

Qualifications
Education & Experience
- Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience)
- 2-4 years of backend development experience
Technical Skills
- Proficiency in Python and Django
- Strong expertise in SQL (e.g., Azure SQL, PostgreSQL, MySQL)
- Experience with RESTful API design and development
- Familiarity with microservices architecture
- Hands-on experience with Azure services, including: Azure App Service, Azure Functions, Azure Storage, Azure Key Vault
- Experience with CI/CD using Azure DevOps
- Proficiency with version control tools like Git
- Knowledge of containerization with Docker
Soft Skills
- Strong problem-solving skills and attention to detail
- Excellent communication and teamwork abilities
- Ability to thrive in a fast-paced, agile environment
Preferred Skills (Nice to Have)
- Experience with Kubernetes (AKS) for container orchestration
- Knowledge of Redis for caching
- Experience with Celery for asynchronous task management
- Familiarity with GraphQL for data querying
- Understanding of infrastructure as code (IaC) using Terraform or Bicep
What We Offer
- Competitive salary & benefits package
- Opportunity to work on cutting-edge AI-driven solutions
- A collaborative and inclusive work environment
- Professional development & growth opportunities

If you're passionate about backend development and eager to contribute to innovative healthcare solutions, we'd love to hear from you! Apply now and be part of our mission to transform healthcare!
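The database duties above (optimized queries, indexing, constraints) can be sketched in a few lines. This example uses stdlib sqlite3 as a stand-in for Azure SQL/PostgreSQL; the table and column names are invented for illustration.

```python
"""Sketch of indexing, constraints, and parameterized queries, using
sqlite3 in place of Azure SQL/PostgreSQL. Schema is hypothetical."""
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE patient_readings (
        id         INTEGER PRIMARY KEY,
        patient_id INTEGER NOT NULL,   -- integrity via NOT NULL constraints
        metric     TEXT    NOT NULL,
        value      REAL    NOT NULL,
        taken_at   TEXT    NOT NULL
    )""")
# Composite index so per-patient, per-metric lookups avoid a full table scan.
conn.execute("CREATE INDEX idx_readings_patient_metric "
             "ON patient_readings (patient_id, metric)")
conn.executemany(
    "INSERT INTO patient_readings (patient_id, metric, value, taken_at) "
    "VALUES (?, ?, ?, ?)",             # parameterized: no SQL injection
    [(1, "glucose", 5.4, "2024-01-01"), (1, "glucose", 6.1, "2024-01-02"),
     (2, "glucose", 4.9, "2024-01-01")])
rows = conn.execute(
    "SELECT taken_at, value FROM patient_readings "
    "WHERE patient_id = ? AND metric = ? ORDER BY taken_at",
    (1, "glucose")).fetchall()
```

The composite index matches the query's WHERE columns in order, which is the usual rule of thumb regardless of which SQL engine sits underneath.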
Posted 1 day ago
0.0 years
0 Lacs
Vijay Nagar, Indore, Madhya Pradesh
On-site
Job Title: AWS DevOps Engineer Internship
Company: Inventurs Cube LLP
Location: Indore, Madhya Pradesh
Job Type: Full-time Internship
Duration: 1 to 3 months

Responsibilities:
- Assist in the design, implementation, and maintenance of AWS infrastructure using Infrastructure as Code (IaC) principles (e.g., CloudFormation, Terraform).
- Learn and apply CI/CD (Continuous Integration/Continuous Deployment) pipelines for automated software releases.
- Support the monitoring and logging of AWS services to ensure optimal performance and availability.
- Collaborate with development teams to understand application requirements and implement appropriate cloud solutions.
- Help troubleshoot and resolve infrastructure-related issues.
- Participate in security best practices implementation and review.
- Contribute to documentation of cloud architecture, configurations, and processes.
- Stay updated with the latest AWS services and DevOps trends.

What We're Looking For:
- Currently pursuing a Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Basic understanding of cloud computing concepts, preferably AWS.
- Familiarity with at least one scripting language (e.g., Python, Bash).
- Knowledge of Linux/Unix operating systems.
- Eagerness to learn and a strong problem-solving aptitude.
- Excellent communication and teamwork skills.
- Ability to work independently and take initiative.

Bonus Points (Not Mandatory, but a Plus):
- Prior experience with AWS services (e.g., EC2, S3, VPC, IAM).
- Basic understanding of version control systems (e.g., Git).
- Exposure to containerization technologies (e.g., Docker, Kubernetes).
- Familiarity with CI/CD tools (e.g., Jenkins, GitLab CI, AWS CodePipeline).

What You'll Gain:
- Hands-on experience with industry-leading AWS cloud services and DevOps tools.
- Mentorship from experienced AWS DevOps engineers.
- Exposure to real-world projects and agile development methodologies.
- Opportunity to build a strong foundation for a career in cloud and DevOps.
- A dynamic and supportive work environment in Indore.
- Certificate of internship completion.

Job Types: Full-time, Fresher, Internship
Contract length: 3 months
Pay: ₹15,000.00 - ₹20,000.00 per month
Schedule: Day shift
Work Location: In person
Speak with the employer: +91 9685458368
Posted 1 day ago
11.0 - 21.0 years
30 - 45 Lacs
Mumbai Suburban, Navi Mumbai, Mumbai (All Areas)
Work from Office
Minimum 11 to 20 years of experience with tools such as Azure DevOps, Jenkins, GitLab, GitHub, Docker, Kubernetes, Terraform, and Ansible. Experience with Dockerfiles and pipeline code. Experience automating tasks using Shell, Bash, PowerShell, and YAML. Exposure to .NET, Java, Pro*C, PL/SQL, Oracle/SQL, and Redis.

Required candidate profile: experience building a DevOps platform from the ground up using these tools on at least 2 projects; implementation of platform capabilities for requirement tracking, code management, and release management; experience with tools such as AppDynamics, Prometheus, Grafana, and the ELK Stack.

Perks and benefits: additional 40% variable pay + mediclaim.
Posted 1 day ago
5.0 - 10.0 years
6 - 16 Lacs
Pune, Mumbai (All Areas)
Work from Office
Role & responsibilities
- Proven experience with CI/CD and infrastructure tools such as Red Hat Ansible, Kubernetes, Prometheus, GitHub, Atlassian Jira, Confluence, and Jenkins.
- Must have Groovy/Shell scripting knowledge; Python, Perl, or Ruby scripting knowledge is good to have.
- Practical familiarity with public cloud resources and services, such as Google Cloud; Terraform knowledge is good to have.
- Familiarity with IT monitoring and management tools such as Datadog.
- Proficiency with container technologies such as Docker and Kubernetes.
- Proficiency in troubleshooting and resolving technical issues across test and production environments.
- 5+ years of experience as a DevOps Engineer.
- Candidate should be able to work independently with minimal guidance.
Posted 1 day ago
8.0 - 15.0 years
15 - 25 Lacs
Bengaluru
Work from Office
Company Name: ANZ
Experience: 8+ Years
Location: Bangalore (Hybrid)
Interview Mode: Virtual
Interview Rounds: 2 Rounds
Notice Period: Immediate to 30 days

Generic Responsibilities:
- Design, develop, test, deploy, and maintain scalable Node.js applications using microservices architecture.
- Collaborate with cross-functional teams to identify requirements and implement solutions that meet business needs.
- Ensure high availability, scalability, security, and performance of the application by implementing monitoring tooling.
- Troubleshoot issues related to API integrations with third-party services.

Generic Requirements:
- 8-15 years of experience in software development with expertise in Node.js.
- Strong understanding of microservices architecture principles and design patterns.
- Experience with containerization and orchestration using Kubernetes or similar technologies.
- Proficiency in working with message queues like Kafka for building real-time data pipelines.
Posted 1 day ago
4.0 - 6.0 years
7 - 9 Lacs
Pune
Work from Office
Manages stakeholders and external interfaces, and is responsible for the smooth operation of the company's IT infrastructure. Must have a deep understanding of both development and operations processes, as well as a strong technical background.
Posted 1 day ago
6.0 - 9.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About Birlasoft: Birlasoft is a powerhouse where domain expertise, enterprise solutions, and digital technologies converge to redefine business processes. We take pride in our consultative and design-thinking approach, driving societal progress by enabling our customers to run businesses with unmatched efficiency and innovation. As part of the CKA Birla Group, a multibillion-dollar enterprise, we boast a 12,500+ professional team committed to upholding the Group's 162-year legacy. Our core values prioritize Diversity, Equity, and Inclusion (DEI) initiatives, along with Corporate Sustainable Responsibility (CSR) activities, demonstrating our dedication to building inclusive and sustainable communities. Join us in shaping a future where technology seamlessly aligns with purpose.

Job Title: Sr Technical Lead
Location: Pune
Experience: 6-9 years
Educational Background: Bachelor's degree in Computer Science, Information Technology, or a related field.

About the Job:
- Familiar with Cloud Engineering to leverage Cloud and DevOps based technologies provided by the Platform teams.
- Collaborates with the Product Manager to align technical solutions with business goals and serves as the escalation point for cloud engineering issues.
- Supports the Product technical architecture, alignment to the technology roadmap, and technical engineering standards.

Key Responsibilities: This individual will assist with setting up and provisioning architecture, optimizing efforts for infrastructure, and deploying best practices and excellence in automation techniques. Desirable technical skillsets include:
- Azure or AWS certifications
- DevOps certification
- Scripting certification (preferably Python)
- Previous Agile experience
- Experience with automation tools such as Ansible, Puppet, Chef, Salt, and Terraform
Posted 1 day ago
7.0 - 12.0 years
12 - 18 Lacs
Pune, Chennai, Coimbatore
Hybrid
Hiring "Azure & DevOps" for Pune/Chennai/Coimbatore locations. Overall experience: 6-12 yrs.

If you are interested in the below-mentioned position, please share your updated CV to sandhya_allam@epam.com along with the following details (shortlisted applicants will be contacted directly):
1. Have you applied for a role at EPAM recently?
2. Years of experience in Azure Cloud and DevOps solutions
3. Years of experience in Docker & Kubernetes
4. Years of experience in Terraform
5. Experience in Python/Bash/PowerShell
6. Current salary
7. Expected salary
8. Notice period (negotiable or mandatory)

Responsibilities:
- Responsible for fault tolerance, high availability, scalability, and security on Azure infrastructure and platform.
- Responsible for implementation of CI/CD pipelines with automated build and test systems.
- Responsible for production deployment using multiple deployment strategies.
- Responsible for automating Azure infrastructure and platform deployment with IaC.
- Responsible for automating system configurations using configuration management tools.
- Hands-on production experience with Azure compute services: VM management, VMSS, AKS, Container Instances, autoscaling, load balancers, spot instances, App Service.
- Hands-on production experience with Azure network services: VNET, subnets, ExpressRoute, Azure Gateway, VPN, Load Balancer, DNS, Traffic Manager, CDN, Front Door, Private Link, Network Watcher.
- Good automation skills using Azure orchestration tools: Terraform, Ansible, ARM templates, and the CLI.
- Hands-on production experience in Docker and container orchestration using AKS and ACR.
- Ability to write scripts (Linux shell/Python/PowerShell/Bash/CLI) to automate cloud tasks.
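The IaC-automation duty above usually means driving Terraform from a pipeline script. A hedged sketch of the command construction is below; the var-file name is hypothetical, and only well-known Terraform CLI flags (`-var-file`, `-auto-approve`) are used.

```python
"""Sketch of wrapping the Terraform CLI from Python for pipeline use.
Paths and variable files are illustrative assumptions."""
import subprocess
from typing import List, Optional


def terraform_cmd(action: str, var_file: Optional[str] = None,
                  auto_approve: bool = False) -> List[str]:
    """Build an argv list for a terraform subcommand."""
    cmd = ["terraform", action]
    if var_file:
        cmd.append(f"-var-file={var_file}")
    if auto_approve and action in ("apply", "destroy"):
        cmd.append("-auto-approve")   # only valid for apply/destroy
    return cmd


def run(action: str, **kwargs) -> int:
    # In a real pipeline this runs inside the IaC working directory
    # after `terraform init`; here we only return the exit code.
    return subprocess.run(terraform_cmd(action, **kwargs)).returncode
```

Keeping the argv construction in a pure function makes the risky part (what gets executed) trivially unit-testable without touching cloud credentials.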
Posted 1 day ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Role Description
Job Title: Python Developer with AWS
Experience: 5+ yrs
Location: Hyderabad
Notice period: Immediate joiners only (0-10 days)
Primary skills: Python, AWS (S3, EC2, Lambda, API)

Detailed Job Description
- 5+ years of work experience using Python and AWS for developing enterprise software applications
- Experience in Apache Kafka, including topic creation, message optimization, and efficient message processing
- Skilled in Docker and container orchestration tools such as Amazon EKS or ECS
- Strong experience managing AWS components, including Lambda (Java), API Gateway, RDS, EC2, CloudWatch
- Experience working in an automated DevOps environment, using tools like Jenkins, SonarQube, Nexus, and Terraform for deployments
- Hands-on experience with Java-based web services, RESTful approaches, ORM technologies, and SQL procedures in Java
- Experience with Git for code versioning and commit management
- Experience working in Agile teams with a strong focus on collaboration and iterative development
- Ability to implement changes following standard turnover procedures, with a CI/CD focus
- Bachelor's or Master's degree in Computer Science, Information Systems, or equivalent

Skills: Python, API design, architecture, AWS, OOP, S3, Django, FastAPI, Flask
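The "message optimization and efficient message processing" point above generally comes down to batching: grouping records into bounded batches before handing them to a producer or consumer. A minimal, library-free sketch of that idea (a real Kafka client such as confluent-kafka would consume these batches):

```python
"""Sketch of bounded batching for efficient message processing.
Batch size and records are illustrative; no Kafka client is used."""
from typing import Iterable, Iterator, List


def batched(records: Iterable[bytes], max_batch: int) -> Iterator[List[bytes]]:
    """Yield lists of at most `max_batch` records, preserving order."""
    batch: List[bytes] = []
    for rec in records:
        batch.append(rec)
        if len(batch) == max_batch:
            yield batch
            batch = []
    if batch:                     # flush the final partial batch
        yield batch


batches = list(batched([b"a", b"b", b"c", b"d", b"e"], max_batch=2))
```

Bounding the batch size caps per-request payloads and memory, which is the same trade-off Kafka producers make internally with `batch.size` and linger settings.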
Posted 1 day ago
3.0 years
0 Lacs
India
Remote
About the Role
At Ceryneian, we're building a next-generation, research-driven algorithmic trading platform aimed at democratizing access to hedge fund-grade financial analytics. Headquartered in California, Ceryneian is a fintech innovation company dedicated to empowering traders with sophisticated yet accessible tools for quantitative research, strategy development, and execution. Our flagship platform is currently under development.

As our DevOps Engineer, you will bridge our backend systems (strategy engine, broker APIs) and frontend applications (analytics dashboards, client portals). You will own the design and execution of scalable infrastructure, CI/CD automation, and system observability in a high-frequency, multi-tenant trading environment. This role is central to deploying our containerized strategy engine (Lean-based), while ensuring data integrity, latency optimization, and cost-efficient scalability. We are a remote-first team and are open to hiring exceptional candidates globally.

Key Responsibilities
- Design secure, scalable environments for containerized, multi-tenant API services and user-isolated strategy runners.
- Implement low-latency cloud infrastructure across development, staging, and production environments.
- Automate the CI/CD lifecycle, from pipeline design to versioned production deployment (GitHub Actions, GitLab CI, etc.).
- Manage Dockerized containers and orchestrate deployment with Kubernetes, ECS, or similar systems.
- Collaborate with backend and frontend teams to define infrastructure and deployment workflows.
- Optimize and monitor high-throughput data pipelines for strategy engines using tools like ClickHouse.
- Integrate observability stacks: Prometheus, Grafana, ELK, or Datadog for logs, metrics, and alerts.
- Support automated rollbacks, canary releases, and resilient deployment practices.
- Automate infrastructure provisioning using Terraform or Ansible (Infrastructure as Code).
- Ensure system security, audit readiness (SOC2, GDPR, SEBI), and comprehensive access control logging.
- Contribute to high-availability architecture and event-driven design for alerting and strategy signals.

Technical Competencies Required
- Cloud: AWS (preferred), GCP, or Azure.
- Containerization: Proficiency with Docker and orchestration tools (Kubernetes, ECS, etc.).
- CI/CD: Experience with YAML-based pipelines using GitHub Actions, GitLab CI/CD, or similar tools.
- Data Systems: Familiarity with PostgreSQL, MongoDB, ClickHouse, or Supabase.
- Monitoring: Setup and scaling of observability tools like Prometheus, ELK Stack, or Datadog.
- Distributed Systems: Strong understanding of scalable microservices, caching, and message queues.
- Event-Driven Architecture: Experience with Kafka, Redis Streams, or AWS SNS/SQS (preferred).
- Cost Optimization: Ability to build cold-start strategy runners and enable cloud auto-scaling.
- 0-3 years of experience.

Nice-to-Haves
- Experience with real-time or high-frequency trading systems.
- Familiarity with broker integrations and exchange APIs (e.g., Zerodha, Dhan).
- Understanding of IAM, role-based access control systems, and multi-region deployments.
- Educational background from Tier-I or Tier-II institutions with strong CS fundamentals, passion for scalable infrastructure, and a drive to build cutting-edge fintech systems.

What We Offer
- Opportunity to shape the core DevOps and infrastructure for a next-generation fintech product.
- Exposure to real-time strategy execution, backtesting systems, and quantitative modeling.
- Competitive compensation with performance-based bonuses.
- Remote-friendly culture with async-first communication.
- Collaboration with a world-class team from Pomona, UCLA, Harvey Mudd, and Claremont McKenna.
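The "automated rollbacks, canary releases" responsibility boils down to a promotion gate: send a small share of traffic to the new version and promote only if its error rate stays close to the baseline. A toy sketch of that decision logic, with an invented tolerance threshold:

```python
"""Toy canary-release gate: promote only if the canary error rate is
no worse than baseline plus a tolerance. Thresholds are hypothetical."""


def promote_canary(baseline_errors: int, baseline_total: int,
                   canary_errors: int, canary_total: int,
                   tolerance: float = 0.01) -> bool:
    """Return True when the canary may be promoted to full traffic."""
    if canary_total == 0:
        return False              # no traffic observed yet: keep waiting
    base_rate = baseline_errors / max(baseline_total, 1)
    canary_rate = canary_errors / canary_total
    return canary_rate <= base_rate + tolerance
```

Real systems (Argo Rollouts, Flagger, etc.) layer statistical significance and multiple metrics on top of this, but the shape of the gate is the same: compare canary telemetry against the stable baseline before shifting traffic.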
Posted 1 day ago
0.0 - 5.0 years
0 Lacs
Chetput, Chennai, Tamil Nadu
On-site
Job Description: Azure Infrastructure Engineer
Experience: 7+ years
CTC: 20 LPA
Notice period: Immediate to 15 days
Base Location: Chennai (onsite - Saudi Arabia (KSA))
Profile source: Anywhere in India
Timings: 1:00 pm - 10:00 pm
Work Mode: WFO (Mon-Fri)

We are looking for an Azure Infrastructure Engineer with 3-5 years of experience who understands cloud architecture and security best practices aligned with the Cloud Security Alliance (CSA) Cloud Controls Matrix (CCM). The candidate will be responsible for designing, implementing, and managing secure and scalable infrastructure on Microsoft Azure, ensuring compliance with CSA security principles and regulatory standards.

Key Responsibilities:
- Design and deploy Azure infrastructure with a security-first mindset, aligned with the CSA CCM and the Azure Well-Architected Framework.
- Implement identity and access controls (RBAC, Azure AD, MFA, Conditional Access) as per the CSA IAM domain.
- Ensure data protection using Azure encryption capabilities (at rest, in transit, and in use).
- Deploy network security architectures (NSGs, Azure Firewall, Private Link, ExpressRoute) compliant with CSA and NIST guidelines.
- Enable security monitoring and incident response with Azure Defender, Sentinel, and Security Center.
- Map and document infrastructure against CSA CCM controls.
- Ensure infrastructure is compliant with CIS Benchmarks, ISO 27001, and CSA STAR guidelines.
- Automate infrastructure provisioning with ARM templates, Bicep, or Terraform, integrating security guardrails.
- Perform periodic vulnerability assessments and remediation aligned with CSA guidelines.

Required Skills & Qualifications:
- 3-5 years of experience in Azure cloud infrastructure.
- Strong hands-on experience in Azure IaaS (VMs, VNETs, Storage, Load Balancers, etc.).
- In-depth knowledge of Azure security tools (Azure Security Center, Defender for Cloud, Sentinel).
- Familiarity with the Cloud Security Alliance (CSA) Cloud Controls Matrix (CCM) and CAIQ.
- Strong understanding of identity and access management principles.
- Proficient in scripting (PowerShell, Azure CLI) and IaC (ARM/Bicep/Terraform).
- Experience working in regulated industries (e.g., healthcare, finance) is a plus.

Certifications (Preferred):
- Microsoft Certified: Azure Security Engineer Associate (AZ-500)
- Microsoft Certified: Azure Solutions Architect Expert
- CSA CCSK (Certificate of Cloud Security Knowledge) or CCSP

Soft Skills:
- Excellent documentation and communication skills.
- Ability to translate compliance requirements into technical controls.
- Strong collaboration skills with security, operations, and compliance teams.

Job Type: Full-time
Pay: From ₹60,000.00 per month
Schedule: Night shift
Supplemental Pay: Performance bonus
Ability to commute/relocate: Chetput, Chennai, Tamil Nadu: reliably commute or plan to relocate before starting work (Required)
Experience: total work: 5 years (Preferred)
Work Location: In person
Posted 1 day ago
3.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Veeam, the #1 global market leader in data resilience, believes businesses should control all their data whenever and wherever they need it. Veeam provides data resilience through data backup, data recovery, data portability, data security, and data intelligence. Based in Seattle, Veeam protects over 550,000 customers worldwide who trust Veeam to keep their businesses running.

We're looking for a Platform Engineer to join the Veeam Data Cloud. The mission of the Platform Engineering team is to provide a secure, reliable, and easy-to-use platform to enable our teams to build, test, deploy, and monitor the VDC product. This is an excellent opportunity for someone with cloud infrastructure and software development experience to build the world's most successful, modern data protection platform.

Your tasks will include:
- Write and maintain code to automate our public cloud infrastructure, software delivery pipeline, other enablement tools, and internally consumed platform services
- Document system design, configurations, processes, and decisions to support our async, distributed team culture
- Collaborate with a team of remote engineers to build the VDC platform
- Work with a modern technology stack based on containers, serverless infrastructure, public cloud services, and other cutting-edge technologies in the SaaS domain
- On-call rotation for product operations

Technologies we work with: Kubernetes, Azure AKS, AWS EKS, Helm, Docker, Terraform, Golang, Bash, Git, etc.

What we expect from you:
- 3+ years of experience in production operations for a SaaS (Software as a Service) or cloud service provider
- Experience automating infrastructure through code using technologies such as Pulumi or Terraform
- Experience with GitHub Actions
- Experience with a breadth and depth of public cloud services
- Experience building and supporting enterprise SaaS products
- Understanding of the principles of operational excellence in a SaaS environment
- Scripting skills in languages like Bash or Python
- Understanding of and experience implementing secure design principles in the cloud
- Demonstrated ability to learn new technologies quickly and implement them in a pragmatic manner
- A strong bias toward action and direct, frequent communication
- A university degree in a technical field

Will be an advantage:
- Experience with Azure
- Experience with high-level programming languages such as Go, Java, C/C++, etc.

We offer:
- Family medical insurance
- Annual flexible spending allowance for health and well-being
- Life insurance
- Personal accident insurance
- Employee Assistance Program
- A comprehensive leave package, including parental leave
- Meal Benefit Pass
- Transportation allowance
- Monthly daycare allowance
- Veeam Care Days: an additional 24 hours for your volunteering activities
- Professional training and education, including courses and workshops, internal meetups, and unlimited access to our online learning platforms (Percipio, Athena, O'Reilly) and mentoring through our MentorLab program

Please note: if the applicant is permanently located outside India, Veeam reserves the right to decline the application. #Hybrid

Veeam Software is an equal opportunity employer and does not tolerate discrimination in any form on the basis of race, color, religion, gender, age, national origin, citizenship, disability, veteran status or any other classification protected by federal, state or local law. All your information will be kept confidential. Please note that any personal data collected from you during the recruitment process will be processed in accordance with our Recruiting Privacy Notice. The Privacy Notice sets out the basis on which the personal data collected from you, or that you provide to us, will be processed by us in connection with our recruitment processes. By applying for this position, you consent to the processing of your personal data in accordance with our Recruiting Privacy Notice.
Posted 1 day ago
12.0 - 20.0 years
6 - 16 Lacs
Pune
Work from Office
Role & responsibilities
We are seeking a skilled and results-driven Azure DevOps Engineer with hands-on experience in Azure cloud services, Infrastructure as Code (IaC) using Terraform and/or Bicep, and modern DevOps practices. You will play a key role in designing, implementing, and maintaining scalable, secure, and automated cloud infrastructure.

Responsibilities
- Design, build, and maintain Azure infrastructure using Infrastructure as Code (Terraform and/or Bicep).
- Develop and manage CI/CD pipelines using Azure DevOps or GitHub Actions to automate build, test, and deployment processes.
- Collaborate with architects, developers, and security teams to implement best practices for cloud infrastructure, security, and compliance.
- Manage Azure resources (VMs, networking, storage, AKS, App Services, etc.) with automation and IaC.
- Monitor, troubleshoot, and optimize infrastructure for performance, reliability, and cost.
- Implement security controls and policies (identity, RBAC, Key Vault, firewalls, etc.) in Azure environments.
- Maintain documentation for infrastructure, procedures, and standards.
- Participate in on-call rotation and incident response as needed.

Required Skills & Qualifications
- Hands-on Azure DevOps architecture experience (IaaS, PaaS, networking, security).
- Strong proficiency with Terraform and/or Bicep for infrastructure automation.
- Experience with Azure DevOps, GitHub Actions, or equivalent CI/CD platforms.
- Proficient in scripting languages (e.g., PowerShell, Bash).
- Solid understanding of networking, security, and identity concepts in cloud environments.
- Experience with version control systems (Git).
- Familiarity with monitoring tools (Azure Monitor, Log Analytics).
- Strong troubleshooting and analytical skills.
- Excellent communication and teamwork abilities.

Preferred/Bonus Skills
- Azure certifications (e.g., AZ-104, AZ-400, AZ-305).
- Knowledge of other cloud platforms.
Posted 1 day ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a GCP DevOps Engineer to join our team in Bangalore/Hyderabad/Chennai/Gurugram/Noida, Karnātaka (IN-KA), India (IN).

Responsibilities
- Design, implement, and manage GCP infrastructure using Infrastructure as Code (IaC) tools.
- Develop and maintain CI/CD pipelines to improve development workflows.
- Monitor system performance and ensure high availability of cloud resources.
- Collaborate with development teams to streamline application deployments.
- Maintain security best practices and compliance across the cloud environment.
- Automate repetitive tasks to enhance operational efficiency.
- Troubleshoot and resolve infrastructure-related issues in a timely manner.
- Document procedures, policies, and configurations for the infrastructure.

Skills: Google Cloud Platform (GCP), Terraform, Ansible, CI/CD, Kubernetes, Docker, Python, Bash/shell scripting, monitoring tools (e.g., Prometheus, Grafana), cloud security, Jenkins, Git

About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future.
Visit us at us.nttdata.com NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us . This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here . If you'd like more information on your EEO rights under the law, please click here . For Pay Transparency information, please click here .
Posted 1 day ago
8.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Req ID: 327296

NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a GCP Solution Architect to join our team in Noida, Uttar Pradesh (IN-UP), India (IN).

Job Description:
Primary Skill: Cloud-Infrastructure-Google Cloud Platform
Minimum work experience: 8+ yrs
Total Experience: 8+ Years
Must have GCP Solution Architect Certification & GKE

Mandatory Skills:

Technical Qualification/Knowledge:
- Expertise in assessment, designing and implementing GCP solutions, including aspects like compute, network, storage, identity, security, DR/business continuity strategy, migration, templates, cost optimization, PowerShell, Terraform, Ansible, etc.
- Must have GCP Solution Architect Certification.
- Should have prior experience in executing large, complex cloud transformation programs, including discovery, assessment, business case creation, design, build, migration planning and migration execution.
- Should have prior experience in using industry-leading or native discovery, assessment and migration tools.
- Good knowledge of cloud technology, different patterns, deployment methods, and compatibility of applications.
- Good knowledge of GCP technologies and associated components and variations:
  - Anthos Application Platform
  - Compute Engine, Compute Engine Managed Instance Groups, Kubernetes
  - Cloud Storage, Cloud Storage for Firebase, Persistent Disk, Local SSD, Filestore, Transfer Service
  - Virtual Private Cloud (VPC), Cloud DNS, Cloud Interconnect, Cloud VPN Gateway, Network Load Balancing, Global load balancing, Firewall rules, Cloud Armor
  - Cloud IAM, Resource Manager, Multi-factor Authentication, Cloud KMS
  - Cloud Billing, Cloud Console, Stackdriver
  - Cloud SQL, Cloud Spanner SQL, Cloud Bigtable
  - Cloud Run container services, Kubernetes Engine (GKE), Anthos Service Mesh, Cloud Functions, PowerShell on GCP
- Solid understanding and experience in cloud-computing-based services architecture, technical design and implementations, including IaaS, PaaS, and SaaS.
- Design of clients' Cloud environments with a focus mainly on GCP, demonstrating Technical Cloud Architectural knowledge.
- Playing a vital role in the design of production, staging, QA and development Cloud Infrastructures running in 24x7 environments.
- Delivery of customer Cloud Strategies, aligned with customers' business objectives and with a focus on Cloud Migrations and DR strategies.
- Nurture Cloud computing expertise internally and externally to drive Cloud Adoption.
- Should have a deep understanding of IaaS and PaaS services offered on cloud platforms and understand how to use them together to build complex solutions.
- Ensure that all cloud solutions follow security and compliance controls, including data sovereignty.
- Deliver cloud platform architecture documents detailing the vision for how GCP infrastructure and platform services support the overall application architecture, interacting with application, database and testing teams to provide a holistic view to the customer.
- Collaborate with application architects and DevOps to modernize Infrastructure as a Service (IaaS) applications to Platform as a Service (PaaS).
- Create solutions that support a DevOps approach for delivery and operations of services.
- Interact with and advise business representatives of the application regarding functional and non-functional requirements.
- Create proof-of-concepts to demonstrate viability of solutions under consideration.
- Develop enterprise-level conceptual solutions and sponsor consensus/approval for global applications.
- Have a working knowledge of other architecture disciplines including application, database, infrastructure, and enterprise architecture.
- Identify and implement best practices, tools and standards.
- Provide consultative support to the DevOps team for production incidents.
- Drive and support system reliability, availability, scale, and performance activities.
- Evangelize cloud automation and be a thought leader and expert defining standards for building and maintaining cloud platforms.
- Knowledgeable about configuration management tools such as Chef/Puppet/Ansible.
- Automation skills using CLI scripting in any language (bash, perl, python, ruby, etc.).
- Ability to develop a robust design to meet customer business requirements with scalability, availability, performance and cost-effectiveness using GCP offerings.
- Ability to identify and gather requirements to define an architectural solution which can be successfully built and operated on GCP.
- Ability to produce high-level and low-level designs for the GCP platform, which may also include data center design as necessary.
- Ability to provide GCP operations and deployment guidance and best practices throughout the lifecycle of a project.
- Understanding of the significance of the different metrics for monitoring and their threshold values, with the ability to take corrective measures when thresholds are breached.
- Knowledge of automation to reduce the number of incidents, or repetitive incidents, is preferred.
- Good knowledge of cloud center operations, monitoring tools, and backup solutions.

GKE
- Set up monitoring and logging to troubleshoot a cluster or debug a containerized application.
- Manage Kubernetes objects: declarative and imperative paradigms for interacting with the Kubernetes API.
- Manage Secrets: managing confidential settings data using Secrets.
- Configure load balancing, port forwarding, or set up firewall or DNS configurations to access applications in a cluster.
- Configure networking for your cluster.
- Hands-on experience with Terraform; ability to write reusable Terraform modules.
- Hands-on Python and Unix shell scripting is required.
- Good understanding of CI/CD pipelines in a globally distributed environment using Git, Artifactory, Jenkins, Docker registry.
- Experience with GCP services and writing Cloud Functions.
- Hands-on experience deploying and managing Kubernetes infrastructure with Terraform Enterprise; ability to write reusable Terraform modules.
- Certified Kubernetes Administrator (CKA) and/or Certified Kubernetes Application Developer (CKAD) is a plus.
- Experience using Docker within container orchestration platforms such as GKE.
- Knowledge of setting up Splunk.
- Knowledge of Spark on GKE.

Certification: GCP Solution Architect & GKE

Process/Quality Knowledge:
- Must have clear knowledge of ITIL-based service delivery; ITIL certification is desired.
- Knowledge of quality processes.
- Knowledge of security processes.
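One of the GKE skills called out above is managing confidential settings data with Secrets. As a small illustration: Kubernetes Secret manifests carry their values base64-encoded (encoding, not encryption), which the hypothetical helper below demonstrates; the secret name, namespace, and keys are invented for the example:

```python
import base64

def make_secret_manifest(name, namespace, data):
    """Build a Kubernetes Secret manifest dict; string values are
    base64-encoded per the API convention for the `data` field."""
    return {
        "apiVersion": "v1",
        "kind": "Secret",
        "metadata": {"name": name, "namespace": namespace},
        "type": "Opaque",
        "data": {
            key: base64.b64encode(value.encode()).decode()
            for key, value in data.items()
        },
    }

manifest = make_secret_manifest("db-creds", "prod", {"password": "s3cret"})
```

Because this is mere encoding, anyone with read access to the Secret can recover the value, which is why the posting pairs Secrets management with IAM and KMS skills.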
Posted 1 day ago
8.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Introduction: EkVayu Tech is a fast-growing, research-focused technology company specializing in developing IT and AI applications. Our projects span modern front-end development, robust backend systems, cloud-native and on-prem infrastructure, AI/ML enablement, and automated testing pipelines. We are looking for a visionary technical leader to guide our engineering team and architecture strategy as we scale. We have products in the areas of Cybersecurity, AI/ML/DL, Signal Processing, System Engineering and Health-Tech.

Job Title: Tech Architect / VP of Engineering / Tech Lead
Experience Level: Senior / Leadership
Location: Noida Sector 62, UP, India

Role Overview
As a Tech Architect / Engineering VP / Tech Lead, you will be responsible for driving the overall engineering strategy, leading architecture and design decisions, managing development teams, and ensuring scalable, high-performance delivery of products. You'll work closely with founders, product teams, and clients to define and deliver cutting-edge solutions that leverage AI and full-stack technologies.

Key Responsibilities
- Architectural Leadership:
  - Design and evolve scalable, secure, and performant architecture across front-end, backend, and AI services.
  - Guide tech stack choices, frameworks, and tools aligned with business goals.
  - Lead cloud/on-prem infrastructure decisions, including CI/CD, containerization, and DevOps automation.
- Engineering Management:
  - Build and mentor a high-performing engineering team.
  - Define engineering best practices, coding standards, and technical workflows.
  - Own technical delivery timelines and code quality benchmarks.
- Hands-on Development & Technical Oversight:
  - Contribute to critical system components and set examples in code quality and documentation.
  - Oversee implementation of RESTful APIs, microservices, AI modules, and integration plugins.
  - Champion test-driven development and automated QA processes.
- AI Enablement:
  - Guide development of AI-enabled features, data pipelines, and model integration (working with MLOps/data teams).
  - Drive adoption of tools that enhance AI-assisted development and intelligent systems.
- Infrastructure & Deployment:
  - Architect hybrid environments across cloud and on-prem setups.
  - Optimize deployment pipelines using tools like Docker, Kubernetes, GitHub Actions, or similar.
  - Implement observability solutions for performance monitoring and issue resolution.

Required Skills & Experience
- 8+ years of experience in software engineering, with 3+ years in a leadership/architect role.
- Strong proficiency in:
  - Frontend: React.js, Next.js
  - Backend: Python, Django, FastAPI
  - AI/ML Integration: working knowledge of ML model serving, APIs, or pipelines
- Experience building and scaling systems in hybrid (cloud/on-prem) environments.
- Hands-on with CI/CD, testing automation, and modern DevOps workflows.
- Experience with plugin-based architectures and extensible systems.
- Deep understanding of security, scalability, and performance optimization.
- Ability to translate business needs into tech solutions and communicate across stakeholders.

Preferred (Nice to Have)
- Experience with OpenAI API, LangChain, or custom AI tooling environments.
- Familiarity with infrastructure-as-code (Terraform, Ansible).
- Background in SaaS product development or AI-enabled platforms.
- Knowledge of container orchestration (Kubernetes) and microservice deployments.

What We Offer
- Competitive compensation
- Opportunity to shape core technology in a fast-growing company
- Exposure to cutting-edge AI applications and infrastructure challenges
- Collaborative and open-minded team culture

How to Apply
Send your resume, portfolio (if applicable), and a brief note on why you're excited to join us to HR@EkVayu.com
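The "plugin-based architectures and extensible systems" requirement above usually means a registry that decouples the host application from its extensions. A minimal Python sketch of one common approach (decorator-based registration; the class and function names are illustrative, not EkVayu's actual design):

```python
# Minimal plugin registry: plugins self-register via a decorator,
# and the host looks them up by name at run time.
PLUGINS = {}

def register(name):
    def decorator(cls):
        PLUGINS[name] = cls
        return cls
    return decorator

@register("uppercase")
class UppercasePlugin:
    def run(self, text):
        return text.upper()

@register("reverse")
class ReversePlugin:
    def run(self, text):
        return text[::-1]

def run_plugin(name, text):
    """Host-side entry point: instantiate and invoke a registered plugin."""
    return PLUGINS[name]().run(text)

result = run_plugin("uppercase", "hello")  # "HELLO"
```

Real systems typically add entry-point discovery or dynamic imports on top of this registry, but the decoupling idea is the same.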
Posted 1 day ago
12.0 - 19.0 years
10 - 20 Lacs
Pune
Work from Office
Job Description:

# Overview
We are seeking a skilled and results-driven Azure DevOps Engineer with hands-on experience in Azure cloud services, Infrastructure as Code (IaC) using Terraform and/or Bicep, and modern DevOps practices. You will play a key role in designing, implementing, and maintaining scalable, secure, and automated cloud infrastructure.

# Responsibilities
- Design, build, and maintain Azure infrastructure using Infrastructure as Code (Terraform and/or Bicep).
- Develop and manage CI/CD pipelines using Azure DevOps or GitHub Actions to automate build, test, and deployment processes.
- Collaborate with architects, developers, and security teams to implement best practices for cloud infrastructure, security, and compliance.
- Manage Azure resources (VMs, Networking, Storage, AKS, App Services, etc.) with automation and IaC.
- Monitor, troubleshoot, and optimize infrastructure for performance, reliability, and cost.
- Implement security controls and policies (Identity, RBAC, Key Vault, firewalls, etc.) in Azure environments.
- Maintain documentation for infrastructure, procedures, and standards.
- Participate in on-call rotation and incident response as needed.

# Required Skills & Qualifications
- Hands-on experience architecting Azure environments (IaaS, PaaS, networking, security).
- Strong proficiency with Terraform and/or Bicep for infrastructure automation.
- Experience with Azure DevOps, GitHub Actions, or equivalent CI/CD platforms.
- Proficient in scripting languages (e.g., PowerShell, Bash).
- Solid understanding of networking, security, and identity concepts in cloud environments.
- Experience with version control systems (Git).
- Familiarity with monitoring tools (Azure Monitor, Log Analytics).
- Strong troubleshooting and analytical skills.
- Excellent communication and teamwork abilities.

# Preferred/Bonus Skills
- Azure certifications (e.g., AZ-104, AZ-400, AZ-305).
- Knowledge of other cloud platforms (e.g., AWS, GCP).
Education & Experience
- Bachelor's degree in Computer Science, Information Technology, or a related field, or equivalent experience.
- 5+ years of experience in cloud infrastructure and DevOps roles.

Location
- [On-site / Remote / Hybrid] (Customize as needed)
Posted 1 day ago
0.0 - 1.0 years
0 Lacs
Indore, Madhya Pradesh
On-site
Responsibilities:
- Develop and maintain infrastructure as code (IaC) to support scalable and secure infrastructure.
- Collaborate with the development team to streamline and optimize the continuous integration and deployment pipeline.
- Manage and administer Linux systems, ensuring reliability and security.
- Configure and provision cloud resources on AWS, Google Cloud, or Azure as required.
- Implement and maintain containerized environments using Docker and orchestration with Kubernetes.
- Monitor system performance and troubleshoot issues to ensure optimal application uptime.
- Stay updated with industry best practices, tools, and DevOps methodologies.
- Enhance software development processes through automation and continuous improvement initiatives.

Requirements:
- Degree(s): B.Tech/BE (CS, IT, EC, EI) or MCA.
- Eligibility: Open to 2021, 2022, and 2023 graduates and postgraduates only.
- Expertise in Infrastructure as Code (IaC) with tools like Terraform and CloudFormation.
- Proficiency in software development using languages such as Python, Bash, and Go.
- Experience in Continuous Integration with tools such as Jenkins, Travis CI, and CircleCI.
- Strong Linux system administration skills.
- Experience in provisioning, configuring, and managing cloud resources (AWS, Google Cloud Platform, or Azure).
- Excellent verbal and written communication skills.
- Experience with containerization and orchestration tools such as Docker and Kubernetes.

Job Type: Full-time
Pay: ₹45,509.47 - ₹85,958.92 per month
Benefits: Health insurance
Schedule: Day shift
Ability to commute/relocate: Indore, Madhya Pradesh: Reliably commute or planning to relocate before starting work (Preferred)
Education: Bachelor's (Preferred)
Experience: Python: 1 year (Preferred); AI/ML: 1 year (Preferred)
Location: Indore, Madhya Pradesh (Preferred)
Work Location: In person
Posted 1 day ago
8.0 - 12.0 years
0 Lacs
Delhi, India
On-site
Greetings from TCS!!

TCS is hiring for an Azure with Terraform role.
Exp: 8 to 12 years
Mandatory skills: Azure, Compute, Storage, DNS, Terraform
Interview mode: Face to Face
Interview Date: 21 Jun 25 (Saturday)
Interview venue: Yamuna Park, Delhi

Job Description:
- Design and deploy scalable, highly available, and fault-tolerant systems on Azure.
- Proven experience with Microsoft Azure services (Compute, Storage, Networking, Security).
- Strong understanding of networking concepts (DNS, VPN, VNet, NSG, Load Balancers).
- Manage and monitor cloud infrastructure using Azure Monitor, Log Analytics, and other tools.
- Implement and manage virtual networks, storage accounts, and Azure Active Directory.
- Hands-on experience with Infrastructure as Code (IaC) tools like ARM, Terraform.
- Experience with scripting languages (PowerShell, Bash, or Python).
- Ensure security best practices and compliance standards are followed.
- Troubleshoot and resolve issues related to cloud infrastructure and services.
- Experience in DevOps to support CI/CD pipelines and containerized applications (AKS, Docker).
- Optimize cloud costs and performance.
- Familiarity with Azure DevOps, GitHub Actions, or other CI/CD tools.
- Experience in identity and access management (IAM), RBAC, and Azure AD.

Please share your updated CV with the details below if you are interested:
- Overall exp:
- Relevant exp:
- Current Organisation:
- Highest qualification:
- Current CTC:
- ECTC:
- Notice period:
- Current location:
- Preferred location:
- Gap, if any:
- Available for F2F discussion on 21 Jun (Saturday) Y/N:
Posted 1 day ago
6.0 years
0 Lacs
India
Remote
Who we are
We're a leading, global security authority that's disrupting our own category. Our encryption is trusted by the major ecommerce brands, the world's largest companies, the major cloud providers, entire country financial systems, entire internets of things and even down to the little things like surgically embedded pacemakers. We help companies put trust - an abstract idea - to work. That's digital trust for the real world.

Job summary
As a DevOps Engineer, you will play a pivotal role in designing, implementing, and maintaining our infrastructure and deployment processes. You will collaborate closely with our development, operations, and security teams to ensure seamless integration of code releases, infrastructure automation, and continuous improvement of our DevOps practices. This role places a strong emphasis on infrastructure as code with Terraform, including module design, remote state management, policy enforcement, and CI/CD integration. You will manage authentication via Auth0, maintain secure network and identity configurations using AWS IAM and Security Groups, and oversee the lifecycle and upgrade management of AWS RDS and MSK clusters. Additional responsibilities include managing vulnerability remediation, containerized deployments via Docker, and orchestrating production workloads using AWS ECS and Fargate.

What you will do
- Design, build, and maintain scalable, reliable, and secure infrastructure solutions on cloud platforms such as AWS, Azure, or GCP.
- Implement and manage continuous integration and continuous deployment (CI/CD) pipelines for efficient and automated software delivery.
- Develop and maintain infrastructure as code (IaC), with a primary focus on Terraform, including building reusable, modular, and parameterized modules for scalable infrastructure.
- Securely manage Terraform state using remote backends (e.g., S3 with DynamoDB locks) and establish best practices for drift detection and resolution.
- Integrate Terraform into CI/CD pipelines with automated plan, apply, and policy-check gating.
- Conduct testing and validation of Terraform code using tools such as Terratest, Checkov, or equivalent frameworks.
- Design and manage network infrastructure, including VPCs, subnets, routing, NAT gateways, and load balancers.
- Configure and manage AWS IAM roles, policies, and Security Groups to enforce least-privilege access control and secure application environments.
- Administer and maintain Auth0 for user authentication and authorization, including rule scripting, tenant settings, and integration with identity providers.
- Build and manage containerized applications using Docker, deployed through AWS ECS and Fargate for scalable and cost-effective orchestration.
- Implement vulnerability management workflows, including image scanning, patching, dependency management, and CI-integrated security controls.
- Manage RDS and MSK infrastructure, including lifecycle and version upgrades, high availability setup, and performance tuning.
- Monitor system health, performance, and capacity using tools like Prometheus, ELK, or Splunk; proactively resolve bottlenecks and incidents.
- Collaborate with development and security teams to resolve infrastructure issues, streamline delivery, and uphold compliance.

What you will have
- Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent work experience.
- 6+ years in DevOps or a similar role, with strong experience in infrastructure architecture and automation.
- Advanced proficiency in Terraform, including module creation, backend management, workspaces, and integration with version control and CI/CD.
- Experience with remote state management using S3 and DynamoDB, and implementing Terraform policy-as-code with OPA/Sentinel.
- Familiarity with Terraform testing/validation tools such as Terratest, InSpec, or Checkov.
- Strong background in cloud networking, VPC design, DNS, and ingress/egress control.
- Proficient with AWS IAM, Security Groups, EC2, RDS, S3, Lambda, MSK, and ECS/Fargate.
- Hands-on experience with Auth0 or equivalent identity management platforms.
- Proficient in container technologies like Docker, with production deployments via ECS/Fargate.
- Solid experience in vulnerability and compliance management across the infrastructure lifecycle.
- Skilled in scripting (Python, Bash, PowerShell) for automation and tooling development.
- Experience in monitoring/logging using Prometheus, ELK stack, Grafana, or Splunk.
- Excellent troubleshooting skills in cloud-native and distributed systems.
- Effective communicator and cross-functional collaborator in Agile/Scrum environments.

Nice to have
- Terraform (Intermediate)
- AWS (IAM, Security Groups, RDS, MSK, ECS/Fargate, CloudWatch)
- Docker
- CI/CD (GitLab, Jenkins)
- Auth0
- Python/Bash

Benefits
- Generous time off policies
- Top shelf benefits
- Education, wellness and lifestyle support
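The posting above asks for Terraform integrated into CI/CD with plan, apply, and policy-check gating. One common gate blocks plans that would destroy resources. The sketch below assumes the JSON shape produced by `terraform show -json <planfile>` (a `resource_changes` list whose entries carry `change.actions`); it is an illustration of the idea, not a drop-in tool, and the resource addresses are invented:

```python
def destructive_changes(plan):
    """Return addresses of resources the plan would destroy.

    `plan` follows the `terraform show -json` schema: each entry in
    resource_changes has change.actions such as ["create"], ["update"],
    ["delete"], or ["delete", "create"] for a replacement.
    """
    return [
        rc["address"]
        for rc in plan.get("resource_changes", [])
        if "delete" in rc["change"]["actions"]
    ]

# Example plan: one in-place update and one replacement.
plan = {
    "resource_changes": [
        {"address": "aws_s3_bucket.state", "change": {"actions": ["update"]}},
        {"address": "aws_db_instance.main", "change": {"actions": ["delete", "create"]}},
    ]
}
blocked = destructive_changes(plan)  # ["aws_db_instance.main"]
```

A CI job would run this over the parsed plan file and fail the pipeline (or require manual approval) when the returned list is non-empty; policy-as-code tools like OPA/Sentinel generalize the same check.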
Posted 1 day ago
40.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description
Analyze, design, develop, troubleshoot and debug software programs for commercial or end-user applications. Writes code, completes programming and performs testing and debugging of applications.

Career Level - IC3

Responsibilities
As a member of the software engineering division, you will perform high-level design based on provided external specifications. Specify, design and implement minor changes to existing software architecture. Build highly complex enhancements and resolve complex bugs. Build and execute unit tests and unit plans. Review integration and regression test plans created by QA. Communicate with QA and porting engineering as necessary to discuss minor changes to product functionality and to ensure quality and consistency across specific products.

Responsibilities
- Work with the team to develop and maintain full stack SaaS solutions.
- Collaborate with engineering and product teams, contribute to the definition of specifications for new features, and own the development of those features.
- Define and implement web services and the application backend microservices.
- Implement and/or assist with the web UI/UX development.
- Be a champion for cloud-native best practices.
- Have a proactive mindset about bug fixes, solving bottlenecks and addressing performance issues.
- Maintain code quality, organization, and automation.
- Ensure the testing strategy is followed within the team.
- Support the services you build in production.

Essential Skills And Background
- Expert knowledge of Java
- Experience with micro-service development at scale
- Experience working with Kafka
- Experience with automated test frameworks at the unit, integration and acceptance levels
- Use of source code management systems such as git

Preferred Skills And Background
- Knowledge of issues related to scalable, fault-tolerant architectures
- Knowledge of Python
- Experience with SQL and RDBMS (Oracle and/or MySQL preferred)
- Experience deploying applications in Kubernetes with Helm
- Experience with DevOps tools such as Prometheus and Grafana
- Experience in Agile development methodology
- Experience in Terraform is preferred
- Use of build tools like Gradle and Maven

Qualifications
Career Level - IC3

About Us
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all.

Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.

We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.

Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
Posted 1 day ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Role: Senior Databricks Engineer / Databricks Technical Lead / Data Architect
Location: Bangalore, Chennai, Delhi, Pune, Kolkata

Primary Roles And Responsibilities
- Develop Modern Data Warehouse solutions using Databricks and the AWS/Azure stack.
- Provide forward-thinking solutions in the data engineering and analytics space.
- Collaborate with DW/BI leads to understand new ETL pipeline development requirements.
- Triage issues to find gaps in existing pipelines and fix the issues.
- Work with the business to understand reporting-layer needs and develop data models to fulfill them.
- Help junior team members resolve issues and technical challenges.
- Drive technical discussions with client architects and team members.
- Orchestrate the data pipelines in a scheduler via Airflow.

Skills And Qualifications
- Bachelor's and/or master's degree in computer science or equivalent experience.
- Must have 6+ yrs. of total IT experience and 3+ years' experience in Data warehouse/ETL projects.
- Deep understanding of Star and Snowflake dimensional modelling.
- Strong knowledge of Data Management principles.
- Good understanding of the Databricks Data & AI platform and Databricks Delta Lake Architecture.
- Should have hands-on experience in SQL, Python and Spark (PySpark).
- Candidate must have experience in the AWS/Azure stack.
- Desirable to have ETL with batch and streaming (Kinesis).
- Experience in building ETL / data warehouse transformation processes.
- Experience with Apache Kafka for use with streaming data / event-based data.
- Experience with other open-source big data products, e.g., Hadoop (incl. Hive, Pig, Impala).
- Experience with open-source non-relational / NoSQL data repositories (incl. MongoDB, Cassandra, Neo4j).
- Experience working with structured and unstructured data, including imaging & geospatial data.
- Experience working in a DevOps environment with tools such as Terraform, CircleCI, Git.
- Proficiency in RDBMS, complex SQL, PL/SQL, Unix Shell Scripting, performance tuning and troubleshooting.
- Databricks Certified Data Engineer Associate/Professional Certification (desirable).
- Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects.
- Should have experience working in Agile methodology.
- Strong verbal and written communication skills.
- Strong analytical and problem-solving skills with high attention to detail.

Mandatory Skills: Python / PySpark / Spark with Azure/AWS Databricks

Skills: neo4j, pig, mongodb, pl/sql, architect, terraform, hadoop, pyspark, impala, apache kafka, adfs, etl, data warehouse, spark, azure, databricks, rdbms, cassandra, aws, unix shell scripting, circleci, python, azure synapse, hive, git, kinesis, sql
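The Star and Snowflake dimensional modelling mentioned above boils down to fact rows carrying surrogate keys that resolve against dimension tables. A toy plain-Python sketch of a star-schema aggregation (in practice this would be SQL or PySpark, and all table contents here are invented):

```python
# Dimension tables keyed by surrogate key.
dim_product = {
    1: {"name": "Widget", "category": "Hardware"},
    2: {"name": "Gadget", "category": "Hardware"},
}
dim_date = {20240101: {"year": 2024, "quarter": "Q1"}}

# Fact table: measures plus foreign keys into the dimensions.
fact_sales = [
    {"product_key": 1, "date_key": 20240101, "amount": 120.0},
    {"product_key": 2, "date_key": 20240101, "amount": 80.0},
]

def sales_by_category(facts, products):
    """Roll up fact measures along a dimension attribute."""
    totals = {}
    for row in facts:
        category = products[row["product_key"]]["category"]
        totals[category] = totals.get(category, 0.0) + row["amount"]
    return totals

totals = sales_by_category(fact_sales, dim_product)  # {"Hardware": 200.0}
```

A Snowflake schema would further normalize `category` out into its own table referenced from `dim_product`; the fact table stays the same.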
Posted 1 day ago
3.0 years
0 Lacs
Greater Kolkata Area
On-site
Role: Senior Databricks Engineer / Databricks Technical Lead / Data Architect
Location: Bangalore, Chennai, Delhi, Pune, Kolkata

Primary Roles And Responsibilities
- Develop Modern Data Warehouse solutions using Databricks and the AWS/Azure stack.
- Provide forward-thinking solutions in the data engineering and analytics space.
- Collaborate with DW/BI leads to understand new ETL pipeline development requirements.
- Triage issues to find gaps in existing pipelines and fix the issues.
- Work with the business to understand reporting-layer needs and develop data models to fulfill them.
- Help junior team members resolve issues and technical challenges.
- Drive technical discussions with client architects and team members.
- Orchestrate the data pipelines in a scheduler via Airflow.

Skills And Qualifications
- Bachelor's and/or master's degree in computer science or equivalent experience.
- Must have 6+ yrs. of total IT experience and 3+ years' experience in Data warehouse/ETL projects.
- Deep understanding of Star and Snowflake dimensional modelling.
- Strong knowledge of Data Management principles.
- Good understanding of the Databricks Data & AI platform and Databricks Delta Lake Architecture.
- Should have hands-on experience in SQL, Python and Spark (PySpark).
- Candidate must have experience in the AWS/Azure stack.
- Desirable to have ETL with batch and streaming (Kinesis).
- Experience in building ETL / data warehouse transformation processes.
- Experience with Apache Kafka for use with streaming data / event-based data.
- Experience with other open-source big data products, e.g., Hadoop (incl. Hive, Pig, Impala).
- Experience with open-source non-relational / NoSQL data repositories (incl. MongoDB, Cassandra, Neo4j).
- Experience working with structured and unstructured data, including imaging & geospatial data.
- Experience working in a DevOps environment with tools such as Terraform, CircleCI, Git.
- Proficiency in RDBMS, complex SQL, PL/SQL, Unix Shell Scripting, performance tuning and troubleshooting.
- Databricks Certified Data Engineer Associate/Professional Certification (desirable).
- Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects.
- Should have experience working in Agile methodology.
- Strong verbal and written communication skills.
- Strong analytical and problem-solving skills with high attention to detail.

Mandatory Skills: Python / PySpark / Spark with Azure/AWS Databricks

Skills: neo4j, pig, mongodb, pl/sql, architect, terraform, hadoop, pyspark, impala, apache kafka, adfs, etl, data warehouse, spark, azure, databricks, rdbms, cassandra, aws, unix shell scripting, circleci, python, azure synapse, hive, git, kinesis, sql
Posted 1 day ago
5.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Job Title: Senior Data Engineer (AWS Expert)
Location: Ahmedabad
Experience: 5+ Years
Company: IGNEK
Shift Time: 2 PM - 11 PM IST
About IGNEK:
IGNEK is a fast-growing custom software development company with over a decade of industry experience and a passionate team of 25+ experts. We specialize in crafting end-to-end digital solutions that empower businesses to scale efficiently and stay ahead in an ever-evolving digital world. At IGNEK, we believe in quality, innovation, and a people-first approach to solving real-world challenges through technology.
We are looking for a highly skilled and experienced Data Engineer with deep expertise in AWS cloud technologies and strong hands-on experience in backend development, data pipelines, and system design. The ideal candidate will take ownership of delivering robust and scalable solutions while collaborating closely with cross-functional teams and the tech lead.
Key Responsibilities:
Lead and manage the end-to-end implementation of cloud-native data solutions on AWS.
Design, build, and maintain scalable data pipelines (PySpark/Spark) and data lake architectures (Delta Lake 3.0 or similar).
Migrate on-premises systems to modern, scalable AWS-based services.
Engineer robust relational databases using Postgres or Oracle with a strong understanding of procedural languages.
Collaborate with the tech lead to understand business requirements and deliver practical, scalable solutions.
Integrate newly developed features following defined SDLC standards using CI/CD pipelines.
Develop orchestration and automation workflows using tools like Apache Airflow.
Ensure all solutions comply with security best practices, performance benchmarks, and cloud architecture standards.
Monitor, debug, and troubleshoot issues across multiple environments.
Stay current with new AWS features, services, and trends to drive continuous platform improvement.
hr@ignek.com +91-9328495160 www.ignek.com
Required Skills and Experience:
5+ years of professional experience in data engineering and backend development.
Strong expertise in Python, Scala, and PySpark.
Deep knowledge of AWS services: EC2, S3, Lambda, RDS, Kinesis, IAM, API Gateway, and others.
Hands-on experience with Postgres or Oracle, and building relational data stores.
Experience with Spark clusters, Delta Lake, Glue Catalog, and large-scale data processing.
Proven track record of end-to-end project delivery and third-party system integrations.
Solid understanding of microservices, serverless architectures, and distributed computing.
Skilled in Java, Bash scripting, and search tools like Elasticsearch.
Proficient in using CI/CD tools (e.g., GitLab, GitHub, AWS CodePipeline).
Experience working with Infrastructure as Code (IaC) using Terraform.
Hands-on experience with Docker, containerization, and cloud-native deployments.
Preferred Qualifications:
AWS certifications (e.g., AWS Certified Solutions Architect or similar).
Exposure to Agile/Scrum project methodologies.
Familiarity with Kubernetes, advanced networking, and cloud security practices.
Experience managing or collaborating with onshore/offshore teams.
Soft Skills:
Excellent communication and stakeholder management.
Strong leadership and problem-solving abilities.
Team player with a collaborative mindset.
High ownership and accountability in delivering quality outcomes.
Why Join IGNEK?
Work on exciting, large-scale digital transformation projects.
Be part of a people-centric, innovation-driven culture.
A flexible work environment and opportunities for continuous learning.
How to Apply:
Please send your resume and a cover letter detailing your experience to hr@ignek.com
Posted 1 day ago
Terraform, an infrastructure as code tool developed by HashiCorp, is gaining popularity in the tech industry, especially in the field of DevOps and cloud computing. In India, the demand for professionals skilled in Terraform is on the rise, with many companies actively hiring for roles related to infrastructure automation and cloud management using this tool.
India's major tech hubs have a strong technology presence and a correspondingly high demand for Terraform professionals.
The salary range for Terraform professionals in India varies based on experience levels. Entry-level positions can expect to earn around INR 5-8 lakhs per annum, while experienced professionals with several years of experience can earn upwards of INR 15 lakhs per annum.
In the Terraform job market, a typical career progression can include roles such as Junior Developer, Senior Developer, Tech Lead, and eventually, Architect. As professionals gain experience and expertise in Terraform, they can take on more challenging and leadership roles within organizations.
Alongside Terraform, professionals in this field are often expected to have knowledge of related tools and technologies such as AWS, Azure, Docker, Kubernetes, scripting languages like Python or Bash, and infrastructure monitoring tools.
Common Terraform interview topics include the difference between the `terraform plan` and `terraform apply` commands.
As you explore opportunities in the Terraform job market in India, remember to continuously upskill, stay updated on industry trends, and practice for interviews to stand out among the competition. With dedication and preparation, you can secure a rewarding career in Terraform and contribute to the growing demand for skilled professionals in this field. Good luck!
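Since the `plan` and `apply` commands come up so often in interviews, a minimal configuration helps make the distinction concrete. The sketch below is purely illustrative (the provider version, region, and bucket name are hypothetical, not taken from any posting above): `terraform plan` previews the changes needed to reach the declared state, and `terraform apply` actually makes them.

```hcl
# main.tf — minimal illustrative configuration (hypothetical names/region)
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "ap-south-1" # example region only
}

# A single resource so the plan output is easy to read
resource "aws_s3_bucket" "example" {
  bucket = "example-terraform-demo-bucket" # must be globally unique
}
```

After a one-time `terraform init`, running `terraform plan` prints the proposed actions (here, one bucket to create) without touching the cloud account; `terraform apply` executes that plan after confirmation.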