3.0 years
0 Lacs
Greater Kolkata Area
On-site
Role: Senior Databricks Engineer / Databricks Technical Lead / Data Architect
Location: Bangalore, Chennai, Delhi, Pune, Kolkata

Primary Roles And Responsibilities
- Develop Modern Data Warehouse solutions using Databricks and the AWS/Azure stack
- Provide forward-thinking solutions in the data engineering and analytics space
- Collaborate with DW/BI leads to understand new ETL pipeline development requirements
- Triage issues to find gaps in existing pipelines and fix them
- Work with the business to understand reporting-layer needs and develop data models to fulfill them
- Help junior team members resolve issues and technical challenges
- Drive technical discussions with client architects and team members
- Orchestrate data pipelines via the Airflow scheduler

Skills And Qualifications
- Bachelor's and/or master's degree in computer science, or equivalent experience
- Must have 6+ years of total IT experience and 3+ years of experience in data warehouse/ETL projects
- Deep understanding of Star and Snowflake dimensional modelling
- Strong knowledge of data management principles
- Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture
- Hands-on experience in SQL, Python, and Spark (PySpark)
- Must have experience with the AWS/Azure stack
- ETL with batch and streaming (Kinesis) is desirable
- Experience building ETL/data warehouse transformation processes
- Experience with Apache Kafka for streaming/event-based data
- Experience with other open-source big data products, e.g. Hadoop (incl. Hive, Pig, Impala)
- Experience with open-source non-relational/NoSQL data repositories (incl. MongoDB, Cassandra, Neo4j)
- Experience working with structured and unstructured data, including imaging and geospatial data
- Experience working in a DevOps environment with tools such as Terraform, CircleCI, and Git
- Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, and troubleshooting
- Databricks Certified Data Engineer Associate/Professional certification (desirable)
- Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects
- Experience working in Agile methodology
- Strong verbal and written communication skills
- Strong analytical and problem-solving skills with high attention to detail

Mandatory Skills: Python / PySpark / Spark with Azure/AWS Databricks
Skills: neo4j, pig, mongodb, pl/sql, architect, terraform, hadoop, pyspark, impala, apache kafka, adfs, etl, data warehouse, spark, azure, databricks, rdbms, cassandra, aws, unix shell scripting, circleci, python, azure synapse, hive, git, kinesis, sql
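As a concrete illustration of the Airflow orchestration responsibility above, here is a minimal DAG sketch, assuming Airflow 2.4+ with the apache-airflow-providers-databricks package installed; the DAG id, connection id, cluster spec, and notebook path are hypothetical placeholders, not part of the posting:

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.databricks.operators.databricks import DatabricksSubmitRunOperator

# Nightly DAG that submits a PySpark notebook run to Databricks.
with DAG(
    dag_id="nightly_dw_load",
    schedule="0 2 * * *",          # run daily at 02:00
    start_date=datetime(2024, 1, 1),
    catchup=False,
) as dag:
    run_etl = DatabricksSubmitRunOperator(
        task_id="run_etl_notebook",
        databricks_conn_id="databricks_default",  # hypothetical connection id
        new_cluster={
            "spark_version": "13.3.x-scala2.12",
            "node_type_id": "i3.xlarge",
            "num_workers": 2,
        },
        notebook_task={"notebook_path": "/ETL/nightly_transform"},  # hypothetical path
    )
```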
Posted 13 hours ago
5.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Job Title: Senior Data Engineer (AWS Expert)
Location: Ahmedabad
Experience: 5+ Years
Company: IGNEK
Shift Time: 2 PM - 11 PM IST

About IGNEK: IGNEK is a fast-growing custom software development company with over a decade of industry experience and a passionate team of 25+ experts. We specialize in crafting end-to-end digital solutions that empower businesses to scale efficiently and stay ahead in an ever-evolving digital world. At IGNEK, we believe in quality, innovation, and a people-first approach to solving real-world challenges through technology.

We are looking for a highly skilled and experienced Data Engineer with deep expertise in AWS cloud technologies and strong hands-on experience in backend development, data pipelines, and system design. The ideal candidate will take ownership of delivering robust and scalable solutions while collaborating closely with cross-functional teams and the tech lead.

Key Responsibilities:
• Lead and manage the end-to-end implementation of cloud-native data solutions on AWS.
• Design, build, and maintain scalable data pipelines (PySpark/Spark) and data lake architectures (Delta Lake 3.0 or similar).
• Migrate on-premises systems to modern, scalable AWS-based services.
• Engineer robust relational databases using Postgres or Oracle with a strong understanding of procedural languages.
• Collaborate with the tech lead to understand business requirements and deliver practical, scalable solutions.
• Integrate newly developed features following defined SDLC standards using CI/CD pipelines.
• Develop orchestration and automation workflows using tools like Apache Airflow.
• Ensure all solutions comply with security best practices, performance benchmarks, and cloud architecture standards.
• Monitor, debug, and troubleshoot issues across multiple environments.
• Stay current with new AWS features, services, and trends to drive continuous platform improvement.

Required Skills and Experience:
• 5+ years of professional experience in data engineering and backend development.
• Strong expertise in Python, Scala, and PySpark.
• Deep knowledge of AWS services: EC2, S3, Lambda, RDS, Kinesis, IAM, API Gateway, and others.
• Hands-on experience with Postgres or Oracle, and building relational data stores.
• Experience with Spark clusters, Delta Lake, the Glue Data Catalog, and large-scale data processing.
• Proven track record of end-to-end project delivery and third-party system integrations.
• Solid understanding of microservices, serverless architectures, and distributed computing.
• Skilled in Java, Bash scripting, and search tools like Elasticsearch.
• Proficient in using CI/CD tools (e.g., GitLab, GitHub, AWS CodePipeline).
• Experience working with Infrastructure as Code (IaC) using Terraform.
• Hands-on experience with Docker, containerization, and cloud-native deployments.

Preferred Qualifications:
• AWS Certifications (e.g., AWS Certified Solutions Architect or similar).
• Exposure to Agile/Scrum project methodologies.
• Familiarity with Kubernetes, advanced networking, and cloud security practices.
• Experience managing or collaborating with onshore/offshore teams.

Soft Skills:
• Excellent communication and stakeholder management.
• Strong leadership and problem-solving abilities.
• Team player with a collaborative mindset.
• High ownership and accountability in delivering quality outcomes.

Why Join IGNEK?
• Work on exciting, large-scale digital transformation projects.
• Be part of a people-centric, innovation-driven culture.
• A flexible work environment and opportunities for continuous learning.

How to Apply: Please send your resume and a cover letter detailing your experience to hr@ignek.com
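To illustrate the kind of PySpark/Delta Lake pipeline work this posting describes, here is a minimal ingest sketch, assuming a Spark runtime with the Delta Lake libraries configured; the S3 paths and the order_id column are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_ingest").getOrCreate()

# Hypothetical raw landing zone; a real pipeline would parameterize this.
raw = spark.read.json("s3://example-raw/orders/")

# Deduplicate on a business key and stamp each row with its ingest date.
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("ingest_date", F.current_date())
)

# Append into a date-partitioned Delta table in the curated lake.
(clean.write
      .format("delta")
      .mode("append")
      .partitionBy("ingest_date")
      .save("s3://example-lake/orders/"))
```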
Posted 13 hours ago
10.0 years
0 Lacs
Mohali district, India
On-site
SourceFuse Technologies is hiring a Technical Architect with 10+ years of experience.

Overview: You will work on a high-scale production application that handles thousands of transactions daily, with the goal of re-engineering and evolving it to support millions of transactions. In addition to architecting robust backend systems, you will play a key role in enabling intelligent, functional, and aesthetic user experiences, leveraging the latest in AI and automation to boost team productivity and product intelligence. This role also includes the exploration and integration of Generative AI technologies and AI-enhanced developer tooling to streamline development, testing, and delivery cycles.

Key Responsibilities:
- Collaborate closely with development and delivery teams to enable scalable, high-performance software solutions.
- Participate in client meetings as a technical expert to gather business and technical requirements and translate them into actionable solutions.
- Remain technology-agnostic, with a strong awareness of emerging tools, including AI-based solutions.
- Architect and present technical solutions aligned with business objectives and innovation goals.
- Lead R&D initiatives around Generative AI, AI-driven automation, and productivity tooling to enhance engineering efficiency and code quality.
- Create and maintain technical documentation, architecture diagrams, and AI-assisted design drafts.
- Work cross-functionally with clients and project teams to capture requirements and devise intelligent, future-ready solutions.
- Identify opportunities to integrate AI-based code generation, automated testing, and AI-enhanced observability into the SDLC.
- Mentor teams on the adoption of GenAI tools (e.g., GitHub Copilot, ChatGPT, Amazon CodeWhisperer) and establish governance around their responsible use.
- Drive innovation in architecture by exploring AI/ML APIs, LLM-based recommendation systems, and intelligent decision engines.

Education: More than formal degrees, we're looking for someone who has the skills, curiosity, and initiative to deliver the responsibilities mentioned above, including the capacity to evaluate and leverage AI-driven technologies for real-world challenges.

Skills & Abilities:
- Deep understanding of AWS and other public cloud platforms.
- Expert-level experience in full stack development using Node.js and Angular.
- Proficiency in architecting solutions from the ground up, from concept to production.
- Strong advocate of Test-Driven Development with hands-on implementation.
- Practical knowledge of microservices architecture, patterns, and scalability principles.
- Awareness and practical usage of observability tools and distributed tracing.
- Familiarity with OpenTelemetry and related observability frameworks.
- Experience with cloud-native development, containerization (e.g., Docker), and deployment on Kubernetes.
- Working knowledge of Infrastructure as Code (IaC) tools like Terraform and Helm.
- Exposure to open-source frameworks and cloud-native stacks.
- Knowledge of LoopBack 4 is a plus.

Bonus: Experience using or integrating GenAI tools for tasks such as:
- Code scaffolding and refactoring
- Automated documentation generation
- Unit test case generation
- Intelligent API design
- Semantic search and natural language processing
- Prompt engineering and fine-tuning of LLMs

Experience: 10+ years of relevant experience in software architecture and engineering. At least 1-2 years of practical exposure to AI-enhanced development workflows or GenAI technologies is highly desirable.
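The distributed-tracing requirement above can be illustrated with OpenTelemetry. The posting's stack is Node.js, but the API shape is similar across languages, so here is a minimal sketch in Python, assuming the opentelemetry-sdk package is installed; the service name, span name, and attribute are hypothetical:

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Wire up a tracer that prints spans to stdout; a real deployment would
# export to an OTLP collector instead of the console.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")  # hypothetical service name

with tracer.start_as_current_span("process_order") as span:
    span.set_attribute("order.id", "12345")  # hypothetical attribute
    # ... business logic runs inside the span ...
```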
Posted 13 hours ago
7.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Role: GCP Database Migration Lead
Required Technical Skill Set: GCP Database Migration Lead (Non-Oracle)
Desired Experience Range: 8-10 yrs
Location of Requirement: Kolkata/Delhi
Notice period: Immediate

Job Description:
- 7+ years of experience in database engineering or administration
- 3+ years of experience leading cloud-based database migrations, preferably to GCP
- Deep knowledge of traditional RDBMS (MS SQL Server, MariaDB, Oracle, MySQL)
- Strong hands-on experience with GCP database offerings (Cloud SQL, Spanner, BigQuery, Firestore, etc.)
- Experience with schema conversion tools and strategies (e.g., DMS, SCT, custom ETL)
- Solid SQL expertise and experience with data profiling, transformation, and validation
- Familiarity with IaC tools like Terraform and integration with CI/CD pipelines
- Strong problem-solving and communication skills
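Post-migration data validation of the kind the posting mentions is commonly automated. Here is a minimal row-count comparison sketch, assuming SQLAlchemy 1.4+ with the relevant database drivers installed; the connection URLs and table names are hypothetical:

```python
import sqlalchemy as sa

# Hypothetical source SQL Server and target Cloud SQL (MySQL) endpoints.
SOURCE_URL = "mssql+pyodbc://user:pass@source-host/salesdb?driver=ODBC+Driver+17+for+SQL+Server"
TARGET_URL = "mysql+pymysql://user:pass@cloudsql-host/salesdb"

def row_count(url: str, table: str) -> int:
    # Table names come from a trusted, hard-coded list below, so the
    # f-string interpolation here is safe for this sketch.
    engine = sa.create_engine(url)
    with engine.connect() as conn:
        return conn.execute(sa.text(f"SELECT COUNT(*) FROM {table}")).scalar_one()

for table in ["customers", "orders", "invoices"]:  # hypothetical tables
    src, tgt = row_count(SOURCE_URL, table), row_count(TARGET_URL, table)
    status = "OK" if src == tgt else "MISMATCH"
    print(f"{table}: source={src} target={tgt} {status}")
```

Row counts are only a first-pass check; fuller validation would also compare checksums or sampled rows per table.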
Posted 13 hours ago
6.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Role: Application Tech Lead
Required Technical Skill Set: Application Tech Lead
Desired Experience Range: 8-10 yrs
Location of Requirement: Kolkata/Delhi
Notice period: Immediate

Job Description:
- 6+ years of experience in software development, with 2+ years leading cloud-based projects on GCP
- Deep expertise in GCP services relevant to application development (e.g., Cloud Functions, GKE, App Engine, Pub/Sub)
- Strong programming skills in Java, Python, Go, or Node.js
- Experience with RESTful API design, backend systems, and front-end integration
- Knowledge of cloud security, IAM, monitoring, and cost optimization
- Experience with CI/CD tools (Cloud Build, Jenkins, GitLab CI) and Infrastructure as Code (Terraform, Deployment Manager)
- Familiarity with Agile/Scrum development methodologies
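As a small illustration of the Pub/Sub skill the posting lists, here is a publish sketch assuming the google-cloud-pubsub client library; the project id, topic name, and payload are hypothetical:

```python
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-gcp-project", "order-events")  # hypothetical

# publish() returns a future; result() blocks until the server acks
# and returns the assigned message id.
future = publisher.publish(
    topic_path,
    data=b'{"order_id": "12345", "status": "CREATED"}',
    origin="checkout-service",  # attributes are plain string key/values
)
print(f"Published message id: {future.result()}")
```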
Posted 13 hours ago
8.0 - 10.0 years
0 Lacs
Udaipur, Rajasthan, India
On-site
Company Description: Kansoft Solutions is an IT Consulting and Digital Transformation company specializing in custom software applications and technology-driven business solutions across all platforms.

Role Description: AWS Cloud Architect with CloudFormation and Terraform skillset
- Minimum 8 to 10 years of AWS administration experience involving design, Landing Zone deployment, migration, and optimization.
- Design and develop AWS cloud solutions based on business requirements, following AWS best practices.
- Create architectural blueprints, diagrams, and documentation for cloud infrastructure.
- Hands-on experience with Terraform and CloudFormation automation for AWS resource deployment.
- Automate deployment of Landing Zones via Control Tower and the Well-Architected Framework using AWS accelerators.
- Plan and execute cloud migration strategies from on-prem workloads to AWS, or cloud to cloud.
- In-depth knowledge of AWS services, including EC2, S3, RDS, Lambda, and more.
- Hands-on implementation experience with AWS IaaS (networking, storage, virtual machines).
- Hands-on implementation experience with AWS security, monitoring, and auditing services.
- Experience with AWS VPC, Transit Gateway, Load Balancer, and Route 53 (DNS).
- Experience with AWS SSO implementation and Active Directory tenant integration.
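The CloudFormation automation this role calls for can also be driven programmatically. A minimal sketch using boto3, assuming a local template file and configured AWS credentials; the stack name, region, and template path are hypothetical:

```python
import boto3

cfn = boto3.client("cloudformation", region_name="ap-south-1")

# Hypothetical template describing, e.g., a landing-zone VPC.
with open("vpc.yaml") as f:
    template_body = f.read()

cfn.create_stack(
    StackName="landing-zone-vpc",
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_NAMED_IAM"],  # required if the template creates IAM resources
    Tags=[{"Key": "environment", "Value": "dev"}],
)

# Block until the stack finishes creating; the waiter raises on failure.
waiter = cfn.get_waiter("stack_create_complete")
waiter.wait(StackName="landing-zone-vpc")
print("stack created")
```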
Posted 13 hours ago
8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Role: Python + Microservices
Experience range: 8-10 years
Location: Current location must be Bangalore
NOTE: Candidates interested in the walk-in drive in Bangalore must apply.

Job Description:

Preferred Qualifications:
- Experience with cloud platforms is a plus.
- Familiarity with Python frameworks (Flask, FastAPI, Django).
- Understanding of DevOps practices and tools (Terraform, Jenkins).
- Knowledge of monitoring and logging tools (Prometheus, Grafana, Stackdriver).

Requirements:
- Proven experience as a Python developer, specifically in developing microservices.
- Strong understanding of containerization and orchestration (Docker, Kubernetes).
- Experience with Google Cloud Platform, specifically Cloud Run, Cloud Functions, and other related services.
- Familiarity with RESTful APIs and microservices architecture.
- Knowledge of database technologies (SQL and NoSQL) and data modelling.
- Proficiency in version control systems (Git).
- Experience with CI/CD tools and practices.
- Strong problem-solving skills and the ability to work independently and collaboratively.
- Excellent communication skills, both verbal and written.
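A minimal FastAPI microservice sketch of the kind this role involves, assuming fastapi and uvicorn are installed; the resource model and in-memory store are hypothetical stand-ins for a real backing database:

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="orders-service")

class Order(BaseModel):
    order_id: str
    amount: float

# In-memory store standing in for a real database.
_orders: dict[str, Order] = {}

@app.post("/orders", status_code=201)
def create_order(order: Order) -> Order:
    _orders[order.order_id] = order
    return order

@app.get("/orders/{order_id}")
def get_order(order_id: str) -> Order:
    if order_id not in _orders:
        raise HTTPException(status_code=404, detail="order not found")
    return _orders[order_id]

@app.get("/healthz")
def health() -> dict:
    return {"status": "ok"}
```

Saved as main.py, this runs locally with `uvicorn main:app --reload` and would typically ship in a container behind Cloud Run or Kubernetes.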
Posted 13 hours ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: MLOps Engineer
Urgent: high-priority requirement.
1. Location: Hyderabad/Pune
2. Interview Rounds: 4
3. Contract: 12 Months

About Client: We are a fast-growing boutique data engineering firm that empowers enterprises to manage and harness their data landscape efficiently, leveraging advanced machine learning (ML) methodologies.

Job Overview: We are seeking a highly skilled and motivated MLOps Engineer with 3-5 years of experience to join our engineering team. The ideal candidate should possess a strong foundation in DevOps or software engineering principles with practical exposure to machine learning operational workflows. You will be instrumental in operationalizing ML systems, optimizing the deployment lifecycle, and strengthening the integration between data science and engineering teams.

Required Skills:
• Hands-on experience with MLOps platforms such as MLflow and Kubeflow.
• Proficiency in Infrastructure as Code (IaC) tools like Terraform or Ansible.
• Strong familiarity with monitoring and alerting frameworks (Prometheus, Grafana, Datadog, AWS CloudWatch).
• Solid understanding of microservices architecture, service discovery, and load balancing.
• Excellent programming skills in Python, with experience in writing modular, testable, and maintainable code.
• Proficient in Docker and container-based application deployments.
• Experience with CI/CD tools such as Jenkins or GitLab CI.
• Basic working knowledge of Kubernetes for container orchestration.
• Practical experience with cloud-based ML platforms such as AWS SageMaker, Databricks, or Google Vertex AI.
• Competency in Linux shell scripting and command-line operations.
• Proficiency with Git and version control best practices.
• Foundational knowledge of machine learning principles and typical ML workflow patterns.

Good-to-Have Skills:
• Awareness of security practices specific to ML pipelines, including secure model endpoints and data protection.
• Experience with scripting languages like Bash or PowerShell for automation tasks.
• Exposure to database scripting and data integration pipelines.

Experience & Qualifications:
• 3-5+ years of experience in MLOps, Site Reliability Engineering (SRE), or Software Engineering roles.
• At least 2+ years of hands-on experience working on ML/AI systems in production settings.
• Deep understanding of cloud-native architectures, containerization, and the end-to-end ML lifecycle.
• Bachelor's degree in Computer Science, Software Engineering, or a related technical field.
• Relevant certifications such as AWS Certified DevOps Engineer - Professional are a strong plus.
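Experiment tracking with MLflow, one of the platforms named above, can be sketched briefly. This assumes mlflow and scikit-learn are installed; the experiment name, model, and synthetic dataset are hypothetical:

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

mlflow.set_experiment("churn-model")  # hypothetical experiment name

# Synthetic data standing in for a real training set.
X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run():
    params = {"n_estimators": 100, "max_depth": 5}
    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)

    # Record the run's parameters, metric, and the serialized model artifact.
    mlflow.log_params(params)
    mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))
    mlflow.sklearn.log_model(model, "model")
```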
Posted 13 hours ago
0 years
0 Lacs
India
On-site
Job Description: We are looking for a DevOps Engineer to join our team and help automate, manage, and streamline our development and deployment processes. The ideal candidate will have experience with cloud platforms, CI/CD pipelines, and infrastructure as code (IaC).

Key Responsibilities:
- Design, build, and maintain efficient and reliable CI/CD pipelines.
- Automate infrastructure provisioning using tools like Terraform or CloudFormation.
- Monitor system performance and troubleshoot issues in development and production environments.
- Collaborate with development, QA, and operations teams to ensure smooth releases.
- Implement security best practices across cloud and on-premise environments.
- Manage containerized applications using Docker and orchestration tools like Kubernetes.

Required Skills:
- Experience with cloud platforms (AWS, Azure, or GCP).
- Hands-on with CI/CD tools such as Jenkins, GitLab CI, CircleCI, or Azure DevOps.
- Proficiency in containerization tools like Docker and orchestration using Kubernetes.
- Experience with infrastructure-as-code tools (Terraform, Ansible, Chef, or Puppet).
- Familiarity with monitoring tools (Prometheus, Grafana, ELK Stack, etc.).
- Strong knowledge of Linux systems, scripting (Bash, Python), and Git.

Preferred Qualifications:
- Experience with microservices architecture.
- Familiarity with Agile/Scrum methodologies.
- Knowledge of networking and security fundamentals.
- Relevant certifications (AWS Certified DevOps Engineer, CKA, etc.) are a plus.
Posted 13 hours ago
40.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Job Description
Analyze, design, develop, troubleshoot, and debug software programs for commercial or end-user applications. Write code, complete programming, and perform testing and debugging of applications.

Career Level - IC3

Responsibilities
As a member of the software engineering division, you will perform high-level design based on provided external specifications. Specify, design, and implement minor changes to existing software architecture. Build highly complex enhancements and resolve complex bugs. Build and execute unit tests and unit plans. Review integration and regression test plans created by QA. Communicate with QA and porting engineering as necessary to discuss minor changes to product functionality and to ensure quality and consistency across specific products.

- Work with the team to develop and maintain full stack SaaS solutions.
- Collaborate with engineering and product teams, contribute to the definition of specifications for new features, and own the development of those features.
- Define and implement web services and the application backend microservices.
- Implement and/or assist with the web UI/UX development.
- Be a champion for cloud-native best practices.
- Have a proactive mindset about bug fixes, solving bottlenecks, and addressing performance issues.
- Maintain code quality, organization, and automation.
- Ensure the testing strategy is followed within the team.
- Support the services you build in production.

Essential Skills And Background
- Expert knowledge of Java
- Experience with microservice development at scale
- Experience working with Kafka
- Experience with automated test frameworks at the unit, integration, and acceptance levels
- Use of source code management systems such as Git

Preferred Skills And Background
- Knowledge of issues related to scalable, fault-tolerant architectures
- Knowledge of Python
- Experience with SQL and RDBMS (Oracle and/or MySQL preferred)
- Experience deploying applications in Kubernetes with Helm
- Experience with DevOps tools such as Prometheus and Grafana
- Experience in Agile development methodology
- Experience with Terraform is preferred
- Use of build tools like Gradle and Maven

Qualifications
Career Level - IC3

About Us
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all.

Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.

We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.

Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
Posted 13 hours ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Role: Senior Databricks Engineer / Databricks Technical Lead/ Data Architect
Location: Bangalore, Chennai, Delhi, Pune, Kolkata

Primary Roles And Responsibilities
- Develop Modern Data Warehouse solutions using Databricks and the AWS/Azure stack
- Provide forward-thinking solutions in the data engineering and analytics space
- Collaborate with DW/BI leads to understand new ETL pipeline development requirements
- Triage issues to find gaps in existing pipelines and fix them
- Work with the business to understand reporting-layer needs and develop data models to fulfill them
- Help junior team members resolve issues and technical challenges
- Drive technical discussions with client architects and team members
- Orchestrate data pipelines via the Airflow scheduler

Skills And Qualifications
- Bachelor's and/or master's degree in computer science, or equivalent experience
- Must have 6+ years of total IT experience and 3+ years of experience in data warehouse/ETL projects
- Deep understanding of Star and Snowflake dimensional modelling
- Strong knowledge of data management principles
- Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture
- Hands-on experience in SQL, Python, and Spark (PySpark)
- Must have experience with the AWS/Azure stack
- ETL with batch and streaming (Kinesis) is desirable
- Experience building ETL/data warehouse transformation processes
- Experience with Apache Kafka for streaming/event-based data
- Experience with other open-source big data products, e.g. Hadoop (incl. Hive, Pig, Impala)
- Experience with open-source non-relational/NoSQL data repositories (incl. MongoDB, Cassandra, Neo4j)
- Experience working with structured and unstructured data, including imaging and geospatial data
- Experience working in a DevOps environment with tools such as Terraform, CircleCI, and Git
- Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, and troubleshooting
- Databricks Certified Data Engineer Associate/Professional certification (desirable)
- Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects
- Experience working in Agile methodology
- Strong verbal and written communication skills
- Strong analytical and problem-solving skills with high attention to detail

Mandatory Skills: Python / PySpark / Spark with Azure/AWS Databricks
Skills: neo4j, pig, mongodb, pl/sql, architect, terraform, hadoop, pyspark, impala, apache kafka, adfs, etl, data warehouse, spark, azure, databricks, rdbms, cassandra, aws, unix shell scripting, circleci, python, azure synapse, hive, git, kinesis, sql
Posted 13 hours ago
2.0 - 4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Cohesity is the leader in AI-powered data security. Over 13,600 enterprise customers, including over 85 of the Fortune 100 and nearly 70% of the Global 500, rely on Cohesity to strengthen their resilience while providing Gen AI insights into their vast amounts of data. Formed from the combination of Cohesity with Veritas' enterprise data protection business, the company's solutions secure and protect data on-premises, in the cloud, and at the edge. Backed by NVIDIA, IBM, HPE, Cisco, AWS, Google Cloud, and others, Cohesity is headquartered in Santa Clara, CA, with offices around the globe. We've been named a Leader by multiple analyst firms and have been globally recognized for Innovation, Product Strength, Simplicity in Design, and our culture.

Want to join the leader in AI-powered data security? Passionate about defending the world's data? Join Cohesity!

Our passionate and highly skilled engineering team is proficient in building comprehensive data protection solutions that protect the data of large enterprise customers across various on-premises and cloud environments. As a developer, you will be focused on developing and enhancing the deployment and upgrade experience for large NetBackup deployments involving multiple hosts, making it seamless.

How You'll Spend Your Time Here:
- Collaborate with stakeholders and team members to understand customer requirements and use cases.
- Brainstorm, design, and implement robust and scalable deployment automation solutions, ensuring timely delivery per release milestones.
- Ensure high-quality output with diligent code reviews, thorough unit/automation testing, and stakeholder demos.
- Analyze, troubleshoot, and resolve complex issues found during internal testing and customer usage.

WE'D LOVE TO TALK TO YOU IF YOU HAVE MANY OF THE FOLLOWING:
- Solid understanding of Windows/Linux operating systems and networking fundamentals.
- Proficiency and hands-on development experience (2 to 4 years) in Java, network programming, RESTful web services, and exposure to Python.
- Exposure to Ansible and Terraform.
- Strong coding, analytical, debugging, and troubleshooting skills.
- Understanding of cloud, data security, management, and protection concepts is a big plus.
- Highly motivated and passionate problem-solver who can dive deep to solve complex problems and build quality products.
- Strong collaborator with great communication skills.

Data Privacy Notice For Job Candidates: For information on personal data processing, please see our Privacy Policy.

In-Office Expectations: Cohesity employees who are within a reasonable commute (e.g. within a forty-five (45) minute average travel time) work out of our core offices 2-3 days a week of their choosing.
Posted 13 hours ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
This is an incredible opportunity to be part of a company that has been at the forefront of AI and high-performance data storage innovation for over two decades. DataDirect Networks (DDN) is a global market leader renowned for powering many of the world's most demanding AI data centers, in industries ranging from life sciences and healthcare to financial services, autonomous cars, government, academia, research, and manufacturing.

"DDN's A3I solutions are transforming the landscape of AI infrastructure." - IDC

"The real differentiator is DDN. I never hesitate to recommend DDN. DDN is the de facto name for AI storage in high performance environments." - Marc Hamilton, VP, Solutions Architecture & Engineering, NVIDIA

DDN is the global leader in AI and multi-cloud data management at scale. Our cutting-edge data intelligence platform is designed to accelerate AI workloads, enabling organizations to extract maximum value from their data. With a proven track record of performance, reliability, and scalability, DDN empowers businesses to tackle the most challenging AI and data-intensive workloads with confidence. Our success is driven by our unwavering commitment to innovation, customer-centricity, and a team of passionate professionals who bring their expertise and dedication to every project. This is a chance to make a significant impact at a company that is shaping the future of AI and data management. Our commitment to innovation, customer success, and market leadership makes this an exciting and rewarding role for a driven professional looking to make a lasting impact in the world of AI and data storage.

About the Role
You will lead the design and implementation of scalable, secure, and highly available infrastructure across both cloud and on-premise environments. This role demands a deep understanding of Linux systems, infrastructure automation, and performance tuning, especially in high-performance computing (HPC) setups. As a technical leader, you'll collaborate closely with development, QA, and operations teams to drive DevOps best practices, tool adoption, and overall infrastructure reliability.

Key Responsibilities:
• Design, build, and maintain Linux-based infrastructure across cloud (primarily AWS) and physical data centers.
• Implement and manage Infrastructure as Code (IaC) using tools such as CloudFormation, Terraform, Ansible, and Chef.
• Develop and manage CI/CD pipelines using Jenkins, Git, and Gerrit to support continuous delivery.
• Automate provisioning, configuration, and software deployments with Bash, Python, Ansible, etc.
• Set up and manage monitoring/logging systems like Prometheus, Grafana, and the ELK stack.
• Optimize system performance and troubleshoot critical infrastructure issues related to networking, filesystems, and services.
• Configure and maintain storage and filesystems including ext4, xfs, LVM, NFS, iSCSI, and potentially Lustre.
• Manage PXE boot infrastructure using Cobbler/Kickstart, and create/maintain custom ISO images.
• Implement infrastructure security best practices, including IAM, encryption, and firewall policies.
• Act as a DevOps thought leader, mentor junior engineers, and recommend tooling and process improvements.
• Maintain clear and concise documentation of systems, processes, and best practices.
• Collaborate with cross-functional teams to ensure reliable and scalable application delivery.

Required Skills & Experience:
• 5+ years of experience in DevOps, SRE, or Infrastructure Engineering.
• Deep expertise in Linux system administration, especially around storage, networking, and process control.
• Strong proficiency in scripting (e.g., Bash, Python) and configuration management tools (Chef, Ansible).
• Proven experience in managing on-premise data center infrastructure, including provisioning and PXE boot tools.
• Familiar with CI/CD systems, Agile workflows, and Git-based source control (Gerrit/GitHub).
• Experience with cloud services, preferably AWS, and hybrid cloud models.
• Knowledge of virtualization (e.g., KVM, Vagrant) and containerization (Docker, Podman, Kubernetes).
• Excellent communication, collaboration, and documentation skills.

Nice to Have:
• Hands-on experience with Lustre or other distributed/parallel filesystems.
• Experience in HPC (High-Performance Computing) environments.
• Familiarity with Kubernetes deployments in hybrid clusters.
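The monitoring duties above often involve writing small custom exporters. A minimal Prometheus exporter sketch, assuming the prometheus_client package; the metric name, port, and mount point are hypothetical, and the random value stands in for a real os.statvfs() reading:

```python
import random
import time

from prometheus_client import Gauge, start_http_server

# Hypothetical gauge tracking free space on a mount point.
DISK_FREE = Gauge(
    "node_custom_disk_free_bytes",
    "Free disk space in bytes",
    ["mountpoint"],
)

if __name__ == "__main__":
    start_http_server(9101)  # exposes a scrape target at :9101/metrics
    while True:
        # A real exporter would compute this from os.statvfs("/data").
        DISK_FREE.labels(mountpoint="/data").set(random.uniform(1e9, 5e9))
        time.sleep(15)
```

Prometheus would then scrape this endpoint on its usual interval, with alerting rules layered on top in Alertmanager or Grafana.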
Posted 14 hours ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Role: Senior Databricks Engineer / Databricks Technical Lead/ Data Architect
Location: Bangalore, Chennai, Delhi, Pune, Kolkata

Primary Roles And Responsibilities
- Develop Modern Data Warehouse solutions using Databricks and the AWS/Azure stack
- Provide forward-thinking solutions in the data engineering and analytics space
- Collaborate with DW/BI leads to understand new ETL pipeline development requirements
- Triage issues to find gaps in existing pipelines and fix them
- Work with the business to understand reporting-layer needs and develop data models to fulfill them
- Help junior team members resolve issues and technical challenges
- Drive technical discussions with client architects and team members
- Orchestrate data pipelines via the Airflow scheduler

Skills And Qualifications
- Bachelor's and/or master's degree in computer science, or equivalent experience
- Must have 6+ years of total IT experience and 3+ years of experience in data warehouse/ETL projects
- Deep understanding of Star and Snowflake dimensional modelling
- Strong knowledge of data management principles
- Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture
- Hands-on experience in SQL, Python, and Spark (PySpark)
- Must have experience with the AWS/Azure stack
- ETL with batch and streaming (Kinesis) is desirable
- Experience building ETL/data warehouse transformation processes
- Experience with Apache Kafka for streaming/event-based data
- Experience with other open-source big data products, e.g. Hadoop (incl. Hive, Pig, Impala)
- Experience with open-source non-relational/NoSQL data repositories (incl. MongoDB, Cassandra, Neo4j)
- Experience working with structured and unstructured data, including imaging and geospatial data
- Experience working in a DevOps environment with tools such as Terraform, CircleCI, and Git
- Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, and troubleshooting
- Databricks Certified Data Engineer Associate/Professional certification (desirable)
- Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects
- Experience working in Agile methodology
- Strong verbal and written communication skills
- Strong analytical and problem-solving skills with high attention to detail

Mandatory Skills: Python / PySpark / Spark with Azure/AWS Databricks
Skills: neo4j, pig, mongodb, pl/sql, architect, terraform, hadoop, pyspark, impala, apache kafka, adfs, etl, data warehouse, spark, azure, databricks, rdbms, cassandra, aws, unix shell scripting, circleci, python, azure synapse, hive, git, kinesis, sql
Posted 14 hours ago
40.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Description
Analyze, design, develop, troubleshoot, and debug software programs for commercial or end-user applications. Write code, complete programming, and perform testing and debugging of applications.

Career Level - IC3

Responsibilities
As a member of the software engineering division, you will perform high-level design based on provided external specifications. Specify, design, and implement minor changes to existing software architecture. Build highly complex enhancements and resolve complex bugs. Build and execute unit tests and unit plans. Review integration and regression test plans created by QA. Communicate with QA and porting engineering as necessary to discuss minor changes to product functionality and to ensure quality and consistency across specific products.

- Work with the team to develop and maintain full stack SaaS solutions.
- Collaborate with engineering and product teams, contribute to the definition of specifications for new features, and own the development of those features.
- Define and implement web services and the application backend microservices.
- Implement and/or assist with the web UI/UX development.
- Be a champion for cloud-native best practices.
- Have a proactive mindset about bug fixes, solving bottlenecks, and addressing performance issues.
- Maintain code quality, organization, and automation.
- Ensure the testing strategy is followed within the team.
- Support the services you build in production.

Essential Skills And Background
- Expert knowledge of Java
- Experience with microservice development at scale
- Experience working with Kafka
- Experience with automated test frameworks at the unit, integration, and acceptance levels
- Use of source code management systems such as Git

Preferred Skills And Background
- Knowledge of issues related to scalable, fault-tolerant architectures
- Knowledge of Python
- Experience with SQL and RDBMS (Oracle and/or MySQL preferred)
- Experience deploying applications in Kubernetes with Helm
- Experience with DevOps tools such as Prometheus and Grafana
- Experience in Agile development methodology
- Experience with Terraform is preferred
- Use of build tools like Gradle and Maven

Qualifications
Career Level - IC3

About Us
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all.

Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.

We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.

Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
Posted 14 hours ago
0 years
0 Lacs
Greater Chennai Area
On-site
Job Responsibilities:
The candidate should be an IaC (Infrastructure as Code) developer for this role, with exposure to creating and updating AWS services (S3, EC2, SQS, CloudFormation, Lambda, KMS, ECS, ECR, API Gateway, Secrets Manager, etc.) using the CDK (Cloud Development Kit). The candidate should also know GitHub Actions and Python.

Required Skills:
- CDK
- CloudFormation
- Lambda
- CodePipeline
- SQS
- AWS (EC2, KMS, Secrets Manager, SSM, etc.)
- GitHub
- Python
- Terraform

Nice to Have:
- Shell scripting
- Strong knowledge of CI/CD
- Experience with container orchestration
- Development skills in JavaScript, TypeScript, and Python
- GitHub Actions
- Knowledge of SQL & NoSQL databases
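A minimal CDK sketch of the kind of IaC work described above, assuming CDK v2 with the aws-cdk-lib Python package installed; the stack and construct names are hypothetical:

```python
from aws_cdk import App, Stack, aws_s3 as s3, aws_sqs as sqs
from constructs import Construct

class IngestStack(Stack):
    """Hypothetical stack: a versioned bucket plus a work queue."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        s3.Bucket(self, "LandingBucket", versioned=True)
        sqs.Queue(self, "WorkQueue")

app = App()
IngestStack(app, "ingest-stack")
app.synth()  # `cdk deploy` turns the synthesized CloudFormation template into resources
```

In a CI setup like the one the posting implies, a GitHub Actions workflow would typically run `cdk synth` and `cdk deploy` against this app on merge.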
Posted 14 hours ago
5.0 years
0 Lacs
Greater Chennai Area
Remote
Your work days are brighter here.

At Workday, it all began with a conversation over breakfast. When our founders met at a sunny California diner, they came up with an idea to revolutionize the enterprise software market. And when we began to rise, one thing that really set us apart was our culture. A culture which was driven by our value of putting our people first. And ever since, the happiness, development, and contribution of every Workmate is central to who we are. Our Workmates believe a healthy employee-centric, collaborative culture is the essential mix of ingredients for success in business. That's why we look after our people, communities and the planet while still being profitable. Feel encouraged to shine, however that manifests: you don't need to hide who you are. You can feel the energy and the passion, it's what makes us unique. Inspired to make a brighter work day for all and transform with us to the next stage of our growth journey? Bring your brightest version of you and have a brighter work day here.

At Workday, we value our candidates' privacy and data security. Workday will never ask candidates to apply to jobs through websites that are not Workday Careers. Please be aware of sites that may ask you to input your data in connection with a job posting that appears to be from Workday but is not. In addition, Workday will never ask candidates to pay a recruiting fee, or pay for consulting or coaching services, in order to apply for a job at Workday.

About The Team
The Database Engineering team at Workday designs, builds, develops, maintains, and supervises database infrastructure, ensuring that all of Workday's data-related needs are met with dedication and scale, while providing the high availability that our customers expect from Workday. We are a fast-paced and diverse team of database specialists and software engineers responsible for designing, automating, managing, and running databases on private and public cloud platforms. We are looking for individuals who have strong experience in backend development specializing in database-as-a-service, with deep experience in open-source database technologies like MySQL, PostgreSQL, CloudSQL, and other cloud-native database technologies. This role will suit someone who is adaptable, flexible, and able to succeed within an open, collaborative peer environment. We would love to hear from you if you have hands-on experience in designing, developing, and managing enterprise-level database systems with complex interdependencies and a key focus on high availability, clustering, security, performance, and scalability requirements!

Our team is the driving force behind all Workday operations, providing crucial support for all Lifecycle Engineering Operations. We ensure that Workday's maintenance and releases proceed without a hitch and are at the forefront of accelerating the transition to the public cloud. We enable Workday's customer success: 60% of Fortune 500 companies, 8,000+ customers, 55M+ workers.

About The Role
Are you passionate about database technologies? Do you love to solve complex, large-scale database challenges in the world today using code and as a service? If yes, then read on! This position is responsible for managing and monitoring Workday's production database infrastructure, with a focus on automation to improve availability and scalability in our production environments. You will work with developers to improve database resiliency and implement auto-remediation techniques, provide support for large-scale database instances across production, non-production, and development environments, and serve in a rotational on-call and weekly maintenance role supporting database infrastructure.

About You
Basic Qualifications:
- 5+ years of experience in managing and automating mission-critical production workloads on MySQL, PostgreSQL, CloudSQL, and other cloud-native databases.
- Hands-on experience with at least one cloud technology: AWS, GCP, and/or Azure.
- Experience managing clustered, highly available database services deployed on different flavors of Linux.
- Experience in backend development using modern programming languages (Python, Golang).
- Bachelor's degree in a computer-related field or equivalent work experience.

Other Qualifications:
- Knowledge of automation tools such as Terraform, Chef, GitHub, Jira, Confluence, and Ansible.
- Working experience in modern DevOps technologies and container orchestration (Kubernetes, Docker), service deployment, monitoring, and scaling.
- Strong scripting experience in multiple languages such as shell, Python, Ruby, etc.
- Experience with database architecture, design, replication, clustering, and HA/DR.
- Strong analytical, debugging, and interpersonal skills.
- Self-starter, highly motivated, and able to learn quickly.
- Excellent team player with strong collaboration, analytical, verbal, and written communication skills.

Our Approach to Flexible Work
With Flex Work, we're combining the best of both worlds: in-person time and remote. Our approach enables our teams to deepen connections, maintain a strong community, and do their best work. We know that flexibility can take shape in many ways, so rather than a number of required days in-office each week, we simply spend at least half (50%) of our time each quarter in the office or in the field with our customers, prospects, and partners (depending on role). This means you'll have the freedom to create a flexible schedule that caters to your business, team, and personal needs, while being intentional to make the most of time spent together. Those in our remote "home office" roles also have the opportunity to come together in our offices for important moments that matter.

Are you being referred to one of our roles? If so, ask your connection at Workday about our Employee Referral process!
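Automating database health checks like the ones this role covers might look like the following replication-lag sketch, assuming the PyMySQL package; the host, credentials, and 60-second threshold are hypothetical, and note that servers older than MySQL 8.0.22 use SHOW SLAVE STATUS / Seconds_Behind_Master instead:

```python
import pymysql

# Hypothetical replica endpoint and monitoring credentials.
conn = pymysql.connect(
    host="replica-1.example.internal",
    user="monitor",
    password="secret",
    cursorclass=pymysql.cursors.DictCursor,  # rows come back as dicts
)

with conn.cursor() as cur:
    # MySQL 8.0.22+ syntax; hedge for older servers as noted above.
    cur.execute("SHOW REPLICA STATUS")
    status = cur.fetchone() or {}

lag = status.get("Seconds_Behind_Source")
if lag is None or lag > 60:
    print(f"ALERT: replica unhealthy (lag={lag})")
else:
    print(f"replica healthy, {lag}s behind source")
```

In production such a check would feed an alerting pipeline or an auto-remediation workflow rather than print to stdout.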
Posted 14 hours ago
40.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Job Description
Analyze, design, develop, troubleshoot, and debug software programs for commercial or end-user applications. Write code, complete programming, and perform testing and debugging of applications.

Career Level - IC3

Responsibilities
As a member of the software engineering division, you will perform high-level design based on provided external specifications. Specify, design, and implement minor changes to existing software architecture. Build highly complex enhancements and resolve complex bugs. Build and execute unit tests and unit plans. Review integration and regression test plans created by QA. Communicate with QA and porting engineering as necessary to discuss minor changes to product functionality and to ensure quality and consistency across specific products.

- Work with the team to develop and maintain full stack SaaS solutions.
- Collaborate with engineering and product teams, contribute to the definition of specifications for new features, and own the development of those features.
- Define and implement web services and the application backend microservices.
- Implement and/or assist with the web UI/UX development.
- Be a champion for cloud-native best practices.
- Have a proactive mindset about bug fixes, solving bottlenecks, and addressing performance issues.
- Maintain code quality, organization, and automation.
- Ensure the testing strategy is followed within the team.
- Support the services you build in production.

Essential Skills And Background
- Expert knowledge of Java
- Experience with microservice development at scale
- Experience working with Kafka
- Experience with automated test frameworks at the unit, integration, and acceptance levels
- Use of source code management systems such as Git

Preferred Skills And Background
- Knowledge of issues related to scalable, fault-tolerant architectures
- Knowledge of Python
- Experience with SQL and RDBMS (Oracle and/or MySQL preferred)
- Experience deploying applications in Kubernetes with Helm
- Experience with DevOps tools such as Prometheus and Grafana
- Experience in Agile development methodology
- Experience with Terraform is preferred
- Use of build tools like Gradle and Maven

Qualifications
Career Level - IC3

About Us
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all.

Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.

We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.

Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
Posted 14 hours ago
5.0 years
0 Lacs
India
Remote
Job Title: DevOps Engineer
Experience: 5+ Years
Type: Contract (Short Term)
Location: Remote
Work Timing: UAE Time Zone

Job Description
We are seeking a skilled and motivated DevOps Engineer to join our team on a short-term contract basis. You'll play a critical role in automating and streamlining operations, building and maintaining tools for deployment and monitoring, and ensuring the reliability and performance of our environments.

Responsibilities:
- Automate infrastructure using tools like Terraform, Ansible, or CloudFormation.
- Collaborate across teams to ensure scalability, performance, and availability.
- Monitor system performance and troubleshoot issues across application, database, and infrastructure layers.
- Implement and manage container orchestration tools like Kubernetes or Docker Swarm.
- Follow and enforce security best practices in CI/CD and infrastructure processes.
- Manage and maintain cloud infrastructure on AWS, Azure, or GCP.
- Develop and maintain scripts/tools to enhance operational efficiency.
- Design, implement, and maintain CI/CD pipelines for multiple applications.

Skills and Requirements:
- CI/CD tools: Jenkins, GitLab CI, Azure DevOps, CircleCI
- Containers & orchestration: Docker, Kubernetes
- Cloud: AWS, Azure, GCP
- IaC & automation: Terraform, Ansible, CloudFormation
- Scripting: Bash, Python, Groovy
- Monitoring: Prometheus, Grafana, ELK Stack, Splunk
- Configuration management: Puppet, Chef
- OS & networking: Linux, system administration, security
- Agile & DevOps practices
Posted 14 hours ago
6.0 years
0 Lacs
India
Remote
Location: Any metropolitan city
Experience: 6+ years
Key Focus: Java, DevOps, CI/CD, Docker, Kubernetes, Terraform

About Us
MyRemoteTeam, Inc is a fast-growing distributed workforce enabler, helping companies scale with top global talent. We empower businesses by providing world-class software engineers, operations support, and infrastructure to help them grow faster and better.

We are looking for a Senior DevOps Engineer with Java programming experience to join our team, enabling our customers' success. This is a DevOps role with development experience.

Requirements:
- At least 6+ years of solid experience.
- BS or MS in Computer Science, Software Engineering, or a related technical field, or equivalent practical experience.
- 2+ years of experience as a full-stack engineer with strong proficiency in Java for backend development, Maven, and OpenRewrite for automated code upgrades.
- Expertise in building, architecting, and deploying scalable, secure, and high-performance full-stack applications.
- Some experience designing and implementing RESTful APIs and microservices using Spring Boot.
- Experience designing infrastructure on AWS, including considerations for scalability, resilience, and cost-efficiency.
- Solid knowledge of core AWS services (e.g., EC2, S3, RDS, Lambda, API Gateway, CloudFormation, ECS/EKS) and deploying applications in cloud environments.
- Experience with DevOps technologies, including automation, CI/CD, and configuration management.
- Hands-on experience with IaC, preferably using Terraform.
- Very good knowledge of container technologies like Docker and Kubernetes.
- Knowledge of CI/CD pipelines, preferably using Jenkins and Argo Workflows, and experience automating build and deployment processes.
- Agile mindset, with a strong ability to collaborate in a cross-functional environment and mentor junior engineers.
- Excellent communication skills, with a commitment to clear, transparent, and proactive collaboration.
- Fluency in English (mandatory).
Posted 14 hours ago
6.0 years
0 Lacs
India
Remote
Location: Any metropolitan city
Experience: 6+ years
Key Focus: Python, PostgreSQL, FastAPI, DevOps, CI/CD, AWS, Kubernetes, and Terraform

About Us
MyRemoteTeam, Inc is a fast-growing distributed workforce enabler, helping companies scale with top global talent. We empower businesses by providing world-class software engineers, operations support, and infrastructure to help them grow faster and better.

We are looking for a Senior Python AWS Developer to join our team, enabling our customers' success.

Key Responsibilities:
- Participate in solution investigation, estimations, planning, and alignment with other teams.
- Design, implement, and deliver new features for the Personalization Engine.
- Partner with the product and design teams to understand user needs and translate them into high-quality content solutions and features.
- Promote and implement test automation (e.g., unit tests, integration tests); see the sketch after this posting.
- Build and maintain CI/CD pipelines for continuous integration, development, testing, and deployment.
- Deploy applications on the cloud using technologies such as Docker, Kubernetes, AWS, and Terraform.
- Work closely with the team in an agile and collaborative environment. This will involve code reviews, pair programming, knowledge sharing, and incident coordination.
- Maintain existing applications and reduce technical debt.

Qualifications (must have):
- 6+ years of experience in software development is preferred.
- Experience with Python.
- Experience with PostgreSQL.
- Experience with FastAPI.
- Good understanding of data structures and clean code.
- Able to understand and apply design patterns.
- Interest in the DevOps philosophy.
- Willing to learn on the job.
- Experience with relational and non-relational databases.
- Empathetic and able to easily build relationships.
- Good verbal and written communication skills.
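The test-automation responsibility above pairs naturally with FastAPI's test client. A minimal pytest sketch, assuming fastapi, httpx, and pytest are installed; the endpoint is hypothetical:

```python
from fastapi import FastAPI
from fastapi.testclient import TestClient

app = FastAPI()

@app.get("/items/{item_id}")
def read_item(item_id: int) -> dict:
    # Hypothetical endpoint; a real service would hit PostgreSQL here.
    return {"item_id": item_id}

client = TestClient(app)

def test_read_item():
    # Exercise the route in-process, without starting a server.
    resp = client.get("/items/42")
    assert resp.status_code == 200
    assert resp.json() == {"item_id": 42}
```

Running `pytest` on this file executes the test; in a CI/CD pipeline the same command would gate merges and deployments.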
Posted 14 hours ago
0 years
0 Lacs
Thiruvananthapuram, Kerala, India
On-site
We are seeking an experienced Azure DevOps Engineer to manage and optimize our cloud infrastructure, CI/CD pipelines, version control, and platform automation. The ideal candidate will be responsible for ensuring efficient deployments, security compliance, and operational reliability. This role requires collaboration with development, QA, and DevOps teams to enhance software delivery and infrastructure management.

Key Responsibilities:

1. Infrastructure Management
β’ Design and manage Azure-based infrastructure for scalable and resilient applications.
β’ Implement and manage Azure Container Apps to support a microservices-based architecture.

2. CI/CD Pipelines
β’ Build and maintain CI/CD pipelines using GitHub Actions or equivalent tools.
β’ Automate deployment workflows to ensure quick and reliable application delivery.

3. Version Control and Collaboration
β’ Manage GitHub repositories, branching strategies, and pull request workflows.
β’ Ensure repository compliance and enforce best practices for source control.

4. Platform Automation
β’ Develop scripts and tooling to automate repetitive tasks and improve efficiency.
β’ Use Infrastructure as Code (IaC) tools such as Terraform or Bicep for resource provisioning.

5. Monitoring and Optimization
β’ Set up monitoring and alerting for platform reliability using Azure Monitor and Application Insights.
β’ Analyze performance metrics and implement optimizations for cost and efficiency.

6. Collaboration and Support
β’ Work closely with development, DevOps, and QA teams to streamline deployment processes.
β’ Troubleshoot and resolve issues in production and non-production environments.

7. GitHub Management
β’ Manage GitHub repositories, including permissions, branch policies, and pull request workflows.
β’ Implement GitHub Actions for automated testing, builds, and deployments.
β’ Enforce security compliance through GitHub Advanced Security features (e.g., secret scanning, Dependabot).
β’ Design and implement branching strategies to support collaborative software development.
β’ Maintain GitHub templates for issues, pull requests, and contributing guidelines.
β’ Monitor repository usage, optimize workflows, and ensure scalability of GitHub services.

8. Operational Support
β’ Maintain pipeline health and resolve incidents related to deployment and infrastructure.
β’ Address defects, validate certificates, and ensure platform consistency.
β’ Resolve issues with offline services, manage private runners, and apply security patches.
β’ Monitor page performance using tools like Lighthouse.
β’ Manage server maintenance, repository infrastructure, and access control.

9. Pipeline Development
β’ Develop reusable workflows for builds, deployments, SonarQube integrations, Jira integrations, release notes, notifications, and reporting.
β’ Implement branching and versioning management strategies.
β’ Identify pipeline failures and develop automated recovery mechanisms.
β’ Customize configurations for various projects (Mobile, Leapfrog, AEM/Hybris).

10. Testing Integration
β’ Implement automated testing, feedback loops, and quality gates.
β’ Manage SonarQube configurations, rulesets, and runner maintenance.
β’ Maintain the SonarQube EE deployment in Azure Container Apps.
β’ Configure and integrate security tools like Dependabot and Snyk with Jira.

11. Work Collaboration Integration
β’ Integrate JIRA for automatic ticket generation, story validation, and release management.
β’ Configure Teams for API management, channels, and chat management.
β’ Set up email alerting mechanisms.
β’ Support IFS/CR process integration.

Required Skills & Qualifications:
β’ Cloud Platforms: Azure (Azure Container Apps, Azure Monitor, Application Insights).
β’ CI/CD Tools: GitHub Actions, Terraform, Bicep.
β’ Version Control: GitHub repository management, branching strategies, pull request workflows.
β’ Security & Compliance: GitHub Advanced Security, Dependabot, Snyk.
β’ Automation & Scripting: Terraform, Bicep, shell scripting.
β’ Monitoring & Performance: Azure Monitor, Lighthouse.
β’ Testing & Quality Assurance: SonarQube, automated testing.
β’ Collaboration Tools: JIRA, Teams, email alerting.

Preferred Qualifications:
β’ Experience in microservices architecture and containerized applications.
β’ Strong understanding of DevOps methodologies and best practices.
β’ Excellent troubleshooting skills for CI/CD pipelines and infrastructure issues.
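As a rough illustration of the branch-policy enforcement described above (not a prescribed implementation), the Python sketch below applies a protection rule through the GitHub REST API; the organisation, repository, branch, and token are all placeholders:

import requests  # third-party HTTP client

GITHUB_API = "https://api.github.com"
OWNER, REPO, BRANCH = "example-org", "example-repo", "main"  # hypothetical names
TOKEN = "ghp_..."  # personal access token with repo admin scope (placeholder)

headers = {
    "Authorization": f"Bearer {TOKEN}",
    "Accept": "application/vnd.github+json",
}

# Require a passing status check and one approving review before merging.
protection = {
    "required_status_checks": {"strict": True, "contexts": ["build"]},
    "enforce_admins": True,
    "required_pull_request_reviews": {"required_approving_review_count": 1},
    "restrictions": None,
}

resp = requests.put(
    f"{GITHUB_API}/repos/{OWNER}/{REPO}/branches/{BRANCH}/protection",
    headers=headers,
    json=protection,
)
resp.raise_for_status()
print("Branch protection applied:", resp.status_code)

Requiring a passing check plus at least one approving review is a common baseline; real policies vary per repository.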
Posted 14 hours ago
3.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
About Us: Paytm is India's leading mobile payments and financial services distribution company. A pioneer of the mobile QR payments revolution in India, Paytm builds technologies that help small businesses with payments and commerce. Paytm's mission is to serve half a billion Indians and bring them into the mainstream economy with the help of technology.

About the role: As a MySQL database administrator (DBA), you will be responsible for the performance, integrity, and security of our databases. You'll be involved in the planning and development of the database, as well as in troubleshooting any issues on behalf of the users.

Requirements:
- 3 to 6 years of experience
- Working knowledge of MySQL, AWS RDS, and AWS Aurora is a must
- Replication
- AWS administration
- User management
- Machine creation (manual or via Terraform)
- AMI creation
- Backup and restoration

Why join us:
β A collaborative, output-driven program that brings cohesiveness across businesses through technology
β Improve the average revenue per user by increasing cross-sell opportunities
β Solid 360-degree feedback from your peer teams on your support of their goals
β Respect that is earned, not demanded, from your peers and manager

Compensation: If you are the right fit, we believe in creating wealth for you. With 500 mn+ registered users, 21 mn+ merchants, and depth of data in our ecosystem, we are in a unique position to democratize credit for deserving consumers and merchants, and we are committed to it. India's largest digital lending story is brewing here. It's your opportunity to be a part of the story!
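As one illustration of the backup and AMI duties listed above (a sketch, not a prescribed implementation), the following boto3 snippet snapshots an RDS instance and images an EC2 machine; all identifiers and the region are placeholders:

import boto3
from datetime import datetime, timezone

stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M")

# Take a manual snapshot of an RDS MySQL instance (identifier is a placeholder).
rds = boto3.client("rds", region_name="ap-south-1")
rds.create_db_snapshot(
    DBSnapshotIdentifier=f"orders-db-{stamp}",
    DBInstanceIdentifier="orders-db",  # hypothetical instance name
)

# Create an AMI from a running EC2 instance without rebooting it.
ec2 = boto3.client("ec2", region_name="ap-south-1")
image = ec2.create_image(
    InstanceId="i-0123456789abcdef0",  # placeholder instance id
    Name=f"app-server-{stamp}",
    NoReboot=True,
)
print("AMI requested:", image["ImageId"])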
Posted 15 hours ago
6.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Role: Linux Administrator
Experience: 6+ Years
Location: Chennai
Mandatory: Linux, GCP, AWS

JD:

Experience:
o 6+ years of experience in cloud security, with a focus on enterprise product software in the cloud.
o At least 3+ years of hands-on experience with major cloud platforms (AWS, Microsoft Azure, or Google Cloud Platform).
o Proven experience with securing enterprise software applications and cloud infrastructures.
o Strong background in securing complex, large-scale software environments with a focus on infrastructure security, data security, and application security.
o Hands-on experience with the OWASP Top 10 and integrating security measures into cloud applications.
o Experience with hybrid cloud environments and securing workloads that span on-premises and public cloud platforms.

Technical Skills:
o In-depth experience with cloud service models (IaaS, PaaS, SaaS) and cloud security tools (e.g., AWS Security Hub, Azure Security Center, GCP Security Command Center).
o Expertise in securing enterprise applications, including web services, APIs, and microservices deployed in the cloud.
o Strong experience with network security, encryption techniques, IAM policies, security automation, and vulnerability management in cloud environments.
o Familiarity with container security (Docker, Kubernetes) and serverless computing security.
o Hands-on experience with Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or similar tools.
o Knowledge of regulatory compliance requirements such as SOC 2, GDPR, HIPAA, and how they apply to enterprise software hosted in the cloud.
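Security automation of the kind listed above often starts with pulling active findings from a tool such as AWS Security Hub; here is a minimal boto3 sketch, with the region and filters chosen purely for illustration:

import boto3

# Fetch active, high-severity findings from AWS Security Hub.
securityhub = boto3.client("securityhub", region_name="ap-south-1")

paginator = securityhub.get_paginator("get_findings")
filters = {
    "SeverityLabel": [{"Value": "HIGH", "Comparison": "EQUALS"}],
    "RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}],
}

# Page through results and print each finding with its first affected resource.
for page in paginator.paginate(Filters=filters):
    for finding in page["Findings"]:
        print(finding["Title"], "->", finding["Resources"][0]["Id"])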
Posted 15 hours ago
4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Role: Java Developer - Software Engineer
Experience: 4-9 Years
Location: Chennai (HYBRID)
Interview: F2F
Mandatory: Java Spring Boot Microservices, React JS, AWS Cloud, DevOps; Node (added advantage)

Job Description:
Overall 4+ years of experience in Java development projects
3+ years of development experience with React
2+ years of experience in AWS Cloud and DevOps
Microservices development using Spring Boot
Technical Stack: Core Java, Java, J2EE, Spring, MongoDB, GKE, Terraform, GitHub, GCP Developer, Kubernetes, Scala, Kafka
Technical Tools: Confluence/Jira/Bitbucket or Git, CI/CD (Maven, Git, Jenkins), Eclipse or IntelliJ IDEA
Experience in event-driven architectures (CQRS and SAGA patterns)
Experience in design patterns
Build tools (Gulp, Webpack), Jenkins, Docker, automation, Bash, Redis, Elasticsearch, Kibana
Technical Stack (UI): JavaScript, React JS, CSS/SCSS, HTML5, Git+
Posted 15 hours ago
Terraform, an infrastructure as code tool developed by HashiCorp, is gaining popularity in the tech industry, especially in the field of DevOps and cloud computing. In India, the demand for professionals skilled in Terraform is on the rise, with many companies actively hiring for roles related to infrastructure automation and cloud management using this tool.
Hiring is concentrated in India's major tech hubs, which have a strong technology presence and a high demand for Terraform professionals.
The salary range for Terraform professionals in India varies based on experience levels. Entry-level positions can expect to earn around INR 5-8 lakhs per annum, while experienced professionals with several years of experience can earn upwards of INR 15 lakhs per annum.
In the Terraform job market, a typical career progression can include roles such as Junior Developer, Senior Developer, Tech Lead, and eventually, Architect. As professionals gain experience and expertise in Terraform, they can take on more challenging and leadership roles within organizations.
Alongside Terraform, professionals in this field are often expected to have knowledge of related tools and technologies such as AWS, Azure, Docker, Kubernetes, scripting languages like Python or Bash, and infrastructure monitoring tools.
Interview preparation typically covers Terraform's core workflow, including the terraform plan and terraform apply commands. As you explore opportunities in the Terraform job market in India, remember to continuously upskill, stay updated on industry trends, and practice for interviews to stand out among the competition. With dedication and preparation, you can secure a rewarding career in Terraform and contribute to the growing demand for skilled professionals in this field. Good luck!
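Since the plan and apply workflow comes up constantly in interviews and day-to-day work, here is a minimal Python sketch that wraps the Terraform CLI; it assumes Terraform is installed and the working directory contains a configuration:

import subprocess

def run(cmd: list[str]) -> None:
    """Run a Terraform CLI command, echoing it and failing fast on errors."""
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Standard workflow: initialise providers, save a plan, then apply exactly that plan.
run(["terraform", "init", "-input=false"])
run(["terraform", "plan", "-input=false", "-out=tfplan"])
run(["terraform", "apply", "-input=false", "tfplan"])

Writing the plan to a file and applying that exact file ensures the changes that were reviewed are the ones that get applied.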