Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
5.0 - 10.0 years
4 - 8 Lacs
Mumbai
Work from Office
Strong working knowledge of Oracle 19c and GoldenGate; knowledge of 21c will be an added advantage. Perform day-to-day administration of Oracle databases, including backup/recovery, capacity planning, and performance tuning. Very good knowledge of Oracle architecture, RAC, High Availability and DR solutions, and OEM. Configuring High Availability and DR solutions with DGMGRL. Upgrading Oracle databases to the latest versions and applying the latest fix packs as per requirements. Monitor database performance and recommend improvements for operational efficiency; tune database performance by configuring database parameters. Monitor the Oracle alert logs, transaction logs, archive logs, and backup logs. Experience with Data Guard, cloning, RMAN backups, GoldenGate 19c & 23ai, point-in-time recovery, and ASM. Required education: Bachelor's Degree. Preferred education: Bachelor's Degree. Required technical and professional expertise: 5+ years of experience in Oracle and GoldenGate; managed-services experience in a banking environment; client and team handling experience; ready to support a 24x7 environment; Work from Office is mandatory; primary job location: Mumbai (Andheri). Preferred technical and professional experience: good-to-have MySQL admin expertise for day-to-day activities such as troubleshooting, backups, cron jobs, and OEM alerts.
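Monitoring the Oracle alert log for errors, as the posting describes, is commonly scripted. A minimal, hypothetical Python sketch (the log lines and error codes here are invented for illustration; a real deployment would read the actual alert log path for the instance):

```python
import re

# Match lines such as "ORA-00600: internal error code ..."
ORA_ERROR = re.compile(r"\b(ORA-\d{5}):?\s*(.*)")

def scan_alert_log(lines):
    """Return (error_code, message) pairs found in alert-log lines."""
    hits = []
    for line in lines:
        m = ORA_ERROR.search(line)
        if m:
            hits.append((m.group(1), m.group(2).strip()))
    return hits

# Invented sample lines standing in for a real alert_<SID>.log
sample = [
    "2025-06-01T02:14:07 Thread 1 advanced to log sequence 4821",
    "ORA-00600: internal error code, arguments: [kcratr_nab_less_than_odr]",
    "Completed: ALTER DATABASE OPEN",
    "ORA-01555: snapshot too old: rollback segment number 9",
]
for code, msg in scan_alert_log(sample):
    print(code, msg)
```

In practice a script like this would run from cron or OEM and page the on-call DBA when new ORA- entries appear.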
Posted 4 days ago
0.0 - 4.0 years
2 - 6 Lacs
Hubli, Mangaluru, Mysuru
Work from Office
Dr. Medcare is looking for a Consultant - Medical Gastroenterology to join our dynamic team and embark on a rewarding career journey. Diagnose and treat patients according to established standards of best practice in Gastroenterology. Regularly review results of all investigations and modify treatment as required. Comply with all established hospital practices regarding consultations, patient care, discharge protocols, outpatient and follow-up practices. Perform necessary procedures and obtain approval from the insurance company prior to performing the procedures. Accurately document all relevant patient information in a clear and timely fashion in accordance with the health record-keeping policy. Communicate medical information to patients and the patients' families.
Posted 4 days ago
1.0 - 2.0 years
3 - 5 Lacs
Vellore
Work from Office
Applications are invited for the post of Junior Research Fellow (JRF) for the DST-SERB funded project in the School of Electronics Engineering (SENSE), File No. CRG/2023/005678, Vellore Institute of Technology (VIT). Title of the Project: EEG-based investigation of the fundamental frequency coding for source segregation in elderly normal-hearing adults. Qualification: First-division Master's degree in Electronics and Communication Engineering or equivalent with specialization in Signal Processing, or B.Tech. in ECE/EIE/EEE/ETC from a recognized university or equivalent. Desirable: GATE/NET qualified; however, non-GATE/NET candidates are also encouraged to apply. Candidates must have strong knowledge of signal processing and MATLAB; knowledge of deep neural networks will be given additional preference. The candidate will acquire neural representations of speech signals using an EEG device from human subjects (younger and older), apply signal-processing techniques to these neural representations to test the hypotheses, and will be responsible for writing project reports and papers and coordinating project activities. Stipend: 31,000/- per month (first 6 months) and 35,000/- per month (final year) for GATE/NET-qualified candidates. Sponsoring Agency: DST-SERB, Government of India. Duration: 1.5 years. Principal Investigator: Dr. Anantha Krishna Chintanpalli, Professor in Communication Engineering, School of Electronics Engineering, Vellore Institute of Technology (VIT), Vellore - 632 014, Tamil Nadu. Co-Principal Investigator: Dr. R. Sivakumar, Professor in Sensor and Biomedical Technology, School of Electronics Engineering, Vellore Institute of Technology (VIT), Vellore - 632 014, Tamil Nadu. Send your CV along with relevant documents detailing qualifications, scientific accomplishments, experience (if any), and a recent passport-size photo on or before 20/06/2025. Apply online at http://careers.vit.ac.in.
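The project centres on fundamental frequency (F0) coding, and a common starting point in such work is autocorrelation-based F0 estimation of a tone or speech segment. The posting specifies MATLAB; the sketch below is only an illustration in Python, with the sampling rate, search range, and test tone chosen arbitrarily:

```python
import math

def estimate_f0(signal, fs, f_min=80.0, f_max=500.0):
    """Estimate fundamental frequency (Hz) via the autocorrelation peak."""
    min_lag = int(fs / f_max)          # shortest candidate period, in samples
    max_lag = int(fs / f_min)          # longest candidate period, in samples
    best_lag, best_r = min_lag, float("-inf")
    for lag in range(min_lag, max_lag + 1):
        # Raw autocorrelation at this lag
        r = sum(signal[n] * signal[n + lag]
                for n in range(len(signal) - lag))
        if r > best_r:
            best_r, best_lag = r, lag
    return fs / best_lag

# Synthetic 200 Hz tone sampled at 8 kHz (50 ms)
fs = 8000
tone = [math.sin(2 * math.pi * 200 * n / fs) for n in range(400)]
print(estimate_f0(tone, fs))  # → 200.0
```

Real EEG-derived signals would of course need preprocessing (filtering, epoching) before any such period estimate is meaningful.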
No TA and DA will be paid for appearing in the interview. Shortlisted candidates will be called for an interview at a later date, which will be intimated by email. Candidates will be shortlisted based on their merit and as per the requirements of the project. The selected candidate will be expected to join on or before 1st August 2025. The selected candidate may be permitted to register for the PhD program of VIT Vellore subject to fulfilling the requirements of the PhD qualification process as per the institute norms.
Posted 5 days ago
7.0 - 10.0 years
22 - 27 Lacs
Hyderabad
Work from Office
Overview This role will focus on Azure SAP infrastructure and architecture (HANA, HA/DR and resiliency, SAP integrations and connectivity, externalizations), and lead the SAP full-stack framework and operational framework and procedures, including monitoring/alerting, change and incident remediation, and critical support for SAP landscapes. These include PGT (PepsiCo Global Template) in Azure, PIRT (PepsiCo International Reference Template), and legacy SAP for global and international sectors, including North America, LatAm, Europe, AMESA, and APAC, across 5 PepsiCo datacenters supporting mission-critical make/move/sell processes at PepsiCo. Responsibilities Responsible for operational stability of SAP infrastructure for PGT, PIRT, and legacy SAP with scalability, reliability, high availability, security, performance, and cost effectiveness to handle PepsiCo's world-class scale and complexity, under the guidance of senior SAP infrastructure architects and engineers. Play a key role in supporting full-stack SAP infrastructure solutions, including Azure and datacenter infrastructure, operating systems, OS clustering, storage, backup and recovery, HA/DR, HANA databases, SAP system infrastructure, and Basis. Support detailed design, and assist implementation and enhancement of PGT Azure infrastructure for scale-up and scale-out; drive continuous innovation with Microsoft and SAP to support PepsiCo's growing volume as part of the PGT global rollout. Support detailed design and assist implementation of the Azure infrastructure foundation to enable PIRT and legacy SAP migration from PepsiCo datacenters to Azure, and meet technical and non-technical requirements from sectors.
Support a global multi-year strategy for migrating SAP infrastructure from PepsiCo's data centers to the public cloud, and prepare for global SAP program demand and integration with digital platforms and legacy applications across sectors. Participate in the SAP full-stack framework, processes, and team across all infrastructure domains to drive efficiency and quality through partnership with vendors, including SAP, Microsoft, and SUSE, and with managed service providers. Document procedures to operationalize SAP infrastructure and services through partnership with operations and managed services. Participate in activities to automate and streamline the SAP build platform and process, and operational procedures including monitoring/alerting and change and incident remediation. Implement and execute SAP foundational infrastructure and automated services to enable digital transformation programs and cloud migrations on time and on budget for the PGT environment as well as PIRT and legacy SAP migration. Define an automation strategy to unlock productivity and ensure transparent billing (tagging, metering, monitoring), and provide critical support for SAP infrastructure and technologies. Qualifications Bachelor's degree in technology or engineering. 14+ years of overall IT and cloud experience; 8+ years of experience with SAP, HANA DB, SAP Basis, and Azure. SAP Basis administration including S/4HANA. SAP HA, DR, and architecture. SAP HANA database administration. SAP Azure architecture, deployments, migrations, and administration. SAP operational experience. Skilled at collaborating across cross-functional teams, with multicultural experience. Teamwork and leadership/coaching capabilities.
Posted 6 days ago
8.0 - 10.0 years
15 - 18 Lacs
Bengaluru
Work from Office
Position: Deputy Manager/Manager - IT Responsibilities and Duties: Cybersecurity and Compliance: • Implement and enforce security measures to protect organizational data and systems. • Ensure compliance with applicable data protection laws, standards, and regulations. • Conduct regular audits of IT systems and processes to identify vulnerabilities. Team Leadership and Development: • Lead, mentor, and evaluate the IT team to ensure efficient operation and professional growth. • Coordinate with internal departments to understand IT needs and deliver timely solutions. • Drive a culture of innovation and continuous improvement within the IT team. Vendor and Stakeholder Management: • Manage relationships with IT service providers and vendors. • Negotiate contracts and service level agreements (SLAs) to ensure value for money. • Liaise with stakeholders to communicate IT strategies, challenges, and achievements. Project Management: • Oversee the planning, execution, and delivery of IT projects. • Monitor project timelines, budgets, and resources to ensure successful completion. • Document project progress and provide regular updates to leadership. Requirements and skills: Technical expertise and hands-on experience in: • MS Exchange, O365, Azure • Active Directory, domain controllers, DNS, DHCP, Group Policy • ADFS • Backup solutions (Veeam/Commvault) • BCP & DR • ISMS best practices and ISO 27001 • Qualification: B.Tech/BE in CS/Electrical or Electronics • Experience: 9-10 years in IT infrastructure support and IT operations, with at least 3 years in a managerial role. • Location: Bangalore
Posted 6 days ago
5.0 - 9.0 years
11 - 12 Lacs
Hyderabad
Work from Office
We are seeking a highly skilled DevOps Engineer to join our dynamic development team. In this role, you will be responsible for designing, developing, and maintaining both frontend and backend components of our applications using DevOps and associated technologies. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performing software solutions that meet our business needs. The ideal candidate will have a strong background in DevOps, experience with modern frontend frameworks, and a passion for full-stack development. Requirements: Bachelor's degree in Computer Science Engineering, or a related field. 5 to 9+ years of experience in full-stack development, with a strong focus on DevOps. DevOps with AWS Data Engineer - Roles & Responsibilities: Use AWS services such as EC2, VPC, S3, IAM, RDS, and Route 53. Automate infrastructure using Infrastructure as Code (IaC) tools such as Terraform or AWS CloudFormation. Build and maintain CI/CD pipelines using tools such as AWS CodePipeline, Jenkins, or GitLab CI/CD. Automate build, test, and deployment processes for Java applications. Use Ansible, Chef, or AWS Systems Manager for managing configurations across environments. Containerize Java apps using Docker. Deploy and manage containers using Amazon ECS, EKS (Kubernetes), or Fargate. Monitoring and logging using Amazon CloudWatch, Prometheus + Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), and AWS X-Ray for distributed tracing. Manage access with IAM roles/policies. Use AWS Secrets Manager / Parameter Store for managing credentials. Enforce security best practices, encryption, and audits. Automate backups for databases and services using AWS Backup, RDS Snapshots, and S3 lifecycle rules. Implement Disaster Recovery (DR) strategies. Work closely with development teams to integrate DevOps practices. Document pipelines, architecture, and troubleshooting runbooks. Monitor and optimize AWS resource usage.
Use AWS Cost Explorer, Budgets, and Savings Plans. Must-Have Skills: Experience working on Linux-based infrastructure. Excellent understanding of Ruby, Python, Perl, and Java. Configuring and managing databases such as MySQL and Mongo. Excellent troubleshooting skills. Selecting and deploying appropriate CI/CD tools. Working knowledge of various tools, open-source technologies, and cloud services. Awareness of critical concepts in DevOps and Agile principles. Managing stakeholders and external interfaces. Setting up tools and required infrastructure. Defining and setting development, testing, release, update, and support processes for DevOps operation. The technical skills to review, verify, and validate the software code developed in the project. Interview Mode: F2F for candidates residing in Hyderabad / Zoom for other states. Location: 43/A, MLA Colony, Road No 12, Banjara Hills, 500034. Time: 2 - 4 pm.
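The backup automation described in the responsibilities often reduces to declaring an S3 lifecycle configuration. A minimal, hypothetical sketch of one such rule as a Python dict, in the shape accepted by boto3's `put_bucket_lifecycle_configuration` (the bucket name, prefix, and retention periods below are assumptions, not part of the posting):

```python
def backup_lifecycle_rule(prefix="db-backups/", glacier_after=30, expire_after=365):
    """Build one S3 lifecycle rule: tier old backups to Glacier, then expire them."""
    return {
        "ID": f"archive-{prefix.rstrip('/')}",          # hypothetical rule name
        "Filter": {"Prefix": prefix},                    # only objects under this prefix
        "Status": "Enabled",
        "Transitions": [{"Days": glacier_after, "StorageClass": "GLACIER"}],
        "Expiration": {"Days": expire_after},
    }

config = {"Rules": [backup_lifecycle_rule()]}
# With boto3 this would be applied roughly as:
#   s3 = boto3.client("s3")
#   s3.put_bucket_lifecycle_configuration(
#       Bucket="my-backup-bucket", LifecycleConfiguration=config)
print(config["Rules"][0]["ID"])  # → archive-db-backups
```

Keeping the rule as data like this also lets the same definition feed Terraform or CloudFormation templates instead of a direct API call.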
Posted 1 week ago
5.0 - 9.0 years
12 Lacs
Hyderabad
Work from Office
Dear Candidate, We are seeking a skilled and experienced DBA Data Engineer to join our growing data team. The ideal candidate will play a key role in designing, implementing, and maintaining our databases and data pipeline architecture. You will collaborate with software engineers, data analysts, and DevOps teams to ensure efficient data flow, data integrity, and optimal database performance across all systems. This role requires a strong foundation in database administration, SQL performance tuning, and data modeling, and experience with both on-prem and cloud-based environments. Requirements: Bachelor's degree in Computer Science, Information Systems, or a related field. 5+ years of experience in database administration and data engineering. Proven expertise in RDBMS (Oracle, MySQL, and PostgreSQL) and NoSQL systems (MongoDB, Cassandra). Experience managing databases in cloud environments (AWS, Azure, or GCP). Proficiency in ETL processes and tools (e.g., Apache NiFi, Talend, Informatica, AWS Glue). Strong experience with scripting languages such as Python, Bash, or PowerShell. DBA Data Engineer Roles & Responsibilities: Design and maintain scalable and high-performance database architectures. Monitor and optimize database performance using tools such as CloudWatch, Oracle Enterprise Manager, pgAdmin, Mongo Compass, or Dynatrace. Develop and manage ETL/ELT pipelines to support business intelligence and analytics. Ensure data integrity and security through best practices in backup, recovery, and encryption. Automate regular database maintenance tasks using scripting and scheduled jobs. Implement high availability, failover, and disaster recovery strategies. Conduct performance tuning of queries, stored procedures, indexes, and table structures. Collaborate with DevOps to automate database deployments using CI/CD and IaC tools (e.g., Terraform, AWS CloudFormation). Design and implement data models, including star/snowflake schemas for data warehousing.
Document data flows, data dictionaries, and database configurations. Manage user access and security policies using IAM roles or database-native permissions. Analyze existing data systems and propose modernization or migration plans (on-prem to cloud, SQL to NoSQL, etc.). Use AWS RDS, Amazon Redshift, Azure SQL Database, or Google BigQuery as needed. Stay up to date with emerging database technologies and make recommendations. Must-Have Skills: Deep knowledge of SQL and database performance tuning. Hands-on experience with database migrations and replication strategies. Familiarity with data governance, data quality, and compliance frameworks (GDPR, HIPAA, etc.). Strong problem-solving and troubleshooting skills. Experience with data streaming platforms such as Apache Kafka, AWS Kinesis, or Apache Flink is a plus. Experience with data lake and data warehouse architectures. Excellent communication and documentation skills. Soft Skills: Problem-Solving: Ability to analyze complex problems and develop effective solutions. Communication Skills: Strong verbal and written communication skills to effectively collaborate with cross-functional teams. Analytical Thinking: Ability to think critically and analytically to solve technical challenges. Time Management: Capable of managing multiple tasks and deadlines in a fast-paced environment. Adaptability: Ability to quickly learn and adapt to new technologies and methodologies. Interview Mode: F2F for candidates residing in Hyderabad / Zoom for other states. Location: 43/A, MLA Colony, Road No 12, Banjara Hills, 500034. Time: 2 - 4 pm.
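Query tuning with indexes, as listed in the responsibilities, can be demonstrated end to end with the standard-library sqlite3 module. A minimal sketch (the table, column, and index names are invented for illustration; production tuning would target Oracle, MySQL, or PostgreSQL with their own EXPLAIN tools):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
cur.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(1000)])

def plan(sql):
    """Return SQLite's query plan for a statement as one string."""
    return " ".join(row[-1] for row in cur.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT total FROM orders WHERE customer_id = 42"
print(plan(query))            # full table scan before indexing

cur.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
print(plan(query))            # now a search using the index
```

The before/after plans make the effect of the index visible, which is the same loop a DBA runs with `EXPLAIN PLAN` in Oracle or `EXPLAIN ANALYZE` in PostgreSQL.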
Posted 1 week ago
3.0 - 6.0 years
5 - 13 Lacs
Mumbai
Work from Office
Who We Are At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities. The Role Are you ready to embark on a technical adventure and become a hero to our external and internal users? As a Resiliency Orchestration (RO) Administrator at Kyndryl, you'll be part of an elite team that provides exceptional technical assistance, enabling our clients to achieve their desired business outcomes. You will be responsible for coordinating with application team members and the respective bank team members to identify deviations and support them until closure, and for coordinating with and supporting the respective Subject Matter Experts (SMEs) until closure. You will manage incident management for DR activities. Additionally, you will coordinate with the RO administration team and manage documentation for changes to be made in RO, and maintain the BCP-DR application architecture and an understanding of customer IT-DR for on-premises, off-premises, and hybrid infrastructure for the application. You'll be responsible for creating a comprehensive disaster recovery plan that outlines strategies, procedures, and responsibilities for recovering systems and data in various disaster scenarios, and for regularly reviewing and updating the plan to reflect changes in the organization's infrastructure, business processes, and technology. You will assess potential risks and vulnerabilities to the organization's IT systems and infrastructure, conduct a Business Impact Analysis to identify critical business functions, data, and systems, and determine their recovery priorities.
You'll be the go-to person for our customers to define Recovery Time Objectives (RTO) and Recovery Point Objectives (RPO) for different business functions and systems, establish metrics to measure the effectiveness and efficiency of the disaster recovery processes, and organize and conduct regular disaster recovery drills and tests to validate the effectiveness of the recovery plan and identify areas for improvement. With your passion for technology, you'll provide world-class support that exceeds customer expectations. As a key member of the RO team, you will continuously monitor systems for potential signs of disaster or impending failures, respond to and coordinate incident response efforts in the event of a disaster or disruptive event, and keep management and stakeholders informed about the status of disaster recovery preparedness, including risks, progress, and improvements. You will also be responsible for designing and building LLDs, HLDs, and implementation plans, as well as creating and maintaining technical reports, PPTs, and other documentation. If you're a technical wizard, a customer service superstar, and have an unquenchable thirst for knowledge, we want you to join our team. Your Future at Kyndryl Imagine being part of a dynamic team that values your growth and development. As Technical Support at Kyndryl, you'll receive an extensive and diverse set of technical trainings, including cloud technology, and free certifications to enhance your skills and expertise. You'll have the opportunity to pursue a career in advanced technical roles and beyond – taking your future to the next level. With Kyndryl, the sky's the limit. Who You Are You're good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset; keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work.
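Defining RTO and RPO becomes concrete when a drill or real incident is measured against them. A minimal Python sketch using only the standard library (the timestamps and objective thresholds below are invented for illustration):

```python
from datetime import datetime, timedelta

def achieved_rpo_rto(last_backup, outage_start, service_restored):
    """RPO = data-loss window; RTO = outage duration, for one incident."""
    rpo = outage_start - last_backup        # data written after this backup is lost
    rto = service_restored - outage_start   # time the service was down
    return rpo, rto

# Hypothetical incident timeline
last_backup      = datetime(2025, 6, 1, 2, 0)    # nightly backup finished
outage_start     = datetime(2025, 6, 1, 9, 30)   # disaster declared
service_restored = datetime(2025, 6, 1, 13, 0)   # failover completed

rpo, rto = achieved_rpo_rto(last_backup, outage_start, service_restored)
print(rpo, rto)  # → 7:30:00 3:30:00

# Compare against objectives agreed with the business (assumed values)
assert rpo <= timedelta(hours=24) and rto <= timedelta(hours=4)
```

Recording these two numbers for every drill gives exactly the effectiveness metrics the role is asked to establish.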
And finally, you're open and borderless – naturally inclusive in how you work with others. Required Technical and Professional Expertise 5+ years of experience in Customer Service or Technical Support. Experience in disaster recovery management and DR tools (MANDATORY). Experience in scripting with Perl, Tcl, Shell, Batch, PowerShell, Expect, or similar scripting languages is a must. Knowledge of writing scripts to integrate with different technologies using CLIs/APIs. Working knowledge of Linux and any database (Oracle, MySQL). Strong understanding of data protection (backup & recovery, BCP DR, storage replication, database-native replication, data archival & retention) for application workloads such as MS SQL, Exchange, Oracle, VMware, Hyper-V, Azure, AWS, etc. Extremely good hands-on experience with standalone and clustered UNIX (AIX/Solaris/HP-UX/RHEL/etc.) and Windows platforms. Preferred Technical and Professional Experience Strong knowledge of any storage replication technology across various DR scenarios. Application testing experience is an added advantage. Overall IT infrastructure understanding is an added advantage. Cyber (IT) security experience is an added advantage. Being You Diversity is a whole lot more than what we look like or where we come from, it's how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we're not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That's the Kyndryl Way.
What You Can Expect With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you, we want you to succeed so that together, we will all succeed. Get Referred! If you know someone that works at Kyndryl, when asked ‘How Did You Hear About Us’ during the application process, select ‘Employee Referral’ and enter your contact's Kyndryl email address.
Posted 1 week ago
7.0 - 12.0 years
4 - 9 Lacs
Pune
Work from Office
Primary Skills: SharePoint: Extensive experience with SharePoint server configurations, including farm setup, architecture, administration, and troubleshooting. Experience with both SharePoint on-premises and SharePoint Online. Troubleshooting: Strong troubleshooting skills in both SharePoint and SQL Server environments. Azure: Hands-on experience with Azure-hosted environments, particularly for SharePoint deployments, including Front-End, Application, and SQL Database Servers. Farm Configuration: Knowledge of SharePoint farm architecture, managing multiple SharePoint environments (production, staging, DR, and development). API Integrations: Experience with .NET APIs and integration processes within SharePoint and other systems. Workflows: In-depth understanding and experience with SharePoint workflows (e.g., SharePoint Designer, K2, or Power Automate). Disaster Recovery (DR): Strong experience with setting up and managing DR farms on Azure, including failover strategies, backup, and recovery. Proficiency in handling multiple websites hosted on different farms and ensuring their availability and performance. Secondary Skills: PowerShell: Advanced PowerShell scripting skills for SharePoint administration, automation, and reporting. Security: Knowledge of SharePoint security configurations, permissions, and user access controls. Version Control & Upgrades: Experience with SharePoint version upgrades and patches. Backup and Recovery: Expertise in SharePoint backup strategies and maintaining a high level of availability for SharePoint services. SQL Server: Expertise in SQL Server (2016-2019), particularly in Always On Availability Groups, database maintenance, performance tuning, and troubleshooting. Qualifications: Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent experience). Microsoft Certified: SharePoint Server Administration or related Microsoft certifications (e.g., Azure certifications) preferred.
5+ years of experience in SharePoint administration, with a focus on large, complex farms. Experience in managing multi-farm environments on Azure, including DR and Staging setups. Proven experience in SQL Server database administration, including Always On Availability Groups. Hands-on experience with SharePoint workflows, .Net API integrations, and website management. Experience in troubleshooting and resolving service requests.
Posted 1 week ago
6.0 - 10.0 years
11 - 12 Lacs
Hyderabad
Work from Office
We are seeking a highly skilled DevOps Engineer to join our dynamic development team. In this role, you will be responsible for designing, developing, and maintaining both frontend and backend components of our applications using DevOps and associated technologies. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performing software solutions that meet our business needs. The ideal candidate will have a strong background in DevOps, experience with modern frontend frameworks, and a passion for full-stack development. Requirements: Bachelor's degree in Computer Science Engineering, or a related field. 6 to 10+ years of experience in full-stack development, with a strong focus on DevOps. DevOps with AWS Data Engineer - Roles & Responsibilities: Use AWS services such as EC2, VPC, S3, IAM, RDS, and Route 53. Automate infrastructure using Infrastructure as Code (IaC) tools such as Terraform or AWS CloudFormation. Build and maintain CI/CD pipelines using tools such as AWS CodePipeline, Jenkins, or GitLab CI/CD. Automate build, test, and deployment processes for Java applications. Use Ansible, Chef, or AWS Systems Manager for managing configurations across environments. Containerize Java apps using Docker. Deploy and manage containers using Amazon ECS, EKS (Kubernetes), or Fargate. Monitoring and logging using Amazon CloudWatch, Prometheus + Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), and AWS X-Ray for distributed tracing. Manage access with IAM roles/policies. Use AWS Secrets Manager / Parameter Store for managing credentials. Enforce security best practices, encryption, and audits. Automate backups for databases and services using AWS Backup, RDS Snapshots, and S3 lifecycle rules. Implement Disaster Recovery (DR) strategies. Work closely with development teams to integrate DevOps practices. Document pipelines, architecture, and troubleshooting runbooks. Monitor and optimize AWS resource usage.
Use AWS Cost Explorer, Budgets, and Savings Plans. Must-Have Skills: Experience working on Linux-based infrastructure. Excellent understanding of Ruby, Python, Perl, and Java. Configuring and managing databases such as MySQL and Mongo. Excellent troubleshooting skills. Selecting and deploying appropriate CI/CD tools. Working knowledge of various tools, open-source technologies, and cloud services. Awareness of critical concepts in DevOps and Agile principles. Managing stakeholders and external interfaces. Setting up tools and required infrastructure. Defining and setting development, testing, release, update, and support processes for DevOps operation. The technical skills to review, verify, and validate the software code developed in the project. Interview Mode: F2F for candidates residing in Hyderabad / Zoom for other states. Location: 43/A, MLA Colony, Road No 12, Banjara Hills, 500034. Time: 2 - 4 pm.
Posted 1 week ago
6.0 - 9.0 years
12 Lacs
Hyderabad
Work from Office
Dear Candidate, We are seeking a skilled and experienced DBA Data Engineer to join our growing data team. The ideal candidate will play a key role in designing, implementing, and maintaining our databases and data pipeline architecture. You will collaborate with software engineers, data analysts, and DevOps teams to ensure efficient data flow, data integrity, and optimal database performance across all systems. This role requires a strong foundation in database administration, SQL performance tuning, and data modeling, and experience with both on-prem and cloud-based environments. Requirements: Bachelor's degree in Computer Science, Information Systems, or a related field. 6+ years of experience in database administration and data engineering. Proven expertise in RDBMS (Oracle, MySQL, and PostgreSQL) and NoSQL systems (MongoDB, Cassandra). Experience managing databases in cloud environments (AWS, Azure, or GCP). Proficiency in ETL processes and tools (e.g., Apache NiFi, Talend, Informatica, AWS Glue). Strong experience with scripting languages such as Python, Bash, or PowerShell. DBA Data Engineer Roles & Responsibilities: Design and maintain scalable and high-performance database architectures. Monitor and optimize database performance using tools such as CloudWatch, Oracle Enterprise Manager, pgAdmin, Mongo Compass, or Dynatrace. Develop and manage ETL/ELT pipelines to support business intelligence and analytics. Ensure data integrity and security through best practices in backup, recovery, and encryption. Automate regular database maintenance tasks using scripting and scheduled jobs. Implement high availability, failover, and disaster recovery strategies. Conduct performance tuning of queries, stored procedures, indexes, and table structures. Collaborate with DevOps to automate database deployments using CI/CD and IaC tools (e.g., Terraform, AWS CloudFormation). Design and implement data models, including star/snowflake schemas for data warehousing.
Document data flows, data dictionaries, and database configurations. Manage user access and security policies using IAM roles or database-native permissions. Analyze existing data systems and propose modernization or migration plans (on-prem to cloud, SQL to NoSQL, etc.). Use AWS RDS, Amazon Redshift, Azure SQL Database, or Google BigQuery as needed. Stay up to date with emerging database technologies and make recommendations. Must-Have Skills: Deep knowledge of SQL and database performance tuning. Hands-on experience with database migrations and replication strategies. Familiarity with data governance, data quality, and compliance frameworks (GDPR, HIPAA, etc.). Strong problem-solving and troubleshooting skills. Experience with data streaming platforms such as Apache Kafka, AWS Kinesis, or Apache Flink is a plus. Experience with data lake and data warehouse architectures. Excellent communication and documentation skills. Soft Skills: Problem-Solving: Ability to analyze complex problems and develop effective solutions. Communication Skills: Strong verbal and written communication skills to effectively collaborate with cross-functional teams. Analytical Thinking: Ability to think critically and analytically to solve technical challenges. Time Management: Capable of managing multiple tasks and deadlines in a fast-paced environment. Adaptability: Ability to quickly learn and adapt to new technologies and methodologies. Interview Mode: F2F for candidates residing in Hyderabad / Zoom for other states. Location: 43/A, MLA Colony, Road No 12, Banjara Hills, 500034. Time: 2 - 4 pm.
Posted 1 week ago
6.0 - 10.0 years
11 - 12 Lacs
Hyderabad
Work from Office
We are seeking a highly skilled DevOps Engineer to join our dynamic development team. In this role, you will be responsible for designing, developing, and maintaining both frontend and backend components of our applications using DevOps practices and associated technologies. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performing software solutions that meet our business needs. The ideal candidate will have a strong background in DevOps, experience with modern frontend frameworks, and a passion for full-stack development.

Requirements:
Bachelor's degree in Computer Science Engineering, or a related field.
6 to 10+ years of experience in full-stack development, with a strong focus on DevOps.

DevOps with AWS Data Engineer - Roles & Responsibilities:
Use AWS services such as EC2, VPC, S3, IAM, RDS, and Route 53.
Automate infrastructure using Infrastructure as Code (IaC) tools such as Terraform or AWS CloudFormation.
Build and maintain CI/CD pipelines using tools such as AWS CodePipeline, Jenkins, or GitLab CI/CD.
Automate build, test, and deployment processes for Java applications.
Use Ansible, Chef, or AWS Systems Manager for managing configurations across environments.
Containerize Java apps using Docker.
Deploy and manage containers using Amazon ECS, EKS (Kubernetes), or Fargate.
Set up monitoring and logging using Amazon CloudWatch, Prometheus + Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), and AWS X-Ray for distributed tracing.
Manage access with IAM roles/policies.
Use AWS Secrets Manager / Parameter Store for managing credentials.
Enforce security best practices, encryption, and audits.
Automate backups for databases and services using AWS Backup, RDS snapshots, and S3 lifecycle rules.
Implement Disaster Recovery (DR) strategies.
Work closely with development teams to integrate DevOps practices.
Document pipelines, architecture, and troubleshooting runbooks.
Monitor and optimize AWS resource usage.
Use AWS Cost Explorer, Budgets, and Savings Plans.

Must-Have Skills:
Experience working on Linux-based infrastructure.
Excellent understanding of Ruby, Python, Perl, and Java.
Configuring and managing databases such as MySQL and MongoDB.
Excellent troubleshooting skills.
Selecting and deploying appropriate CI/CD tools.
Working knowledge of various tools, open-source technologies, and cloud services.
Awareness of critical concepts in DevOps and Agile principles.
Managing stakeholders and external interfaces.
Setting up tools and required infrastructure.
Defining and setting development, testing, release, update, and support processes for DevOps operation.
The technical skills to review, verify, and validate the software code developed in the project.

Interview Mode: F2F for candidates residing in Hyderabad; Zoom for other states
Location: 43/A, MLA Colony, Road No. 12, Banjara Hills, 500034
Time: 2 - 4 pm
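Automating build and deployment steps, as described above, usually means tolerating transient failures: a health check that is not green yet, a throttled cloud API. A minimal, generic retry-with-exponential-backoff helper of the kind such scripts use; the helper and its defaults are illustrative, not taken from any specific toolchain:

```python
import time

def retry(op, attempts=5, base_delay=1.0, backoff=2.0, sleep=time.sleep):
    """Run `op` until it succeeds or `attempts` are exhausted,
    sleeping base_delay * backoff**n seconds between tries."""
    for n in range(attempts):
        try:
            return op()
        except Exception:
            if n == attempts - 1:
                raise                      # out of attempts: surface the error
            sleep(base_delay * backoff ** n)
```

A deployment script might wrap a post-deploy check as `retry(lambda: check_health(url))` (with `check_health` being whatever probe the pipeline uses) so a service that needs a few seconds to come up does not fail the whole run.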
Posted 1 week ago
6.0 - 9.0 years
12 Lacs
Hyderabad
Work from Office
Dear Candidate,

We are seeking a skilled and experienced DBA Data Engineer to join our growing data team. The ideal candidate will play a key role in designing, implementing, and maintaining our databases and data pipeline architecture. You will collaborate with software engineers, data analysts, and DevOps teams to ensure efficient data flow, data integrity, and optimal database performance across all systems. This role requires a strong foundation in database administration, SQL performance tuning, and data modeling, along with experience in both on-prem and cloud-based environments.

Requirements:
Bachelor's degree in Computer Science, Information Systems, or a related field.
6+ years of experience in database administration and data engineering.
Proven expertise in RDBMS (Oracle, MySQL, and PostgreSQL) and NoSQL systems (MongoDB, Cassandra).
Experience managing databases in cloud environments (AWS, Azure, or GCP).
Proficiency in ETL processes and tools (e.g., Apache NiFi, Talend, Informatica, AWS Glue).
Strong experience with scripting languages such as Python, Bash, or PowerShell.

DBA Data Engineer Roles & Responsibilities:
Design and maintain scalable, high-performance database architectures.
Monitor and optimize database performance using tools such as CloudWatch, Oracle Enterprise Manager, pgAdmin, MongoDB Compass, or Dynatrace.
Develop and manage ETL/ELT pipelines to support business intelligence and analytics.
Ensure data integrity and security through best practices in backup, recovery, and encryption.
Automate regular database maintenance tasks using scripting and scheduled jobs.
Implement high availability, failover, and disaster recovery strategies.
Tune the performance of queries, stored procedures, indexes, and table structures.
Collaborate with DevOps to automate database deployments using CI/CD and IaC tools (e.g., Terraform, AWS CloudFormation).
Design and implement data models, including star/snowflake schemas for data warehousing.
Document data flows, data dictionaries, and database configurations.
Manage user access and security policies using IAM roles or database-native permissions.
Analyze existing data systems and propose modernization or migration plans (on-prem to cloud, SQL to NoSQL, etc.).
Use AWS RDS, Amazon Redshift, Azure SQL Database, or Google BigQuery as needed.
Stay up to date with emerging database technologies and make recommendations.

Must-Have Skills:
Deep knowledge of SQL and database performance tuning.
Hands-on experience with database migrations and replication strategies.
Familiarity with data governance, data quality, and compliance frameworks (GDPR, HIPAA, etc.).
Strong problem-solving and troubleshooting skills.
Experience with data streaming platforms such as Apache Kafka, AWS Kinesis, or Apache Flink is a plus.
Experience with data lake and data warehouse architectures.
Excellent communication and documentation skills.

Soft Skills:
Problem-Solving: Ability to analyze complex problems and develop effective solutions.
Communication Skills: Strong verbal and written communication skills for effective collaboration with cross-functional teams.
Analytical Thinking: Ability to think critically and analytically to solve technical challenges.
Time Management: Capable of managing multiple tasks and deadlines in a fast-paced environment.
Adaptability: Ability to quickly learn and adapt to new technologies and methodologies.

Interview Mode: F2F for candidates residing in Hyderabad; Zoom for other states
Location: 43/A, MLA Colony, Road No. 12, Banjara Hills, 500034
Time: 2 - 4 pm
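The star-schema responsibility above can be made concrete with a tiny in-memory example: one fact table referencing two dimension tables, then the typical aggregate-and-slice query. Table and column names here are invented for illustration, and SQLite is used only because it ships with Python; a real warehouse would sit on Redshift, BigQuery, or similar.

```python
import sqlite3

# Minimal star schema: fact_sales at the center, two dimensions around it.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_date    (date_key INTEGER PRIMARY KEY, full_date TEXT, year INTEGER);
CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT, category TEXT);
CREATE TABLE fact_sales  (
    date_key    INTEGER REFERENCES dim_date(date_key),
    product_key INTEGER REFERENCES dim_product(product_key),
    quantity    INTEGER,
    amount      REAL
);
""")
conn.execute("INSERT INTO dim_date VALUES (20240101, '2024-01-01', 2024)")
conn.executemany("INSERT INTO dim_product VALUES (?, ?, ?)",
                 [(1, 'Widget', 'Hardware'), (2, 'Gadget', 'Hardware')])
conn.executemany("INSERT INTO fact_sales VALUES (?, ?, ?, ?)",
                 [(20240101, 1, 3, 30.0), (20240101, 2, 1, 25.0)])

# Typical star-schema query: aggregate the fact table, sliced by a dimension.
row = conn.execute("""
    SELECT p.category, SUM(f.amount)
    FROM fact_sales f JOIN dim_product p USING (product_key)
    GROUP BY p.category
""").fetchone()
```

The design choice the schema encodes: measures (quantity, amount) live in the fact table keyed by surrogate integers, while descriptive attributes live in small dimensions, so slicing by any attribute is a single join.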
Posted 1 week ago
12.0 - 18.0 years
25 - 30 Lacs
Hyderabad
Work from Office
Job Title: Lead - IT Infra Support & Services (PE Grade)

JOB PURPOSE
Responsible for the planning, implementation, maintenance, and support of physical and virtual infrastructure within the data center. This role ensures high availability, security, and optimal performance of servers, storage, networking, power, and cooling systems while adhering to best practices and compliance standards. Maintain the IT infrastructure layers at GHIAL, including but not limited to the networks, data center, information security, and subsystem server/compute layers, to ensure service availability. Ensure confidentiality, integrity, and availability of data and information systems. Responsible for IT infrastructure lifecycle management to ensure timely upgrades and refreshes of infrastructure without any service impact on airport operations. Engage with the outsourcing partners (internally) on infrastructure-related service delivery management for the GHIAL ecosystem and users.

ORGANISATION CHART

KEY ACCOUNTABILITIES

Incident Management:
• Manage and optimize on-prem cloud infrastructure (VMware, OpenStack, Hyper-V, etc.) and virtualization platforms.
• Define and govern SLAs, SLOs, and KPIs for on-prem cloud service delivery.
• Collaborate with network, storage, and security teams to maintain end-to-end service reliability.
• Lead compliance checks and audits related to security, backup, DR, and configuration baselines.
• Manage service catalog offerings for internal and external consumers, ensuring appropriate access and governance.
• Regularly report SLA metrics, service health, and risk factors to leadership.
• Serve as the point of escalation for critical service issues related to the on-prem cloud environment.

Implementing New Requirements / Change Management:
• Oversee service request, incident, and change management processes for cloud resources.
• Identify and understand new/change requirements, their risk, impact, necessity, and priority; recommend, approve, or reject them; and plan, implement, and review the changes pertaining to networks, security, and communications.

Performance Management:
• Implement mechanisms to monitor the performance of all cloud infrastructure devices, share performance dashboards with the IT HOD, and ensure the performance of all devices/services remains at acceptable levels.
• Manage the refresh of IT systems with upgrades and the latest technologies.

Capacity Management:
• Drive capacity planning, utilization, forecasting, and optimization for compute, memory, and storage.
• Plan for capacity consolidation and upgrades against various drivers and initiatives for the on-prem environment concerned.

Cyber Security Management:
• Identify, recommend, implement, and maintain the necessary security enforcement and monitoring solutions to protect the operational environment from external and internal security threats.
• Build a team of professionals capable of working with minimal guidance to identify, respond to, and resolve security issues in day-to-day operations.

Team Management:
• Guide, support, mentor, and review the team to achieve synergy and the desired performance levels. Monitor and review the performance of vendor and outsourced employees.
• Recruit and induct new team members as required, along with the HR teams.

BCP Testing and High Availability Management
Configuration Management
Backup and Restore
Technology Upgrade Planning and Implementation
Setting Up and Managing a Security Operations Centre (In-House)
Application/Solution Development Management

KEY ACCOUNTABILITIES - Additional Details

EXTERNAL INTERACTIONS
External - Roles this position interacts with outside the organization to enable success in day-to-day work
Vendors: Planning, design, and implementation of various solutions as part of change management and service continuity.
Support escalations for service stability and incident resolution.
Airlines: Understanding new/change requests, timelines, risk, and impact. Coordinating planned-downtime approvals from all stakeholders. Resolution/escalation of any security-related issues and policy violations. Addressing stakeholders' concerns regarding service availability and quality.
Concessionaires: Understanding new/change requests, timelines, risk, and impact. Coordinating planned-downtime approvals from all stakeholders. Resolution/escalation of any security-related issues and policy violations. Addressing stakeholders' concerns regarding service availability and quality.
Service Providers: Incident and performance management, capacity planning/upgrades.
Others: Addressing stakeholders' concerns regarding service availability and quality.
Govt. & Statutory: Maintenance and upgrade of the "License to Implement and Operate" for various communication systems.

INTERNAL INTERACTIONS
Internal - Roles this position interacts with inside the organization to enable success in day-to-day work
Business Team: Validating and finalizing new service requests and feasibility approvals; preparing proposals for new/existing service offerings and changes; reviewing service offerings, customer feedback, operating expenses, and service costing.
Project Mgmt. Team: Understand new changes, impact, cost, and timelines, and support new initiatives and modifications at various stages of ongoing and planned projects.
Terminal Operations Team: Review the levels of service quality at various locations of the airport for services like PA systems, and recommend and implement appropriate measures to maintain the desired QoS. Review stakeholder feedback on the IT service offerings to the passenger community, and recommend and implement appropriate measures to improve and sustain ASQ ratings for the concerned services.
Infra Support Team: Coordinate new/change request implementation, risk and impact analysis, review of major change implementations, major incident handling, and process reviews.

FINANCIAL DIMENSIONS
These should be quantifiable numerical amounts like annual budgets, project costs, annual revenue, purchase value, etc. Shall be responsible for managing infrastructure worth 5-10 crores on an ongoing basis and capex on a need basis (which can vary depending upon the projects undertaken).

OTHER DIMENSIONS
Service delivery management for enterprise and airport infrastructure, based on SLAs. (Team size would be around 12-15 people onsite; other services shall be based upon a shared-services framework.)

EDUCATION QUALIFICATIONS
BTech with MBA, or MCA

RELEVANT EXPERIENCE
12-15 years of experience in IT infrastructure/cloud operations, with 5+ years managing private cloud environments.
Posted 1 week ago
9.0 - 10.0 years
11 - 12 Lacs
Hyderabad
Work from Office
We are seeking a highly skilled DevOps Engineer to join our dynamic development team. In this role, you will be responsible for designing, developing, and maintaining both frontend and backend components of our applications using DevOps practices and associated technologies. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performing software solutions that meet our business needs. The ideal candidate will have a strong background in DevOps, experience with modern frontend frameworks, and a passion for full-stack development.

Requirements:
Bachelor's degree in Computer Science Engineering, or a related field.
9 to 10+ years of experience in full-stack development, with a strong focus on DevOps.

DevOps with AWS Data Engineer - Roles & Responsibilities:
Use AWS services such as EC2, VPC, S3, IAM, RDS, and Route 53.
Automate infrastructure using Infrastructure as Code (IaC) tools such as Terraform or AWS CloudFormation.
Build and maintain CI/CD pipelines using tools such as AWS CodePipeline, Jenkins, or GitLab CI/CD.
Automate build, test, and deployment processes for Java applications.
Use Ansible, Chef, or AWS Systems Manager for managing configurations across environments.
Containerize Java apps using Docker.
Deploy and manage containers using Amazon ECS, EKS (Kubernetes), or Fargate.
Set up monitoring and logging using Amazon CloudWatch, Prometheus + Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), and AWS X-Ray for distributed tracing.
Manage access with IAM roles/policies.
Use AWS Secrets Manager / Parameter Store for managing credentials.
Enforce security best practices, encryption, and audits.
Automate backups for databases and services using AWS Backup, RDS snapshots, and S3 lifecycle rules.
Implement Disaster Recovery (DR) strategies.
Work closely with development teams to integrate DevOps practices.
Document pipelines, architecture, and troubleshooting runbooks.
Monitor and optimize AWS resource usage.
Use AWS Cost Explorer, Budgets, and Savings Plans.

Must-Have Skills:
Experience working on Linux-based infrastructure.
Excellent understanding of Ruby, Python, Perl, and Java.
Configuring and managing databases such as MySQL and MongoDB.
Excellent troubleshooting skills.
Selecting and deploying appropriate CI/CD tools.
Working knowledge of various tools, open-source technologies, and cloud services.
Awareness of critical concepts in DevOps and Agile principles.
Managing stakeholders and external interfaces.
Setting up tools and required infrastructure.
Defining and setting development, testing, release, update, and support processes for DevOps operation.
The technical skills to review, verify, and validate the software code developed in the project.

Interview Mode: F2F for candidates residing in Hyderabad; Zoom for other states
Location: 43/A, MLA Colony, Road No. 12, Banjara Hills, 500034
Time: 2 - 4 pm
Posted 1 week ago
9.0 - 10.0 years
12 Lacs
Hyderabad
Work from Office
Dear Candidate,

We are seeking a skilled and experienced DBA Data Engineer to join our growing data team. The ideal candidate will play a key role in designing, implementing, and maintaining our databases and data pipeline architecture. You will collaborate with software engineers, data analysts, and DevOps teams to ensure efficient data flow, data integrity, and optimal database performance across all systems. This role requires a strong foundation in database administration, SQL performance tuning, and data modeling, along with experience in both on-prem and cloud-based environments.

Requirements:
Bachelor's degree in Computer Science, Information Systems, or a related field.
9+ years of experience in database administration and data engineering.
Proven expertise in RDBMS (Oracle, MySQL, and PostgreSQL) and NoSQL systems (MongoDB, Cassandra).
Experience managing databases in cloud environments (AWS, Azure, or GCP).
Proficiency in ETL processes and tools (e.g., Apache NiFi, Talend, Informatica, AWS Glue).
Strong experience with scripting languages such as Python, Bash, or PowerShell.

DBA Data Engineer Roles & Responsibilities:
Design and maintain scalable, high-performance database architectures.
Monitor and optimize database performance using tools such as CloudWatch, Oracle Enterprise Manager, pgAdmin, MongoDB Compass, or Dynatrace.
Develop and manage ETL/ELT pipelines to support business intelligence and analytics.
Ensure data integrity and security through best practices in backup, recovery, and encryption.
Automate regular database maintenance tasks using scripting and scheduled jobs.
Implement high availability, failover, and disaster recovery strategies.
Tune the performance of queries, stored procedures, indexes, and table structures.
Collaborate with DevOps to automate database deployments using CI/CD and IaC tools (e.g., Terraform, AWS CloudFormation).
Design and implement data models, including star/snowflake schemas for data warehousing.
Document data flows, data dictionaries, and database configurations.
Manage user access and security policies using IAM roles or database-native permissions.
Analyze existing data systems and propose modernization or migration plans (on-prem to cloud, SQL to NoSQL, etc.).
Use AWS RDS, Amazon Redshift, Azure SQL Database, or Google BigQuery as needed.
Stay up to date with emerging database technologies and make recommendations.

Must-Have Skills:
Deep knowledge of SQL and database performance tuning.
Hands-on experience with database migrations and replication strategies.
Familiarity with data governance, data quality, and compliance frameworks (GDPR, HIPAA, etc.).
Strong problem-solving and troubleshooting skills.
Experience with data streaming platforms such as Apache Kafka, AWS Kinesis, or Apache Flink is a plus.
Experience with data lake and data warehouse architectures.
Excellent communication and documentation skills.

Soft Skills:
Problem-Solving: Ability to analyze complex problems and develop effective solutions.
Communication Skills: Strong verbal and written communication skills for effective collaboration with cross-functional teams.
Analytical Thinking: Ability to think critically and analytically to solve technical challenges.
Time Management: Capable of managing multiple tasks and deadlines in a fast-paced environment.
Adaptability: Ability to quickly learn and adapt to new technologies and methodologies.

Interview Mode: F2F for candidates residing in Hyderabad; Zoom for other states
Location: 43/A, MLA Colony, Road No. 12, Banjara Hills, 500034
Time: 2 - 4 pm
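As a toy illustration of the ETL/ELT responsibility above, here is an extract-transform-load pass in plain Python. The record layout and helper names are hypothetical; a production pipeline would use the listed tools (NiFi, Glue, etc.) for the endpoints and route bad rows to a dead-letter store instead of dropping them silently.

```python
import csv
import io

# Hypothetical order records arriving as CSV; row 2 is deliberately malformed.
raw = io.StringIO("order_id,amount\n1,10.50\n2,bad\n3,7.25\n")

def extract(fh):
    """Extract: read raw records from the source."""
    return list(csv.DictReader(fh))

def transform(rows):
    """Transform: coerce types and filter out rows that fail validation."""
    clean = []
    for r in rows:
        try:
            clean.append({"order_id": int(r["order_id"]),
                          "amount": float(r["amount"])})
        except ValueError:
            continue  # in practice: send to a dead-letter queue, don't drop
    return clean

warehouse = []  # stand-in for the target table

def load(rows, sink=warehouse):
    """Load: append validated rows to the sink, return count loaded."""
    sink.extend(rows)
    return len(rows)

loaded = load(transform(extract(raw)))
```

The same extract/transform/load split is what the listed ETL tools orchestrate; only the endpoints and scale change.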
Posted 1 week ago
7.0 - 12.0 years
5 - 15 Lacs
Bengaluru
Work from Office
Azure Cloud Architecture, Databricks Administration, Azure Networking, Global Load Balancing, HA/DR, Fault Tolerance. Required Skills: Terraform, DR Drill, Power BI.
Posted 1 week ago
7.0 - 9.0 years
12 Lacs
Hyderabad
Work from Office
Dear Candidate,

We are seeking a skilled and experienced DBA Data Engineer to join our growing data team. The ideal candidate will play a key role in designing, implementing, and maintaining our databases and data pipeline architecture. You will collaborate with software engineers, data analysts, and DevOps teams to ensure efficient data flow, data integrity, and optimal database performance across all systems. This role requires a strong foundation in database administration, SQL performance tuning, and data modeling, along with experience in both on-prem and cloud-based environments.

Requirements:
Bachelor's degree in Computer Science, Information Systems, or a related field.
7+ years of experience in database administration and data engineering.
Proven expertise in RDBMS (Oracle, MySQL, and PostgreSQL) and NoSQL systems (MongoDB, Cassandra).
Experience managing databases in cloud environments (AWS, Azure, or GCP).
Proficiency in ETL processes and tools (e.g., Apache NiFi, Talend, Informatica, AWS Glue).
Strong experience with scripting languages such as Python, Bash, or PowerShell.

DBA Data Engineer Roles & Responsibilities:
Design and maintain scalable, high-performance database architectures.
Monitor and optimize database performance using tools such as CloudWatch, Oracle Enterprise Manager, pgAdmin, MongoDB Compass, or Dynatrace.
Develop and manage ETL/ELT pipelines to support business intelligence and analytics.
Ensure data integrity and security through best practices in backup, recovery, and encryption.
Automate regular database maintenance tasks using scripting and scheduled jobs.
Implement high availability, failover, and disaster recovery strategies.
Tune the performance of queries, stored procedures, indexes, and table structures.
Collaborate with DevOps to automate database deployments using CI/CD and IaC tools (e.g., Terraform, AWS CloudFormation).
Design and implement data models, including star/snowflake schemas for data warehousing.
Document data flows, data dictionaries, and database configurations.
Manage user access and security policies using IAM roles or database-native permissions.
Analyze existing data systems and propose modernization or migration plans (on-prem to cloud, SQL to NoSQL, etc.).
Use AWS RDS, Amazon Redshift, Azure SQL Database, or Google BigQuery as needed.
Stay up to date with emerging database technologies and make recommendations.

Must-Have Skills:
Deep knowledge of SQL and database performance tuning.
Hands-on experience with database migrations and replication strategies.
Familiarity with data governance, data quality, and compliance frameworks (GDPR, HIPAA, etc.).
Strong problem-solving and troubleshooting skills.
Experience with data streaming platforms such as Apache Kafka, AWS Kinesis, or Apache Flink is a plus.
Experience with data lake and data warehouse architectures.
Excellent communication and documentation skills.

Soft Skills:
Problem-Solving: Ability to analyze complex problems and develop effective solutions.
Communication Skills: Strong verbal and written communication skills for effective collaboration with cross-functional teams.
Analytical Thinking: Ability to think critically and analytically to solve technical challenges.
Time Management: Capable of managing multiple tasks and deadlines in a fast-paced environment.
Adaptability: Ability to quickly learn and adapt to new technologies and methodologies.

Interview Mode: F2F for candidates residing in Hyderabad; Zoom for other states
Location: 43/A, MLA Colony, Road No. 12, Banjara Hills, 500034
Time: 2 - 4 pm
Posted 1 week ago
8.0 - 10.0 years
11 - 12 Lacs
Hyderabad
Work from Office
We are seeking a highly skilled DevOps Engineer to join our dynamic development team. In this role, you will be responsible for designing, developing, and maintaining both frontend and backend components of our applications using DevOps practices and associated technologies. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performing software solutions that meet our business needs. The ideal candidate will have a strong background in DevOps, experience with modern frontend frameworks, and a passion for full-stack development.

Requirements:
Bachelor's degree in Computer Science Engineering, or a related field.
8 to 10+ years of experience in full-stack development, with a strong focus on DevOps.

DevOps with AWS Data Engineer - Roles & Responsibilities:
Use AWS services such as EC2, VPC, S3, IAM, RDS, and Route 53.
Automate infrastructure using Infrastructure as Code (IaC) tools such as Terraform or AWS CloudFormation.
Build and maintain CI/CD pipelines using tools such as AWS CodePipeline, Jenkins, or GitLab CI/CD.
Automate build, test, and deployment processes for Java applications.
Use Ansible, Chef, or AWS Systems Manager for managing configurations across environments.
Containerize Java apps using Docker.
Deploy and manage containers using Amazon ECS, EKS (Kubernetes), or Fargate.
Set up monitoring and logging using Amazon CloudWatch, Prometheus + Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), and AWS X-Ray for distributed tracing.
Manage access with IAM roles/policies.
Use AWS Secrets Manager / Parameter Store for managing credentials.
Enforce security best practices, encryption, and audits.
Automate backups for databases and services using AWS Backup, RDS snapshots, and S3 lifecycle rules.
Implement Disaster Recovery (DR) strategies.
Work closely with development teams to integrate DevOps practices.
Document pipelines, architecture, and troubleshooting runbooks.
Monitor and optimize AWS resource usage.
Use AWS Cost Explorer, Budgets, and Savings Plans.

Must-Have Skills:
Experience working on Linux-based infrastructure.
Excellent understanding of Ruby, Python, Perl, and Java.
Configuring and managing databases such as MySQL and MongoDB.
Excellent troubleshooting skills.
Selecting and deploying appropriate CI/CD tools.
Working knowledge of various tools, open-source technologies, and cloud services.
Awareness of critical concepts in DevOps and Agile principles.
Managing stakeholders and external interfaces.
Setting up tools and required infrastructure.
Defining and setting development, testing, release, update, and support processes for DevOps operation.
The technical skills to review, verify, and validate the software code developed in the project.

Interview Mode: F2F for candidates residing in Hyderabad; Zoom for other states
Location: 43/A, MLA Colony, Road No. 12, Banjara Hills, 500034
Time: 2 - 4 pm
Posted 1 week ago
7.0 - 10.0 years
11 - 12 Lacs
Hyderabad
Work from Office
We are seeking a highly skilled DevOps Engineer to join our dynamic development team. In this role, you will be responsible for designing, developing, and maintaining both frontend and backend components of our applications using DevOps practices and associated technologies. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performing software solutions that meet our business needs. The ideal candidate will have a strong background in DevOps, experience with modern frontend frameworks, and a passion for full-stack development.

Requirements:
Bachelor's degree in Computer Science Engineering, or a related field.
7 to 10+ years of experience in full-stack development, with a strong focus on DevOps.

DevOps with AWS Data Engineer - Roles & Responsibilities:
Use AWS services such as EC2, VPC, S3, IAM, RDS, and Route 53.
Automate infrastructure using Infrastructure as Code (IaC) tools such as Terraform or AWS CloudFormation.
Build and maintain CI/CD pipelines using tools such as AWS CodePipeline, Jenkins, or GitLab CI/CD.
Automate build, test, and deployment processes for Java applications.
Use Ansible, Chef, or AWS Systems Manager for managing configurations across environments.
Containerize Java apps using Docker.
Deploy and manage containers using Amazon ECS, EKS (Kubernetes), or Fargate.
Set up monitoring and logging using Amazon CloudWatch, Prometheus + Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), and AWS X-Ray for distributed tracing.
Manage access with IAM roles/policies.
Use AWS Secrets Manager / Parameter Store for managing credentials.
Enforce security best practices, encryption, and audits.
Automate backups for databases and services using AWS Backup, RDS snapshots, and S3 lifecycle rules.
Implement Disaster Recovery (DR) strategies.
Work closely with development teams to integrate DevOps practices.
Document pipelines, architecture, and troubleshooting runbooks.
Monitor and optimize AWS resource usage.
Use AWS Cost Explorer, Budgets, and Savings Plans.

Must-Have Skills:
Experience working on Linux-based infrastructure.
Excellent understanding of Ruby, Python, Perl, and Java.
Configuring and managing databases such as MySQL and MongoDB.
Excellent troubleshooting skills.
Selecting and deploying appropriate CI/CD tools.
Working knowledge of various tools, open-source technologies, and cloud services.
Awareness of critical concepts in DevOps and Agile principles.
Managing stakeholders and external interfaces.
Setting up tools and required infrastructure.
Defining and setting development, testing, release, update, and support processes for DevOps operation.
The technical skills to review, verify, and validate the software code developed in the project.

Interview Mode: F2F for candidates residing in Hyderabad; Zoom for other states
Location: 43/A, MLA Colony, Road No. 12, Banjara Hills, 500034
Time: 2 - 4 pm
Posted 1 week ago
6.0 - 8.0 years
12 Lacs
Hyderabad
Work from Office
Dear Candidate,

We are seeking a skilled and experienced DBA Data Engineer to join our growing data team. The ideal candidate will play a key role in designing, implementing, and maintaining our databases and data pipeline architecture. You will collaborate with software engineers, data analysts, and DevOps teams to ensure efficient data flow, data integrity, and optimal database performance across all systems. This role requires a strong foundation in database administration, SQL performance tuning, and data modeling, along with experience in both on-prem and cloud-based environments.

Requirements:
Bachelor's degree in Computer Science, Information Systems, or a related field.
6+ years of experience in database administration and data engineering.
Proven expertise in RDBMS (Oracle, MySQL, and PostgreSQL) and NoSQL systems (MongoDB, Cassandra).
Experience managing databases in cloud environments (AWS, Azure, or GCP).
Proficiency in ETL processes and tools (e.g., Apache NiFi, Talend, Informatica, AWS Glue).
Strong experience with scripting languages such as Python, Bash, or PowerShell.

DBA Data Engineer Roles & Responsibilities:
Design and maintain scalable, high-performance database architectures.
Monitor and optimize database performance using tools such as CloudWatch, Oracle Enterprise Manager, pgAdmin, MongoDB Compass, or Dynatrace.
Develop and manage ETL/ELT pipelines to support business intelligence and analytics.
Ensure data integrity and security through best practices in backup, recovery, and encryption.
Automate regular database maintenance tasks using scripting and scheduled jobs.
Implement high availability, failover, and disaster recovery strategies.
Tune the performance of queries, stored procedures, indexes, and table structures.
Collaborate with DevOps to automate database deployments using CI/CD and IaC tools (e.g., Terraform, AWS CloudFormation).
Design and implement data models, including star/snowflake schemas for data warehousing.
Document data flows, data dictionaries, and database configurations.
Manage user access and security policies using IAM roles or database-native permissions.
Analyze existing data systems and propose modernization or migration plans (on-prem to cloud, SQL to NoSQL, etc.).
Use AWS RDS, Amazon Redshift, Azure SQL Database, or Google BigQuery as needed.
Stay up to date with emerging database technologies and make recommendations.

Must-Have Skills:
Deep knowledge of SQL and database performance tuning.
Hands-on experience with database migrations and replication strategies.
Familiarity with data governance, data quality, and compliance frameworks (GDPR, HIPAA, etc.).
Strong problem-solving and troubleshooting skills.
Experience with data streaming platforms such as Apache Kafka, AWS Kinesis, or Apache Flink is a plus.
Experience with data lake and data warehouse architectures.
Excellent communication and documentation skills.

Soft Skills:
Problem-Solving: Ability to analyze complex problems and develop effective solutions.
Communication Skills: Strong verbal and written communication skills for effective collaboration with cross-functional teams.
Analytical Thinking: Ability to think critically and analytically to solve technical challenges.
Time Management: Capable of managing multiple tasks and deadlines in a fast-paced environment.
Adaptability: Ability to quickly learn and adapt to new technologies and methodologies.

Interview Mode: F2F for candidates residing in Hyderabad; Zoom for other states
Location: 43/A, MLA Colony, Road No. 12, Banjara Hills, 500034
Time: 2 - 4 pm
Posted 1 week ago
3.0 - 8.0 years
13 - 23 Lacs
Hyderabad, Gurugram, Bengaluru
Work from Office
Hiring Oracle Analytics Cloud (OAC)/FDI Specialists (3–8 yrs) for Hybrid roles across USI (Hyd, BLR, Mum, Gur, Pune, Chennai, Kolkata). Must have strong OAC, SQL/PLSQL, RPD, Data Viz, ODI, Oracle Cloud skills. Immediate joiners preferred.
Posted 1 week ago
6.0 - 9.0 years
11 - 12 Lacs
Hyderabad
Work from Office
We are seeking a highly skilled DevOps Engineer to join our dynamic development team. In this role, you will be responsible for designing, developing, and maintaining both frontend and backend components of our applications using DevOps practices and associated technologies. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performing software solutions that meet our business needs. The ideal candidate will have a strong background in DevOps, experience with modern frontend frameworks, and a passion for full-stack development.

Requirements: Bachelor's degree in Computer Science, Engineering, or a related field. 6 to 9+ years of experience in full-stack development, with a strong focus on DevOps.

DevOps with AWS Data Engineer - Roles & Responsibilities: Use AWS services like EC2, VPC, S3, IAM, RDS, and Route 53. Automate infrastructure using Infrastructure as Code (IaC) tools like Terraform or AWS CloudFormation. Build and maintain CI/CD pipelines using tools such as AWS CodePipeline, Jenkins, or GitLab CI/CD. Automate build, test, and deployment processes for Java applications. Use Ansible, Chef, or AWS Systems Manager for managing configurations across environments. Containerize Java apps using Docker. Deploy and manage containers using Amazon ECS, EKS (Kubernetes), or Fargate. Implement monitoring and logging using Amazon CloudWatch, Prometheus + Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), and AWS X-Ray for distributed tracing. Manage access with IAM roles/policies. Use AWS Secrets Manager / Parameter Store for managing credentials. Enforce security best practices, encryption, and audits. Automate backups for databases and services using AWS Backup, RDS Snapshots, and S3 lifecycle rules. Implement Disaster Recovery (DR) strategies. Work closely with development teams to integrate DevOps practices. Document pipelines, architecture, and troubleshooting runbooks. Monitor and optimize AWS resource usage.
Use AWS Cost Explorer, Budgets, and Savings Plans.

Must-Have Skills: Experience working on Linux-based infrastructure. Excellent understanding of Ruby, Python, Perl, and Java. Configuration and management of databases such as MySQL and MongoDB. Excellent troubleshooting skills. Selecting and deploying appropriate CI/CD tools. Working knowledge of various tools, open-source technologies, and cloud services. Awareness of critical concepts in DevOps and Agile principles. Managing stakeholders and external interfaces. Setting up tools and required infrastructure. Defining and setting development, testing, release, update, and support processes for DevOps operation. The technical skills to review, verify, and validate the software code developed in the project.

Interview Mode: F2F for candidates residing in Hyderabad; Zoom for other states. Location: 43/A, MLA Colony, Road No. 12, Banjara Hills, 500034. Time: 2 - 4 PM
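Backup automation via S3 lifecycle rules, mentioned in the responsibilities above, is often expressed as a small generated policy document. A minimal sketch follows; the `db-backups/` prefix and the 30/365-day thresholds are illustrative assumptions, not requirements from the posting:

```python
import json

def backup_lifecycle_rule(prefix, glacier_after_days, expire_after_days):
    """Build one S3 lifecycle rule: transition old backups to Glacier,
    then expire them entirely after the retention period."""
    return {
        "ID": f"backup-retention-{prefix.strip('/')}",
        "Filter": {"Prefix": prefix},
        "Status": "Enabled",
        "Transitions": [
            {"Days": glacier_after_days, "StorageClass": "GLACIER"}
        ],
        "Expiration": {"Days": expire_after_days},
    }

# Illustrative policy: move backups to Glacier after 30 days, delete after a year.
rule = backup_lifecycle_rule("db-backups/", 30, 365)
print(json.dumps({"Rules": [rule]}, indent=2))
```

The resulting document matches the shape expected by `put_bucket_lifecycle_configuration` in boto3, so the same dict could be applied directly or rendered into IaC.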
Posted 1 week ago
3.0 - 6.0 years
5 - 13 Lacs
Mumbai
Work from Office
Who We Are At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities. The Role Are you ready to embark on a technical adventure and become a hero to our external and internal users? As a Resiliency Orchestration (RO) Administrator at Kyndryl, you'll be part of an elite team that provides exceptional technical assistance, enabling our clients to achieve their desired business outcomes. You will be responsible for coordinating with Application team members and the respective Bank team members to identify deviations and support them until closure, and for coordinating with and supporting the respective Subject Matter Experts (SMEs) until closure. You will manage incident management for DR activities. Additionally, you will coordinate with the RO Administration team and manage documentation for changes to be made in RO, and maintain the BCP-DR Application Architecture and an understanding of the customer's IT-DR for On-Premise/Off-Premise/Hybrid infrastructure for the application. You'll be responsible for creating a comprehensive disaster recovery plan that outlines strategies, procedures, and responsibilities for recovering systems and data in various disaster scenarios, and for regularly reviewing and updating the disaster recovery plan to reflect changes in the organization's infrastructure, business processes, and technology. You will assess potential risks and vulnerabilities to the organization's IT systems and infrastructure, conduct a Business Impact Analysis to identify critical business functions, data, and systems, and determine their recovery priorities.
You'll be the go-to person for our customers to define Recovery Time Objectives (RTO) and Recovery Point Objectives (RPO) for different business functions and systems, establish metrics to measure the effectiveness and efficiency of the disaster recovery processes, and organize and conduct regular disaster recovery drills and tests to validate the effectiveness of the recovery plan and identify areas for improvement. With your passion for technology, you'll provide world-class support that exceeds customer expectations. As a key member of the RO team, you will continuously monitor systems for potential signs of disaster or impending failures, respond to and coordinate incident response efforts in the event of a disaster or disruptive event, and keep management and stakeholders informed about the status of disaster recovery preparedness, including risks, progress, and improvements. You will also be responsible for designing and building LLDs, HLDs, and implementation plans, as well as creating and maintaining technical reports, PPTs, and other documentation. If you're a technical wizard, a customer service superstar, and have an unquenchable thirst for knowledge, we want you to join our team. Your Future at Kyndryl Imagine being part of a dynamic team that values your growth and development. As Technical Support at Kyndryl, you'll receive an extensive and diverse set of technical trainings, including cloud technology, and free certifications to enhance your skills and expertise. You'll have the opportunity to pursue a career in advanced technical roles and beyond – taking your future to the next level. With Kyndryl, the sky's the limit. Who You Are You're good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset; keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work.
And finally, you're open and borderless – naturally inclusive in how you work with others.

Required Technical and Professional Expertise 5+ years of experience in Customer Service or Technical Support. Experience in disaster recovery management and DR tools (MANDATORY). Experience in scripting with Perl, TCL, Shell, Batch, PowerShell, Expect, or similar scripting languages is a must. Knowledge of writing scripts to integrate with different technologies using CLIs/APIs. Working knowledge of Linux and any database (Oracle, MySQL). Strong understanding of Data Protection (Backup & Recovery, BCP DR, Storage Replication, Database Native Replications, Data Archival & Retention) for application workloads such as MS SQL, Exchange, Oracle, VMware, Hyper-V, Azure, AWS, etc. Extremely good hands-on experience with standalone and clustered UNIX (AIX/Solaris/HP-UX/RHEL, etc.) and Windows platforms.

Preferred Technical and Professional Experience Strong understanding of any Storage Replication technology with various DR scenarios. Application testing experience is an added advantage. Overall IT infrastructure understanding is an added advantage. Cyber (IT) Security related experience is an added advantage.

Being You Diversity is a whole lot more than what we look like or where we come from, it's how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we're not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That's the Kyndryl Way.
What You Can Expect With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you, we want you to succeed so that together, we will all succeed. Get Referred! If you know someone that works at Kyndryl, when asked ‘How Did You Hear About Us’ during the application process, select ‘Employee Referral’ and enter your contact's Kyndryl email address.
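The RTO/RPO work described in this role ultimately comes down to comparing measured recovery outcomes from a drill against agreed targets. As a minimal sketch, the target values and the `drill_meets_objectives` helper below are illustrative assumptions, not Kyndryl tooling:

```python
from datetime import timedelta

def drill_meets_objectives(measured_rto, measured_rpo, target_rto, target_rpo):
    """True only if both the recovery time (RTO) and the data-loss
    window (RPO) observed in a DR drill are within the agreed targets."""
    return measured_rto <= target_rto and measured_rpo <= target_rpo

# Illustrative drill result: recovered in 3h with 10min of data loss,
# against a 4h RTO and 15min RPO target.
ok = drill_meets_objectives(
    measured_rto=timedelta(hours=3), measured_rpo=timedelta(minutes=10),
    target_rto=timedelta(hours=4), target_rpo=timedelta(minutes=15),
)
print(ok)  # -> True
```

A real drill report would record both durations per application and flag every system whose check returns False as an area for improvement.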
Posted 2 weeks ago
15.0 - 20.0 years
11 - 15 Lacs
Bengaluru
Work from Office
insightsoftware is a global provider of reporting, analytics, and performance management solutions that unlock the potential of business data and transform the way finance and data teams operate. We empower leaders from over 32,000 organizations to make timely and intelligent decisions. Our comprehensive solutions span Financial Planning and Analysis (FP&A), Controllership, and Data and Analytics. We give finance teams the insights required to navigate any economic climate and drive greater financial intelligence, while increasing productivity, visibility, accuracy, and compliance. We're looking for a talented Senior DevOps Engineering Manager to lead more than two DevOps teams responsible for developing, supporting, and maintaining our class-leading suite of Enterprise products. The chosen candidate must be a self-starter, possess great organizational skills, and have excellent communication abilities: a proven results-oriented person with a delivery focus and a demonstrated ability to achieve stretch goals in a highly innovative and fast-paced environment. We enjoy our work as much as we enjoy working together and want Senior DevOps Managers who can get things done while having a positive influence on our workplace environment. The successful candidate must have a passion for development, deeply care about code quality, and be committed to continuous improvement.

Responsibilities: Drive a vision of Infrastructure as Code (IaC) within the product. Hands-on design and implementation of DevOps orchestration from the ground up, working from the high-level backlog and helping the DevOps team members with execution. Develop best practices for the team and take responsibility for the architecture and technical leadership of the entire DevOps infrastructure. Lead a team of talented and high-impact DevOps engineers, providing cultural, technical, and hands-on leadership.
Work closely with the Product teams to develop the best technical design and approach for product deployments within our Azure, GCP, and AWS cloud environments. Translate complex functional and technical requirements into detailed project plans and schedules; manage the day-to-day activities of the DevOps team by defining, implementing, and maintaining a coherent, progressive deployment strategy for various SaaS products. Understand requirements and guide the team in building reusable code. Manage departmental resources and staffing, and enhance and maintain a best-of-class DevOps team. Build and maintain the production infrastructure and services within AWS/GCP/Azure. Resolve escalations arising from operations and work with various stakeholder teams and leaders to solve problems incrementally. Collaborate with management and product teams to plan, estimate, and prioritize roadmap objectives. Drive, enhance, and implement incident resolution processes. Participate in on-call rotations for mission-critical production functions. Develop and maintain processes, tools, and documentation in support of production. Deploy applications in an automated fashion by creating and managing CI/CD pipelines within Jenkins/Azure DevOps. Oversee our code repository structures and branching strategy/implementation. Oversee the design, implementation, and management of CI/CD pipelines, automated testing, and deployment frameworks. Collaborate with security teams to implement robust security practices and ensure compliance with relevant regulations and standards. Ability to lead audits (ISO 27001, SOC 1, SOC 2) across products. Set and communicate team/individual objectives and KPIs to inspire individuals to achieve high performance; this may include defining cross-team objectives too. Mentor and guide teammates in DevOps best practices. Recognize teams and individuals, and prioritize building a culture of valuing people.
Achievements/Goals: By Dec 2024 - Automated application deployments and upgrades to AWS and GCP. By Dec 2024 - Automated infrastructure deployments to GCP. Overall DevOps ownership of the Exago, Composer, and Logi products - CI/CD, compliance, security, and business continuity.

Qualifications 15+ years of strong experience with DevOps technologies, cloud-based provisioning, monitoring, and troubleshooting in AWS/Azure/GCP and Linux/Windows OS flavors. Ability to manage large cloud infrastructure operations, including inventory and cloud cost management. Experience mentoring, coaching, and/or leading a team, and able to communicate effectively with all levels of the organization. Excellent project management, communication, and interpersonal skills. Deep understanding of AWS, Azure, or other cloud providers (e.g., GCP). Experience with RDBMS like MySQL and Postgres. Understanding of key security technologies and protocols such as TLS and OAuth. Experience using IaC frameworks such as Terraform, Ansible, or Pulumi. Experience working with Azure DevOps/Jenkins. Experience in and demonstrated understanding of source control management concepts such as branching, merging, and integration, primarily in Git (or able to draw parallels from Bitbucket or GitHub). Experience with network fundamentals such as TCP/IP, DNS, DHCP, routing, Transit Gateway, and Direct Connect. Expertise in designing HA, DR, backup, and retention architectures on cloud platforms. Experience with security fundamentals such as endpoint protection, vulnerability management, firewalls, WAF, Control Tower, GuardDuty, Advisor, CSPM, ISO, CIS, SOX, and HIPAA. Expertise in preparing technical documentation. Experience in creating diagrams, solution proposals, and solution designs, and in presenting solutions to a larger audience.
Expert-level experience in containerization and orchestration technologies like Docker and Kubernetes. Development experience with analytics, monitoring, and observability solutions, primarily Datadog (or able to draw parallels from the ELK Stack or Prometheus & Grafana) for monitoring, metrics, logging, alerts, and tracing. Knowledge of scripting in Python, Shell, Bash, or Groovy. Good to have: experience with deployment and support of queuing infrastructure (e.g., Kafka, AMQP, RabbitMQ). Additional Information ** At this time insightsoftware is not able to offer sponsorship to candidates who are not eligible to work in the country where the position is located. ** Background checks are required for employment with insightsoftware, where permitted by country, state/province.
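The monitoring and alerting stack named above (Datadog, Prometheus & Grafana) evaluates checks of roughly this shape: an error rate over a sliding window compared against a threshold. A minimal sketch follows; the window size, threshold, and `ErrorRateAlert` class are illustrative assumptions, not any vendor's API:

```python
from collections import deque

class ErrorRateAlert:
    """Sliding-window error-rate check: fires when the fraction of
    recent failed requests exceeds the configured threshold."""

    def __init__(self, window=100, threshold=0.05):
        self.samples = deque(maxlen=window)  # keeps only the newest samples
        self.threshold = threshold

    def record(self, is_error):
        self.samples.append(1 if is_error else 0)

    @property
    def firing(self):
        if not self.samples:
            return False
        return sum(self.samples) / len(self.samples) > self.threshold

# Illustrative run: 4 errors out of the last 10 requests against a 20% threshold.
alert = ErrorRateAlert(window=10, threshold=0.2)
for outcome in [0, 0, 1, 0, 1, 1, 0, 0, 0, 1]:
    alert.record(outcome)
print(alert.firing)  # 0.4 > 0.2 -> True
```

In a real observability pipeline the equivalent logic lives in the monitoring backend's alert rule language; a sketch like this is mainly useful for ad-hoc log analysis or testing alert thresholds offline.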
Posted 2 weeks ago