
5 Cloud DNS Jobs

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

6.0 - 10.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

You should have 6-10 years of development experience in Java/J2EE, with strong knowledge of core Java. You must be proficient in Spring frameworks, particularly Spring MVC, Spring Boot, and JPA with Hibernate. Hands-on experience with microservice technology, including development of RESTful and SOAP web services, is essential, along with a good understanding of Oracle DB. Your communication skills, especially when interacting with clients, should be excellent.

Experience with build tools such as Maven, deployment, and troubleshooting is necessary. Knowledge of CI/CD tools such as Jenkins and experience with Git or similar source control tools is expected. You should also be familiar with Agile/Scrum software development methodologies using tools like Jira, Confluence, and Bitbucket, and have performed requirement analysis. Knowledge of frontend stacks such as React or Angular, as well as frontend and backend API integration, would be beneficial. Experience with AWS, CI/CD best practices, and designing security reference architectures for AWS infrastructure applications is advantageous.

You should possess good verbal and written communication skills, the ability to multitask in a fast-paced environment, and be highly organized and detail-oriented. Awareness of common information security principles and practices is required.

TELUS International is committed to creating a diverse and inclusive workplace and is an equal opportunity employer. All employment decisions are based on qualifications, merit, competence, and performance without regard to any characteristic related to diversity.

Posted 4 days ago

Apply

10.0 - 14.0 years

0 Lacs

Karnataka

On-site

As a Data Engineer (ETL, Big Data, Hadoop, Spark, GCP) at Assistant Vice President level, located in Pune, India, you will be responsible for developing and delivering engineering solutions to achieve business objectives. You are expected to have a strong understanding of crucial engineering principles within the bank and to be skilled in root cause analysis while addressing enhancements and fixes for product reliability and resiliency. Working independently on medium to large projects with strict deadlines, you will collaborate in a cross-application technical environment and demonstrate a solid hands-on development track record within an agile methodology. This role involves collaborating with a globally dispersed team and is integral to the development of the Compliance tech internal team in India, delivering enhancements in compliance tech capabilities to meet regulatory commitments.

Your key responsibilities will include analyzing data sets, designing and coding stable and scalable data ingestion workflows, integrating them with existing workflows, and developing analytics algorithms on the ingested data. You will also work on data sourcing in Hadoop and GCP, owning unit testing, UAT deployment, end-user sign-off, and production go-live. Root cause analysis skills will be essential for identifying bugs and issues and for supporting the production support and release management teams. You will operate in an agile scrum team and ensure that new code is thoroughly tested at both unit and system levels.

To excel in this role, you should have over 10 years of coding experience with reputable organizations, hands-on experience with Bitbucket and CI/CD pipelines, and proficiency in Hadoop, Python, Spark, SQL, Unix, and Hive. A basic understanding of on-prem and GCP data security, as well as hands-on development experience with large ETL/big data systems (GCP experience being a plus), is required. Familiarity with cloud services such as Cloud Build, Artifact Registry, Cloud DNS, and Cloud Load Balancing, along with Dataflow, Cloud Composer, Cloud Storage, and Dataproc, is essential. Knowledge of data quality dimensions and data visualization is beneficial.

You will receive comprehensive support, including training and development opportunities, coaching from experts in your team, and a culture of continuous learning to facilitate your career progression. The company fosters a collaborative and inclusive work environment, empowering employees to excel together every day. As part of Deutsche Bank Group, we encourage applications from all individuals and promote a positive and fair workplace culture. For further details about our company and teams, please visit our website: https://www.db.com/company/company.htm
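To give a feel for the hands-on level this listing implies, here is a minimal PySpark ingestion sketch. The bucket path, table name, and column names are hypothetical placeholders and are not taken from the posting; a real workflow would add the validation and scheduling the role describes.

# Minimal PySpark ingestion sketch (hypothetical paths, tables, and columns).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("compliance-ingest").enableHiveSupport().getOrCreate()

# Read a raw CSV drop from a (hypothetical) GCS bucket; assumes the GCS connector is on the classpath.
raw = spark.read.option("header", "true").csv("gs://example-bucket/compliance/trades/2024-06-01/")

# Basic data-quality checks before loading: drop duplicates and null keys.
clean = (
    raw.dropDuplicates(["trade_id"])
       .filter(F.col("trade_id").isNotNull())
       .withColumn("ingest_date", F.current_date())
)

# Write to a partitioned Hive table for downstream analytics.
clean.write.mode("append").partitionBy("ingest_date").saveAsTable("compliance.trades_ingested")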

Posted 3 weeks ago

Apply

8.0 - 13.0 years

30 - 35 Lacs

Noida

Work from Office

Must have GCP Solution Architect Certification and GKE experience.

Mandatory Skills (Technical Qualification/Knowledge):
- Expertise in assessing, designing, and implementing GCP solutions covering compute, network, storage, identity, security, DR/business continuity strategy, migration, templates, cost optimization, PowerShell, Terraform, Ansible, etc.
- Must hold the GCP Solution Architect certification.
- Prior experience executing large, complex cloud transformation programs, including discovery, assessment, business case creation, design, build, migration planning, and migration execution.
- Prior experience using industry-leading or native discovery, assessment, and migration tools.
- Good knowledge of cloud technology, different patterns, deployment methods, and application compatibility.
- Good knowledge of GCP technologies and associated components and variations:
  - Anthos Application Platform
  - Compute Engine, Compute Engine Managed Instance Groups, Kubernetes
  - Cloud Storage, Cloud Storage for Firebase, Persistent Disk, Local SSD, Filestore, Transfer Service
  - Virtual Private Cloud (VPC), Cloud DNS, Cloud Interconnect, Cloud VPN Gateway, Network Load Balancing, global load balancing, firewall rules, Cloud Armor
  - Cloud IAM, Resource Manager, Multi-factor Authentication, Cloud KMS
  - Cloud Billing, Cloud Console, Stackdriver
  - Cloud SQL, Cloud Spanner, Cloud Bigtable
  - Cloud Run container services, Kubernetes Engine (GKE), Anthos Service Mesh, Cloud Functions, PowerShell on GCP
- Solid understanding of and experience in cloud-computing-based services architecture, technical design, and implementation, including IaaS, PaaS, and SaaS.
- Design of client cloud environments with a focus mainly on GCP, demonstrating technical cloud architectural knowledge.
- Play a vital role in the design of production, staging, QA, and development cloud infrastructures running in 24x7 environments.
- Deliver customer cloud strategies aligned with the customer's business objectives, with a focus on cloud migrations and DR strategies.
- Nurture cloud computing expertise internally and externally to drive cloud adoption.
- Deep understanding of the IaaS and PaaS services offered on cloud platforms and how to combine them to build complex solutions.
- Ensure that all cloud solutions follow security and compliance controls, including data sovereignty.
- Deliver cloud platform architecture documents detailing how GCP infrastructure and platform services support the overall application architecture, interacting with application, database, and testing teams to provide a holistic view to the customer.
- Collaborate with application architects and DevOps to modernize infrastructure-as-a-service (IaaS) applications to platform as a service (PaaS).
- Create solutions that support a DevOps approach for delivery and operations of services.
- Interact with and advise business representatives of the application regarding functional and non-functional requirements.
- Create proofs of concept to demonstrate the viability of solutions under consideration.
- Develop enterprise-level conceptual solutions and sponsor consensus/approval for global applications.
- Working knowledge of other architecture disciplines, including application, database, infrastructure, and enterprise architecture.
- Identify and implement best practices, tools, and standards.
- Provide consultative support to the DevOps team for production incidents.
- Drive and support system reliability, availability, scale, and performance activities.
- Evangelize cloud automation; act as a thought leader and expert defining standards for building and maintaining cloud platforms.
- Knowledgeable about configuration management tools such as Chef, Puppet, or Ansible.
- Automation skills using CLI scripting in any language (Bash, Perl, Python, Ruby, etc.).
- Ability to develop a robust design that meets customer business requirements with scalability, availability, performance, and cost effectiveness using GCP offerings.
- Ability to identify and gather requirements to define an architectural solution that can be successfully built and operated on GCP.
- Ability to produce high-level and low-level designs for the GCP platform, which may also include data center design as necessary.
- Ability to provide GCP operations and deployment guidance and best practices throughout the lifecycle of a project.
- Understanding of the significance of the different monitoring metrics and their threshold values, with the ability to take corrective measures based on those thresholds.
- Knowledge of automation to reduce the number of incidents, or repetitive incidents, is preferred.
- Good knowledge of cloud center operations, monitoring tools, and backup solutions.

GKE:
- Set up monitoring and logging to troubleshoot a cluster or debug a containerized application.
- Manage Kubernetes objects using declarative and imperative paradigms for interacting with the Kubernetes API.
- Manage confidential settings data using Secrets.
- Configure load balancing, port forwarding, or firewall and DNS configurations to access applications in a cluster; configure networking for the cluster.
- Hands-on experience with Terraform, including the ability to write reusable Terraform modules.
- Hands-on Python and Unix shell scripting is required.
- Understanding of CI/CD pipelines in a globally distributed environment using Git, Artifactory, Jenkins, and Docker registries.
- Experience with GCP services and writing Cloud Functions.
- Hands-on experience deploying and managing Kubernetes infrastructure with Terraform Enterprise.
- Certified Kubernetes Administrator (CKA) and/or Certified Kubernetes Application Developer (CKAD) is a plus.
- Experience using Docker within container orchestration platforms such as GKE.
- Knowledge of setting up Splunk.
- Knowledge of Spark on GKE.

Certification: GCP Solution Architect and GKE.

Process/Quality Knowledge:
- Must have clear knowledge of ITIL-based service delivery; ITIL certification is desired.
- Knowledge of quality and security processes.
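As an illustration of the "manage Kubernetes objects" and "managing Secrets" items above, here is a minimal sketch using the official Kubernetes Python client; the namespace and secret names are hypothetical and not taken from the posting.

# Minimal sketch using the official Kubernetes Python client
# (namespace and secret names are hypothetical).
import base64
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running inside a pod
v1 = client.CoreV1Api()

# Imperative interaction with the Kubernetes API: list pods in a namespace.
for pod in v1.list_namespaced_pod(namespace="payments").items:
    print(pod.metadata.name, pod.status.phase)

# Read a Secret and decode one of its keys (Secret data is base64-encoded).
secret = v1.read_namespaced_secret(name="db-credentials", namespace="payments")
password = base64.b64decode(secret.data["password"]).decode("utf-8")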

Posted 1 month ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Mumbai

Work from Office

Make an impact with NTT DATA. Join a company that is pushing the boundaries of what is possible. We are renowned for our technical excellence and leading innovations, and for making a difference to our clients and society. Our workplace embraces diversity and inclusion; it's a place where you can grow, belong, and thrive.

Key Responsibilities:
- Minimum 5 years of experience managing VMware/Hyper-V environments.
- Strong knowledge of vSAN, NSX, and vROps automation infrastructure.
- Good hands-on experience installing and configuring servers/VMs and infrastructure services (DNS, DHCP, file services, AD, etc.).
- Knowledge of client/server and virtual technologies.
- Proficient in network principles and protocols such as IP subnetting, routing, firewall rules, Virtual Private Cloud, Load Balancer, Cloud DNS, and Cloud CDN.
- Knowledge of monitoring tools and reporting.
- Strong knowledge of server infrastructure, virtualization (VMware), and cloud computing.
- Experience working with some or all of the following technologies: Windows Server 2012/2016, AD/DNS/DHCP/ADFS, WSUS/SCCM, Windows Cluster, VMware, Hyper-V, enterprise backup and storage.
- Maintain equipment based on SOPs and ensure immediate escalation.
- Storage knowledge: LUN creation and deletion, shared folder creation and permissions; file server knowledge.
- Change and operations project management; design documentation and reporting.
- Obtain resolutions from third parties, vendors, and suppliers for faults.
- Effective handling of technical escalations.
- Ensure service delivery as per the agreed SOW/SLA.
- Troubleshoot high-severity incidents and problems, provide proper RFO/root cause analysis, and perform preventive maintenance.
- Participate in change management, assessing risk and impact.
- Mentor and hand-hold L1/L2 teams and enhance their performance.
- Prepare MIS customer reports on the technology side.
- Ability to design and manage cloud-based infrastructures that deliver the required performance, security, and availability.
- Ability to understand migration requirements and bridge the gaps.
- Expertise in architecture blueprints and detailed documentation.
- Plan projects together with other developers; develop the application structure; create cloud products; carry out disaster recovery; apply development optimization tools; automate work processes such as deployment; create product features; implement modifications and updates after release.
- Excellent knowledge of AWS/Azure tools.
- Ability to conduct testing at different levels and stages of a project.
- Knowledge of scripting languages and of optimization, automation, integration, and productivity tools.
- Experience designing and building web environments on AWS, including working with services such as EC2, ELB, RDS, S3, Lambda, Security Groups, and Auto Scaling.
- A solid background in Linux and Windows server system administration.
- Experience using monitoring solutions like CloudWatch, the ELK Stack, and Prometheus.
- Designing and deploying enterprise-wide scalable operations on cloud platforms.
- Implementing cost-control strategies.
- Identify, analyze, and resolve infrastructure vulnerabilities and application deployment issues.
- Regularly review existing systems and make recommendations for improvements.
- Interact with clients, provide cloud support, and make recommendations based on client needs.

Academic Qualifications and Certifications:
- Bachelor's degree or equivalent qualification in IT/Computing (or demonstrated equivalent work experience).
- Certifications relevant to the services provided (certifications carry additional weight in a candidate's qualification for the role).
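For the AWS scripting side of this role, here is a minimal boto3 sketch of the kind of environment audit it implies; the region and output fields are illustrative assumptions, not requirements from the posting.

# Minimal boto3 sketch (region is a hypothetical example).
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")

# List running instances, a typical first step when reviewing an AWS environment.
response = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)
for reservation in response["Reservations"]:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"], instance["InstanceType"], instance.get("PrivateIpAddress"))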

Posted 1 month ago

Apply

7 - 12 years

15 - 20 Lacs

Navi Mumbai, Bengaluru, Mumbai (All Areas)

Work from Office

Key Responsibilities:
- Design, implement, and maintain GCP cloud infrastructure using Infrastructure as Code (IaC) tools.
- Manage and optimize Kubernetes clusters on GKE (Google Kubernetes Engine).
- Build and maintain CI/CD pipelines for efficient application delivery.
- Monitor GCP infrastructure costs and drive optimization strategies.
- Develop observability solutions using GCP-native and third-party tools.
- Collaborate with engineering teams to streamline deployment and operations workflows.
- Enforce security best practices and ensure compliance with internal and industry standards.
- Design and implement high availability (HA) and disaster recovery (DR) architectures.

Mandatory Technical Skills:
- GCP services: Compute Engine, VPC, Cloud Storage, Cloud SQL, IAM, Cloud DNS, Cloud Monitoring.
- Infrastructure as Code: Terraform (preferred), Deployment Manager.
- Containerization: Docker, Kubernetes (GKE expertise required).
- CI/CD tools: GitHub Actions, Cloud Build, Jenkins, or similar.
- Version control: Git.
- Scripting languages: Python, Bash.
- Monitoring and logging: Stackdriver, Prometheus, Grafana, ELK Stack.
- Strong experience with automation and configuration management (Terraform, Ansible, etc.).
- Solid understanding of cloud security best practices.
- Experience designing fault-tolerant, resilient cloud-native architectures.

Experience:
- 4-7 years in DevOps/Cloud Engineering roles.
- Minimum 2+ years hands-on with GCP infrastructure and services.
- Proven experience managing CI/CD pipelines and container-based deployments.
- Strong background in modern DevOps tools and cloud-native architectures.
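Since Cloud DNS and Python scripting both appear in the mandatory skills, here is a minimal sketch assuming the google-cloud-dns client library; the project, zone, and record names are hypothetical and not part of the posting.

# Minimal sketch assuming the google-cloud-dns client library
# (project, zone, and record names are hypothetical).
from google.cloud import dns

client = dns.Client(project="example-project")
zone = client.zone("example-zone", "example.com.")

# Inspect the records currently served by the managed zone.
for record in zone.list_resource_record_sets():
    print(record.name, record.record_type, record.ttl, record.rrdatas)

# Stage and apply a change set that adds an A record for a new service endpoint.
change = zone.changes()
change.add_record_set(zone.resource_record_set("api.example.com.", "A", 300, ["10.20.30.40"]))
change.create()  # submits the change to the Cloud DNS API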

Posted 2 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies