3 - 8 years
5 - 10 Lacs
Bengaluru
Work from Office
Project Role: Application Support Engineer
Project Role Description: Act as a software detective, providing a dynamic service that identifies and solves issues within multiple components of critical business systems.
Must-have skills: Google Kubernetes Engine
Good-to-have skills: Kubernetes, Google Cloud Compute Services
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years of full-time education

About the Role (Job Summary): We are seeking a motivated and talented GCP & Kubernetes Engineer to join our growing cloud infrastructure team. This role will be a key contributor in building and maintaining our Kubernetes platform, working closely with architects to design, deploy, and manage cloud-native applications on Google Kubernetes Engine (GKE).

Responsibilities:
- Extensive hands-on experience with Google Cloud Platform (GCP) and Kubernetes implementations.
- Demonstrated expertise in operating and managing container orchestration engines such as Docker or Kubernetes.
- Knowledge of or experience with Kubernetes ecosystem tools such as Kubekafka, Kubegres, Helm, Ingress, Redis, Grafana, and Prometheus.
- Proven track record in supporting and deploying various public cloud services.
- Experience in building or managing self-service platforms to boost developer productivity.
- Proficiency with Infrastructure as Code (IaC) tools such as Terraform.
- Skilled in diagnosing and resolving complex issues in automation and cloud environments.
- Advanced experience in architecting and managing highly available, high-performance multi-zonal or multi-regional systems.
- Strong understanding of infrastructure CI/CD pipelines and associated tools.
- Collaborate with internal teams and stakeholders to understand user requirements and implement technical solutions.
- Experience working in GKE and Edge/GDCE environments.
- Assist development teams in building and deploying microservices-based applications in public cloud environments.

Technical Skillset:
- Minimum of 3 years of hands-on experience migrating or deploying GCP cloud-based solutions.
- At least 3 years of experience architecting, implementing, and supporting GCP infrastructure and topologies.
- Over 3 years of experience with GCP IaC, particularly Terraform, including writing and maintaining Terraform configurations and modules.
- Experience deploying container-based systems such as Docker or Kubernetes on both private and public clouds (GCP GKE).
- Familiarity with CI/CD tools (e.g., GitHub) and processes.

Certifications:
- GCP ACE certification is mandatory.
- CKA certification is highly desirable.
- HashiCorp Terraform certification is a significant plus.
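For context on the kind of GKE support work this posting describes, here is a minimal, illustrative Python sketch (not part of the posting). It assumes the official kubernetes client library and an already-configured kubeconfig for the cluster (e.g. via `gcloud container clusters get-credentials`); the namespace is a placeholder.

```python
# Illustrative triage helper for GKE application support (assumed setup:
# `pip install kubernetes` and a kubeconfig pointing at the cluster).
from kubernetes import client, config


def list_unhealthy_pods(namespace="default"):
    config.load_kube_config()            # reads the local kubeconfig
    v1 = client.CoreV1Api()
    unhealthy = []
    for pod in v1.list_namespaced_pod(namespace).items:
        phase = pod.status.phase
        if phase not in ("Running", "Succeeded"):
            unhealthy.append(f"{pod.metadata.name}: {phase}")
    return unhealthy


if __name__ == "__main__":
    for line in list_unhealthy_pods():
        print(line)
```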
Posted 2 months ago
3 - 8 years
3 - 7 Lacs
Bengaluru
Work from Office
Project Role: Application Support Engineer
Project Role Description: Act as a software detective, providing a dynamic service that identifies and solves issues within multiple components of critical business systems.
Must-have skills: Google Kubernetes Engine
Good-to-have skills: Kubernetes, Google Cloud Compute Services
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years of full-time education

About the Role (Job Summary): We are seeking a motivated and talented GCP & Kubernetes Engineer to join our growing cloud infrastructure team. This role will be a key contributor in building and maintaining our Kubernetes platform, working closely with architects to design, deploy, and manage cloud-native applications on Google Kubernetes Engine (GKE).

Responsibilities:
- Extensive hands-on experience with Google Cloud Platform (GCP) and Kubernetes implementations.
- Demonstrated expertise in operating and managing container orchestration engines such as Docker or Kubernetes.
- Knowledge of or experience with Kubernetes ecosystem tools such as Kubekafka, Kubegres, Helm, Ingress, Redis, Grafana, and Prometheus.
- Proven track record in supporting and deploying various public cloud services.
- Experience in building or managing self-service platforms to boost developer productivity.
- Proficiency with Infrastructure as Code (IaC) tools such as Terraform.
- Skilled in diagnosing and resolving complex issues in automation and cloud environments.
- Advanced experience in architecting and managing highly available, high-performance multi-zonal or multi-regional systems.
- Strong understanding of infrastructure CI/CD pipelines and associated tools.
- Collaborate with internal teams and stakeholders to understand user requirements and implement technical solutions.
- Experience working in GKE and Edge/GDCE environments.
- Assist development teams in building and deploying microservices-based applications in public cloud environments.

Technical Skillset:
- Minimum of 3 years of hands-on experience migrating or deploying GCP cloud-based solutions.
- At least 3 years of experience architecting, implementing, and supporting GCP infrastructure and topologies.
- Over 3 years of experience with GCP IaC, particularly Terraform, including writing and maintaining Terraform configurations and modules.
- Experience deploying container-based systems such as Docker or Kubernetes on both private and public clouds (GCP GKE).
- Familiarity with CI/CD tools (e.g., GitHub) and processes.

Certifications:
- GCP ACE certification is mandatory.
- CKA certification is highly desirable.
- HashiCorp Terraform certification is a significant plus.
Posted 2 months ago
4 - 9 years
8 - 18 Lacs
Pune, Bengaluru, Hyderabad
Hybrid
Role & responsibilities:
- Knowledge in Product Safety
- WWI modifications for SDS and Labels
- Verisk / 3E updates
- GLM Substance Volume Tracking
- Recipe Development

Preferred candidate profile

Perks and benefits
Posted 2 months ago
3 - 8 years
3 - 7 Lacs
Bengaluru
Work from Office
Project Role: Application Support Engineer
Project Role Description: Act as a software detective, providing a dynamic service that identifies and solves issues within multiple components of critical business systems.
Must-have skills: Google Kubernetes Engine
Good-to-have skills: Kubernetes, Google Cloud Compute Services
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years of full-time education

About the Role (Job Summary): We are seeking a motivated and talented GCP & Kubernetes Engineer to join our growing cloud infrastructure team. This role will be a key contributor in building and maintaining our Kubernetes platform, working closely with architects to design, deploy, and manage cloud-native applications on Google Kubernetes Engine (GKE).

Responsibilities:
- Extensive hands-on experience with Google Cloud Platform (GCP) and Kubernetes implementations.
- Demonstrated expertise in operating and managing container orchestration engines such as Docker or Kubernetes.
- Knowledge of or experience with Kubernetes ecosystem tools such as Kubekafka, Kubegres, Helm, Ingress, Redis, Grafana, and Prometheus.
- Proven track record in supporting and deploying various public cloud services.
- Experience in building or managing self-service platforms to boost developer productivity.
- Proficiency with Infrastructure as Code (IaC) tools such as Terraform.
- Skilled in diagnosing and resolving complex issues in automation and cloud environments.
- Advanced experience in architecting and managing highly available, high-performance multi-zonal or multi-regional systems.
- Strong understanding of infrastructure CI/CD pipelines and associated tools.
- Collaborate with internal teams and stakeholders to understand user requirements and implement technical solutions.
- Experience working in GKE and Edge/GDCE environments.
- Assist development teams in building and deploying microservices-based applications in public cloud environments.

Technical Skillset:
- Minimum of 3 years of hands-on experience migrating or deploying GCP cloud-based solutions.
- At least 3 years of experience architecting, implementing, and supporting GCP infrastructure and topologies.
- Over 3 years of experience with GCP IaC, particularly Terraform, including writing and maintaining Terraform configurations and modules.
- Experience deploying container-based systems such as Docker or Kubernetes on both private and public clouds (GCP GKE).
- Familiarity with CI/CD tools (e.g., GitHub) and processes.

Certifications:
- GCP ACE certification is mandatory.
- CKA certification is highly desirable.
- HashiCorp Terraform certification is a significant plus.
Posted 2 months ago
12 - 17 years
10 - 14 Lacs
Bengaluru
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: Solution Architecture
Good-to-have skills: Microsoft Azure Architecture
Minimum 12 year(s) of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your day will involve overseeing the application development process and ensuring seamless communication within the team and with stakeholders.

Roles & Responsibilities:
- Expected to be an SME; collaborate with and manage the team to perform; responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Lead the effort to design, build, and configure applications, acting as the primary point of contact.
- Ensure seamless communication within the team and with stakeholders; provide guidance and support to team members.
- Architectural Leadership: Serve as the Azure Solution Architect with specific sector knowledge, setting the direction for cloud architecture and ensuring alignment with the organization's technical strategy and O&G industry standards. Uphold industry best practices and standards specific to the O&G sector.
- Technology Roadmap: Construct and continuously update an Azure-focused technology roadmap aligned with the organization's long-term goals. Explore and identify cutting-edge Azure services and features that can propel technological advancement. Strategically plan and implement upgrades to bolster the organization's competitive position and enhance the scalability of Azure-based solutions.
- Solution Design: Take the lead in designing and architecting complex Azure solutions, with a strong focus on scalability, robust security, cost-effectiveness, and alignment with the nuanced demands of the O&G industry.
- Stakeholder Engagement: Work in tandem with various service lines, such as engineering divisions and business stakeholders, to align Azure architectural strategies with core business objectives and ensure the designs are in sync with the business's forward direction. Effectively communicate Azure technical strategies to non-technical stakeholders, facilitating their participation in informed decision-making.
- Mentorship and Guidance: Offer Azure technical leadership and mentorship to solution squads. Cultivate an environment of innovation, continuous improvement, and technical prowess across the organization.
- Compliance and Best Practices: Guarantee that Azure solutions meet regulatory demands and O&G-specific standards, including those related to safety, environment, and operations.
- Risk Assessment: Proactively identify and assess technical risks linked to Azure infrastructure and applications. Collaborate with multifaceted teams to formulate and implement measures to mitigate the detected risks. As a Solution Architect, it is crucial to pinpoint potential risks during the solution development phase and devise comprehensive risk mitigation plans for all solutions crafted.
- Industry Expertise: Stay informed about emerging technologies, trends, and standards in the oil and gas industry. Evaluate the potential impact of new technologies and provide recommendations for adoption, both in upcoming solution designs and in enhancements to existing solution architectures.
- Vendor Management: Engage with external vendors and technology associates to scrutinize third-party offerings compatible with Azure. Integrate these third-party solutions seamlessly, ensuring they complement and reinforce the broader Azure architectural strategy and the business objectives.

Professional & Technical Skills:
- Must-have skills: Proficiency in Solution Architecture and Microsoft Azure Architecture.
- Strong understanding of cloud computing principles.
- Experience in designing scalable and secure applications.
- Knowledge of software development lifecycle methodologies.
- Ability to analyze complex technical requirements and provide innovative solutions.
- Must have a Master's or Bachelor's degree in computer science, engineering, information technology, or a relevant field.
- Relevant certifications such as Microsoft Certified: Azure Solutions Architect Expert or similar; Microsoft AZ-900 and AZ-305 certifications.
- Experience with TOGAF, ArchiMate, Zachman, or equivalent architecture frameworks.
- Experience in automation using Python, Gen AI, AI Ops, etc.
- Experience with data integration, data warehousing, and big data technologies.
- Experience with containerization and orchestration tools (e.g., any 2 of the following: Docker, OpenShift, Kubernetes, ECS, GKE, AKS, EKS, Rancher, Apache Mesos, Nomad, Docker Swarm).
- Understanding of the O&G sector's operational workflows, including the intricacies of exploration, extraction, refining, and distribution activities, to tailor cloud-based solutions that complement the industry's unique needs.
- Competence in tackling technical hurdles specific to the O&G domain, such as efficient asset management in isolated areas, processing extensive seismic datasets, and ensuring compliance with strict regulatory frameworks.
- Proficiency in leveraging Azure cloud technologies to enhance the O&G industry's operational effectiveness, utilizing tools like IoT, advanced data analytics, and machine learning for better results.

Additional Information:
- The candidate should have a minimum of 12 years of experience in Solution Architecture.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.
Posted 2 months ago
3 - 7 years
13 - 18 Lacs
Pune
Work from Office
About the Role:
Job Title: Technical Specialist - Big Data (PySpark) Developer
Location: Pune, India

Role Description: This role is for an Engineer responsible for the design, development, and unit testing of software applications. The candidate is expected to ensure that good-quality, maintainable, scalable, and high-performing software applications are delivered to users in an Agile development environment. The candidate should come from a strong technological background and have good working experience in Python and Spark technology, should be hands-on and able to work independently with minimal technical/tool guidance, and should be able to technically guide and mentor junior resources in the team. As a developer, you will bring extensive design and development skills to reinforce the group of developers within the team. The candidate will extensively use and apply Continuous Integration tools and practices in the context of Deutsche Bank's digitalization journey.

What we'll offer you: As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
- Best-in-class leave policy.
- Gender-neutral parental leaves.
- 100% reimbursement under the childcare assistance benefit (gender neutral).
- Sponsorship for industry-relevant certifications and education.
- Employee Assistance Program for you and your family members.
- Comprehensive hospitalization insurance for you and your dependents.
- Accident and term life insurance.
- Complimentary health screening for those 35 yrs. and above.

Your key responsibilities:
- Design and discuss your own solutions for addressing user stories and tasks.
- Develop, unit-test, integrate, deploy, maintain, and improve software.
- Perform peer code reviews.
- Actively participate in sprint activities and ceremonies, e.g., daily stand-up/scrum meeting, sprint planning, retrospectives, etc.
- Apply continuous integration best practices in general (SCM, build automation, unit testing, dependency management).
- Collaborate with other team members to achieve the sprint objectives.
- Report progress and update Agile team management tools (JIRA/Confluence).
- Manage individual task priorities and deliverables; be responsible for the quality of the solutions you provide.
- Contribute to planning and continuous improvement activities, and support the PO, ITAO, developers, and Scrum Master.

Your skills and experience:
- Engineer with good development experience on a Big Data platform for at least 5 years.
- Hands-on experience in Spark (Hive, Impala).
- Hands-on experience in the Python programming language.
- Preferably, experience in BigQuery, Dataproc, Composer, Terraform, GKE, Cloud SQL, and Cloud Functions.
- Experience in set-up, maintenance, and ongoing development of continuous build/integration infrastructure as part of DevOps; create and maintain fully automated CI build processes and write build and deployment scripts.
- Experience with development platforms (OpenShift/Kubernetes/Docker) configuration and deployment with DevOps tools, e.g., Git, TeamCity, Maven, SONAR.
- Good knowledge of core SDLC processes and tools such as HP ALM, Jira, and ServiceNow.
- Strong analytical skills and proficient communication skills; fluent in English (written/verbal).
- Ability to work in virtual teams and matrixed organizations; excellent team player.
- Open-minded and willing to learn business and technology; keeps pace with technical innovation; understands the relevant business area.
- Ability to share information and transfer knowledge and expertise to team members.
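As an illustration of the kind of PySpark development work this role describes, here is a minimal sketch (not taken from the posting); the input path, column names, and output path are placeholders.

```python
# Minimal PySpark sketch: read a CSV, aggregate per key, and write Parquet.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-aggregation").getOrCreate()

trades = (
    spark.read.option("header", True)
    .option("inferSchema", True)
    .csv("/data/in/trades.csv")          # placeholder input
)

summary = trades.groupBy("counterparty").agg(
    F.sum("notional").alias("total_notional"),
    F.count("*").alias("trade_count"),
)

summary.write.mode("overwrite").parquet("/data/out/trade_summary")  # placeholder output
spark.stop()
```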
How we'll support you:
- Training and development to help you excel in your career.
- Coaching and support from experts in your team.
- A culture of continuous learning to aid progression.
- A range of flexible benefits that you can tailor to suit your needs.
Posted 2 months ago
5 - 6 years
9 - 10 Lacs
Bengaluru
Work from Office
This role requires technical proficiency, problem-solving abilities, and a deep understanding of EHS processes in SAP to support and enhance our compliance, safety, and regulatory systems. We are looking for an EHS Consultant/Specialist with strong hands-on experience in SAP EHS.
Posted 2 months ago
3 - 5 years
4 - 8 Lacs
Pune
Work from Office
Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Google Cloud Data Services
Good-to-have skills: GCP Dataflow, Data Engineering
Minimum 3 year(s) of experience is required.
Educational Qualification: standard 15 years

Summary: As a Data Engineer, you will be responsible for designing, developing, and maintaining data solutions for data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across systems using Google Cloud Data Services.

Roles & Responsibilities:
- Design, develop, and maintain data solutions for data generation, collection, and processing using Google Cloud Data Services.
- Create data pipelines, ensure data quality, and implement ETL processes to migrate and deploy data across systems.
- Collaborate with cross-functional teams to identify and resolve data-related issues and ensure data accuracy and consistency.
- Develop and maintain data models, data dictionaries, and data flow diagrams to support data integration and data management processes.

Job Summary & Principal Duties:
- Solid experience with, and understanding of, the considerations for large-scale solutioning and operationalization of data warehouses, data lakes, and analytics platforms on GCP is a must.
- Monitor the data lake constantly and ensure that the appropriate support teams are engaged at the right times.
- Design, build, and test scalable data ingestion pipelines; perform end-to-end automation of ETL processes for the various datasets being ingested.
- Determine the best way to extract application telemetry data, structure it, and send it to the proper tool for reporting (Kafka, Splunk).
- Create reports to monitor usage data for billing and SLA tracking.
- Work with business and cross-functional teams to gather and document requirements to meet business needs.
- Provide support as required to ensure the availability and performance of ETL/ELT jobs.
- Provide technical assistance and cross-training to business and internal team members.
- Collaborate with business partners on continuous improvement opportunities.
- Must have experience working in an onshore/offshore model.
- Be responsible for removing blockers to tasks and communicating with client stakeholders to manage delivery risks.

Requirements / Job Specifications:
Education: Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field.
Experience, Skills & Qualifications:
- 3+ years of experience in Data Engineering with an emphasis on Data Warehousing and Data Analytics.
- 3+ years of experience with one of the leading public clouds.
- 2+ years of experience in the design and build of scalable data pipelines that deal with extraction, transformation, and loading.
- Mandatory: 3+ years of experience with Python, with working knowledge of Notebooks.
- Mandatory: 3+ years working on cloud data projects.
- Nice to have: Scala experience.
- Must have (for onshore candidates): 2+ years of hands-on experience on GCP cloud data implementation projects (Dataflow, Dataproc, Cloud Composer, BigQuery, Cloud Storage, GKE, Airflow, etc.).
- At least 2 years of experience in data governance and metadata management.
- Ability to work independently, solve problems, and update stakeholders.
- Analyze, design, develop, and deploy solutions as per business requirements.
- Strong understanding of relational and dimensional data modeling.
- Experience in DevOps and CI/CD-related technologies.
- Excellent written and verbal communication skills, including experience in technical documentation and the ability to communicate with business managers and executives.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Google Cloud Data Services.
- The ideal candidate will possess a strong educational background in computer science, information technology, or a related field, along with a proven track record of delivering impactful data-driven solutions.
- This position is based at our Pune office.
- Qualifications: standard 15 years.
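As a small illustration of the GCP data-loading work described above, here is a hedged sketch using the google-cloud-bigquery client; the project, dataset, table, and bucket names are placeholders, and application default credentials are assumed.

```python
# Load a CSV export from Cloud Storage into a BigQuery table (illustrative only).
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")            # placeholder project

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,                                           # infer the schema
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)

load_job = client.load_table_from_uri(
    "gs://my-bucket/exports/usage.csv",                        # placeholder source
    "my-gcp-project.analytics.usage_daily",                    # placeholder table
    job_config=job_config,
)
load_job.result()                                              # block until the load finishes

table = client.get_table("my-gcp-project.analytics.usage_daily")
print(f"Loaded {table.num_rows} rows into {table.full_table_id}")
```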
Posted 2 months ago
5 - 10 years
7 - 12 Lacs
Hyderabad
Work from Office
Project Role: Application Support Engineer
Project Role Description: Act as a software detective, providing a dynamic service that identifies and solves issues within multiple components of critical business systems.
Must-have skills: Linux Architecture
Good-to-have skills: Linux Operations, Linux Containers Administration
Minimum 5 year(s) of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As an Application Support Engineer, you will act as a software detective, providing a dynamic service that identifies and solves issues within multiple components of critical business systems. You will play a crucial role in ensuring the smooth functioning of the applications and resolving any technical issues that may arise.

Roles & Responsibilities:
- Expected to be an SME; collaborate with and manage the team to perform; responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Ensure the smooth functioning of critical business systems.
- Identify and resolve technical issues within multiple components of the applications.
- Collaborate with cross-functional teams to troubleshoot and resolve issues.
- Conduct root cause analysis and implement preventive measures to avoid future issues.

Professional & Technical Skills:
- Must-have skills: Proficiency in Linux Architecture, Linux Operations, Linux Containers Administration, Google Cloud Platform administration, Terraform, and Kubernetes/GKE.
- Good-to-have skills: Experience with Linux Operations, Linux Containers Administration, Google Cloud Platform administration, Terraform, and Kubernetes/GKE.
- Experience implementing, troubleshooting, and supporting Unix/Linux operating systems.
- Experience with mixed operating systems (e.g., Linux, Red Hat, Windows, Mac OS X, etc.).
- Administer cloud-based compute infrastructure, resource management, and orchestration.
- Administer Linux and Windows systems, including configuration, troubleshooting, and automation.
- Knowledge of TCP/IP networking, web servers, LAN/WAN environments, and OS internals (file systems, storage, process lifecycle, etc.).
- Experience implementing, troubleshooting, and supporting cloud infrastructure, including containerization with Docker and Kubernetes.
- Scripting experience, actively writing/modifying code to improve monitoring and automation (Perl, Python, Shell, etc.).
- Participate in and improve the lifecycle of services from inception and design, through deployment, operation, and refinement.
- Maintain services by measuring and monitoring availability, latency, and overall system health.

Skills required:
- Terraform, Kubernetes/GKE.
- IaC/configuration management tools, such as Ansible, SaltStack, Puppet.
- DevOps pipeline tools, such as Harness, GitHub, GitLab, Jenkins, Capistrano.
- Google production tools, such as borg, boq, pod, rapid.
- Coding experience with scripting languages like Golang or Python.
- Other GCP-specific cloud offerings: Cloud SQL, Cloud Run, GKE, load balancing, Cloud Monitoring, Cloud Logging.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Linux administration.
- This position is based at our Hyderabad office.
- 15 years of full-time education is required.
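For illustration of the scripting-for-monitoring work listed above, here is a minimal Python sketch (not from the posting); it assumes the third-party psutil package, and the thresholds and alert handling are placeholders.

```python
# Simple host health check of the kind support scripts automate (pip install psutil).
import psutil


def check_host(cpu_limit=90.0, mem_limit=90.0, disk_limit=85.0):
    alerts = []
    cpu = psutil.cpu_percent(interval=1)
    mem = psutil.virtual_memory().percent
    disk = psutil.disk_usage("/").percent
    if cpu > cpu_limit:
        alerts.append(f"CPU at {cpu:.0f}%")
    if mem > mem_limit:
        alerts.append(f"memory at {mem:.0f}%")
    if disk > disk_limit:
        alerts.append(f"root filesystem at {disk:.0f}%")
    return alerts


if __name__ == "__main__":
    for alert in check_host():
        print("ALERT:", alert)   # in practice this would page or post to a channel
```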
Posted 2 months ago
12 - 17 years
14 - 19 Lacs
Bengaluru
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: Solution Architecture
Good-to-have skills: Microsoft Azure Architecture
Minimum 12 year(s) of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your day will involve overseeing the application development process and ensuring seamless communication within the team and with stakeholders.

Roles & Responsibilities:
- Expected to be an SME; collaborate with and manage the team to perform; responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Lead the effort to design, build, and configure applications, acting as the primary point of contact.
- Ensure seamless communication within the team and with stakeholders; provide guidance and support to team members.
- Architectural Leadership: Serve as the Azure Solution Architect with specific sector knowledge, setting the direction for cloud architecture and ensuring alignment with the organization's technical strategy and O&G industry standards. Uphold industry best practices and standards specific to the O&G sector.
- Technology Roadmap: Construct and continuously update an Azure-focused technology roadmap aligned with the organization's long-term goals. Explore and identify cutting-edge Azure services and features that can propel technological advancement. Strategically plan and implement upgrades to bolster the organization's competitive position and enhance the scalability of Azure-based solutions.
- Solution Design: Take the lead in designing and architecting complex Azure solutions, with a strong focus on scalability, robust security, cost-effectiveness, and alignment with the nuanced demands of the O&G industry.
- Stakeholder Engagement: Work in tandem with various service lines, such as engineering divisions and business stakeholders, to align Azure architectural strategies with core business objectives and ensure the designs are in sync with the business's forward direction. Effectively communicate Azure technical strategies to non-technical stakeholders, facilitating their participation in informed decision-making.
- Mentorship and Guidance: Offer Azure technical leadership and mentorship to solution squads. Cultivate an environment of innovation, continuous improvement, and technical prowess across the organization.
- Compliance and Best Practices: Guarantee that Azure solutions meet regulatory demands and O&G-specific standards, including those related to safety, environment, and operations.
- Risk Assessment: Proactively identify and assess technical risks linked to Azure infrastructure and applications. Collaborate with multifaceted teams to formulate and implement measures to mitigate the detected risks. As a Solution Architect, it is crucial to pinpoint potential risks during the solution development phase and devise comprehensive risk mitigation plans for all solutions crafted.
- Industry Expertise: Stay informed about emerging technologies, trends, and standards in the oil and gas industry. Evaluate the potential impact of new technologies and provide recommendations for adoption, both in upcoming solution designs and in enhancements to existing solution architectures.
- Vendor Management: Engage with external vendors and technology associates to scrutinize third-party offerings compatible with Azure. Integrate these third-party solutions seamlessly, ensuring they complement and reinforce the broader Azure architectural strategy and the business objectives.

Professional & Technical Skills:
- Must-have skills: Proficiency in Solution Architecture and Microsoft Azure Architecture.
- Strong understanding of cloud computing principles.
- Experience in designing scalable and secure applications.
- Knowledge of software development lifecycle methodologies.
- Ability to analyze complex technical requirements and provide innovative solutions.
- Must have a Master's or Bachelor's degree in computer science, engineering, information technology, or a relevant field.
- Relevant certifications such as Microsoft Certified: Azure Solutions Architect Expert or similar; Microsoft AZ-900 and AZ-305 certifications.
- Experience with TOGAF, ArchiMate, Zachman, or equivalent architecture frameworks.
- Experience in automation using Python, Gen AI, AI Ops, etc.
- Experience with data integration, data warehousing, and big data technologies.
- Experience with containerization and orchestration tools (e.g., any 2 of the following: Docker, OpenShift, Kubernetes, ECS, GKE, AKS, EKS, Rancher, Apache Mesos, Nomad, Docker Swarm).
- Understanding of the O&G sector's operational workflows, including the intricacies of exploration, extraction, refining, and distribution activities, to tailor cloud-based solutions that complement the industry's unique needs.
- Competence in tackling technical hurdles specific to the O&G domain, such as efficient asset management in isolated areas, processing extensive seismic datasets, and ensuring compliance with strict regulatory frameworks.
- Proficiency in leveraging Azure cloud technologies to enhance the O&G industry's operational effectiveness, utilizing tools like IoT, advanced data analytics, and machine learning for better results.

Additional Information:
- The candidate should have a minimum of 12 years of experience in Solution Architecture.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.
- Qualifications: 15 years of full-time education.
Posted 2 months ago
5 - 7 years
1000 Lacs
Bengaluru
Work from Office
Overview: A Data Engineer will be responsible for understanding the client's technical requirements and for designing and building data pipelines to support those requirements. In this role, the Data Engineer, besides developing the solution, will also oversee other Engineers' development. This role requires strong verbal and written communication skills and the ability to communicate effectively with the client and the internal team. A strong understanding of databases, SQL, cloud technologies, and modern data integration and orchestration tools like GCP Dataflow, GKE, Workflows, Cloud Build, and Airflow is required to succeed in this role.

Responsibilities:
- Play a critical role in the design and implementation of data platforms for AI products.
- Develop productized and parameterized data pipelines that feed AI products.
- Develop efficient data transformation code in Spark (in Python and Scala) and Dask.
- Develop efficient data transformation code in Apache Beam (Dataflow), in Java and Python.
- Develop efficient microservice code in Spring Boot (on GKE, in Java).
- Build workflows to automate data pipelines using Python, Argo, and Cloud Build.
- Develop data validation tests to assess the quality of the input data.
- Conduct performance testing and profiling of the code using a variety of tools and techniques.
- Guide Data Engineers in delivery teams to follow best practices in deploying data pipeline workflows.
- Build data pipeline frameworks to automate high-volume and real-time data delivery for our data hub.
- Operationalize scalable data pipelines to support data science and advanced analytics.
- Optimize customer data science workloads and manage cloud services costs/utilization.
- Develop sustainable data-driven solutions with current and new-generation data technologies to drive our business and technology strategies.

Qualifications:
- Minimum education: Bachelor's, Master's, or Ph.D. degree in Computer Science or Engineering.
- Minimum work experience (years):
  - 5+ years of experience programming with at least one of the following languages: Python, Scala, Go.
  - 5+ years of experience in SQL and data transformation.
  - 5+ years of experience in developing distributed systems using open-source technologies such as Spark and Dask.
  - 5+ years of experience with relational or NoSQL databases running in Linux environments (MySQL, MariaDB, PostgreSQL, MongoDB, Redis).
- Key skills and competencies:
  - Experience working with AWS / Azure / GCP environments is highly desired.
  - Experience with data models in the Retail and Consumer Products industry is desired.
  - Experience working on agile projects and understanding of agile concepts is desired.
  - Demonstrated ability to learn new technologies quickly and independently.
  - Excellent verbal and written communication skills, especially in technical communications.
  - Ability to work toward and achieve stretch goals in a very innovative and fast-paced environment.
  - Ability to work collaboratively in a diverse team environment.
  - Ability to telework.
  - Expected travel: not expected.
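To illustrate the "data transformation code in Apache Beam (Dataflow) in Python" called out above, here is a minimal sketch (not part of the posting) that runs locally with the DirectRunner; switching to the DataflowRunner (plus project/region/temp_location pipeline options) would run it on GCP. The file names and columns are placeholders.

```python
# Minimal Apache Beam pipeline: sum an amount per key from a CSV (illustrative only).
import apache_beam as beam


def parse_line(line):
    # expects lines of the form "store_id,amount"
    store_id, amount = line.split(",")
    return store_id, float(amount)


with beam.Pipeline() as pipeline:           # DirectRunner by default
    (
        pipeline
        | "Read" >> beam.io.ReadFromText("sales.csv", skip_header_lines=1)
        | "Parse" >> beam.Map(parse_line)
        | "SumPerStore" >> beam.CombinePerKey(sum)
        | "Format" >> beam.MapTuple(lambda store, total: f"{store},{total}")
        | "Write" >> beam.io.WriteToText("sales_by_store")
    )
```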
Posted 2 months ago
2 - 7 years
4 - 9 Lacs
Karnataka
Work from Office
Description / skill-set:
- ReST/API concepts and principles
- Gateways, API security
- Node.js
- Microservices
- DevOps: IaC (Terraform), CI/CD (Git, Jenkins/Bamboo/Codefresh), Docker
- Splunk/Dynatrace
- Cloud: AWS/GCP, EKS/GKE
- Communication, problem solving

Additional Details: Global Grade: C. Level: to be defined. Named Job Posting (if yes, needs SCSC approval): No. Remote work possibility: No. Global Role Family: 60236 (P) Software Engineering. Local Role Name: 6504 Developer / Software Engineer. Local Skills: 37105 Node JS. Languages required: English. Role Rarity: to be defined.
Posted 2 months ago
2 - 7 years
4 - 9 Lacs
Karnataka
Work from Office
Description: Cloud Engineer - GCP
- Strong expertise in cloud platform engineering on GCP.
- Strong experience with Service-Oriented Architecture, cloud, and Kubernetes.
- Knowledge of one or more programming languages such as Go, Python, or JavaScript.
- Experience working with infrastructure-as-code and config-as-code tooling and methodologies (e.g., Terraform).
- Experience with the Continuous Integration (CI/CD) tooling used at ANZ, such as Codefresh and Google Cloud Build.
- Experience with the monitoring and APM tools used at ANZ, such as Prometheus, Grafana, Splunk, and Dynatrace.
- Experience or knowledge of Site Reliability Engineering practices.
- Experience building self-service alerting functionality on top of monitoring and APM tools (Slack/ServiceNow, etc.).
- Ability to work with a DevOps mindset.

Senior / Lead GCP Platform Engineer - skills breakdown:
- Large-organisation experience:
  - Experience using multi-project organisational structures.
- Strong knowledge of GCP services, including but not limited to:
  - Hands-on GCP networking skills (e.g., Shared Virtual Private Cloud (VPC), subnetworks, firewall rules, Cloud Router, Cloud DNS, load balancing, Interconnect, etc.).
  - Thorough understanding of networking concepts, especially TCP/IP, IP addressing, and subnet calculation.
  - Solid experience with GCP security services: Identity and Access Management (IAM), Cloud Identity-Aware Proxy (IAP), Key Management Service (KMS), Security Command Center, Secret Manager, Resource Manager, etc.
  - Good knowledge of various GCP integration patterns: Cloud Functions with Cloud Pub/Sub, Cloud Storage, and Cloud SQL.
  - Any workload-related experience is a bonus, e.g., Kubernetes Engine, Google Compute Engine, App Engine, etc.
  - Containerization experience with Docker and GKE (preferred).
- Infrastructure as code and scripting:
  - Solid hands-on experience with declarative languages, Google Cloud Deployment Manager (and Terraform preferred), and their capabilities.
  - Comfortable with Bash scripting and at least one programming language (Python or Go preferred).
  - Sound knowledge of secure coding practices and configuration/secrets management.
  - Knowledge of writing unit and integration tests; experience writing infrastructure unit tests, Terratest preferred.
- Solid understanding of CI/CD:
  - Solid understanding of zero-downtime deployment patterns.
  - Experience with automated continuous integration testing, including security testing using SAST tools.
  - Experience with automated CI/CD pipeline tooling; Cloud Build preferred. Experience creating runners and Docker images.
- Experience using version control systems such as Git:
  - Exposed to and comfortable working on large source code repositories in a team environment.
  - Solid expertise with Git and Git workflows, working within mid-to-large (infra) product development teams.
- General / infrastructure experience:
  - Experience with cloud ops (DNS, backups, cost optimization, capacity management, monitoring/alerting, patch management, etc.).
  - Exposure to complex application environments, including containerized as well as serverless applications.
  - Windows and/or Linux systems administration experience (preferred); experience with Active Directory (preferred).
  - Exposure to multi-cloud and hybrid infrastructure, and to large-scale on-premise-to-cloud infrastructure migrations.
  - Solid experience working with mission-critical production systems.

Additional Details: Global Grade: C. Level: to be defined. Named Job Posting (if yes, needs SCSC approval): No. Remote work possibility: No. Global Role Family: 60236 (P) Software Engineering. Local Role Name: 6504 Developer / Software Engineer. Local Skills: 58565 Cloud. Languages required: English. Role Rarity: to be defined.
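As a small illustration of the GCP security-services knowledge listed above (Secret Manager in particular), here is a hedged Python sketch; it assumes the google-cloud-secret-manager client library and application default credentials, and the project and secret IDs are placeholders.

```python
# Read a secret version from GCP Secret Manager instead of hard-coding credentials.
from google.cloud import secretmanager


def read_secret(project_id, secret_id, version="latest"):
    client = secretmanager.SecretManagerServiceClient()
    name = f"projects/{project_id}/secrets/{secret_id}/versions/{version}"
    response = client.access_secret_version(request={"name": name})
    return response.payload.data.decode("UTF-8")


if __name__ == "__main__":
    db_password = read_secret("my-gcp-project", "db-password")   # placeholder names
    print("fetched secret of length", len(db_password))          # never log the value itself
```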
Posted 2 months ago
5 - 10 years
7 - 12 Lacs
Pune
Work from Office
Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Google Cloud Data Services, Python (Programming Language), GCP Dataflow, Apache Airflow
Good-to-have skills: NA
Minimum 5 year(s) of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions for data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across systems. You will be responsible for designing and implementing data solutions that meet the needs of the organization and contribute to its overall success.

Roles & Responsibilities:
- Expected to be an SME; collaborate with and manage the team to perform; responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Design and develop data pipelines to extract, transform, and load data.
- Ensure data quality and integrity throughout the data processing lifecycle.
- Implement ETL processes to migrate and deploy data across systems.
- Collaborate with cross-functional teams to understand data requirements and design appropriate solutions.

Professional & Technical Skills:
- Must-have: Strong proficiency in Python (programming language), Apache Airflow, and Google Cloud Data Services.
- Must-have: 5+ years of Python programming experience covering complex data structures as well as data pipeline development.
- Must-have: 5+ years of experience with Python libraries such as Airflow, Pandas, PySpark, Redis, and SQL (or similar libraries).
- Must-have: 3+ years of strong SQL programming experience with mid/advanced functions such as aggregate functions (SUM, AVG), conditional functions (CASE WHEN, NULLIF), mathematical functions (ROUND, ABS), ranking functions (RANK), and windowing functions.
- Nice to have: 3+ years of experience with Dataflow, Dataproc, Cloud Composer, BigQuery, Cloud Storage, GKE, etc.
- Nice to have: As an alternative to Google Cloud, 3+ years of data pipeline development using Python on any other cloud platform can be considered.
- Strong experience with one of the leading public clouds.
- Strong experience in the design and build of scalable data pipelines that deal with extraction, transformation, and loading.
- Strong experience in Data Engineering with an emphasis on Data Warehousing and Data Analytics.
- Mandatory experience: years of experience with Python, with working knowledge of Notebooks.
- Mandatory: years working on cloud data projects.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Google Cloud Data Services.
- This position is based at our Pune office.
- 15 years of full-time education is required.
- Qualifications: 15 years of full-time education.
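As an illustration of the Python/Airflow pipeline work this posting centres on, here is a minimal DAG sketch (not from the posting); the file paths, column names, and schedule are placeholders.

```python
# Minimal Airflow DAG: extract a CSV, aggregate with pandas, write the result.
from datetime import datetime

import pandas as pd
from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    df = pd.read_csv("/tmp/raw_orders.csv")               # placeholder source
    df.to_csv("/tmp/orders_staged.csv", index=False)


def transform():
    df = pd.read_csv("/tmp/orders_staged.csv")
    daily = df.groupby("order_date", as_index=False)["amount"].sum()
    daily.to_csv("/tmp/orders_daily.csv", index=False)    # placeholder target


with DAG(
    dag_id="orders_daily_rollup",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",                           # `schedule` in newer Airflow releases
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task
```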
Posted 3 months ago
5 - 10 years
15 - 25 Lacs
Pune
Work from Office
DevOps + GCP, 5+ years. Location: Pune.
Essential experience:
- Proficient with CI/CD toolchains (e.g., Azure DevOps, Jenkins, Git, Artifactory, etc.)
- Proficient in one or more scripting languages for automation (e.g., Linux Bash, PowerShell, Python)
- Proficient in provisioning platforms via Infrastructure-as-Code (IaC) techniques (e.g., Terraform, YAML, Azure Resource Manager (ARM))
- Working experience configuring, securing, and administering platforms in Azure; knowledge of cloud infrastructure and networking principles (e.g., Azure PaaS, IaaS)
- Demonstrable knowledge of working with distributed data platforms (e.g., Azure ADLS, data lakes)
- Experience working with vulnerability management and code-inspection tooling (e.g., Snyk, SonarQube)
- Possess an automation-first mindset when building solutions, with considerations for self-healing and fault-tolerant methods to minimize manual intervention and downtime
Posted 3 months ago
5 - 9 years
7 - 11 Lacs
Bengaluru
Work from Office
About the Role:
Job Title: Engineer - Java
Location: Bangalore

Role Description: The Engineer is responsible for managing or performing work across multiple areas of the bank's overall IT platform/infrastructure, including analysis, development, and administration. It may also involve taking functional oversight of engineering delivery for specific departments. Work includes:
- Planning and developing entire engineering solutions to accomplish business goals.
- Building reliability and resiliency into solutions with appropriate testing and reviewing throughout the delivery lifecycle.
- Ensuring maintainability and reusability of engineering solutions.
- Ensuring solutions are well architected and can be integrated successfully into the end-to-end business process flow.
- Reviewing engineering plans and quality to drive re-use and improve engineering capability.
- Participating in industry forums to drive adoption of innovative technologies, tools, and solutions in the Bank.

Deutsche Bank's Corporate Bank division is a leading provider of cash management, trade finance, and securities finance. We complete green-field projects that deliver the best Corporate Bank - Securities Services products in the world. Our team is diverse, international, and driven by a shared focus on clean code and valued delivery. At every level, agile minds are rewarded with competitive pay, support, and opportunities to excel. You will work as part of a cross-functional agile delivery team. You will bring an innovative approach to software development, focusing on using the latest technologies and practices, as part of a relentless focus on business value. You will be someone who sees engineering as a team activity, with a predisposition to open code, open discussion, and creating a supportive, collaborative environment. You will be ready to contribute to all stages of software delivery, from initial analysis right through to production support.

What we'll offer you: As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
- Best-in-class leave policy.
- Gender-neutral parental leaves.
- 100% reimbursement under the childcare assistance benefit (gender neutral).
- Sponsorship for industry-relevant certifications and education.
- Employee Assistance Program for you and your family members.
- Comprehensive hospitalization insurance for you and your dependents.
- Accident and term life insurance.
- Complimentary health screening for those 35 yrs. and above.

Your key responsibilities:
- Responsible for designing, building, implementing, and maintaining software applications using Java and related technologies.
- Should have proficiency in Java, Spring Boot, and the related tech stack, as well as strong problem-solving skills and the ability to work in an agile development environment.
- Responsible for development of microservices in Java, integrating APIs, and collaborating with team members to deliver high-quality software solutions.

Your skills and experience:
- Engineering degree with 5-9 years of experience.
- Experience working with global teams.
- Hands-on experience in Java, Google Cloud Platform, and DevOps.
- Developer tools and practices: an Integrated Development Environment (IDE) such as IntelliJ IDEA or Eclipse for Java development, and Visual Studio Code or WebStorm for React development; version control with Git for managing source code and collaborating with team members; build tools such as Maven or Gradle for managing dependencies and building Java projects, and Webpack for bundling and optimizing React applications.
- Testing frameworks: JUnit for unit testing Java code, Jest and Enzyme for testing React components, and Selenium for automated browser testing.
- Agile methodologies: practices like Scrum or Kanban for iterative and collaborative software development.
- Continuous Integration/Continuous Deployment (CI/CD): tools like Jenkins, Travis CI, or GitLab CI/CD for automating the build, testing, and deployment processes.
- Good working knowledge of various async messaging streams such as Kafka, IBM MQ, etc.
- Good understanding of implementing various design patterns to improve application performance.
- Good understanding of various object-oriented design principles.
- Knowledge of Compute Engine for virtual machines, Cloud Storage for object storage, Cloud Functions for serverless computing, and GKE on GCP is desirable.
- Strong stakeholder management skills and the ability to communicate at a senior level.
- Proven experience of delivering results in matrixed organizations under pressure and tight timescales.
- Excellent verbal, interpersonal, and written communication skills.
- Bachelor's degree in computer science or a related field.
- Ability to work in a dynamic, agile environment, leading or working with geographically distributed teams.

How we'll support you:
- Training and development to help you excel in your career.
- Coaching and support from experts in your team.
- A culture of continuous learning to aid progression.
- A range of flexible benefits that you can tailor to suit your needs.

About us and our teams: Please visit our company website for further information: https://www.db.com/company/company.htm
We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative, and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
Posted 3 months ago
6 - 11 years
13 - 18 Lacs
Mumbai
Work from Office
Job Summary: This position provides leadership in full systems life cycle management (e.g., analyses, technical requirements, design, coding, testing, implementation of systems and applications software, etc.) to ensure delivery is on time and within budget. He/She directs component and data architecture design, technology planning, and testing for Applications Development (AD) initiatives to meet business requirements and ensure compliance. This position develops and leads AD project activities and integrations. He/She guides teams to ensure effective communication and achievement of objectives. This position researches and supports the integration of emerging technologies. He/She provides knowledge and support for applications development, integration, and maintenance. This position leads junior team members with project-related activities and tasks. He/She guides and influences department and project teams. This position facilitates collaboration with stakeholders.

Responsibilities:
- GCP services (e.g., BigQuery, GKE, Spanner, Cloud Run, Dataflow, etc.), Angular, Java (REST APIs), SQL, Python, Terraform, Azure DevOps CI/CD pipelines.
- Leads systems analysis and design.
- Leads design and development of applications.
- Develops and ensures creation of application documents.
- Defines and produces integration builds.
- Monitors emerging technology trends.
- Leads maintenance and support.

Primary Skills:
- Minimum 5 years of Java Spring Boot/J2EE (full-stack developer).
- Minimum 2 years on the GCP platform (Cloud Pub/Sub, GKE, BigQuery); experience with Bigtable and Spanner will be a plus.
- Working in an Agile environment; CI/CD experience.

Qualifications: Bachelor's degree or international equivalent; a Bachelor's degree or international equivalent in Computer Science, Information Systems, Mathematics, Statistics, or a related field is preferred.
Employee Type: Permanent
Posted 3 months ago
3 - 8 years
5 - 10 Lacs
Pune
Work from Office
Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Google Cloud Data Services, Python (Programming Language), Apache Airflow, Data Engineering
Good-to-have skills: NA
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions for data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across systems. You will play a crucial role in managing and optimizing data infrastructure to support business needs and enable data-driven decision-making.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation and contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Design and develop data solutions for data generation, collection, and processing.
- Create and maintain data pipelines to ensure efficient data flow.
- Implement ETL (extract, transform, load) processes to migrate and deploy data across systems.
- Ensure data quality and integrity by performing data validation and cleansing.
- Collaborate with cross-functional teams to understand data requirements and provide technical expertise.
- Optimize data infrastructure and performance to support business needs.
- Troubleshoot and resolve data-related issues in a timely manner.

Professional & Technical Skills:
- Must-have: Strong proficiency in Python (programming language), Apache Airflow, and Google Cloud Data Services.
- Must-have: 3+ years of Python programming experience covering complex data structures as well as data pipeline development.
- Must-have: 3+ years of experience with Python libraries such as Airflow, Pandas, PySpark, Redis, and SQL (or similar libraries).
- Must-have: 3+ years of strong SQL programming experience with mid/advanced functions such as aggregate functions (SUM, AVG), conditional functions (CASE WHEN, NULLIF), mathematical functions (ROUND, ABS), ranking functions (RANK), and windowing functions.
- Nice to have: 3+ years of experience with Dataflow, Dataproc, Cloud Composer, BigQuery, Cloud Storage, GKE, etc.
- Nice to have: As an alternative to Google Cloud, 3+ years of data pipeline development using Python on any other cloud platform can be considered.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Google Cloud Data Services.
- This position is based in Pune.
- 15 years of full-time education is required.
- Qualifications: 15 years of full-time education.
Posted 3 months ago
5 - 10 years
7 - 12 Lacs
Pune
Work from Office
Job Title:GCP Cloud Engineer, AS Location:Pune, India Role Description About the business area: The Chief Information Security Office (CISO) is responsible for addressing information security risks to the Deutsche Bank global IT. As a GCP Cloud Engineer, you will play a critical role in our security Engineering team, focusing on security monitoring, threat remediation, and cloud infrastructure automation. You will be responsible for designing, implementing, and managing Google Cloud services, secure networking, and multi-SIEM environments. Additionally, you will drive Infrastructure as Code (IaC) adoption, ensuring a secure, scalable, and fully automated cloud security ecosystem. What we'll offer you As part of our flexible scheme, here are just some of the benefits that youll enjoy, Best in class leave policy. Gender neutral parental leaves 100% reimbursement under childcare assistance benefit (gender neutral) Sponsorship for Industry relevant certifications and education Employee Assistance Program for you and your family members Comprehensive Hospitalization Insurance for you and your dependents Accident and Term life Insurance Complementary Health screening for 35 yrs. and above Your key responsibilities Deploy, and manage scalable cloud infrastructure solutions on Google Cloud Platform (GCP) using Terraform. Implement and maintain Google DevSecOps best practices, ensuring security is embedded throughout the development lifecycle and compliance is upheld. Develop and maintain Infrastructure as Code (IaC) scripts and templates to automate cloud resource provisioning and SIEM configurations. Manage and optimize GitHub Actions & Runners for seamless CI/CD pipeline automation. Work cross-functionally to enhance existing integration automation and workflows. Oversee Incident & Problem Management, Change & Release Management, Vendor Management, and Capacity Planning to ensure platform reliability. Diagnose and resolve technical issues efficiently while implementing proactive solutions to minimize recurrence. Collaborate withsecurity vendorsand integrate third-party security and monitoring tools with GCP. Ensure compliance with security policies, standards, and best practices. Troubleshoot and resolve infrastructure-related issues in a timely manner. Advocate and implement GCP best practices to optimize infrastructure. Document infrastructure processes, configurations, and operational workflows for improved knowledge sharing and process efficiency. Your skills and experience The ideal candidate holds a degree in Computer Science, Engineering, Information Technology, or a related field, with a minimum of 5+ years of hands-on experience in Google Cloud security, DevSecOps practices, infrastructure automation, and third-party security tool integrations within the Google Cloud environment. Additionally, candidates should have recent experience in: Cloud Security Engineering, including identity and access management (IAM), secure networking, and infrastructure as code (IaC). CI/CD pipeline management, security automation, and the integration of third-party security and monitoring tools within Google Cloud Platform (GCP). Building and managing scalable infrastructure and security platforms. SIEM tools and technologies, such as Google SecOps, Splunk, etc. Deploying applications and managing cloud infrastructure on GCP with a strong focus on security and compliance. Hands-on experience with Terraform Enterprise (TFE), and GitHub Actions for infrastructure automation. 
- In-depth knowledge of Google Cloud services (GCE, IAM, VPC, SCC, Logging, GKE, KMS) and best practices for cloud security.
- Experience in data ingestion from GCP services into a SIEM for monitoring and threat detection.
- Strong understanding of Google DevSecOps principles and security compliance frameworks.
- Ability to troubleshoot, diagnose, and prevent incidents in cloud infrastructure.
- Independent, proactive, and self-motivated approach to problem-solving and security threat mitigation.
- Strong collaboration skills to work effectively with vendors, cloud architects, security teams, and third-party service providers.
- Excellent written and verbal communication skills to document security processes and collaborate across teams.
- A passion for cybersecurity with a strong aptitude for identifying and solving security challenges in cloud environments.
Preferred Qualifications:
- Cloud certifications: Google Associate Cloud Engineer, HashiCorp Terraform Associate, Google Professional Cloud Security Engineer.
- Experience with container orchestration platforms such as Kubernetes.
- Expertise in Git version control, including advanced repository management and GitHub Actions workflow configuration, and familiarity with scripting languages like Python, PowerShell, or Bash.
- Prior experience working in financial institutions (e.g., banking, insurance, fintech).
- Familiarity with logging, monitoring, and observability tools in GCP (e.g., Cloud Logging, Cloud Monitoring, Google SecOps).
How we'll support you
- Training and development to help you excel in your career.
- Coaching and support from experts in your team.
- A culture of continuous learning to aid progression.
- A range of flexible benefits that you can tailor to suit your needs.
About us and our teams
Please visit our company website for further information: https://www.db.com/company/company.htm
We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
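This role pairs Terraform-based provisioning with DevSecOps guardrails. Purely as an illustrative sketch of that kind of work, not part of the role description, the Python script below scans a Terraform plan exported with `terraform show -json` for two common exposure patterns; the file name, the specific resource types checked, and the exit-code convention are assumptions made for the example, and the plan JSON layout should be verified against the Terraform version in use.

```python
#!/usr/bin/env python3
"""Illustrative DevSecOps guardrail: scan a Terraform plan (JSON form) for
resources that would expose data publicly before the change is applied.
The file name and the specific checks are examples only."""
import json
import sys

PUBLIC_MEMBERS = {"allUsers", "allAuthenticatedUsers"}


def find_public_exposure(plan_path: str):
    """Return human-readable findings for risky resource changes."""
    with open(plan_path) as fh:
        plan = json.load(fh)

    findings = []
    for change in plan.get("resource_changes", []):
        after = (change.get("change") or {}).get("after") or {}

        # Flag IAM grants on storage buckets that give access to everyone.
        if change.get("type") in {"google_storage_bucket_iam_member",
                                  "google_storage_bucket_iam_binding"}:
            members = set(after.get("members") or [after.get("member")])
            if members & PUBLIC_MEMBERS:
                findings.append(f"{change['address']}: public bucket IAM grant")

        # Flag compute instances that request an external IP address.
        if change.get("type") == "google_compute_instance":
            for nic in after.get("network_interface") or []:
                if nic.get("access_config"):
                    findings.append(f"{change['address']}: external IP attached")
    return findings


if __name__ == "__main__":
    issues = find_public_exposure(sys.argv[1] if len(sys.argv) > 1 else "plan.json")
    for issue in issues:
        print("GUARDRAIL:", issue)
    sys.exit(1 if issues else 0)
```

In a CI/CD pipeline such as GitHub Actions, a step of this kind would typically run between `terraform plan` and `terraform apply` and fail the job when findings are reported.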
Posted 3 months ago
4 - 9 years
6 - 11 Lacs
Pune
Work from Office
Job Title: Application Owner (ITAO) GCP
Corporate Title: AVP
Location: Pune, India
Role Description
The IT Application Owner (ITAO) is responsible for application management and governance tasks. They follow several possible service delivery approaches, acknowledge interference with the IT application lifecycle, and assist with incorporating the adopted approach into best practice. The ITAO is aware of gaps in the current infrastructure solutions and where industry innovations are along the maturity lifecycle. They work with application stakeholders to improve the infrastructure, ensuring compliance with the technical roadmap. The ITAO has a sound knowledge of development methodologies and the IT policies necessary to perform effectively in the organization, aligned to the bank's appetite for risk. The ITAO acts to improve the safety and security of the application, compliance with regulations, policies, and standards, enhance operational readiness, and ease maintenance of the environment for delivering change into production. The ITAO supports the bank's audit function in the remediation of audit points and self-identified issues to reduce risk. The ITAO is responsible for producing and maintaining accurate documentation on compliance with methodologies, IT policies and IT security requirements. The ITAO interacts with and influences colleagues on the governance of IT platform reliability and resilience. The candidate should have experience in Spark and GCP technology, should be hands-on, and should be able to work independently with minimal technical/tool guidance. The candidate will extensively use and apply Continuous Integration tools and practices in the context of Deutsche Bank's digitalization journey.
What we'll offer you
As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
- Best-in-class leave policy
- Gender-neutral parental leave
- 100% reimbursement under the childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Employee Assistance Program for you and your family members
- Comprehensive hospitalization insurance for you and your dependents
- Accident and term life insurance
- Complimentary health screening for employees aged 35 and above
Your key responsibilities
- Enterprise IT governance: Reviews current and proposed information systems for compliance with the organization's obligations (including regulatory, contractual, and agreed standards/policies) and adherence to overall strategy. Engages with project management to confirm that products developed meet the service acceptance criteria and are to the required standard. Performs application lifecycle management and strategic application planning. Initiates and delivers technical projects and the critical technology roadmap to maintain existing services and service levels.
- Problem management: Ensures that appropriate action is taken to anticipate, investigate and resolve problems in systems and services.
- Requirements definition and management: Assists in the definition and management of non-functional requirements.
- Application support: Drafts and maintains procedures and documentation for application support. Provides 3rd-level application support.
- Incident management: Ensures that incidents are handled according to agreed procedures. Ensures the smooth transition of applications into production.
- Asset management: Applies tools, techniques, and processes to create and maintain an accurate asset register.
- Information security: Communicates information security risks and issues to relevant stakeholders.
- Plan for application hardware / software / license upgrades or migration activities to align to compliant platforms. Support application software projects.
- Apply continuous integration best practices in general (SCM, build automation, unit testing, dependency management).
- Collaborate with other team members to achieve the sprint objectives. Report progress and update Agile team management tools (JIRA/Confluence).
- Manage individual task priorities and deliverables, and take responsibility for the quality of the solutions you provide.
- Contribute to planning and continuous improvement activities, and support the PO, Developers and Scrum Master.
Your skills and experience
- Engineer with at least 4 years of experience on the Google Cloud Platform.
- Strong knowledge of the core SDLC processes and tools such as HP ALM, Jira, ServiceNow.
- Hands-on experience with technologies such as BigQuery, Dataproc, Composer, Terraform, GKE, Cloud SQL and Cloud Functions.
- Hands-on experience in Unix/Linux environments.
- Experience in the set-up, maintenance, and ongoing development of continuous build/integration infrastructure as part of DevOps; able to create and maintain fully automated CI build processes and write build and deployment scripts.
- Experience with development platforms: OpenShift/Kubernetes/Docker configuration and deployment with DevOps tools, e.g., Git, TeamCity, Maven, SONAR.
- Preferably, experience in the Java programming language.
- Strong analytical skills, proficient communication skills, and fluent English (written/verbal).
- Ability to work in virtual teams and in matrixed organizations; excellent team player.
- Open-minded and willing to learn the business and technology; keeps pace with technical innovation; understands the relevant business area.
- Ability to share information and transfer knowledge and expertise to team members.
- Banking / financial industry exposure is a plus.
How we'll support you
- Training and development to help you excel in your career.
- Coaching and support from experts in your team.
- A culture of continuous learning to aid progression.
- A range of flexible benefits that you can tailor to suit your needs.
About us and our teams
Please visit our company website for further information: https://www.db.com/company/company.htm
We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
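The skills list above centres on Spark, Dataproc and BigQuery. As an illustration only (project, dataset, table and bucket names are placeholders, not references to a real environment), a minimal PySpark job of the kind typically submitted to Dataproc might look like the sketch below, reading and writing BigQuery through the spark-bigquery connector.

```python
"""Minimal PySpark job of the sort submitted to Dataproc: read a BigQuery
table, aggregate it, and write the result back.  All names are placeholders."""
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-aggregation-example").getOrCreate()

# Read via the spark-bigquery connector (shipped on Dataproc images that
# include it, or added with --jars / --packages at submit time).
events = (
    spark.read.format("bigquery")
    .option("table", "my-project.analytics.events")   # placeholder table
    .load()
)

daily_counts = (
    events.groupBy(F.to_date("event_timestamp").alias("event_date"), "event_type")
    .agg(F.count("*").alias("event_count"))
)

# Indirect writes stage data in GCS first; the bucket name is a placeholder.
(
    daily_counts.write.format("bigquery")
    .option("table", "my-project.analytics.daily_event_counts")
    .option("temporaryGcsBucket", "my-staging-bucket")
    .mode("overwrite")
    .save()
)

spark.stop()
```

A job like this would usually be submitted with `gcloud dataproc jobs submit pyspark` or scheduled through Composer, which is where the CI/CD practices mentioned above come into play.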
Posted 3 months ago
6 - 11 years
8 - 14 Lacs
Hyderabad
Work from Office
Primary Skills
- Proven experience with Google Cloud Platform (GCP) services (e.g., Compute Engine, Kubernetes Engine, Cloud Storage, Cloud Pub/Sub, BigQuery).
- Strong experience with DevOps practices, CI/CD pipelines, and automation tools.
- Proficiency in Infrastructure as Code (IaC) tools such as Terraform, Cloud Deployment Manager, or Ansible.
- Hands-on experience with containerization (Docker) and container orchestration (Kubernetes, GKE).
- Strong scripting skills (e.g., Bash, Python, Go, or shell scripting).
- Experience with monitoring, logging, and alerting using GCP-native or third-party tools (e.g., Cloud Monitoring, Stackdriver, Prometheus, Grafana).
- Experience with version control systems like Git.
- Solid understanding of networking, load balancing, and security best practices in cloud environments.
- Familiarity with database management and storage solutions on GCP (Cloud SQL, BigQuery, Cloud Spanner).
- GCP certification (e.g., Google Cloud Professional DevOps Engineer, Google Cloud Architect).
Works in the area of Software Engineering, which encompasses the development, maintenance and optimization of software solutions/applications.
1. Applies scientific methods to analyse and solve software engineering problems.
2. He/she is responsible for the development and application of software engineering practice and knowledge, in research, design, development and maintenance.
3. His/her work requires the exercise of original thought and judgement and the ability to supervise the technical and administrative work of other software engineers.
4. The software engineer builds skills and expertise of his/her software engineering discipline to reach standard software engineer skills expectations for the applicable role, as defined in Professional Communities.
5. The software engineer collaborates and acts as a team player with other software engineers and stakeholders.
Secondary Skills
- Familiarity with Agile development methodologies and Scrum/Kanban practices.
- Experience with configuration management tools like Chef, Puppet, or SaltStack.
- Knowledge of serverless computing on GCP (e.g., Cloud Functions, App Engine).
- Understanding of microservices architecture and how it applies to cloud environments.
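The monitoring and alerting items above usually translate into small pieces of glue code around the cluster and the managed services. As a hedged sketch of that kind of tooling (the restart threshold and the choice of loading a local kubeconfig are assumptions for the example, not requirements of the role), the following snippet uses the official Kubernetes Python client to surface pods with high container restart counts, which could then feed a Prometheus/Grafana or Cloud Monitoring alerting pipeline.

```python
"""Illustrative operational check, not a reference implementation: use the
official Kubernetes Python client to flag pods whose containers have restarted
more than a chosen threshold."""
from kubernetes import client, config

RESTART_THRESHOLD = 5  # arbitrary example value


def pods_with_excessive_restarts(threshold: int = RESTART_THRESHOLD):
    """Yield (namespace, pod, container, restarts) tuples worth alerting on."""
    config.load_kube_config()  # swap for config.load_incluster_config() inside a cluster
    core = client.CoreV1Api()
    for pod in core.list_pod_for_all_namespaces(watch=False).items:
        for status in pod.status.container_statuses or []:
            if status.restart_count > threshold:
                yield (pod.metadata.namespace, pod.metadata.name,
                       status.name, status.restart_count)


if __name__ == "__main__":
    for ns, pod, container, restarts in pods_with_excessive_restarts():
        print(f"{ns}/{pod} container={container} restarts={restarts}")
```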
Posted 3 months ago
6 - 11 years
8 - 14 Lacs
Noida
Work from Office
Primary Skills
- Proven experience as a Java Developer with strong experience in Java (Java 8 or higher).
- Hands-on experience with Google Cloud Platform (GCP) services such as Compute Engine, GKE (Google Kubernetes Engine), Cloud Functions, Cloud Pub/Sub, and Cloud SQL.
- Proficiency in Java frameworks like Spring Boot and Hibernate for building robust applications.
- Strong experience with microservices architecture and RESTful APIs.
- Familiarity with containerization (Docker) and orchestration using Kubernetes (GKE).
- Solid understanding of database management and cloud storage solutions (Cloud SQL, Firestore, BigQuery).
- Experience with CI/CD tools like Jenkins, Cloud Build, GitLab CI, or similar.
- Strong knowledge of cloud security practices, IAM (Identity and Access Management), and application security.
- Experience with version control tools like Git.
Works in the area of Software Engineering, which encompasses the development, maintenance and optimization of software solutions/applications.
1. Applies scientific methods to analyse and solve software engineering problems.
2. He/she is responsible for the development and application of software engineering practice and knowledge, in research, design, development and maintenance.
3. His/her work requires the exercise of original thought and judgement and the ability to supervise the technical and administrative work of other software engineers.
4. The software engineer builds skills and expertise of his/her software engineering discipline to reach standard software engineer skills expectations for the applicable role, as defined in Professional Communities.
5. The software engineer collaborates and acts as a team player with other software engineers and stakeholders.
Secondary Skills
- Familiarity with big data processing and tools like BigQuery and Dataflow.
- Knowledge of cloud-native design patterns and best practices.
- Familiarity with messaging and event-driven architectures (e.g., Cloud Pub/Sub, Apache Kafka).
- Knowledge of monitoring and logging tools like Stackdriver, Prometheus, or Grafana.
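The event-driven items above revolve around Cloud Pub/Sub. The role itself is Java/Spring Boot; the sketch below is in Python only to keep the examples in this document in a single language, and it assumes Application Default Credentials plus placeholder project and topic IDs. It is an illustration of the publish pattern, not the team's actual service code.

```python
"""Event-driven messaging sketch: publish one message to a Cloud Pub/Sub topic.
Project and topic IDs are placeholders; ADC credentials are assumed."""
import json
from google.cloud import pubsub_v1

PROJECT_ID = "my-project"   # placeholder
TOPIC_ID = "order-events"   # placeholder

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(PROJECT_ID, TOPIC_ID)

payload = json.dumps({"order_id": "12345", "status": "CREATED"}).encode("utf-8")

# publish() returns a future; result() blocks until the server acknowledges
# the message and returns the assigned message ID.
future = publisher.publish(topic_path, data=payload, source="orders-service")
print("Published message", future.result())
```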
Posted 3 months ago
10 - 16 years
10 - 20 Lacs
Mumbai Suburbs, Mumbai, Delhi
Work from Office
Education: B.E./B.Tech/MCA in Computer Science
Experience: 3 to 6 years of experience in Kubernetes/GKE/AKS/OpenShift administration
Mandatory Skills (Docker and Kubernetes)
- Good understanding of the components of the various types of Kubernetes clusters (Community/AKS/GKE/OpenShift)
- Provisioning experience with the various types of Kubernetes clusters (Community/AKS/GKE/OpenShift)
- Upgrade and monitoring experience across the various types of Kubernetes clusters (Community/AKS/GKE/OpenShift)
- Good experience with container security
- Good experience with container storage
- Good experience with CI/CD workflows (preferably Azure DevOps, Ansible and Jenkins)
- Good experience / knowledge of cloud platforms, preferably Azure / Google / OpenStack
- Good experience with container runtimes like Docker/containerd
- Basic understanding of application lifecycle management on a container platform
- Good understanding of container registries
- Good understanding of Helm and Helm charts
- Good understanding of container monitoring tools like Prometheus, Grafana and ELK
- Good experience with the Linux operating system
- Basic understanding of enterprise networks and container networks
- Able to handle Severity#2 and Severity#3 incidents
- Good communication skills and the ability to provide support
- Analytical and problem-solving capabilities; ability to work with teams
- Experience with a 24x7 operations support framework
- Knowledge of the ITIL process
Preferred Skills/Knowledge
- Container platforms: Docker, Kubernetes, GKE, AKS or OpenShift
- Automation platforms: shell scripts, Ansible, Jenkins
- Cloud platforms: GCP/Azure/OpenStack
- Operating systems: Linux/CentOS/Ubuntu
- Container storage and backup
Desired Skills
1. Certified Kubernetes Administrator, OR
2. Certified Red Hat OpenShift Administrator
3. Certification in the administration of any cloud platform will be an added advantage
Soft Skills
1. Must have good troubleshooting skills
2. Must be ready to learn new technologies and acquire new skills
3. Must be a team player
4. Should be good in spoken and written English
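Helm administration and upgrade planning of the kind listed above often starts with a quick inventory of what is deployed where. As an illustration only (not something the posting requires), this sketch shells out to the Helm CLI and groups releases by namespace; the JSON field names are assumed from current Helm 3 behaviour and should be verified against the installed version.

```python
"""Helm release inventory sketch (illustration only): summarise deployed
releases per namespace, e.g. ahead of a cluster upgrade.  Requires helm on
PATH and an existing kubeconfig context."""
import json
import subprocess
from collections import defaultdict


def helm_releases():
    """Return Helm's release list as parsed JSON."""
    out = subprocess.run(
        ["helm", "list", "--all-namespaces", "--output", "json"],
        check=True, capture_output=True, text=True,
    )
    return json.loads(out.stdout)


if __name__ == "__main__":
    by_namespace = defaultdict(list)
    for release in helm_releases():
        by_namespace[release.get("namespace", "?")].append(
            f'{release.get("name")} ({release.get("chart")}, {release.get("status")})'
        )
    for namespace, releases in sorted(by_namespace.items()):
        print(namespace)
        for line in releases:
            print("  -", line)
```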
Posted 3 months ago
5 - 10 years
7 - 12 Lacs
Pune
Work from Office
About The Role:
Job Title: Lead Engineer
Location: Pune, India
Role Description
We are seeking a highly motivated and experienced engineer with a strong foundation in containers, Google Kubernetes Engine (GKE), Anthos and GCP to join our Container Platform team. In this critical role, you will be responsible for defining, prioritizing, and delivering exceptional customer experiences for our container orchestration and hybrid/multi-cloud solutions. You will work closely with the CSO, SRE and cloud product teams to ensure successful product launches and ongoing feature updates.
What we'll offer you
As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
- Best-in-class leave policy
- Gender-neutral parental leave
- 100% reimbursement under the childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Employee Assistance Program for you and your family members
- Comprehensive hospitalization insurance for you and your dependents
- Accident and term life insurance
- Complimentary health screening for employees aged 35 and above
Your key responsibilities
- Assist the Product Owner in building and maintaining technical artifacts, including but not limited to the architecture reference document, the IAM policy document, and reusable code.
- Collaborate with engineers to prioritize the product backlog, ensuring alignment with the product roadmap and customer needs.
- Collaborate with CSO teams to define IAM policies and guardrails following the principle of least privilege.
- Work with the FinOps team to optimize cloud budgets for container platforms.
- Collaborate with the SRE and engineering teams on troubleshooting, helping CIO teams to unblock special cases.
- Work closely with stakeholders to gather and incorporate feedback on product features and functionality.
- Monitor product performance and gather user feedback to identify areas for improvement.
- Analyze product usage data and key performance indicators (KPIs) to measure product success and identify opportunities for optimization.
- Stay abreast of the latest developments in cloud-native technologies and GKE best practices.
Your skills and experience
Qualifications:
- Experience: 12+ years overall experience in technology/infrastructure, with 5+ years of experience as a platform engineer or platform lead on a container platform.
- Technical expertise: 5+ years of hands-on experience with containerization technologies (Docker, Kubernetes, GKE); 3+ years of hands-on experience with GCP, Anthos and hybrid-cloud environments; 3+ years of hands-on experience with CI/CD tools like GitHub Actions or equivalent, IaC tools like Terraform, and Linux and shell scripting. CKA certification along with the above experience is preferred.
- Soft skills: Data-driven decision-making and a customer-centric approach. Passion for technology and a strong desire to learn and grow. Ability to lead and mentor cross-functional teams, fostering a collaborative and innovative work environment. Be a brand ambassador for the container platform within the bank.
Benefits:
- Competitive salary and benefits package.
- Opportunity to work on cutting-edge technologies.
- Collaborative and innovative work environment.
- Opportunities for professional development and growth.
How we'll support you
- Training and development to help you excel in your career
- Coaching and support from experts in your team
- A culture of continuous learning to aid progression
- A range of flexible benefits that you can tailor to suit your needs
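Working with a FinOps team on container-platform budgets usually means producing simple per-tenant usage summaries. As a minimal sketch of that idea, assuming a local kubeconfig and handling only the most common Kubernetes resource-quantity suffixes, the snippet below totals CPU and memory requests per namespace; it is an illustration, not the platform's actual showback tooling.

```python
"""Capacity/FinOps sketch (illustrative only): total the CPU and memory
requests per namespace, a typical input to showback or budget reviews."""
from collections import defaultdict
from kubernetes import client, config

# Binary suffixes first so "Mi" matches before "M".
MEMORY_UNITS = [("Ki", 2**10), ("Mi", 2**20), ("Gi", 2**30), ("Ti", 2**40),
                ("K", 10**3), ("M", 10**6), ("G", 10**9), ("T", 10**12)]


def cpu_millicores(value: str) -> int:
    # "250m" -> 250, "2" -> 2000, "0.5" -> 500
    return int(value[:-1]) if value.endswith("m") else int(float(value) * 1000)


def memory_bytes(value: str) -> int:
    for suffix, factor in MEMORY_UNITS:
        if value.endswith(suffix):
            return int(float(value[: -len(suffix)]) * factor)
    return int(value)  # plain bytes


def requests_per_namespace():
    config.load_kube_config()
    totals = defaultdict(lambda: {"cpu_m": 0, "mem_bytes": 0})
    for pod in client.CoreV1Api().list_pod_for_all_namespaces(watch=False).items:
        for container in pod.spec.containers:
            requests = (container.resources and container.resources.requests) or {}
            totals[pod.metadata.namespace]["cpu_m"] += cpu_millicores(requests.get("cpu", "0"))
            totals[pod.metadata.namespace]["mem_bytes"] += memory_bytes(requests.get("memory", "0"))
    return totals


if __name__ == "__main__":
    for namespace, total in sorted(requests_per_namespace().items()):
        gib = total["mem_bytes"] / 2**30
        print(f"{namespace}: {total['cpu_m']}m CPU, {gib:.1f} GiB memory requested")
```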
Posted 3 months ago
1 - 6 years
8 - 18 Lacs
Mumbai Suburbs, Mumbai, Mumbai (All Areas)
Work from Office
Broad objective: Calculate reserves for the business by LOB and assess the Ultimate Loss Ratio (ULR) for all segments to ensure adequate reserves are built in.
Responsibility areas and key responsibilities:
Reserving analysis:
- Collate the data required to conduct the analysis; review the data to ensure accuracy.
- Conduct assumption analysis of the Non-Motor TP segment to ensure accuracy.
- Perform Reserve Adequacy Analysis to highlight any foreseeable risk; share the analysis and insights with stakeholders to enable key business decisions.
Measures: accuracy of data; timely completion and submission of actuarial deliverables; internal customer satisfaction based on the quality of service provided / interactions / support.
Capital Modelling:
- Review the data inputs received from the Finance department on the Asset Liability Management and Capital Modelling work to ensure accuracy.
- Assess the methodology used and its appropriateness; add parameters to the model where required.
- Improve the predictability of the model by using appropriate assumptions.
- Review results to ensure accuracy of calculations.
- Identify and highlight key risks to the Appointed Actuary.
- Highlight the impact of key internal and external changes to the model and its output.
- Review the capital model output to check for reasonability.
- Submit the report to the regulator in a timely and accurate manner.
- Perform Solvency Simulation Modelling and report to the Bank to assess capital adequacy.
- Conduct a check on solvency and prepare forecasts for Budget Loss Ratios.
Measures: accuracy of data; timely completion and submission of actuarial deliverables; timely and accurate submission of reports as requested by auditors and the regulator.
Regulatory Reporting:
- Collate data and prepare analysis for the relevant sections of the Financial Condition Report and the IBNR report.
- Review the analysis to ensure accuracy of calculations.
- Monitor queries from IRDA and ensure responses are provided in a timely and accurate manner.
- Work closely with the peer reviewers, statutory auditors and Comptroller and Auditor General (CAG) auditors to ensure they receive all the required data in a timely manner.
Measures: timely and accurate submission of reports as requested by auditors and the regulator.
Ad Hoc Analysis:
- Work with the stakeholders to understand the request.
- Collate the data required to perform the analysis; maintain a database for future ad hoc requests.
- Prepare the analysis and share the reports in a timely and accurate manner.
- Review the reports to ensure accuracy of data and calculations.
Measures: accuracy of data; internal customer satisfaction based on the quality of service provided / interactions / support.
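The posting does not prescribe a reserving method, so purely as an illustration of the ULR and IBNR arithmetic it refers to, the sketch below applies the standard Bornhuetter-Ferguson technique to a single segment; all figures are invented placeholders and the choice of method is an assumption for the example.

```python
"""Illustrative reserving arithmetic only.  Bornhuetter-Ferguson estimate of
IBNR and the Ultimate Loss Ratio (ULR) for one segment; figures are invented."""

def bornhuetter_ferguson(earned_premium: float,
                         reported_losses: float,
                         expected_loss_ratio: float,
                         cumulative_dev_factor: float):
    """Return (ibnr, ultimate_losses, ulr) for a single line of business."""
    # Share of ultimate losses assumed still unreported at this maturity.
    pct_unreported = 1.0 - 1.0 / cumulative_dev_factor
    ibnr = earned_premium * expected_loss_ratio * pct_unreported
    ultimate = reported_losses + ibnr
    return ibnr, ultimate, ultimate / earned_premium


if __name__ == "__main__":
    ibnr, ultimate, ulr = bornhuetter_ferguson(
        earned_premium=100.0,        # placeholder, e.g. INR crore
        reported_losses=45.0,
        expected_loss_ratio=0.65,
        cumulative_dev_factor=1.60,  # implies 37.5% of losses still unreported
    )
    print(f"IBNR: {ibnr:.1f}, Ultimate: {ultimate:.1f}, ULR: {ulr:.1%}")
```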
Posted 3 months ago