2 - 6 years
7 - 11 Lacs
Bengaluru
Work from Office
As an Application Developer, you will lead IBM into the future by translating system requirements into the design and development of customized systems in an agile environment. The success of IBM is in your hands as you transform vital business needs into code and drive innovation. Your work will power IBM and its clients globally, collaborating and integrating code into enterprise systems. You will have access to the latest education, tools and technology, and a limitless career path with the world's technology leader. Come to IBM and make a global impact!
IBM's Cloud Services are focused on supporting clients on their cloud journey across any platform to achieve their business goals. This encompasses Cloud Advisory, Architecture, Cloud Native Development, Application Portfolio Migration, Modernization and Rationalization, as well as Cloud Operations. Cloud Services supports all public/private/hybrid cloud deployments: IBM Bluemix/IBM Cloud, Red Hat, AWS, Azure, Google and client private environments. Cloud Services has the best cloud developer, architect, complex SI, SysOps and delivery talent, delivered through our GEO CIC Factory model. As a member of our Cloud Practice you will be responsible for defining and implementing application cloud migration, modernisation and rationalisation solutions for clients across all sectors. You will support mobilisation and help to lead the quality of our programmes and services, liaise with clients and provide consulting services, including: creating cloud migration strategies; defining delivery architecture; creating migration plans; designing orchestration plans and more; assisting in creating and executing migration run books; evaluating source (physical, virtual and cloud) and target workloads.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise: Cloud data engineer with GCP PDE certification and working experience on GCP. Building end-to-end data pipelines in GCP using Pub/Sub, BigQuery, Dataflow, Cloud Workflows/Cloud Scheduler, Cloud Run, Dataproc and Cloud Functions. Experience in logging and monitoring of GCP services. Experience in Terraform and infrastructure automation. Expertise in Python. Develops, supports and maintains data engineering solutions on the Google Cloud ecosystem.
Preferred technical and professional experience: Stay updated with the latest trends and advancements in cloud technologies, frameworks and tools. Conduct code reviews and provide constructive feedback to maintain code quality and ensure adherence to best practices. Troubleshoot and debug issues, and deploy applications to the cloud platform.
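For flavour only (not part of the posting), the end-to-end Pub/Sub-to-BigQuery pipelines described above could look roughly like the following minimal Apache Beam sketch, runnable locally or on Dataflow; the project, subscription, table and schema names are invented placeholders.

```python
# Minimal sketch of a streaming Pub/Sub -> BigQuery pipeline (Dataflow-compatible).
# All resource names (project, subscription, table) are hypothetical placeholders.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def parse_message(message: bytes) -> dict:
    """Decode a Pub/Sub payload assumed to be a JSON object."""
    return json.loads(message.decode("utf-8"))


def run() -> None:
    options = PipelineOptions(
        streaming=True,          # Pub/Sub sources require streaming mode
        project="my-project",    # placeholder project id
        # runner="DataflowRunner", region="...", temp_location="gs://..."  # for Dataflow
    )

    with beam.Pipeline(options=options) as pipeline:
        (
            pipeline
            | "ReadFromPubSub" >> beam.io.ReadFromPubSub(
                subscription="projects/my-project/subscriptions/events-sub")
            | "ParseJson" >> beam.Map(parse_message)
            | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                table="my-project:analytics.events",
                schema="event_id:STRING,event_ts:TIMESTAMP,payload:STRING",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
                create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
            )
        )


if __name__ == "__main__":
    run()
```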
Posted 2 months ago
3 - 8 years
11 - 16 Lacs
Pune
Work from Office
About The Role:
Job Title: Lead Engineer
Location: Pune, India
Role Description: The Engineer is responsible for managing or performing work across multiple areas of the bank's overall IT platform/infrastructure, including analysis, development and administration. It may also involve taking functional oversight of engineering delivery for specific departments. Work includes: planning and developing entire engineering solutions to accomplish business goals; building reliability and resiliency into solutions with appropriate testing and reviewing throughout the delivery lifecycle; ensuring maintainability and reusability of engineering solutions; ensuring solutions are well architected and can be integrated successfully into the end-to-end business process flow; reviewing engineering plans and quality to drive re-use and improve engineering capability; participating in industry forums to drive adoption of innovative technologies, tools and solutions in the Bank.
What we'll offer you: As part of our flexible scheme, here are just some of the benefits that you'll enjoy: best-in-class leave policy; gender-neutral parental leave; 100% reimbursement under the childcare assistance benefit (gender neutral); sponsorship for industry-relevant certifications and education; an Employee Assistance Program for you and your family members; comprehensive hospitalization insurance for you and your dependents; accident and term life insurance; complimentary health screening for those aged 35 and above.
Your Key Responsibilities: The candidate is expected to be a hands-on engineering lead involved in analysis, design, design/code reviews, coding and release activities; champion engineering best practices and guide/mentor the team to achieve high performance; work closely with business stakeholders, the Tribe Lead, Product Owner and Lead Architect to successfully deliver the business outcomes; acquire functional knowledge of the business capability being digitized/re-engineered; demonstrate ownership, inspire others, bring innovative thinking and a growth mindset, and collaborate for success.
Your Skills & Experience: Minimum 15 years of IT industry experience in full-stack development. Expert in Java, Spring Boot, NodeJS, SQL/PLSQL, ReactJS. Strong experience in big data processing: Apache Spark, Hadoop, BigQuery, Dataproc, Dataflow, etc. Strong experience in Kubernetes and the OpenShift container platform. Experience with databases (Oracle, PostgreSQL, MongoDB, Redis/Hazelcast); should understand data modeling, normalization and performance optimization. Experience in message queues (RabbitMQ/IBM MQ, JMS) and data streaming, i.e. Kafka, Pub/Sub, etc. Experience of working on public cloud, GCP preferred (AWS or Azure). Knowledge of various distributed/multi-tiered architecture styles: microservices, data mesh, integration patterns, etc. Experience of modern software product delivery practices, processes and tooling, and BizDevOps skills such as CI/CD pipelines using Jenkins, Git Actions, etc. Experience in designing solutions based on DDD and implementing Clean/Hexagonal Architecture for efficient systems that can handle large-scale operation. Experience in leading teams and mentoring developers. Focus on quality: experience with TDD, BDD, stress and contract tests. Proficient in working with APIs (Application Programming Interfaces) and data formats like JSON, XML, YAML, Parquet, etc.
Key Skills: Java, Spring Boot, NodeJS, SQL/PLSQL, ReactJS
Advantageous: Prior experience in the Banking/Finance domain; having worked on hybrid cloud solutions, preferably using GCP; having worked on product development.
How we'll support you: Training and development to help you excel in your career; coaching and support from experts in your team; a culture of continuous learning to aid progression; a range of flexible benefits that you can tailor to suit your needs.
About us and our teams: Please visit our company website for further information: https://www.db.com/company/company.htm
We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
Posted 2 months ago
4 - 8 years
10 - 15 Lacs
Pune
Work from Office
About The Role:
Job Title: GCP - Senior Engineer - PD
Location: Pune
Role Description: Our team is part of the area Technology, Data, and Innovation (TDI) Private Bank. Within TDI, Partner data is the central client reference data system in Germany. As a core banking system, many banking processes and applications are integrated with it and communicate via more than 2,000 interfaces. From a technical perspective, we focus on the mainframe but also build solutions on on-premise cloud, RESTful services, and an Angular frontend. Next to the maintenance and the implementation of new CTB requirements, the content focus also lies on the regulatory and tax topics surrounding a partner/client. We are looking for a highly motivated candidate for the Big Data and Google Cloud area.
What we'll offer you: As part of our flexible scheme, here are just some of the benefits that you'll enjoy: best-in-class leave policy; gender-neutral parental leave; 100% reimbursement under the childcare assistance benefit (gender neutral); sponsorship for industry-relevant certifications and education; an Employee Assistance Program for you and your family members; comprehensive hospitalization insurance for you and your dependents; accident and term life insurance; complimentary health screening for those aged 35 and above.
Your key responsibilities: You are responsible for the implementation of the new project on GCP (Spark, Dataproc, Dataflow, BigQuery, Terraform, etc.) across the whole SDLC chain. You are responsible for supporting the migration of current functionalities to Google Cloud. You are responsible for the stability of the application landscape and support software releases. You also support L3 topics and application governance. You are responsible in the CTM area for coding as part of an agile team (Java, Scala, Spring Boot).
Your skills and experience: You have experience with databases (HDFS, BigQuery, etc.) and development, preferably with Big Data and GCP technologies. Strong understanding of the Data Mesh approach and integration patterns. Understanding of party data and integration with product data. Your architectural skills for big data solutions, especially interface architecture, allow for a fast start. You have experience in at least Spark, Java, Scala and Python, Maven, Artifactory, the Hadoop ecosystem, GitHub Actions, GitHub, and Terraform scripting. You have knowledge of customer reference data, customer opening processes and, preferably, regulatory topics around know-your-customer processes. You can work very well in teams but also independently, and are constructive and target-oriented. Your English skills are good and you can communicate both professionally and informally in small talk with the team.
How we'll support you: Training and development to help you excel in your career; coaching and support from experts in your team; a culture of continuous learning to aid progression; a range of flexible benefits that you can tailor to suit your needs.
About us and our teams: Please visit our company website for further information: https://www.db.com/company/company.htm
We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
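As a rough illustration of the Spark-on-GCP work this role describes (a sketch, not part of the posting), a minimal PySpark job of the kind one might run on Dataproc, reading a BigQuery table through the spark-bigquery connector and writing an aggregate back; the project, dataset, table and bucket names are invented, and the connector must be available on the cluster.

```python
# Illustrative PySpark job for Dataproc using the spark-bigquery connector.
# Table and bucket names are placeholders; the connector jar must be on the cluster.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("partner-data-aggregation").getOrCreate()

# Temporary GCS bucket used by the connector for indirect writes (placeholder name).
spark.conf.set("temporaryGcsBucket", "my-temp-bucket")

# Read a source table from BigQuery.
partners = (
    spark.read.format("bigquery")
    .option("table", "my-project.partner_data.partners")
    .load()
)

# Simple transformation: count active partners per country.
summary = (
    partners.filter(F.col("status") == "ACTIVE")
    .groupBy("country")
    .agg(F.count("*").alias("active_partners"))
)

# Write the aggregate back to BigQuery.
(
    summary.write.format("bigquery")
    .option("table", "my-project.partner_data.partner_summary")
    .mode("overwrite")
    .save()
)
```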
Posted 2 months ago
3 - 7 years
13 - 18 Lacs
Pune
Work from Office
About The Role:
Job Title: Technical Specialist - Big Data (PySpark) Developer
Location: Pune, India
Role Description: This role is for an Engineer responsible for the design, development, and unit testing of software applications. The candidate is expected to ensure that good quality, maintainable, scalable, and high-performing software applications are delivered to users in an Agile development environment. The candidate should come from a strong technological background and have good working experience in Python and Spark technology, should be hands-on and able to work independently requiring minimal technical/tool guidance, and should be able to technically guide and mentor junior resources in the team. As a developer you will bring extensive design and development skills to reinforce the group of developers within the team. The candidate will extensively make use of and apply Continuous Integration tools and practices in the context of Deutsche Bank's digitalization journey.
What we'll offer you: As part of our flexible scheme, here are just some of the benefits that you'll enjoy: best-in-class leave policy; gender-neutral parental leave; 100% reimbursement under the childcare assistance benefit (gender neutral); sponsorship for industry-relevant certifications and education; an Employee Assistance Program for you and your family members; comprehensive hospitalization insurance for you and your dependents; accident and term life insurance; complimentary health screening for those aged 35 and above.
Your key responsibilities: Design and discuss your own solution for addressing user stories and tasks. Develop, unit-test, integrate, deploy, maintain, and improve software. Perform peer code review. Actively participate in the sprint activities and ceremonies, e.g. daily stand-up/scrum meeting, sprint planning, retrospectives, etc. Apply continuous integration best practices in general (SCM, build automation, unit testing, dependency management). Collaborate with other team members to achieve the sprint objectives. Report progress and update Agile team management tools (JIRA/Confluence). Manage individual task priorities and deliverables. Be responsible for the quality of the solutions you provide. Contribute to planning and continuous improvement activities, and support the PO, ITAO, developers and Scrum Master.
Your skills and experience: Engineer with good development experience on a Big Data platform for at least 5 years. Hands-on experience in Spark (Hive, Impala). Hands-on experience in the Python programming language. Preferably, experience in BigQuery, Dataproc, Composer, Terraform, GKE, Cloud SQL and Cloud Functions (a minimal Composer/Dataproc orchestration sketch follows this listing). Experience in set-up, maintenance, and ongoing development of continuous build/integration infrastructure as part of DevOps. Create and maintain fully automated CI build processes and write build and deployment scripts. Experience with development platforms (OpenShift/Kubernetes/Docker) and configuration and deployment with DevOps tools, e.g. Git, TeamCity, Maven, SONAR. Good knowledge of the core SDLC processes and tools such as HP ALM, Jira, ServiceNow. Strong analytical skills. Proficient communication skills. Fluent in English (written/verbal). Ability to work in virtual teams and in matrixed organizations. Excellent team player. Open-minded and willing to learn business and technology. Keeps pace with technical innovation. Understands the relevant business area. Ability to share information and transfer knowledge and expertise to team members.
How we'll support you: Training and development to help you excel in your career. Coaching and support from experts in your team. A culture of continuous learning to aid progression. A range of flexible benefits that you can tailor to suit your needs.
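As referenced above, a minimal sketch of the Composer/Dataproc orchestration this role might involve: an Airflow DAG in Cloud Composer that submits a PySpark job to an existing Dataproc cluster. The project, region, cluster name and GCS paths are hypothetical placeholders, not details from the posting.

```python
# Minimal Cloud Composer (Airflow) DAG that submits a PySpark job to Dataproc.
# Project, region, cluster and GCS paths are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.dataproc import DataprocSubmitJobOperator

PROJECT_ID = "my-project"
REGION = "europe-west3"

PYSPARK_JOB = {
    "reference": {"project_id": PROJECT_ID},
    "placement": {"cluster_name": "etl-cluster"},          # existing cluster (placeholder)
    "pyspark_job": {"main_python_file_uri": "gs://my-bucket/jobs/transform.py"},
}

with DAG(
    dag_id="daily_pyspark_transform",
    start_date=datetime(2024, 1, 1),
    schedule_interval="0 3 * * *",   # run daily at 03:00
    catchup=False,
) as dag:
    submit_transform = DataprocSubmitJobOperator(
        task_id="submit_pyspark_transform",
        project_id=PROJECT_ID,
        region=REGION,
        job=PYSPARK_JOB,
    )
```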
Posted 2 months ago
5 - 7 years
0 - 0 Lacs
Bengaluru
Work from Office
Role Proficiency: This role requires proficiency in data pipeline development, including coding and testing data pipelines for ingesting, wrangling, transforming and joining data from various sources. Must be skilled in ETL tools such as Informatica, Glue, Databricks and Dataproc, with coding expertise in Python, PySpark and SQL. Works independently and has a deep understanding of data warehousing solutions including Snowflake, BigQuery, Lakehouse and Delta Lake. Capable of calculating costs and understanding performance issues related to data solutions.
Outcomes: Act creatively to develop pipelines and applications by selecting appropriate technical options, optimizing application development, maintenance and performance using design patterns and reusing proven solutions. Interpret requirements to create an optimal architecture and design, developing solutions in accordance with specifications. Document and communicate milestones/stages for end-to-end delivery. Code adhering to best coding standards; debug and test solutions to deliver best-in-class quality. Perform performance tuning of code and align it with the appropriate infrastructure to optimize efficiency. Validate results with user representatives, integrating the overall solution seamlessly. Develop and manage data storage solutions including relational databases, NoSQL databases and data lakes. Stay updated on the latest trends and best practices in data engineering, cloud technologies and big data tools. Influence and improve customer satisfaction through effective data solutions.
Measures of Outcomes: Adherence to engineering processes and standards; adherence to schedule/timelines; adherence to SLAs where applicable; number of defects post delivery; number of non-compliance issues; reduction in recurrence of known defects; quick turnaround of production bugs; completion of applicable technical/domain certifications; completion of all mandatory training requirements; efficiency improvements in data pipelines (e.g. reduced resource consumption, faster run times); average time to detect, respond to and resolve pipeline failures or data issues; number of data security incidents or compliance breaches.
Outputs Expected:
Code Development: Develop data processing code independently, ensuring it meets performance and scalability requirements. Define coding standards, templates and checklists. Review code for team members and peers.
Documentation: Create and review templates, checklists, guidelines and standards for design processes and development. Create and review deliverable documents including design documents, architecture documents, infrastructure costing, business requirements, source-target mappings, test cases and results.
Configuration: Define and govern the configuration management plan. Ensure compliance within the team.
Testing: Review and create unit test cases, scenarios and execution plans. Review the test plan and test strategy developed by the testing team. Provide clarifications and support to the testing team as needed.
Domain Relevance: Advise data engineers on the design and development of features and components, demonstrating a deeper understanding of business needs. Learn about customer domains to identify opportunities for value addition. Complete relevant domain certifications to enhance expertise.
Project Management: Manage the delivery of modules effectively.
Defect Management: Perform root cause analysis (RCA) and mitigation of defects. Identify defect trends and take proactive measures to improve quality.
Estimation: Create and provide input for effort and size estimation for projects.
Knowledge Management: Consume and contribute to project-related documents, SharePoint libraries and client universities. Review reusable documents created by the team.
Release Management: Execute and monitor the release process to ensure smooth transitions.
Design Contribution: Contribute to the creation of high-level design (HLD), low-level design (LLD) and system architecture for applications, business components and data models.
Customer Interface: Clarify requirements and provide guidance to the development team. Present design options to customers and conduct product demonstrations.
Team Management: Set FAST goals and provide constructive feedback. Understand team members' aspirations and provide guidance and opportunities for growth. Ensure team engagement in projects and initiatives.
Certifications: Obtain relevant domain and technology certifications to stay competitive and informed.
Skill Examples: Proficiency in SQL, Python or other programming languages used for data manipulation. Experience with ETL tools such as Apache Airflow, Talend, Informatica, AWS Glue, Dataproc and Azure ADF. Hands-on experience with cloud platforms like AWS, Azure or Google Cloud, particularly with data-related services (e.g. AWS Glue, BigQuery). Conduct tests on data pipelines and evaluate results against data quality and performance specifications. Experience in performance tuning of data processes. Expertise in designing and optimizing data warehouses for cost efficiency. Ability to apply and optimize data models for efficient storage, retrieval and processing of large datasets. Capacity to clearly explain and communicate design and development aspects to customers. Ability to estimate time and resource requirements for developing and debugging features or components.
Knowledge Examples: Knowledge of various ETL services offered by cloud providers, including Apache PySpark, AWS Glue, GCP Dataproc/Dataflow, Azure ADF and ADLF. Proficiency in SQL for analytics, including windowing functions. Understanding of data schemas and models relevant to various business contexts. Familiarity with domain-related data and its implications. Expertise in data warehousing optimization techniques. Knowledge of data security concepts and best practices. Familiarity with design patterns and frameworks in data engineering.
Additional Comments:
Data Analysis and Modeling: Perform exploratory data analysis (EDA) to uncover insights and inform model development. Develop, validate and deploy machine learning models using Python and relevant libraries (e.g. scikit-learn, TensorFlow, PyTorch); a minimal illustrative sketch follows this listing. Implement statistical analysis and hypothesis testing to drive data-driven decision-making.
Data Engineering: Design, build and maintain scalable data pipelines to process and transform large datasets. Collaborate with data engineers to ensure data quality and system reliability. Optimize data storage solutions for efficient querying and analysis.
Software Development: Write clean, maintainable and efficient code in Python. Develop APIs and integrate machine learning models into production systems. Implement best practices for version control, testing and continuous integration.
Gen AI: Utilize Gen AI tools and frameworks to enhance data analysis and model development. Integrate Gen AI solutions into existing workflows and systems. Stay updated with the latest advancements in Gen AI and apply them to relevant projects.
Collaboration and Communication: Work closely with cross-functional teams to understand business needs and translate them into technical requirements. Communicate findings and insights to both technical and non-technical stakeholders. Provide mentorship and guidance to junior team members.
Required Skills: Python, APIs, TensorFlow, Gen AI
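As referenced above, a minimal, self-contained sketch of the model development and validation workflow this role describes, using scikit-learn on synthetic data; the dataset, feature counts and metrics shown are purely illustrative and not from the posting.

```python
# Minimal sketch of developing and validating a classification model with scikit-learn.
# The dataset is synthetic; column counts and metrics are illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for a curated feature table.
X, y = make_classification(n_samples=5_000, n_features=20, random_state=42)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# A Pipeline keeps preprocessing and the estimator together for later deployment.
model = Pipeline(
    steps=[
        ("scale", StandardScaler()),
        ("clf", RandomForestClassifier(n_estimators=200, random_state=42)),
    ]
)
model.fit(X_train, y_train)

# Validate on held-out data before any deployment decision.
probabilities = model.predict_proba(X_test)[:, 1]
print(classification_report(y_test, model.predict(X_test)))
print("ROC AUC:", round(roc_auc_score(y_test, probabilities), 3))
```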
Posted 2 months ago
6 - 11 years
8 - 14 Lacs
Pune
Work from Office
At Capgemini Invent, we believe difference drives change. As inventive transformation consultants, we blend our strategic, creative and scientific capabilities, collaborating closely with clients to deliver cutting-edge solutions. Join us to drive transformation tailored to our client's challenges of today and tomorrow. Informed and validated by science and data. Superpowered by creativity and design. All underpinned by technology created with purpose.
About The Role: Data engineers are responsible for building reliable and scalable data infrastructure that enables organizations to derive meaningful insights, make data-driven decisions, and unlock the value of their data assets.
About The Role - Grade Specific: The role supports the team in building and maintaining data infrastructure and systems within an organization.
Skills (competencies): Ab Initio, Agile (Software Development Framework), Apache Hadoop, AWS Airflow, AWS Athena, AWS CodePipeline, AWS EFS, AWS EMR, AWS Redshift, AWS S3, Azure ADLS Gen2, Azure Data Factory, Azure Data Lake Storage, Azure Databricks, Azure Event Hub, Azure Stream Analytics, Azure Synapse, Bitbucket, Change Management, Client Centricity, Collaboration, Continuous Integration and Continuous Delivery (CI/CD), Data Architecture Patterns, Data Format Analysis, Data Governance, Data Modeling, Data Validation, Data Vault Modeling, Database Schema Design, Decision-Making, DevOps, Dimensional Modeling, GCP Bigtable, GCP BigQuery, GCP Cloud Storage, GCP Dataflow, GCP Dataproc, Git, Greenplum, HQL, IBM DataStage, IBM DB2, Industry Standard Data Modeling (FSLDM), Industry Standard Data Modeling (IBM FSDM), Influencing, Informatica IICS, Inmon methodology, JavaScript, Jenkins, Kimball, Linux - Red Hat, Negotiation, Netezza, NewSQL, Oracle Exadata, Performance Tuning, Perl, Platform Update Management, Project Management, PySpark, Python, R, RDD Optimization, CentOS, SAS, Scala, Spark, Shell Script, Snowflake, Spark Code Optimization, SQL, Stakeholder Management, Sun Solaris, Synapse, Talend, Teradata, Time Management, Ubuntu, Vendor Management
Posted 2 months ago
3 - 7 years
10 - 14 Lacs
Bengaluru
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must-have skills: Google Kubernetes Engine
Good-to-have skills: Kubernetes, Google BigQuery, Google Dataproc
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years full-time education
Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. You will be responsible for overseeing the entire application development process and ensuring its successful implementation. This role requires strong leadership skills and the ability to collaborate effectively with cross-functional teams.
Roles & Responsibilities: Lead the effort to design, build, and configure applications. Act as the primary point of contact for all application-related matters. Collaborate with cross-functional teams to ensure successful implementation of applications. Expected to perform independently and become an SME. Active participation/contribution in team discussions is required. Contribute to providing solutions to work-related problems. Manage and prioritize tasks to meet project deadlines. Provide technical guidance and mentorship to junior team members.
Professional & Technical Skills: Must-have skills: proficiency in Google Kubernetes Engine, Kubernetes, Google BigQuery, Google Dataproc. Strong understanding of containerization and orchestration using Google Kubernetes Engine. Experience with Google Cloud Platform services such as Google BigQuery and Google Dataproc. Hands-on experience in designing and implementing scalable and reliable applications using Google Kubernetes Engine. Solid understanding of microservices architecture and its implementation using Kubernetes. Familiarity with CI/CD pipelines and tools such as Jenkins or GitLab.
Additional Information: The candidate should have a minimum of 3 years of experience in Google Kubernetes Engine. This position is based at our Bengaluru office. A 15 years full-time education is required.
Qualifications: 15 years full-time education
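As a small illustrative aside (not part of the posting), programmatic interaction with a GKE cluster of the kind this role oversees can be done with the official Kubernetes Python client; the sketch below assumes cluster credentials have already been written to the local kubeconfig, e.g. via gcloud, and the namespace is a placeholder.

```python
# Illustrative use of the official Kubernetes Python client against a GKE cluster.
# Assumes `gcloud container clusters get-credentials ...` has populated kubeconfig.
from kubernetes import client, config

# Load credentials from the local kubeconfig (placeholder context).
config.load_kube_config()

core = client.CoreV1Api()
apps = client.AppsV1Api()

# List deployments in a namespace and report replica health.
for deployment in apps.list_namespaced_deployment(namespace="default").items:
    ready = deployment.status.ready_replicas or 0
    desired = deployment.spec.replicas or 0
    print(f"{deployment.metadata.name}: {ready}/{desired} replicas ready")

# List pods that are not in the Running phase.
for pod in core.list_namespaced_pod(namespace="default").items:
    if pod.status.phase != "Running":
        print(f"Pod {pod.metadata.name} is {pod.status.phase}")
```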
Posted 2 months ago
7 - 8 years
15 - 25 Lacs
Chennai
Work from Office
Assistant Manager - Data Engineering
Job Summary: We are seeking a Lead GCP Data Engineer with experience in data modeling and building data pipelines. The ideal candidate should have hands-on experience with GCP services such as Composer, GCS, GBQ, Dataflow, Dataproc, and Pub/Sub. Additionally, the candidate should have a proven track record in designing data solutions, covering everything from data integration to end-to-end storage in BigQuery.
Responsibilities: Collaborate with the client's data architects: work closely with client data architects and technical teams to design and develop customized data solutions that meet business requirements. Design data flows: architect and implement data flows that ensure seamless data movement from source systems to target systems, facilitating real-time or batch data ingestion, processing, and transformation. Data integration & ETL processes: design and manage ETL processes, ensuring the efficient integration of diverse data sources and high-quality data pipelines. Build data products in GBQ: work on building data products using Google BigQuery (GBQ), designing data models and ensuring data is structured and optimized for analysis. Stakeholder interaction: regularly engage with business stakeholders to gather data requirements and translate them into technical specifications, building solutions that align with business needs. Ensure data quality & security: implement best practices in data governance, security, and compliance for both storage and processing of sensitive data. Continuous improvement: evaluate and recommend new technologies and tools to improve data architecture, performance, and scalability.
Skills: 6+ years of development experience; 4+ years of experience with SQL and Python; 2+ years with GCP BigQuery, Dataflow, GCS and Postgres; 3+ years of experience building out data pipelines from scratch in a highly distributed and fault-tolerant manner; experience with Cloud SQL, Cloud Functions, Pub/Sub, Cloud Composer, etc.; familiarity with big data and machine learning tools and platforms; comfortable with open source technologies including Apache Spark, Hadoop, Kafka; comfortable with a broad array of relational and non-relational databases; proven track record of building applications in a data-focused role (cloud and traditional data warehouse); current or previous experience leading a team.
We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status.
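For flavour only, the kind of BigQuery-centric data-product work described above could start with something as simple as the following sketch using the official Python client: batch-loading a CSV extract from GCS and querying the result. The bucket, dataset, table and column names are hypothetical placeholders.

```python
# Minimal sketch: load a CSV from GCS into BigQuery, then query it.
# Bucket, dataset, table and column names are hypothetical placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

# Batch-load a CSV extract into a target table, autodetecting the schema.
load_job = client.load_table_from_uri(
    "gs://my-bucket/exports/orders_2024.csv",
    "my-project.sales.orders",
    job_config=bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,
        autodetect=True,
        write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
    ),
)
load_job.result()  # wait for the load job to complete

# Build a small curated aggregate on top of the raw table.
query = """
    SELECT customer_id, SUM(amount) AS total_amount
    FROM `my-project.sales.orders`
    GROUP BY customer_id
    ORDER BY total_amount DESC
    LIMIT 10
"""
for row in client.query(query).result():
    print(row.customer_id, row.total_amount)
```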
Posted 2 months ago
12 - 20 years
30 - 45 Lacs
Hyderabad
Hybrid
Job Description: We are seeking a highly experienced Data Architect with 15-20 years of experience to lead the design and implementation of data solutions at scale. The ideal candidate will have deep expertise in cloud technologies, particularly GCP, along with a broad skill set in SQL, BigQuery, Cloud Storage, Cloud Functions, Pub/Sub, DLP, Dataproc, Cloud Composer, Python, ETL, and big data technologies like MapR/Hadoop, Hive, Spark, and Scala. Key Responsibilities: Lead the design and implementation of complex data architectures across cloud platforms, ensuring scalability, performance, and cost-efficiency. Architect data solutions using Google Cloud Platform (GCP) services such as BigQuery, Cloud Storage, Cloud Functions, Pub/Sub, Dataproc, Cloud Composer, and DLP. Design and optimize ETL - Abinitio processes and data pipelines using Python and related technologies, ensuring seamless data integration across multiple systems. Work with big data technologies including Hadoop (MapR), Hive, Spark, and Scala to build and manage large-scale, distributed data systems. Oversee the end-to-end data flow from ingestion to processing, transformation, and storage, ensuring high availability and disaster recovery. Lead and mentor a team of engineers, guiding them in adopting best practices in data architecture, security, and governance. Define and enforce data governance, security, and compliance standards to ensure data privacy and integrity. Collaborate with cross-functional teams to understand business requirements and translate them into data architecture and technical solutions. Design and implement data lake, data warehouse, and analytics solutions to support business intelligence and advanced analytics. Lead the integration of cloud-native tools and services for real-time and batch processing, using Pub/Sub, Dataproc, and Cloud Composer. Conduct performance tuning and optimization for SQL, BigQuery, and big data technologies to ensure efficient query execution and resource usage. Provide strategic direction on new data technologies, trends, and best practices to ensure the organization remains competitive and innovative. Required Skills: 15-20 years of experience in data architecture, data engineering, or related roles, with a focus on cloud solutions. Extensive experience with Google Cloud Platform (GCP) services, particularly BigQuery, Cloud Storage, Cloud Functions, Pub/Sub, Dataproc, Cloud Composer, and DLP. Strong Experience in ETL - Abinitio. Proficient in SQL and experience with cloud-native data storage and processing technologies (BigQuery, Hive, Hadoop, Spark). Expertise in Python for ETL pipeline development and data manipulation. Solid understanding of big data technologies such as MapR, Hadoop, Hive, Spark, and Scala. Experience in designing and implementing scalable, high-performance data architectures and data lakes/warehouses. Deep understanding of data governance, security, privacy (DLP), and compliance standards. Proven experience in leading teams and delivering large-scale data solutions in cloud environments. Excellent problem-solving, communication, and leadership skills. Ability to work with senior business and technical leaders to align data solutions with organizational goals. Preferred Skills: Experience with other cloud platforms (AWS, Azure). Knowledge of machine learning and AI data pipelines. Familiarity with containerized environments and orchestration tools (e.g., Kubernetes). Experience with advanced analytics or data science initiatives.
Posted 2 months ago
15 - 24 years
30 - 45 Lacs
Pune
Hybrid
Minimum of 5 years of experience in a Data Architect role, supporting warehouse and Cloud data platforms/environments. Experience with common GCP services such as BigQuery, Dataflow, dataproc, GCS, Cloud Function and related CI/CD processes
Posted 2 months ago
15 - 24 years
30 - 45 Lacs
Bengaluru
Hybrid
Minimum of 5 years of experience in a Data Architect role, supporting warehouse and Cloud data platforms/environments. Experience with common GCP services such as BigQuery, Dataflow, dataproc, GCS, Cloud Function and related CI/CD processes
Posted 2 months ago
3 - 8 years
5 - 10 Lacs
Bengaluru
Work from Office
Project Role: Cloud Services Engineer
Project Role Description: Act as liaison between the client and Accenture operations teams for support and escalations. Communicate service delivery health to all stakeholders and explain any performance issues or risks. Ensure cloud orchestration and automation capability is operating based on target SLAs with minimal downtime. Hold performance meetings to share performance and consumption data and trends.
Must-have skills: SUSE Linux Administration
Good-to-have skills: NA
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years full-time education
Summary: As a Cloud Services Engineer, you will act as a liaison between the client and Accenture operations teams for support and escalations. You will communicate service delivery health to all stakeholders, explain any performance issues or risks, and ensure cloud orchestration and automation capability is operating based on target SLAs with minimal downtime. Hold performance meetings to share performance and consumption data and trends.
Roles & Responsibilities: Expected to perform independently and become an SME. Active participation/contribution in team discussions is required. Contribute to providing solutions to work-related problems. Ensure effective communication between client and Accenture operations teams. Monitor and maintain cloud orchestration and automation capability. Analyze performance data and trends to identify areas for improvement. Collaborate with stakeholders to address service delivery issues. Implement strategies to optimize service delivery efficiency.
Professional & Technical Skills: Must-have skills: proficiency in SUSE Linux Administration. Strong understanding of cloud orchestration and automation technologies. Experience in analyzing performance data and trends. Knowledge of SLAs and service delivery optimization techniques.
Additional Information: The candidate should have a minimum of 3 years of experience in SUSE Linux Administration. This position is based at our Bengaluru office. A 15 years full-time education is required.
Qualifications: 15 years full-time education
Posted 2 months ago
15 - 20 years
17 - 22 Lacs
Hyderabad
Work from Office
Data Modeler+ Solution Design Job Summary: The GCP Solution Designer will be responsible for designing and implementing robust data warehouse solutions on Google Cloud Platform (GCP). This role requires deep expertise in GCP services, data modelling, ETL processes, and a strong understanding of business requirements to deliver scalable and efficient data solutions. Key Responsibilities: 1. Solution Design and Architecture: Design comprehensive data solutions on GCP, ensuring scalability, performance, and security. Develop data models and schemas to meet business requirements. Collaborate with stakeholders to gather requirements and translate them into technical specifications. 2. Implementation and Development: Implement ETL processes using GCP tools such as Dataflow, Dataproc, and Cloud Data Fusion. Develop and optimize data pipelines to ensure data integrity and performance. Create and manage data storage solutions using BigQuery and Cloud Storage. 3. Data Management and Optimization: Implement best practices for data management, including data governance, quality, and lifecycle management. Optimize query performance and storage costs by leveraging GCP features and tools. 4. Collaboration and Communication: Work closely with data engineers, analysts, and other stakeholders to ensure seamless integration of data solutions. Provide technical guidance and mentorship to junior team members. Communicate complex technical concepts to non-technical stakeholders effectively. 5. Continuous Improvement: Stay updated with the latest advancements in GCP services and data warehousing technologies. Evaluate and recommend new tools and technologies to enhance data solutions. Continuously improve existing data solutions to meet evolving business
Posted 3 months ago
5 - 10 years
7 - 12 Lacs
Pune
Work from Office
Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must-have skills: Google Cloud Data Services, Python (Programming Language), GCP Dataflow, Apache Airflow
Good-to-have skills: NA
Minimum 5 year(s) of experience is required.
Educational Qualification: 15 years full-time education
Summary: As a Data Engineer, you will design, develop, and maintain data solutions for data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across systems. You will be responsible for designing and implementing data solutions that meet the needs of the organization and contribute to its overall success.
Roles & Responsibilities: Expected to be an SME; collaborate and manage the team to perform. Responsible for team decisions. Engage with multiple teams and contribute to key decisions. Provide solutions to problems for the immediate team and across multiple teams. Design and develop data pipelines to extract, transform, and load data. Ensure data quality and integrity throughout the data processing lifecycle. Implement ETL processes to migrate and deploy data across systems. Collaborate with cross-functional teams to understand data requirements and design appropriate solutions.
Professional & Technical Skills:
Must-have: Strong proficiency in Python (Programming Language), Apache Airflow and Google Cloud Data Services.
Must-have: 5+ years of Python programming experience with complex data structures as well as data pipeline development.
Must-have: 5+ years of experience with Python libraries such as Airflow, Pandas, PySpark, Redis, SQL (or similar libraries).
Must-have: 3+ years of strong SQL programming experience with mid/advanced functions such as aggregate functions (SUM, AVG), conditional functions (CASE WHEN, NULLIF), mathematical functions (ROUND, ABS), ranking functions (RANK) and windowing functions; an illustrative PySpark equivalent is sketched after this listing.
Nice-to-have: 3+ years of experience in Dataflow, Dataproc, Cloud Composer, BigQuery, Cloud Storage, GKE, etc.
Nice-to-have: As an alternative to Google Cloud, 3+ years of data pipeline development using Python on any other cloud platform can be considered.
Strong experience with one of the leading public clouds. Strong experience in the design and build of scalable data pipelines that deal with extraction, transformation, and loading. Strong experience in data engineering with an emphasis on data warehousing and data analytics.
Mandatory Experience: years of experience with Python with working knowledge of Notebooks; years working on cloud data projects.
Additional Information: The candidate should have a minimum of 5 years of experience in Google Cloud Data Services. This position is based at our Pune office. A 15 years full-time education is required.
Qualifications: 15 years full-time education
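As referenced above, a small illustrative PySpark sketch (not part of the posting) showing the aggregate, conditional, ranking and window functions the listing names; the toy data and column names are invented for demonstration.

```python
# Illustrative PySpark use of aggregate, conditional, ranking and window functions.
# The toy data and column names are invented for demonstration.
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("window-function-demo").getOrCreate()

orders = spark.createDataFrame(
    [
        ("alice", "2024-01-05", 120.0),
        ("alice", "2024-02-11", 80.0),
        ("bob", "2024-01-20", 300.0),
        ("bob", "2024-03-02", 40.0),
    ],
    ["customer", "order_date", "amount"],
)

# Aggregate and conditional functions: per-customer totals and a spend tier.
totals = orders.groupBy("customer").agg(
    F.sum("amount").alias("total_amount"),
    F.avg("amount").alias("avg_amount"),
)
totals = totals.withColumn(
    "tier",
    F.when(F.col("total_amount") >= 300, "HIGH").otherwise("STANDARD"),
)

# Ranking/window functions: rank each order within its customer by amount.
by_customer = Window.partitionBy("customer").orderBy(F.col("amount").desc())
ranked = orders.withColumn("rank_in_customer", F.rank().over(by_customer))

totals.show()
ranked.show()
```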
Posted 3 months ago
5 - 10 years
7 - 12 Lacs
Pune
Work from Office
Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must-have skills: Python (Programming Language), Data Engineering, Apache Airflow, SQL
Good-to-have skills: GCP Dataflow
Minimum 5 year(s) of experience is required.
Educational Qualification: 15 years full-time education
Summary: As a Data Engineer, you will design, develop, and maintain data solutions for data generation, collection, and processing. You will create data pipelines, ensure data quality, and implement ETL processes to migrate and deploy data across systems. Your typical day will involve designing and developing data solutions, collaborating with teams, and ensuring data integrity and quality.
Roles & Responsibilities: Expected to be an SME; collaborate and manage the team to perform. Responsible for team decisions. Engage with multiple teams and contribute to key decisions. Provide solutions to problems for the immediate team and across multiple teams. Design and develop data solutions for data generation, collection, and processing. Create and maintain data pipelines to ensure efficient data flow. Implement ETL processes to migrate and deploy data across systems. Ensure data quality and integrity throughout the data lifecycle.
Professional & Technical Skills:
Must-have: Strong proficiency in Python (Programming Language), Apache Airflow and SQL (a minimal Airflow DAG sketch follows this listing).
Must-have: 5+ years of Python programming experience with complex data structures as well as data pipeline development.
Must-have: 5+ years of experience with Python libraries such as Airflow, Pandas, PySpark, Redis, SQL (or similar libraries).
Must-have: 3+ years of strong SQL programming experience with mid/advanced functions such as aggregate functions (SUM, AVG), conditional functions (CASE WHEN, NULLIF), mathematical functions (ROUND, ABS), ranking functions (RANK) and windowing functions.
Nice-to-have: 3+ years of experience in Dataflow, Dataproc, Cloud Composer, BigQuery, Cloud Storage, GKE, etc.
Nice-to-have: As an alternative to Google Cloud, 3+ years of data pipeline development using Python on any other cloud platform can be considered.
Strong experience with one of the leading public clouds. Strong experience in the design and build of scalable data pipelines that deal with extraction, transformation, and loading. Strong experience in data engineering with an emphasis on data warehousing and data analytics.
Mandatory Experience: years of experience with Python with working knowledge of Notebooks; years working on cloud data projects.
Strong understanding of statistical analysis and machine learning algorithms.
Additional Information: The candidate should have a minimum of 5 years of experience in Python (Programming Language). This position is based at our Pune office. A 15 years full-time education is required.
Qualifications: 15 years full-time education
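As referenced above, a minimal Airflow DAG sketch illustrating the kind of pipeline orchestration this role requires: two Python tasks chained into a daily run. The task bodies, XCom payload and schedule are placeholders, not details from the posting.

```python
# Minimal Airflow DAG sketch: two Python tasks chained into a daily pipeline.
# The task bodies, XCom payload and schedule are placeholders for illustration.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    """Pretend to pull rows from a source system."""
    rows = [{"id": 1, "amount": 10.5}, {"id": 2, "amount": 4.0}]
    context["ti"].xcom_push(key="rows", value=rows)


def load(**context):
    """Pretend to write the extracted rows to a warehouse table."""
    rows = context["ti"].xcom_pull(task_ids="extract", key="rows")
    print(f"Loading {len(rows)} rows into the target table")


with DAG(
    dag_id="simple_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> load_task
```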
Posted 3 months ago
15 - 20 years
17 - 20 Lacs
Pune
Work from Office
Job Description:
Job Title: Lead Engineer, VP
Location: Pune, India
Role Description: The Engineer is responsible for managing or performing work across multiple areas of the bank's overall IT platform/infrastructure, including analysis, development and administration. It may also involve taking functional oversight of engineering delivery for specific departments. Work includes: planning and developing entire engineering solutions to accomplish business goals; building reliability and resiliency into solutions with appropriate testing and reviewing throughout the delivery lifecycle; ensuring maintainability and reusability of engineering solutions; ensuring solutions are well architected and can be integrated successfully into the end-to-end business process flow; reviewing engineering plans and quality to drive re-use and improve engineering capability; participating in industry forums to drive adoption of innovative technologies, tools and solutions in the Bank.
What we'll offer you: As part of our flexible scheme, here are just some of the benefits that you'll enjoy.
Your Key Responsibilities: The candidate is expected to be a hands-on engineering lead involved in analysis, design, design/code reviews, coding and release activities; champion engineering best practices and guide/mentor the team to achieve high performance; work closely with business stakeholders, the Tribe Lead, Product Owner and Lead Architect to successfully deliver the business outcomes; acquire functional knowledge of the business capability being digitized/re-engineered; demonstrate ownership, inspire others, bring innovative thinking and a growth mindset, and collaborate for success.
Your Skills & Experience: Minimum 15 years of IT industry experience in full-stack development. Expert in Java, Spring Boot, NodeJS, SQL/PLSQL, ReactJS. Strong experience in big data processing: Apache Spark, Hadoop, BigQuery, Dataproc, Dataflow, etc. Strong experience in Kubernetes and the OpenShift container platform. Experience with databases (Oracle, PostgreSQL, MongoDB, Redis/Hazelcast); should understand data modeling, normalization and performance optimization. Experience in message queues (RabbitMQ/IBM MQ, JMS) and data streaming, i.e. Kafka, Pub/Sub, etc. Experience of working on public cloud, GCP preferred (AWS or Azure). Knowledge of various distributed/multi-tiered architecture styles: microservices, data mesh, integration patterns, etc. Experience of modern software product delivery practices, processes and tooling, and BizDevOps skills such as CI/CD pipelines using Jenkins, Git Actions, etc. Experience in designing solutions based on DDD and implementing Clean/Hexagonal Architecture for efficient systems that can handle large-scale operation. Experience in leading teams and mentoring developers. Focus on quality: experience with TDD, BDD, stress and contract tests. Proficient in working with APIs (Application Programming Interfaces) and data formats like JSON, XML, YAML, Parquet, etc.
Key Skills: Java, Spring Boot, NodeJS, SQL/PLSQL, ReactJS
Advantageous: Prior experience in the Banking/Finance domain; having worked on hybrid cloud solutions, preferably using GCP; having worked on product development.
Posted 3 months ago
10 - 18 years
30 - 45 Lacs
Chennai
Work from Office
Details on tech stack:
GCP Services: BigQuery, Cloud Dataflow, Pub/Sub, Dataproc, Cloud Storage.
Data Processing: Apache Beam (batch/stream), Apache Kafka, Cloud Dataprep.
Programming: Python, Java/Scala, SQL.
Orchestration: Apache Airflow (Cloud Composer), Terraform.
Security: IAM, Cloud Identity, Cloud Security Command Center.
Containerization: Docker, Kubernetes (GKE).
Machine Learning: Google AI Platform, TensorFlow, AutoML.
Certifications: Google Cloud Data Engineer, Cloud Architect (preferred).
Proven ability to design scalable and robust AI/ML systems in production, with a focus on high-performance and cost-effective solutions. Strong experience with cloud platforms (Google Cloud, AWS, Azure) and cloud-native AI/ML services (e.g., Vertex AI, SageMaker). Expertise in implementing MLOps practices, including model deployment, monitoring, retraining, and version control. Strong leadership skills with the ability to guide teams, mentor engineers, and collaborate with cross-functional teams to meet business objectives. Deep understanding of frameworks like TensorFlow, PyTorch, and Scikit-learn for designing, training, and deploying models. Experience with data engineering principles, scalable pipelines, and distributed systems (e.g., Apache Kafka, Spark, Kubernetes); a small Kafka producer/consumer sketch follows this listing.
Nice-to-have requirements: Strong leadership and mentorship capabilities, guiding teams toward best practices and high-quality deliverables. Excellent problem-solving skills, with a focus on designing efficient, high-performance systems. Effective project management abilities to handle multiple initiatives and ensure timely delivery. Strong emphasis on collaboration and teamwork, fostering a positive and productive work environment.
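As referenced above, a small kafka-python sketch (illustrative only) of the streaming-ingestion piece of this stack: publishing and reading JSON events on a single topic. The broker address, topic name and message fields are assumptions.

```python
# Small kafka-python sketch: publish and read JSON events on one topic.
# Broker address, topic name and message fields are hypothetical.
import json

from kafka import KafkaConsumer, KafkaProducer

BOOTSTRAP = "localhost:9092"   # placeholder broker
TOPIC = "clickstream-events"   # placeholder topic

# Producer: serialise dicts to JSON bytes.
producer = KafkaProducer(
    bootstrap_servers=BOOTSTRAP,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send(TOPIC, {"user_id": 42, "page": "/home"})
producer.flush()

# Consumer: read from the beginning of the topic and decode JSON.
consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BOOTSTRAP,
    auto_offset_reset="earliest",
    consumer_timeout_ms=5000,   # stop iterating when idle, for this demo
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
for message in consumer:
    print(message.offset, message.value)
```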
Posted 3 months ago
4 - 7 years
7 - 13 Lacs
Chennai, Hyderabad
Work from Office
Role & responsibilities Collaborate with senior engineers to design and implement scalable ETL/ELT data pipelines using Python and SQL on GCP platforms. Assist in building data warehouse solutions on BigQuery and optimizing data workflows for efficient processing and storage. Support data migration processes from legacy systems (e.g., Teradata, Hive ) to BigQuery. Work closely with cross-functional teams to understand data requirements, perform data modeling, and develop curation layers for analytics and ML model deployment. Troubleshoot and resolve data processing issues to ensure data accuracy, consistency, and availability. Maintain code quality using Git for version control and collaborate on agile development processes using Jira Preferred candidate profile Strong proficiency in Teradata for data engineering tasks. Strong understanding and experience with distributed computing principles and frameworks like Hadoop, Apache Spark etc. Advanced experience with GCP services, including BigQuery, Dataflow, Cloud Composer (Airflow) , and Dataproc . Expertise in data modeling, ETL/ELT pipeline development, and workflow orchestration using Airflow DAGs. Hands-on experience with data migration from legacy systems ( Teradata, Hive ) to cloud platforms (BigQuery). Familiarity with streaming data ingestion tools like Kafka and NiFi. Strong problem-solving skills and experience with performance optimization in large-scale data environments. Proficiency in CI/CD tools (Jenkins, GitLab) and version control systems (Git). GCP Professional Data Engineer certification.
Posted 3 months ago
4 - 9 years
5 - 15 Lacs
Chennai
Hybrid
Hello Everyone, Greetings from HTC! Position Description: We're seeking a Data Engineer to lead our India-based supplier delivery team in migrating eight Teradata databases to the Google Cloud Platform (GCP). Oversee the entire migration process, ensuring successful data ingestion, quality assurance, and data protection using standard engineering patterns. Responsibilities: Lead the lift and refactor efforts for full data migration. Drive business adoption of the new GCP platform. Oversee the decommissioning of Teradata and related technologies. Collaborate with technical leads, program managers, and other stakeholders to execute the migration plan within an Agile framework. Qualifications: Hands-on experience with GCP services, data pipelines, BigQuery, SQL, and Python. Proven ability to manage technical and process-related requests to maintain project timelines. Strong collaboration and communication skills Skills Required: Python GCP Big Query Postgres GCP Services Kubernetes Skills Preferred: GCP Certification Experience Required: 4+ Notice Period: We are looking for candidates who can join immediately or within 30 days. Please send your resume to: kavitha.sekar@HTCinc.com We appreciate any references you can provide! Thank you, Kavitha
Posted 3 months ago
5 - 8 years
14 - 16 Lacs
Bengaluru
Remote
Hi all, We are hiring for the role Python & GCP engineer Experience: 5+ Years Location: Bangalore Notice Period: Immediate - 15 days Skills: Technical Expertise: Languages: Python, SQL, Shell scripting Big Data: Kafka, PySpark, data warehousing, data lakes Cloud Platforms: GCP: GCS, BigQuery, Pub/Sub, Dataproc, Dataflow, Cloud Functions Database: Schema design, optimization, stored procedures DevOps: CI/CD pipeline implementation, multi-cloud deployment automation Development: Parallel processing, streaming, low-level design If you are interested drop your resume at mojesh.p@acesoftlabs.com Call: 9701971793
Posted 3 months ago
3 - 8 years
5 - 10 Lacs
Pune
Work from Office
Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must-have skills: Google Cloud Data Services, Python (Programming Language), Apache Airflow, Data Engineering
Good-to-have skills: NA
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years full-time education
Summary: As a Data Engineer, you will design, develop, and maintain data solutions for data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across systems. You will play a crucial role in managing and optimizing data infrastructure to support business needs and enable data-driven decision-making.
Roles & Responsibilities: Expected to perform independently and become an SME. Active participation/contribution in team discussions is required. Contribute to providing solutions to work-related problems. Design and develop data solutions for data generation, collection, and processing. Create and maintain data pipelines to ensure efficient data flow. Implement ETL (extract, transform, load) processes to migrate and deploy data across systems. Ensure data quality and integrity by performing data validation and cleansing. Collaborate with cross-functional teams to understand data requirements and provide technical expertise. Optimize data infrastructure and performance to support business needs. Troubleshoot and resolve data-related issues in a timely manner.
Professional & Technical Skills:
Must-have: Strong proficiency in Python (Programming Language), Apache Airflow and Google Cloud Data Services.
Must-have: 3+ years of Python programming experience with complex data structures as well as data pipeline development.
Must-have: 3+ years of experience with Python libraries such as Airflow, Pandas, PySpark, Redis, SQL (or similar libraries).
Must-have: 3+ years of strong SQL programming experience with mid/advanced functions such as aggregate functions (SUM, AVG), conditional functions (CASE WHEN, NULLIF), mathematical functions (ROUND, ABS), ranking functions (RANK) and windowing functions.
Nice-to-have: 3+ years of experience in Dataflow, Dataproc, Cloud Composer, BigQuery, Cloud Storage, GKE, etc.
Nice-to-have: As an alternative to Google Cloud, 3+ years of data pipeline development using Python on any other cloud platform can be considered.
Additional Information: The candidate should have a minimum of 3 years of experience in Google Cloud Data Services. This position is based in Pune. A 15 years full-time education is required.
Qualifications: 15 years full-time education
Posted 3 months ago
2 - 5 years
4 - 8 Lacs
Pune
Work from Office
Capgemini Invent
Capgemini Invent is the digital innovation, consulting and transformation brand of the Capgemini Group, a global business line that combines market-leading expertise in strategy, technology, data science and creative design to help CxOs envision and build what's next for their businesses.
Your Role
Should have developed or worked on at least one Gen AI project.
Has data pipeline implementation experience with any of these cloud providers: AWS, Azure, GCP.
Experience with cloud storage, cloud databases, cloud data warehousing and data lake solutions such as Snowflake, BigQuery, AWS Redshift, ADLS, S3.
Good knowledge of cloud compute services and load balancing.
Good knowledge of cloud identity management, authentication and authorization.
Proficiency in using cloud utility functions such as AWS Lambda, AWS Step Functions, Cloud Run, Cloud Functions and Azure Functions (an illustrative function sketch follows this posting).
Experience in using cloud data integration services for structured, semi-structured and unstructured data, such as Azure Databricks, Azure Data Factory, Azure Synapse Analytics, AWS Glue, AWS EMR, Dataflow and Dataproc.
Your Profile
Good knowledge of infrastructure capacity sizing and costing of cloud services to drive optimized solution architecture, balancing infrastructure investment against performance and scaling.
Able to contribute to architectural choices using various cloud services and solution methodologies.
Expertise in programming using Python.
Very good knowledge of cloud DevOps practices such as infrastructure as code, CI/CD components, and automated deployments on cloud.
Must understand networking, security, design principles and best practices in cloud.
What you will love about working here
We recognize the significance of flexible work arrangements to provide support. Be it remote work or flexible work hours, you will get an environment to maintain a healthy work-life balance.
At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Generative AI.
About Capgemini
Capgemini is a global business and technology transformation partner, helping organizations accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, cloud and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.
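As a sketch of the cloud utility functions mentioned in this role, here is a minimal HTTP-triggered Google Cloud Function in Python using the functions-framework library. The function name and response shape are illustrative assumptions.

```python
# Minimal HTTP-triggered Cloud Function sketch (assumes functions-framework is installed).
# Deployable with gcloud functions deploy; exact flags vary by generation and region.
import json

import functions_framework


@functions_framework.http
def echo_payload(request):
    """Echo the JSON body back to the caller with a simple status field."""
    payload = request.get_json(silent=True) or {}
    body = {"status": "ok", "received": payload}
    return (json.dumps(body), 200, {"Content-Type": "application/json"})
```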
Posted 3 months ago
4 - 9 years
6 - 11 Lacs
Pune
Work from Office
Job Title: Application Owner (ITAO) GCP
Corporate Title: AVP
Location: Pune, India
Role Description
The IT Application Owner (ITAO) is responsible for application management and governance tasks. They follow several possible service delivery approaches, acknowledge interference with the IT application's life cycle and assist with incorporating the adopted approach into best practice. The ITAO is aware of the gaps in current infrastructure solutions and where industry innovations are along the maturity lifecycle. They work with application stakeholders to improve the infrastructure, ensuring compliance with the technical roadmap.
The ITAO has a sound knowledge of development methodologies and the IT policies necessary to perform effectively in the organization, aligned to the bank's appetite for risk. The ITAO acts to improve the safety and security of the application, ensure compliance with regulations, policies, and standards, enhance operational readiness, and ease maintenance of the environment for delivering change into production. The ITAO supports the bank's audit function in the remediation of audit points and self-identified issues to reduce risk. The ITAO is responsible for producing and maintaining accurate documentation on compliance with methodologies, IT policies and IT security requirements. The ITAO interacts with and influences colleagues on the governance of IT platform reliability and resilience.
The candidate should have experience in Spark and GCP technology, should be hands-on, and should be able to work independently with minimal technical/tool guidance. The candidate will extensively use and apply Continuous Integration tools and practices in the context of Deutsche Bank's digitalization journey.
What we'll offer you
As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
Best in class leave policy
Gender neutral parental leaves
100% reimbursement under childcare assistance benefit (gender neutral)
Sponsorship for industry-relevant certifications and education
Employee Assistance Program for you and your family members
Comprehensive hospitalization insurance for you and your dependents
Accident and term life insurance
Complementary health screening for 35 yrs. and above
Your key responsibilities
Enterprise IT governance: Reviews current and proposed information systems for compliance with the organization's obligations (including regulatory, contractual, and agreed standards/policies) and adherence to overall strategy. Engages with project management to confirm that products developed meet the service acceptance criteria and are to the required standard. Performs application lifecycle management and strategic application planning. Initiates and delivers technical projects and the critical technology roadmap to maintain existing services and service levels.
Problem management: Ensures that appropriate action is taken to anticipate, investigate and resolve problems in systems and services.
Requirements definition and management: Assists in the definition and management of non-functional requirements.
Application support: Drafts and maintains procedures and documentation for application support. Provides third-level application support.
Incident management: Ensures that incidents are handled according to agreed procedures. Ensures the smooth transition of applications into production.
Asset management: Applies tools, techniques, and processes to create and maintain an accurate asset register.
Information security: Communicates information security risks and issues to relevant stakeholders. Plans for application hardware, software and license upgrades or migration activities to align to compliant platforms. Supports application software projects.
Apply continuous integration best practices in general (SCM, build automation, unit testing, dependency management); an illustrative test sketch follows this posting.
Collaborate with other team members to achieve the Sprint objectives.
Report progress and update Agile team management tools (JIRA/Confluence).
Manage individual task priorities and deliverables.
Responsible for the quality of the solutions the candidate provides.
Contribute to planning and continuous improvement activities and support the PO, Developers and Scrum Master.
Your skills and experience
Engineer with at least 4 years of experience on Google Cloud Platform.
Strong knowledge of core SDLC processes and tools such as HP ALM, Jira and ServiceNow.
Hands-on experience with technologies such as BigQuery, Dataproc, Composer, Terraform, GKE, Cloud SQL and Cloud Functions.
Hands-on experience in Unix/Linux environments.
Experience in the set-up, maintenance and ongoing development of continuous build/integration infrastructure as part of DevOps; creates and maintains fully automated CI build processes and writes build and deployment scripts.
Experience with development platforms: OpenShift/Kubernetes/Docker configuration and deployment with DevOps tools, e.g. GIT, TeamCity, Maven, SONAR.
Preferably, experience in the Java programming language.
Strong analytical skills.
Proficient communication skills.
Fluent in English (written/verbal).
Ability to work in virtual teams and in matrixed organizations.
Excellent team player.
Open minded and willing to learn business and technology.
Keeps pace with technical innovation.
Understands the relevant business area.
Ability to share information and transfer knowledge and expertise to team members.
Banking / financial industry exposure is a plus.
How we'll support you
Training and development to help you excel in your career.
Coaching and support from experts in your team.
A culture of continuous learning to aid progression.
A range of flexible benefits that you can tailor to suit your needs.
About us and our teams
Please visit our company website for further information: https://www.db.com/company/company.htm
We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group.
We welcome applications from all people and promote a positive, fair and inclusive work environment.
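Since the role calls for Spark experience alongside CI practices such as unit testing, here is a minimal sketch of a PySpark transformation with a pytest-style test, assuming pyspark and pytest are available. The column names and bucketing logic are hypothetical.

```python
# Minimal PySpark transformation plus a pytest-style unit test.
# Assumes pyspark and pytest are installed; column names are hypothetical.
from pyspark.sql import DataFrame, SparkSession
from pyspark.sql import functions as F


def add_amount_bucket(df: DataFrame) -> DataFrame:
    """Label each row as 'high' or 'low' based on the amount column."""
    return df.withColumn(
        "amount_bucket",
        F.when(F.col("amount") >= 100, "high").otherwise("low"),
    )


def test_add_amount_bucket():
    spark = SparkSession.builder.master("local[1]").appName("unit-test").getOrCreate()
    try:
        df = spark.createDataFrame([(1, 50), (2, 150)], ["id", "amount"])
        result = {r["id"]: r["amount_bucket"] for r in add_amount_bucket(df).collect()}
        assert result == {1: "low", 2: "high"}
    finally:
        spark.stop()
```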
Posted 3 months ago
5 - 10 years
7 - 12 Lacs
Pune
Work from Office
Project Role: Cloud Services Engineer
Project Role Description: Act as liaison between the client and Accenture operations teams for support and escalations. Communicate service delivery health to all stakeholders and explain any performance issues or risks. Ensure Cloud orchestration and automation capability is operating based on target SLAs with minimal downtime. Hold performance meetings to share performance and consumption data and trends.
Must have skills: Managed File Transfer
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: As a Cloud Services Engineer, you will act as a liaison between the client and Accenture operations teams for support and escalations. You will communicate service delivery health to all stakeholders and explain any performance issues or risks, ensure Cloud orchestration and automation capability operates based on target SLAs with minimal downtime, and hold performance meetings to share performance and consumption data and trends.
Roles & Responsibilities:
Expected to be an SME.
Collaborate with and manage the team to perform.
Responsible for team decisions.
Engage with multiple teams and contribute to key decisions.
Provide solutions to problems for the immediate team and across multiple teams.
Ensure effective communication between client and operations teams.
Analyze service delivery health and address performance issues.
Conduct performance meetings to share data and trends.
Professional & Technical Skills:
Must Have Skills: Proficiency in Managed File Transfer (an illustrative transfer sketch follows this posting).
Strong understanding of cloud orchestration and automation.
Experience in SLA management and performance analysis.
Knowledge of IT service delivery and escalation processes.
Additional Information:
The candidate should have a minimum of 5 years of experience in Managed File Transfer.
This position is based at our Pune office.
15 years of full-time education is required.
Qualifications: 15 years full time education
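As a sketch of the file transfer work named above, here is a minimal SFTP upload using the paramiko library. The host, credentials, and paths are hypothetical placeholders; a real managed file transfer platform would add scheduling, retries, and audit logging around a step like this.

```python
# Minimal SFTP upload sketch using paramiko (pip install paramiko).
# Host, credentials, and paths are hypothetical placeholders.
import paramiko

HOST = "sftp.example.com"    # placeholder
PORT = 22
USERNAME = "transfer_user"   # placeholder
PASSWORD = "change-me"       # placeholder; key-based auth is preferable in practice


def upload_file(local_path: str, remote_path: str) -> None:
    """Upload a single file over SFTP and report the remote file size."""
    transport = paramiko.Transport((HOST, PORT))
    try:
        transport.connect(username=USERNAME, password=PASSWORD)
        sftp = paramiko.SFTPClient.from_transport(transport)
        attrs = sftp.put(local_path, remote_path)
        print(f"uploaded {local_path} -> {remote_path} ({attrs.st_size} bytes)")
    finally:
        transport.close()


if __name__ == "__main__":
    upload_file("daily_report.csv", "/inbound/daily_report.csv")
```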
Posted 3 months ago