4.0 - 8.0 years
10 - 19 Lacs
Chennai
Hybrid
Greetings from Getronics! We have permanent opportunities for GCP Data Engineers in Chennai. Company: Getronics (permanent role) Client: Automobile industry Experience Required: 4+ years in IT and a minimum of 3+ years in GCP data engineering Location: Chennai (ELCOT - Sholinganallur) Work Mode: Hybrid Position Description: We are currently seeking a seasoned GCP Cloud Data Engineer with 3 to 5 years of experience leading/implementing GCP data projects, preferably implementing a complete data-centric model. This position is to design and deploy a data-centric architecture in GCP for a Materials Management platform that would exchange data with multiple applications, both modern and legacy, across Product Development, Manufacturing, Finance, Purchasing, N-tier Supply Chain, and Supplier Collaboration. • Design and implement data-centric solutions on Google Cloud Platform (GCP) using GCP tools like Storage Transfer Service, Cloud Data Fusion, Pub/Sub, Dataflow, Cloud Scheduler, gsutil, FTP/SFTP, Dataproc, Bigtable, etc. • Build ETL pipelines to ingest data from heterogeneous sources into our system • Develop data processing pipelines using programming languages like Java and Python to extract, transform, and load (ETL) data • Create and maintain data models, ensuring efficient storage, retrieval, and analysis of large datasets • Deploy and manage databases, both SQL and NoSQL, such as Bigtable, Firestore, or Cloud SQL, based on project and infrastructure requirements. Skills Required: GCP data engineering, Hadoop, Spark/PySpark; Google Cloud Platform (GCP) services: BigQuery, Dataflow, Pub/Sub, Bigtable, Data Fusion, Dataproc, Cloud Composer, Cloud SQL, Compute Engine, Cloud Functions, and App Engine. 4+ years of professional experience in data engineering, data product development, and software product launches.
- 3+ years of cloud data/software engineering experience building scalable, reliable, and cost-effective production batch and streaming data pipelines using: data warehouses like Google BigQuery; workflow orchestration tools like Airflow; relational database management systems like MySQL, PostgreSQL, and SQL Server; and real-time data streaming platforms like Apache Kafka and GCP Pub/Sub. Education Required: Any Bachelor's degree. Candidates should be willing to take a GCP assessment (1-hour online video test). LOOKING FOR IMMEDIATE TO 30 DAYS NOTICE CANDIDATES ONLY. Thanks, Durga.
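The batch-pipeline responsibilities described above (ingest from heterogeneous sources, transform, load) can be sketched in plain Python. This is only an illustrative extract-transform-load skeleton; all field names and sources are hypothetical, not from the actual Materials Management platform:

```python
# Minimal batch ETL sketch: extract from several sources, normalize
# records, and load into a target store (here, an in-memory list).

def extract(sources):
    """Yield raw records from several source iterables."""
    for source in sources:
        yield from source

def transform(record):
    """Normalize field names and types; drop records missing the key."""
    if "part_id" not in record:
        return None
    return {
        "part_id": str(record["part_id"]).strip(),
        "qty": int(record.get("qty", 0)),
    }

def load(records, target):
    """Append cleaned records to the target store, skipping rejects."""
    for rec in records:
        if rec is not None:
            target.append(rec)
    return target

legacy = [{"part_id": " A-100 ", "qty": "5"}]      # legacy source: strings
modern = [{"part_id": "B-200", "qty": 3}, {"qty": 9}]  # last record invalid
warehouse = load((transform(r) for r in extract([legacy, modern])), [])
```

In a real GCP pipeline the extract/load ends would be Pub/Sub, Cloud Storage, or BigQuery clients, and the transform would run inside Dataflow or Dataproc rather than a local generator.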
Posted 1 month ago
2.0 - 5.0 years
7 - 11 Lacs
Kolkata, Mumbai, New Delhi
Work from Office
BigQuery ML Develop, train, evaluate, and deploy machine learning models using BigQuery ML. Write complex SQL queries to prepare, transform, and analyze large datasets. Build classification, regression, time-series forecasting, and clustering models within BigQuery. Work with business stakeholders to understand requirements and translate them into analytical solutions. Automate ML workflows using scheduled queries, Cloud Composer, or Dataform. Visualize and communicate results to both technical and non-technical stakeholders using Looker or other BI tools. Optimize performance and cost-efficiency of ML models and queries within BigQuery. Proficiency in SQL and working with large-scale data warehousing solutions. Familiarity with ML evaluation metrics, feature engineering, and data preprocessing. Knowledge of Google Cloud Platform (GCP) services like Cloud Storage, Dataflow, and Cloud Functions. Strong communication and problem-solving skills.
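The "ML evaluation metrics" skill above can be illustrated with the standard regression metrics that BigQuery ML's ML.EVALUATE reports (mean absolute error, mean squared error, RMSE). This is a client-side sketch for intuition only; in BigQuery these are computed in SQL:

```python
import math

def regression_metrics(y_true, y_pred):
    """Compute MAE, MSE, and RMSE for a regression model's predictions."""
    n = len(y_true)
    errors = [t - p for t, p in zip(y_true, y_pred)]
    mae = sum(abs(e) for e in errors) / n          # mean absolute error
    mse = sum(e * e for e in errors) / n           # mean squared error
    return {"mae": mae, "mse": mse, "rmse": math.sqrt(mse)}

# Toy labels vs. predictions (illustrative values only).
m = regression_metrics([3.0, 5.0, 2.0], [2.5, 5.0, 3.0])
```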
Posted 1 month ago
0.0 years
9 - 14 Lacs
Noida
Work from Office
Required Skills: GCP Proficiency Strong expertise in Google Cloud Platform (GCP) services and tools, including Compute Engine, Google Kubernetes Engine (GKE), Cloud Storage, Cloud SQL, Cloud Load Balancing, IAM, Google Workflows, Google Cloud Pub/Sub, App Engine, Cloud Functions, Cloud Run, API Gateway, Cloud Build, Cloud Source Repositories, Artifact Registry, Google Cloud Monitoring, Logging, and Error Reporting. Cloud-Native Applications Experience in designing and implementing cloud-native applications, preferably on GCP. Workload Migration Proven expertise in migrating workloads to GCP. CI/CD Tools and Practices Experience with CI/CD tools and practices. Python and IaC Proficiency in Python and Infrastructure as Code (IaC) tools such as Terraform. Responsibilities: Cloud Architecture and Design Design and implement scalable, secure, and highly available cloud infrastructure solutions using Google Cloud Platform (GCP) services and tools such as Compute Engine, Kubernetes Engine, Cloud Storage, Cloud SQL, and Cloud Load Balancing. Cloud-Native Applications Design Develop high-level architecture designs and guidelines for the development, deployment, and life-cycle management of cloud-native applications on GCP, ensuring they are optimized for security, performance, and scalability using services like App Engine, Cloud Functions, and Cloud Run. API Management Develop and implement guidelines for securely exposing the interfaces of workloads running on GCP, along with granular access control using the IAM platform, RBAC platforms, and API Gateway. Workload Migration Lead the design and migration of on-premises workloads to GCP, ensuring minimal downtime and data integrity.
Posted 1 month ago
6.0 - 10.0 years
6 - 11 Lacs
Mumbai
Work from Office
Primary Skills Google Cloud Platform (GCP) Expertise in Compute (VMs, GKE, Cloud Run), Networking (VPC, Load Balancers, Firewall Rules), IAM (Service Accounts, Workload Identity, Policies), Storage (Cloud Storage, Cloud SQL, BigQuery), and Serverless (Cloud Functions, Eventarc, Pub/Sub). Strong experience in Cloud Build for CI/CD, automating deployments and managing artifacts efficiently. Terraform Skilled in Infrastructure as Code (IaC) with Terraform for provisioning and managing GCP resources. Proficient in Modules for reusable infrastructure, State Management (Remote State, Locking), and Provider Configuration . Experience in CI/CD Integration with Terraform Cloud and automation pipelines. YAML Proficient in writing Kubernetes manifests for deployments, services, and configurations. Experience in Cloud Build Pipelines , automating builds and deployments. Strong understanding of Configuration Management using YAML in GitOps workflows. PowerShell Expert in scripting for automation, managing GCP resources, and interacting with APIs. Skilled in Cloud Resource Management , automating deployments, and optimizing cloud operations. Secondary Skills CI/CD Pipelines GitHub Actions, GitLab CI/CD, Jenkins, Cloud Build Kubernetes (K8s) Helm, Ingress, RBAC, Cluster Administration Monitoring & Logging Stackdriver (Cloud Logging & Monitoring), Prometheus, Grafana Security & IAM GCP IAM Policies, Service Accounts, Workload Identity Networking VPC, Firewall Rules, Load Balancers, Cloud DNS Linux & Shell Scripting Bash scripting, system administration Version Control Git, GitHub, GitLab, Bitbucket
Posted 1 month ago
15.0 - 20.0 years
1 - 5 Lacs
Kolkata
Work from Office
Project Role: Infra Tech Support Practitioner Project Role Description: Provide ongoing technical support and maintenance of production and development systems and software products (both remote and onsite) and for configured services running on various platforms (operating within a defined operating model and processes). Provide hardware/software support and implement technology at the operating-system level across all server and network areas, and for particular software solutions/vendors/brands. Work includes L1 and L2 (basic and intermediate) troubleshooting. Must have skills: GCP Dataflow Good to have skills: DevOps Architecture Minimum 5 year(s) of experience is required Educational Qualification: 15 years full time education Key Responsibilities: a. Assessment of the Solution Architects' design from an operations point of view and implementation of it, along with end-user communication. Onboarding of new services keeping all different aspects in mind. b. Carrying out day-to-day operations like execution of service requests, changes, incidents, problems, and demands. c. Protecting SLAs and KPIs. Technical Experience: a. Basic knowledge of UNIX commands, Python scripting, JSON, YAML, and macro-driven Excel. b. In-depth knowledge and experience of several GCP services like VPC Network, network services, hybrid connectivity, IAM & Admin, App Engine, Cloud Functions, Cloud Storage, Cloud Logging, GCP Organizations, gcloud commands, and G Suite. c. Good experience with Terraform and GitLab. Professional Experience: Good verbal and written communication and presentation skills; interacting with clients at varying levels. Good team player. Qualification: 15 years full time education
Posted 1 month ago
4.0 - 9.0 years
11 - 19 Lacs
Chennai
Work from Office
Role & responsibilities Python, Dataproc, Airflow, PySpark, Cloud Storage, DBT, Dataform, NAS, Pub/Sub, Terraform, API, BigQuery, Data Fusion, GCP, Tekton Preferred candidate profile Data Engineer in Python - GCP Location: Chennai only. 4+ years of experience.
Posted 1 month ago
5.0 - 10.0 years
11 - 16 Lacs
Bengaluru
Work from Office
Job Title: Senior GCP Data Engineer Corporate Title: Associate Location: Bangalore, India Role Description Deutsche Bank has set itself ambitious goals in the areas of Sustainable Finance, ESG Risk Mitigation, and Corporate Sustainability. As climate change brings new challenges and opportunities, the Bank has set out to invest in developing a Sustainability Technology Platform, sustainability data products, and various sustainability applications which will aid the Bank's goals. As part of this initiative, we are building an exciting global team of technologists who are passionate about climate change and want to contribute to the greater good by leveraging their technology skill set in cloud/hybrid architecture. As part of this role, we are seeking a highly motivated and experienced Senior GCP Data Engineer to join our team. In this role, you will play a critical part in designing, developing, and maintaining robust data pipelines that transform raw data into valuable insights for our organization. What we'll offer you 100% reimbursement under childcare assistance benefit (gender neutral) Sponsorship for industry-relevant certifications and education Accident and term life insurance Your key responsibilities Design, develop, and maintain data pipelines using GCP services like Dataflow, Dataproc, and Pub/Sub. Develop and implement data ingestion and transformation processes using tools like Apache Beam and Apache Spark. Manage and optimize data storage solutions on GCP, including BigQuery, Cloud Storage, and Cloud SQL. Implement data security and access controls using GCP's Identity and Access Management (IAM) and Cloud Security Command Center. Monitor and troubleshoot data pipelines and storage solutions using GCP's Stackdriver and Cloud Monitoring tools. Collaborate with data experts, analysts, and product teams to understand data needs and deliver effective solutions. Automate data processing tasks using scripting languages like Python.
Participate in code reviews and contribute to establishing best practices for data engineering on GCP. Stay up to date on the latest advancements and innovations in GCP services and technologies. Your skills and experience 5+ years of experience as a Data Engineer or in a similar role. Proven expertise in designing, developing, and deploying data pipelines. In-depth knowledge of Google Cloud Platform (GCP) and its core data services (GCS, BigQuery, Cloud Storage, Dataflow, etc.). Strong proficiency in Python and SQL for data manipulation and querying. Experience with distributed data processing frameworks like Apache Beam or Apache Spark (a plus). Familiarity with data security and access control principles. Excellent communication, collaboration, and problem-solving skills. Ability to work independently, manage multiple projects, and meet deadlines. Knowledge of Sustainable Finance / ESG Risk / CSRD / Regulatory Reporting will be a plus. Knowledge of cloud infrastructure and data governance best practices will be a plus. Knowledge of Terraform will be a plus. How we'll support you About us and our teams Please visit our company website for further information: https://www.db.com/company/company.htm We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative, and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
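The streaming-pipeline work mentioned above (Dataflow/Apache Beam with Pub/Sub) typically revolves around grouping events into fixed windows. A plain-Python sketch of tumbling-window counting, with illustrative event names and no Beam dependency, shows the core idea:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_size):
    """Group (timestamp, key) events into fixed windows and count per key.

    Mimics what a Beam/Dataflow pipeline does with FixedWindows; timestamps
    are in seconds and every window starts at a multiple of window_size.
    """
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_size) * window_size
        counts[(window_start, key)] += 1
    return dict(counts)

# Events at t=1s, 2s land in window [0, 60); t=61s, 62s in window [60, 120).
events = [(1, "click"), (2, "click"), (61, "click"), (62, "view")]
result = tumbling_window_counts(events, window_size=60)
```

A real Beam pipeline would express the same grouping declaratively (windowing transform plus a per-key combiner) and additionally handle late data via watermarks, which this sketch omits.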
Posted 1 month ago
1.0 - 3.0 years
1 - 3 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Cloud Storage Engineer Job Title: Cloud Storage Engineer Location: Chennai, Hyderabad, Bangalore Experience: 1-3 years Job Summary The Cloud Storage Engineer is responsible for designing, implementing, and maintaining scalable and secure cloud-based storage solutions. This role ensures high availability, performance, and data integrity across cloud platforms. Key Responsibilities Design and deploy cloud storage architectures (object, block, and file storage). Manage storage provisioning, performance tuning, and capacity planning. Implement backup, archival, and disaster recovery strategies. Monitor storage systems and troubleshoot issues. Ensure compliance with data governance and security policies. Collaborate with DevOps, infrastructure, and application teams to optimize storage usage. Automate storage management tasks using scripting and cloud-native tools. Required Skills Expertise in cloud platforms (AWS S3/EBS/EFS, Azure Blob/File Storage, Google Cloud Storage). Strong understanding of storage protocols (NFS, SMB, iSCSI). Experience with Infrastructure as Code (IaC) tools like Terraform or CloudFormation. Proficiency in scripting languages (Python, Bash, PowerShell). Knowledge of data lifecycle management and storage tiering. Qualifications Bachelor's degree in Computer Science, Information Technology, or a related field. 3+ years of experience in cloud storage engineering or related roles. Cloud certifications (e.g., AWS Certified Solutions Architect, Azure Storage Specialist) are a plus.
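The "data lifecycle management and storage tiering" skill above usually means rules that move objects to cheaper tiers as they age, as in S3, GCS, or Azure Blob lifecycle policies. A minimal sketch with hypothetical age thresholds (not any provider's defaults):

```python
def storage_tier(age_days):
    """Pick a storage tier for an object based on its age in days.

    Thresholds are illustrative only; real lifecycle rules are configured
    per bucket/container and often also consider access patterns.
    """
    if age_days < 30:
        return "hot"      # standard storage, frequent access
    if age_days < 90:
        return "cool"     # infrequent access / nearline-class
    if age_days < 365:
        return "cold"     # coldline / archive-ready
    return "archive"      # deep archive / Glacier-class

tiers = [storage_tier(d) for d in (5, 45, 200, 400)]
```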
Posted 1 month ago
4.0 - 9.0 years
10 - 15 Lacs
Pune
Work from Office
MS Azure Infrastructure (must); PaaS will be a plus. Ensure solutions meet regulatory standards and manage risk effectively. Hands-on experience using Terraform to design and deploy solutions (at least 5 years), adhering to best practices to minimize risk and ensure compliance with regulatory requirements. Primary Skill AWS Infra along with PaaS will be an added advantage. Certification in Terraform is an added advantage. Certification in Azure and AWS is an added advantage. Can handle large audiences to present HLD, LLD, and ERC. Able to drive solutions/projects independently and lead projects with a focus on risk management and regulatory compliance. Secondary Skills Amazon Elastic File System (EFS) Amazon Redshift Amazon S3 Apache Spark Ataccama DQ Analyzer AWS Apache Airflow AWS Athena Azure Data Factory Azure Data Lake Storage Gen2 (ADLS) Azure Databricks Azure Event Hub Azure Stream Analytics Azure Synapse Analytics BigID C++ Cloud Storage Collibra Data Governance (DG) Collibra Data Quality (DQ) Data Lake Storage Data Vault Modeling Databricks DataProc DDI Dimensional Data Modeling EDC AXON Electronic Medical Record (EMR) Extract, Transform & Load (ETL) Financial Services Logical Data Model (FSLDM) Google Cloud Platform (GCP) BigQuery Google Cloud Platform (GCP) Bigtable Google Cloud Platform (GCP) Dataproc HQL IBM InfoSphere Information Analyzer IBM Master Data Management (MDM) Informatica Data Explorer Informatica Data Quality (IDQ) Informatica Intelligent Data Management Cloud (IDMC) Informatica Intelligent MDM SaaS Inmon methodology Java Kimball Methodology Metadata Encoding & Transmission Standards (METS) Metasploit Microsoft Excel Microsoft Power BI NewSQL NoSQL OpenRefine OpenVAS Performance Tuning Python R RDD Optimization SAS SQL Tableau Tenable Nessus TIBCO Clarity
Posted 1 month ago
3.0 - 6.0 years
4 - 6 Lacs
Mumbai
Work from Office
What you will do for Sectona Key Responsibilities: Cloud Infrastructure Management: Design, implement, and maintain cloud infrastructures on AWS. Manage compute resources, storage, and networking components in AWS. Provision, configure, and monitor EC2 instances, S3 storage, and VPCs. Operating System Management: Configure and manage Windows and Unix-based VMs (Linux/Ubuntu). Perform patch management, security configurations, and system upgrades. Ensure high availability and performance of cloud-hosted environments. Active Directory Integration: Implement and manage Active Directory (AD) services, including AWS Directory Service, within the cloud environment. Integrate on-prem AD with AWS using AWS Managed AD or AD Connector. Networking: Design and manage secure network architectures, including VPCs, subnets, VPNs, and routing configurations. Implement network security best practices (firewalls, security groups, NACLs). Troubleshoot and resolve network connectivity issues, ensuring optimal network performance. Storage Solutions: Implement scalable storage solutions using AWS S3, EBS, and Glacier. Manage backup and recovery strategies for cloud-hosted environments. Database Management: Manage relational (RDS, Aurora) and NoSQL (DynamoDB) databases in the cloud. Ensure database performance, security, and high availability. Load Balancer & Auto-scaling: Configure and manage AWS Elastic Load Balancers (ELB) to distribute traffic across instances. Implement Auto Scaling policies to ensure elasticity and high availability of applications. Performance Tuning: Monitor system performance and apply necessary optimizations. Identify and resolve performance bottlenecks across compute, network, storage, and database layers. Security & Compliance: Implement security best practices in line with AWS security standards (IAM, encryption, security groups, etc.). Regularly audit cloud environments for compliance with internal and external security regulations. 
Skills and experience you require Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent work experience). 4+ years of hands-on experience with AWS cloud platforms, including EC2, S3, VPC, RDS, Lambda, and IAM. Proficient in managing both Windows and Unix/Linux servers in a cloud environment. Strong experience with Active Directory integration in a cloud infrastructure. Solid understanding of cloud networking, VPC design, and security groups. Knowledge of cloud storage solutions like EBS, S3, and Glacier. Experience with cloud-based databases such as RDS (MySQL and MS SQL Server). Familiarity with load balancing technologies (Elastic Load Balancer) and Auto Scaling in AWS. Experience with cloud monitoring tools such as AWS CloudWatch, CloudTrail, or third-party tools. Familiarity with cloud services in Azure (e.g., VMs, Azure AD, Azure Storage) and GCP.
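The Auto Scaling responsibility above is, at its core, a capacity calculation: scale the current fleet by the ratio of observed to target utilization and clamp to the group's bounds. This sketch mirrors the spirit of AWS target-tracking scaling (the service itself adds cooldowns and smoothing this omits):

```python
import math

def desired_capacity(current, observed_util, target_util, min_size=1, max_size=10):
    """Target-tracking sketch: new capacity = current * observed / target,
    rounded up and clamped to [min_size, max_size]."""
    desired = math.ceil(current * observed_util / target_util)
    return max(min_size, min(max_size, desired))

# 4 instances at 90% CPU against a 60% target -> scale out.
scale_out = desired_capacity(4, observed_util=90, target_util=60)
# 4 instances at 30% CPU against a 60% target -> scale in.
scale_in = desired_capacity(4, observed_util=30, target_util=60)
```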
Posted 1 month ago
3.0 - 8.0 years
5 - 10 Lacs
Pune
Work from Office
Job Titles: Technology Service Specialist Corporate title: Associate Location: Pune, India Role Description Our team is part of the area Technology, Data, and Innovation (TDI) Private Bank. TDI PB Germany Service Operations provides 2nd-level application support for business applications used in branches, by mobile sales, or via the internet. The department is overall responsible for the stability of the applications. Incident Management and Problem Management are the main processes that account for the required stability. In-depth application knowledge and understanding of the business processes that the applications support are our main assets. Within TDI, Partnerdata is the central client reference data system in Germany. As a core banking system, many banking processes and applications are integrated with it and communicate via >2k interfaces. Through the partnership with Google Cloud (GCP), a number of applications and functionalities were migrated to GCP, from where they are operated and further developed. Besides the maintenance and implementation of new requirements, the focus also lies on regulatory topics surrounding a partner/client. We are looking for reinforcements for this contemporary and emerging cloud area of application. What we'll offer you As part of our flexible scheme, here are just some of the benefits that you'll enjoy Your key responsibilities Ensures that the Service Operations team provides the optimum service level to the business lines it supports. Takes overall responsibility for the resolution of incidents and problems within the team. Oversees the resolution of complex incidents. Ensures that Analysts apply the right problem-solving techniques and processes. Assists in managing business stakeholder relationships. Assists in defining and managing OLAs with relevant stakeholders. Ensures that the team understands OLAs, resources appropriately, and is aligned to business SLAs.
Ensures relevant Client Service teams are informed of progress on incidents, where necessary. Ensures that defined divisional Production Management service operations and support processes are adhered to by the team. Makes improvement recommendations where appropriate. Prepares for and, if requested, manages team review meetings. Makes suggestions for continual service improvement. Manages escalations by working with Client Services, other Service Operations Specialists, and relevant functions to accurately resolve escalated issues quickly. Observes areas requiring monitoring, reporting, and improvement. Identifies required metrics and ensures they are established, monitored, and improved where appropriate. Continuously seeks to improve team performance. Participates in team training events, where appropriate. Works with team members to identify areas of focus where training may improve team performance and incident resolution. Mentors and coaches Production Management Analysts within the team by providing career development and counselling, as needed. Assists Production Management Analysts in setting performance targets, and manages performance against them. Identifies team bottlenecks (obstacles) and takes appropriate actions to eliminate them. Level 3 or advanced support for technical infrastructure components Evaluation of new products, including prototyping and recommending new products, including automation Specifies/selects tools to enhance operational support. Champions activities and establishes best practices in the specialist area, working to implement best-of-breed test practices and processes in the area of profession.
Defines and implements best practices, solutions, and standards related to their area of expertise Builds, captures, and manages the transfer of knowledge across the Service Operations organization Fulfils service requests addressed to L2 Support Communicates with the Service Desk function and other L2 and L3 units Incident, Change, and Problem Management and Service Request Fulfillment Solving incidents of customers in time Log file analysis and root cause analysis Participating in major incident calls for high-priority incidents Resolving inconsistencies of data replication Supporting Problem Management to solve application issues Creating/executing service requests for customers, providing reports and statistics Escalating and informing about incidents in a timely manner Documentation of tasks, incidents, problems, and changes Documentation in ServiceNow Documentation in knowledge bases Improving monitoring of the application Adding requests for monitoring Adding alerts and thresholds for occurring issues Implementing automation of tasks Your skills and experience Service Operations Specialist experience within a global operations context Extensive experience of supporting complex application and infrastructure domains Experience managing and mentoring Service Operations teams Broad ITIL/best-practice service context within a real-time distributed environment Experience managing relationships across multiple disciplines and time zones Ability to converse clearly with internal and external staff via telephone and written communication Good knowledge of interface technologies and communication protocols Willingness to work in DE business hours Clear and concise documentation in general, and especially proper documentation of the current status of incidents, problems, and service requests in the Service Management tool Thorough and precise work style with a focus on high quality Distinct service orientation High degree of self-initiative Bachelor's degree from an accredited college
or university with a concentration in IT or a Computer Science-related discipline (equivalent diploma or technical faculty) ITIL certification and experience with the ITSM tool ServiceNow (preferred) Know-how in the banking domain, preferably regulatory topics around know-your-customer processes Experience with databases like BigQuery and a good understanding of Big Data and GCP technologies Experience in at least: GitHub, Terraform, Cloud SQL, Cloud Storage, Dataproc, Dataflow Architectural skills for big data solutions, especially interface architecture You can work very well in teams but also independently, and you are constructive and target-oriented Your English skills are very good and you can communicate professionally but also informally in small talks with the team Area-specific tasks/responsibilities: Handling Incident/Problem Management and Service Request Fulfillment Analyze incidents escalated from 1st-level support Analyze errors arising out of batch processing and interfaces of related systems Resolution or workaround determination and implementation Supporting the resolution of high-impact incidents on our services, including attendance at incident bridge calls Escalate incident tickets, working with members of the team and developers Handling service requests, e.g. reports for business and projects Providing resolution for open problems, or ensuring that the appropriate parties have been tasked with doing so Supporting the handover of new projects/applications into Production Services with Service Transition before the go-live phase Supporting on-call support activities
Posted 1 month ago
10.0 - 15.0 years
30 - 40 Lacs
Noida, Pune, Bengaluru
Hybrid
Strong experience in Big Data: data modelling, design, architecting, and solutioning. Understands programming languages like SQL, Python, R, and Scala. Good Python skills; experience with data visualisation tools such as Google Data Studio or Power BI. Knowledge of A/B testing, statistics, Google Cloud Platform, Google BigQuery, agile development, DevOps, data engineering, and ETL data processing. Strong migration experience of production Hadoop clusters to Google Cloud. Good to have: expertise in BigQuery, Dataproc, Data Fusion, Dataflow, Bigtable, Firestore, Cloud SQL, Cloud Spanner, Google Cloud Storage, Cloud Composer, Cloud Interconnect, etc.
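The A/B-testing and statistics knowledge listed above commonly boils down to comparing conversion rates between two variants. A stdlib-only sketch of the two-proportion z-statistic, with illustrative counts:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z-statistic for comparing conversion rates of variants A and B.

    Uses the pooled-proportion standard error; |z| > ~1.96 is significant
    at the 5% level for a two-sided test.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Variant A: 200/1000 conversions; variant B: 260/1000 (illustrative numbers).
z = two_proportion_z(conv_a=200, n_a=1000, conv_b=260, n_b=1000)
```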
Posted 1 month ago
3.0 - 7.0 years
10 - 20 Lacs
Pune, Gurugram, Bengaluru
Hybrid
Salary: 8 to 24 LPA Exp: 3 to 7 years Location: Gurgaon/Pune/Bengaluru Notice: Immediate to 30 days. Job Profile: Experienced Data Engineer with a strong foundation in designing, building, and maintaining scalable data pipelines and architectures. Skilled in transforming raw data into clean, structured formats for analytics and business intelligence. Proficient in modern data tools and technologies such as SQL, T-SQL, Python, Databricks, and cloud platforms (Azure). Adept at data wrangling, modeling, ETL/ELT development, and ensuring data quality, integrity, and security. Collaborative team player with a track record of enabling data-driven decision-making across business units. As a Data Engineer, the candidate will work on assignments for one of our utilities clients. Collaborating with cross-functional teams and stakeholders involves gathering data requirements, aligning business goals, and translating them into scalable data solutions. The role includes working closely with data analysts, scientists, and business users to understand needs, designing robust data pipelines, and ensuring data is accessible, reliable, and well-documented. Regular communication, iterative feedback, and joint problem-solving are key to delivering high-impact, data-driven outcomes that support organizational objectives. This position requires a proven track record of transforming processes and driving customer value and cost savings, with experience in running end-to-end analytics for large-scale organizations. Design, build, and maintain scalable data pipelines to support analytics, reporting, and advanced modeling needs. Collaborate with consultants, analysts, and clients to understand data requirements and translate them into effective data solutions. Ensure data accuracy, quality, and integrity through validation, cleansing, and transformation processes. Develop and optimize data models, ETL workflows, and database architectures across cloud and on-premises environments.
Support data-driven decision-making by delivering reliable, well-structured datasets and enabling self-service analytics. Provide seamless integration with cloud platforms (Azure), making it easy to build and deploy end-to-end data pipelines in the cloud. Use scalable clusters for handling large datasets and complex computations in Databricks, optimizing performance and cost management. Must have: client engagement experience and collaboration with cross-functional teams; a data engineering background in Databricks; capable of working effectively as an individual contributor or in collaborative team environments; effective communication and thought leadership with a proven record. Candidate Profile: Bachelor's/Master's degree in economics, mathematics, computer science/engineering, operations research, or related analytics areas. 3+ years of experience in data engineering. Hands-on experience with SQL, Python, Databricks, and cloud platforms like Azure. Prior experience in managing and delivering end-to-end projects. Outstanding written and verbal communication skills. Able to work in a fast-paced, continuously evolving environment and ready to take up uphill challenges. Able to understand cross-cultural differences and work with clients across the globe.
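The "data accuracy, quality, and integrity through validation" responsibility above can be sketched as a pre-load validation step. Field names here are hypothetical placeholders for a utilities-style schema, not the client's actual model:

```python
# Data-quality validation sketch: flag records with missing required
# fields or out-of-range values before they enter the pipeline.

REQUIRED = ("meter_id", "reading_kwh")

def validate(record):
    """Return a list of validation errors for one record (empty = clean)."""
    errors = []
    for field in REQUIRED:
        if record.get(field) is None:
            errors.append(f"missing {field}")
    kwh = record.get("reading_kwh")
    if isinstance(kwh, (int, float)) and kwh < 0:
        errors.append("negative reading_kwh")
    return errors

good = validate({"meter_id": "M1", "reading_kwh": 12.5})
bad = validate({"meter_id": None, "reading_kwh": -3})
```

In Databricks this kind of check would typically be expressed as DataFrame expectations or constraint columns rather than per-record Python, but the rule logic is the same.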
Posted 1 month ago
6.0 - 11.0 years
3 - 6 Lacs
Noida
Work from Office
We are looking for a skilled Snowflake Ingress/Egress Specialist with 6 to 12 years of experience to manage and optimize data flow into and out of our Snowflake data platform. This role involves implementing secure, scalable, and high-performance data pipelines, ensuring seamless integration with upstream and downstream systems, and maintaining compliance with data governance policies. Roles and Responsibilities Design, implement, and monitor data ingress and egress pipelines in and out of Snowflake. Develop and maintain ETL/ELT processes using tools like Snowpipe, Streams, Tasks, and external stages (S3, Azure Blob, GCS). Optimize data load and unload processes for performance, cost, and reliability. Coordinate with data engineering and business teams to support data movement for analytics, reporting, and external integrations. Ensure data security and compliance by managing encryption, masking, and access controls during data transfers. Monitor data movement activities using Snowflake Resource Monitors and Query History. Job Requirements Bachelor's degree in Computer Science, Information Systems, or a related field. 6-12 years of experience in data engineering, cloud architecture, or Snowflake administration. Hands-on experience with Snowflake features such as Snowpipe, Streams, Tasks, External Tables, and Secure Data Sharing. Proficiency in SQL, Python, and data movement tools (e.g., AWS CLI, Azure Data Factory, Google Cloud Storage Transfer). Experience with data pipeline orchestration tools such as Apache Airflow, dbt, or Informatica. Strong understanding of cloud storage services (S3, Azure Blob, GCS) and working with external stages. Familiarity with network security, encryption, and data compliance best practices. Snowflake certification (SnowPro Core or Advanced) is preferred. Experience with real-time streaming data (Kafka, Kinesis) is desirable. Knowledge of DevOps tools (Terraform, CI/CD pipelines) is a plus. Strong communication and documentation skills are essential.
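A central concern in the ingress pipelines described above is loading each staged file exactly once, which is what Snowpipe achieves by tracking load history per pipe. A purely in-memory sketch of that bookkeeping (illustrative only; Snowflake maintains this state server-side):

```python
def plan_loads(staged_files, load_history):
    """Return (files to load, updated history), loading each file once.

    staged_files: list of file URIs discovered in the external stage.
    load_history: set of URIs already ingested.
    """
    to_load = [f for f in staged_files if f not in load_history]
    return to_load, load_history | set(to_load)

history = {"s3://stage/2024-01-01.csv"}
staged = ["s3://stage/2024-01-01.csv", "s3://stage/2024-01-02.csv"]
to_load, history = plan_loads(staged, history)
```

Running the same plan twice yields an empty load list the second time, which is the idempotency property that makes retries safe.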
Posted 1 month ago
10.0 - 20.0 years
12 - 22 Lacs
Pune
Work from Office
Your key responsibilities
- Ensures that the Service Operations team provides an optimum service level to the business lines it supports.
- Takes overall responsibility for the resolution of incidents and problems within the team; oversees the resolution of complex incidents.
- Ensures that Analysts apply the right problem-solving techniques and processes.
- Assists in managing business stakeholder relationships.
- Assists in defining and managing OLAs with relevant stakeholders; ensures that the team understands OLAs and that resources are appropriately aligned to business SLAs.
- Ensures relevant Client Service teams are informed of progress on incidents, where necessary.
- Ensures that defined divisional Production Management service operations and support processes are adhered to by the team; makes improvement recommendations where appropriate.
- Prepares for and, if requested, manages team review meetings.
- Makes suggestions for continual service improvement.
- Manages escalations by working with Client Services, other Service Operations Specialists and relevant functions to resolve escalated issues quickly and accurately.
- Observes areas requiring monitoring, reporting and improvement; identifies required metrics and ensures they are established, monitored and improved where appropriate.
- Continuously seeks to improve team performance; participates in team training events, where appropriate.
- Works with team members to identify areas of focus where training may improve team performance and incident resolution.
- Mentors and coaches Production Management Analysts within the team, providing career development and counselling as needed.
- Assists Production Management Analysts in setting performance targets, and manages performance against them.
- Identifies team bottlenecks (obstacles) and takes appropriate actions to eliminate them.
- Provides Level 3 or Advanced support for technical infrastructure components.
- Evaluates new products, including prototyping, and recommends new products including automation.
- Specifies/selects tools to enhance operational support.
- Champions activities and establishes best practices in their specialist area, working to implement best-of-breed test practices and processes in their area of profession.
- Defines and implements best practices, solutions and standards related to their area of expertise.
- Builds, captures and manages the transfer of knowledge across the Service Operations organization.
- Fulfils Service Requests addressed to L2 Support; communicates with the Service Desk function and other L2 and L3 units.
- Handles Incident, Change and Problem Management and Service Request fulfilment: resolving customer incidents on time, log file analysis and root cause analysis, and participating in major incident calls for high-priority incidents.
- Resolves inconsistencies in data replication; supports Problem Management in solving application issues.
- Creates/executes Service Requests for customers and provides reports and statistics.
- Escalates and informs about incidents in a timely manner.
- Documents tasks, incidents, problems and changes in ServiceNow and in knowledge bases.
- Improves monitoring of the application: adding requests for monitoring, and adding alerts and thresholds for occurring issues.
- Implements automation of tasks.
Your skills and experience
- Service Operations Specialist experience within a global operations context.
- Extensive experience supporting complex application and infrastructure domains.
- Experience managing and mentoring Service Operations teams.
- Broad ITIL/best-practice service context within a real-time distributed environment.
- Experience managing relationships across multiple disciplines and time zones.
- Ability to converse clearly with internal and external staff via telephone and written communication.
- Good knowledge of interface technologies and communication protocols.
- Willingness to work in DE business hours.
- Clear and concise documentation in general, and especially proper documentation of the current status of incidents, problems and service requests in the Service Management tool.
- Thorough and precise work style with a focus on high quality; distinct service orientation; high degree of self-initiative.
- Bachelor's degree from an accredited college or university with a concentration in an IT or Computer Science related discipline (or equivalent diploma or technical faculty).
- ITIL certification and experience with the ITSM tool ServiceNow (preferred).
- Know-how in the banking domain, preferably regulatory topics around know-your-customer processes.
- Experience with databases like BigQuery and a good understanding of Big Data and GCP technologies.
- Experience with at least: GitHub, Terraform, Cloud SQL, Cloud Storage, Dataproc, Dataflow.
- Architectural skills for big data solutions, especially interface architecture.
- You work very well in teams but also independently, and you are constructive and target-oriented.
- Your English skills are very good, and you can communicate both professionally and informally in small talk with the team.
Area-specific tasks / responsibilities:
- Handling Incident/Problem Management and Service Request fulfilment.
- Analyzing incidents escalated from 1st Level Support.
- Analyzing errors arising from batch processing and the interfaces of related systems.
- Determining and implementing resolutions or workarounds.
- Supporting the resolution of high-impact incidents on our services, including attendance at incident bridge calls.
- Escalating incident tickets and working with members of the team and Developers.
- Handling Service Requests, e.g. reports for Business and Projects.
- Providing resolution for open problems, or ensuring that the appropriate parties have been tasked with doing so.
- Supporting the handover of new projects/applications into Production Services with Service Transition before the go-live phase.
- Supporting on-call support activities.
Posted 1 month ago
5.0 - 8.0 years
6 - 12 Lacs
Chennai
Work from Office
Design and develop scalable cloud-based data solutions on Google Cloud Platform (GCP). Build and optimize Python-based ETL pipelines and data workflows. Work with NoSQL databases (Bigtable, Firestore, MongoDB) for high-performance data management.
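The Python ETL work described above can be sketched as a small extract/transform/load chain. This is an illustrative skeleton only; the field names and cleaning rules are invented, and in a real GCP pipeline the extract and load steps would talk to Cloud Storage, Pub/Sub, BigQuery or Bigtable:

```python
# Illustrative ETL skeleton: extract records, transform them, load the result.
# Field names ("user_id", "amount") are examples, not from any real schema.

def extract(rows):
    # Stand-in for reading from Cloud Storage or Pub/Sub
    return list(rows)

def transform(rows):
    # Normalise and filter: drop rows without a user_id, cast amount to float
    return [
        {"user_id": r["user_id"], "amount": float(r["amount"])}
        for r in rows
        if r.get("user_id")
    ]

def load(rows, sink):
    # Stand-in for a BigQuery or Bigtable write; returns rows written
    sink.extend(rows)
    return len(rows)

sink = []
raw = [{"user_id": "u1", "amount": "9.5"}, {"user_id": None, "amount": "1"}]
print(load(transform(extract(raw)), sink))  # prints 1 (the invalid row is dropped)
```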
Posted 1 month ago
3.0 - 8.0 years
8 - 13 Lacs
Bengaluru
Work from Office
Should have 3 to 8 years of experience in C# and WPF/WinForms. Should have good knowledge of multi-threaded application development. Should have strong knowledge of OOP concepts, SOLID principles and design patterns. Should follow good coding practices and be able to design and develop modules independently with minimal supervision.
PERKS AND BENEFITS: Best in Industry
Education Qualification: B.Tech/B.E. in Any Specialization, M.Tech in Any Specialization; Doctorate Not Required, Any Doctorate
Posted 1 month ago
6.0 - 10.0 years
25 - 40 Lacs
Bengaluru
Work from Office
Demonstrated expertise in backend architecture, showcasing a deep understanding of design patterns and industry best practices. In-depth knowledge of CS data structures and algorithms. Specialized knowledge and application of Golang, emphasizing its relevance in contemporary software development. Advanced skills in multi-threading and concurrent programming, contributing to optimized system performance. Ready to work in an individual contributor (IC) role. Proficient in Golang development with a dedicated focus on Golang programming. Experience in crafting applications that run seamlessly on Linux operating systems. Demonstrated proficiency in adhering to industry best practices, ensuring the delivery of robust and scalable backend solutions.
Education Qualification: BE/BTech/M.Tech in CS Specialization
Posted 1 month ago
6.0 - 11.0 years
12 - 17 Lacs
Bengaluru
Work from Office
Demonstrated expertise in backend architecture, showcasing a deep understanding of design patterns and industry best practices.
Education Qualification: BE/BTech/M.Tech in CS Specialization
Job Highlights: Specialized knowledge and application of .NET Core, emphasizing its relevance in contemporary software development. Advanced skills in multi-threading and concurrent programming, contributing to optimized system performance. Ready to work in an individual contributor (IC) role. Proficient in .NET Framework development with a dedicated focus on C# programming. Experience in crafting applications that run seamlessly on Linux operating systems. Demonstrated proficiency in adhering to industry best practices, ensuring the delivery of robust and scalable backend solutions.
Location: Bangalore, Whitefield
About Company: IDrive Software India Pvt Ltd. is a privately held company that specializes in cloud storage, online backup, file syncing, remote access, compliance, and related technologies. We primarily serve the consumer, small business, and enterprise markets. With over 500 petabytes of storage, we are a premier cloud-based service provider. Our expertise lies in providing a host of Internet-based data solutions, including online storage, online backup, collaboration, sharing, and remote access. At IDrive Software, we have an exceptional, self-motivated, and skilled team. A wide range of highly reviewed, cutting-edge applications is a testimony to the dedication and skills of our team.
Company Info
Address: B-903, 9th Floor, Brigade Tech Park, Whitefield Road, Whitefield, Bangalore, Karnataka, India. Postings for IDrive Software (India) Private Limited.
Posted 1 month ago
2.0 - 8.0 years
9 - 13 Lacs
Bengaluru
Work from Office
Candidate must have a minimum of 2-8 years of experience in voice/phone support (preferably a technical support process). Diagnoses and resolves application issues; provides remote assistance for resolving app issues. Responsible for providing diagnostic technical support related to installation/configuration/issue troubleshooting. Applies diagnostic techniques to identify problems, investigate causes and recommend solutions to correct failures. Should have knowledge of networking concepts. Should be open to working rotational/night shifts in a 24/7 environment. Should have excellent verbal communication skills.
PERKS AND BENEFITS: Best in Industry
Education Qualification: UG - Any Graduate; PG - Any Postgraduate, Post Graduation Not Required; Doctorate - Any Doctorate, Doctorate Not Required
Posted 1 month ago
6.0 - 11.0 years
12 - 17 Lacs
Bengaluru
Work from Office
Demonstrated expertise in backend architecture, showcasing a deep understanding of design patterns and industry best practices. In-depth knowledge of CS data structures and algorithms. Specialized knowledge and application of Golang, emphasizing its relevance in contemporary software development. Advanced skills in multi-threading and concurrent programming, contributing to optimized system performance. Ready to work in an individual contributor (IC) role. Proficient in Golang development with a dedicated focus on Golang programming. Experience in crafting applications that run seamlessly on Linux operating systems. Demonstrated proficiency in adhering to industry best practices, ensuring the delivery of robust and scalable backend solutions.
Education Qualification: BE/BTech/M.Tech in CS Specialization
Posted 1 month ago
6.0 - 11.0 years
18 - 33 Lacs
Hyderabad, Bengaluru, Mumbai (All Areas)
Hybrid
We are looking for a cloud engineer to join our engineering team to optimize, implement and maintain the organization's cloud-based systems.
Posted 1 month ago
5.0 - 8.0 years
12 - 16 Lacs
Hubli, Mangaluru, Mysuru
Work from Office
Comcast brings together the best in media and technology. We drive innovation to create the world's best entertainment and online experiences. As a Fortune 50 leader, we set the pace in a variety of innovative and fascinating businesses and create career opportunities across a wide range of locations and disciplines. We are at the forefront of change and move at an amazing pace, thanks to our remarkable people, who bring cutting-edge products and services to life for millions of customers every day. If you share in our passion for teamwork, our vision to revolutionize industries and our goal to lead the future in media and technology, we want you to fast-forward your career at Comcast.
Job Summary
Responsible for the design, implementation, maintenance, security and repair of an organization's database, in addition to maintaining the database support tools, database tables and dictionaries, and recovery and backup procedures. Works with moderate guidance in own area of knowledge.
Job Description
ABOUT THE ROLE: Universal seeks a talented HANA DBA to join our growing team supporting Corporate Business Solutions. The Sr. SAP HANA Database Administrator is responsible for critical aspects of database administration including installation, configuration, upgrades, capacity/resource planning, performance tuning, backup and recovery strategy, promoting process improvement, problem solving, adhering to security policies and managing clusters of DB servers. Experience is required in cloning production data to development/test environments, and in application optimization including query optimization. The candidate must have SAP HANA database experience along with SAP BASIS; Oracle, MS SQL, and SAP IQ experience would also be a plus.
Requirements
- Bachelor's degree in technology or equivalent degree preferred.
- SAP HANA (S/4 HANA, HANA 2.0): a minimum of 3 years' experience is required.
- Oracle (Oracle 12c/Oracle 19c), SQL Server and SAP IQ experience are a plus.
- Demonstrated experience in SAP HANA database administration using Oracle Linux, Red Hat and SUSE Linux.
- Experience with high availability, disaster recovery and HSR processes in on-prem and AWS Cloud environments.
- Must have been part of teams that have upgraded and/or migrated DBs from on-prem to Cloud.
- Must understand cloud storage, file systems and BACKINT processes.
- Deep knowledge of database technologies and practices; knowledge of database trends and tools.
- Excellent communication skills, both written and verbal.
- Ability to work well under pressure and manage tight deadlines and situations where conflicting priorities arise.
- Be highly accountable, possessing a can-do attitude and a strong results orientation.
- Excellent active listening skills; ability to clearly articulate messages to a variety of audiences.
- Strong analytical and critical thinking skills; problem solving and root cause identification skills.
- Excellent organization and time management skills.
- Excellent understanding of various databases, with a proven track record in establishing and managing database processes and activities within a global organization.
- Ability to work with offshore/overseas teams.
Responsibilities
- Plan and execute installs/upgrades/patching.
- Work closely with Project Managers, Process Leads, and other stakeholders to ensure business requirements have been met.
- Maintain DB stability: backups/restores; monitor disk allocations; troubleshoot performance issues; reorganize tables/indexes; make parameter changes; monitor overall utilization.
- Plan DB changes within the framework of the ServiceNow change management procedures; ensure audit information is collected as needed for change tickets.
- Define and collect audit information for review by internal and external auditors.
- Provide daily, weekly, and monthly activity reports describing activities accomplished in a timely manner.
- Develop and maintain knowledge articles for standard work.
- Be willing to learn and support our HANA environment on premises and in the AWS cloud.
Preferred Qualifications
- Minimum of a 4-year IT-related degree and 3-4 years of production systems administration/support, or equivalent work experience.
- Ability to use a wide variety of open-source technologies and tools.
- Ability to code and script.
- Experience with systems and IT operations.
- Strong grasp of automation tools.
- Comfort with collaboration, open communication and reaching across functional borders.
Comcast is proud to be an equal opportunity workplace. We will consider all qualified applicants for employment without regard to race, color, religion, age, sex, sexual orientation, gender identity, national origin, disability, veteran status, genetic information, or any other basis protected by applicable law. Base pay is one part of the Total Rewards that Comcast provides to compensate and recognize employees for their work. Most sales positions are eligible for a Commission under the terms of an applicable plan, while most non-sales positions are eligible for a Bonus. Additionally, Comcast provides best-in-class Benefits to eligible employees. We believe that benefits should connect you to the support you need when it matters most, and should help you care for those who matter most. That's why we provide an array of options, expert guidance and always-on tools, personalized to meet the needs of your reality, to help support you physically, financially and emotionally through the big milestones and in your everyday life. Please visit the compensation and benefits summary on our careers site for more details.
Education: Bachelor's Degree. While possessing the stated degree is preferred, Comcast also may consider applicants who hold some combination of coursework and experience, or who have extensive related professional experience.
Relevant Work Experience: 2-5 Years
Posted 1 month ago
3.0 - 7.0 years
37 - 40 Lacs
Bengaluru
Work from Office
Job Title: DevOps Engineer, AS
Location: Bangalore, India
Role Description
Deutsche Bank has set itself ambitious goals in the areas of Sustainable Finance, ESG Risk Mitigation and Corporate Sustainability. As climate change brings new challenges and opportunities, the Bank has set out to invest in developing a Sustainability Technology Platform, sustainability data products and various sustainability applications which will aid the Bank's goals. As part of this initiative, we are building an exciting global team of technologists who are passionate about climate change and want to contribute to the greater good, leveraging their technology skill set in cloud/hybrid architecture. As part of this role, we are seeking a highly skilled and experienced DevOps Engineer to join our growing team. In this role, you will play a pivotal part in managing and optimizing cloud infrastructure, facilitating continuous integration and delivery, and ensuring system reliability.
What we'll offer you
- 100% reimbursement under the childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Accident and term life insurance
Your key responsibilities
- Create, implement, and oversee scalable, secure, and cost-efficient cloud infrastructure on Google Cloud Platform (GCP).
- Utilize Infrastructure as Code (IaC) methodologies with tools such as Terraform, Deployment Manager, or alternatives.
- Implement robust security measures to ensure data access control and compliance with regulations; adopt security best practices, establish IAM policies, and ensure adherence to both organizational and regulatory requirements.
- Set up and manage Virtual Private Clouds (VPCs), subnets, firewalls, VPNs, and interconnects to facilitate secure cloud networking.
- Establish continuous integration and continuous deployment (CI/CD) pipelines using Jenkins, GitHub Actions, or comparable tools for automated application deployments.
- Implement monitoring and alerting solutions through Stackdriver (Cloud Operations), Prometheus, or other third-party applications.
- Evaluate and optimize cloud expenditure by utilizing committed use discounts, autoscaling features, and resource rightsizing.
- Manage and deploy containerized applications through Google Kubernetes Engine (GKE) and Cloud Run.
- Deploy and manage GCP databases like Cloud SQL and BigQuery.
Your skills and experience
- Minimum of 5+ years of experience in DevOps or similar roles, with hands-on experience in GCP.
- In-depth knowledge of Google Cloud services (e.g., GCE, GKE, Cloud Functions, Cloud Run, Pub/Sub, BigQuery, Cloud Storage) and the ability to architect, deploy, and manage cloud-native applications.
- Proficiency with tools like Jenkins, GitLab, Terraform, Ansible, Docker, Kubernetes.
- Experience with Infrastructure as Code (IaC) tools like Terraform, CloudFormation, or GCP-native Deployment Manager.
- Solid understanding of security protocols, IAM, networking, and compliance requirements within cloud environments.
- Strong problem-solving skills and the ability to troubleshoot cloud-based infrastructure.
- Google Cloud certifications (e.g., Associate Cloud Engineer, Professional Cloud Architect, or Professional DevOps Engineer) are a plus.
How we'll support you
About us and our teams
Please visit our company website for further information: https://www.db.com/company/company.htm
We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
Posted 1 month ago
5.0 - 10.0 years
5 - 9 Lacs
Hyderabad
Work from Office
Experience
- 8+ years of Data Engineering experience.
- 3+ years of experience with cloud platform services (preferably GCP).
- 2+ years of hands-on experience with Pentaho.
- Hands-on experience in building and optimizing data pipelines and data sets.
- Hands-on experience with data extraction and transformation tasks, taking care of data security, error handling and pipeline performance.
- Hands-on experience with relational SQL (Oracle, SQL Server or MySQL) and NoSQL databases.
- Advanced SQL experience: creating and debugging stored procedures, functions, triggers and object types in PL/SQL.
- Hands-on experience with programming languages: Java (mandatory), Go, Python.
- Hands-on experience in unit testing data pipelines.
- Experience using Pentaho Data Integration (Kettle/Spoon) and debugging issues.
- Experience supporting and working with cross-functional teams in a dynamic environment.
Technical Skills
- Programming & Languages: Java
- Database Tech: Oracle, Spanner, BigQuery, Cloud Storage
- Operating Systems: Linux
- Good knowledge and understanding of cloud-based ETL frameworks and tools.
- Good understanding and working knowledge of batch and streaming data processing.
- Good understanding of Data Warehousing architecture.
- Knowledge of open table and file formats (e.g. Delta, Hudi, Iceberg, Avro, Parquet, JSON, CSV).
- Strong analytic skills related to working with unstructured datasets.
- Excellent numerical and analytical skills.
Responsibilities
- Design and develop various standard/reusable ETL jobs and pipelines.
- Work with the team on extracting data from different data sources like Oracle, cloud storage and flat files.
- Work with database objects including tables, views, indexes, schemas, stored procedures, functions, and triggers.
- Work with the team to troubleshoot and resolve issues in job logic as well as performance.
- Write ETL validations based on design specifications for unit testing.
- Work with the BAs and the DBAs on requirements gathering, analysis, testing, metrics and project coordination.
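The unit-testing responsibility mentioned above can be sketched as follows. This is a hedged illustration: the transform (normalising a country-code field) and its rules are invented for the example, not taken from the posting:

```python
# Sketch of a unit-testable ETL transformation and its validation.
# The field and normalisation rules are illustrative assumptions.

def clean_country_code(value):
    """Normalise a country-code field: strip whitespace, uppercase."""
    return value.strip().upper()

def test_clean_country_code():
    # Validations written against the design specification for the transform
    assert clean_country_code(" in ") == "IN"
    assert clean_country_code("US") == "US"

test_clean_country_code()
print("ok")
```

Keeping each transform as a small pure function like this makes pipeline steps testable without spinning up the ETL engine itself.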
Posted 1 month ago