5.0 - 10.0 years
35 - 40 Lacs
Pune
Work from Office
Job Title: Data Engineer (ETL, Big Data, Hadoop, Spark, GCP), AVP Location: Pune, India Role Description The Engineer is responsible for developing and delivering elements of engineering solutions to accomplish business goals. Awareness of the bank's important engineering principles is expected. Root cause analysis skills are developed by addressing enhancements and fixes to products; reliability and resiliency are built into solutions through early testing, peer reviews and automation of the delivery life cycle. The successful candidate should be able to work independently on medium to large sized projects with strict deadlines, in a cross-application, mixed technical environment, and must demonstrate a solid hands-on development track record while working within an agile methodology. The role demands working alongside a geographically dispersed team. The position is part of the build-out of the Compliance tech internal development team in India. The overall team will primarily deliver improvements in compliance tech capabilities that are major components of the regulatory portfolio, addressing various regulatory commitments and mandates. What we'll offer you As part of our flexible scheme, here are just some of the benefits that you'll enjoy: best in class leave policy, gender neutral parental leaves, 100% reimbursement under childcare assistance benefit (gender neutral), sponsorship for industry relevant certifications and education, Employee Assistance Program for you and your family members, comprehensive hospitalization insurance for you and your dependents, accident and term life insurance, and complimentary health screening for 35 yrs. and above. Your key responsibilities Analyze data sets and design and code stable, scalable data ingestion workflows, integrating them into existing workflows. Work with team members and stakeholders to clarify requirements and provide the appropriate ETL solution. Work as a senior developer building analytics algorithms on top of ingested data and on various data sourcing in Hadoop and GCP. Ensure new code is tested at both unit and system level; design, develop and peer review new code and functionality. Operate as a member of an agile scrum team. Apply root cause analysis skills to identify bugs and issues behind failures. Support the production support and release management teams in their tasks. Your skills and experience: More than 6 years of coding experience in reputed organizations. Hands-on experience with Bitbucket and CI/CD pipelines. Proficient in Hadoop, Python, Spark, SQL, Unix and Hive. Basic understanding of on-prem and GCP data security. Hands-on development experience on large ETL/big data systems, GCP being a big plus. Hands-on experience with Cloud Build, Artifact Registry, Cloud DNS, Cloud Load Balancing, etc., as well as Dataflow, Cloud Composer, Cloud Storage, Dataproc, etc. Basic understanding of data quality dimensions such as consistency, completeness, accuracy and lineage. Hands-on business and systems knowledge gained in a regulatory delivery environment; banking, regulatory and cross-product knowledge. Passionate about test-driven development. Prior experience with release management tasks and responsibilities. Data visualization experience in Tableau is good to have. How we'll support you Training and development to help you excel in your career.
Coaching and support from experts in your team. A culture of continuous learning to aid progression. A range of flexible benefits that you can tailor to suit your needs.
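To make the ingestion responsibilities above concrete, here is a minimal sketch of a PySpark batch job that cleans raw files and publishes a partitioned Hive table, assuming a Spark-on-Hadoop/GCP setup; the paths, column names and table name are hypothetical, not taken from the posting.

```python
# Hedged sketch of a batch ingestion workflow: read raw CSVs, de-duplicate,
# derive a partition column, and publish a Hive table for downstream analytics.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("raw-trade-ingestion")   # hypothetical job name
    .enableHiveSupport()
    .getOrCreate()
)

raw = spark.read.option("header", True).csv("hdfs:///landing/trades/2024-06-01/")

cleaned = (
    raw.dropDuplicates(["trade_id"])                     # hypothetical key column
       .withColumn("trade_date", F.to_date("trade_ts"))  # hypothetical timestamp column
       .filter(F.col("notional").isNotNull())
)

(cleaned.write
    .mode("overwrite")
    .partitionBy("trade_date")
    .saveAsTable("compliance.trades_cleaned"))           # hypothetical Hive table
```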
Posted 1 week ago
4.0 - 9.0 years
9 - 13 Lacs
Bengaluru
Work from Office
About us: As a Fortune 50 company with more than 400,000 team members worldwide, Target is an iconic brand and one of America's leading retailers. Joining Target means promoting a culture of mutual care and respect and striving to make the most meaningful and positive impact. Becoming a Target team member means joining a community that values different voices and lifts each other up. Here, we believe your unique perspective is important, and you'll build relationships by being authentic and respectful. Overview about TII: At Target, we have a timeless purpose and a proven strategy. And that hasn't happened by accident. Some of the best minds from different backgrounds come together at Target to redefine retail in an inclusive learning environment that values people and delivers world-class outcomes. That winning formula is especially apparent in Bengaluru, where Target in India operates as a fully integrated part of Target's global team and has more than 4,000 team members supporting the company's global strategy and operations. Team Overview: Every time a guest enters a Target store or browses Target.com or the app, they experience the impact of Target's investments in technology and innovation. We're the technologists behind one of the most loved retail brands, delivering joy to millions of our guests, team members, and communities. Join our global in-house technology team of more than 5,000 engineers, data scientists, architects and product managers striving to make Target the most convenient, safe and joyful place to shop. We use agile practices and leverage open-source software to adapt and build best-in-class technology for our team members and guests, and we do so with a focus on diversity and inclusion, experimentation and continuous learning. At Target, we are gearing up for exponential growth and continuously expanding our guest experience. To support this expansion, Data Engineering is building robust warehouses and enhancing existing datasets to meet business needs across the enterprise. We are looking for talented individuals who are passionate about innovative technology and data warehousing and are eager to contribute to data engineering. Position Overview: Assess client needs and convert business requirements into a business intelligence (BI) solutions roadmap relating to complex issues involving long-term or multi-work streams. Analyze technical issues and questions, identifying data needs and delivery mechanisms. Implement data structures using best practices in data modeling, ETL/ELT processes, Spark, Scala, SQL, database, and OLAP technologies. Manage the overall development cycle, driving best practices and ensuring development of high quality code for common assets and framework components. Provide technical guidance and heavily contribute to a team of high caliber Data Engineers by developing test-driven solutions and BI applications that can be deployed quickly and in an automated fashion. Manage and execute against agile plans and set deadlines based on client, business, and technical requirements. Drive resolution of technology roadblocks including code, infrastructure, build, deployment, and operations. Ensure all code adheres to development & security standards. About you: 4 year degree or equivalent experience. 5+ years of software development experience, preferably in data engineering/Hadoop development (Hive, Spark etc.)
Hands-on experience in object-oriented or functional programming such as Scala / Java / Python. Knowledge or experience with a variety of database technologies (Postgres, Cassandra, SQL Server). Knowledge of data integration design using API and streaming technologies (Kafka) as well as ETL and other data integration patterns. Experience with cloud platforms like Google Cloud, AWS, or Azure; hands-on experience with BigQuery will be an added advantage. Good understanding of distributed storage (HDFS, Google Cloud Storage, Amazon S3) and processing (Spark, Google Dataproc, Amazon EMR or Databricks). Experience with a CI/CD toolchain (Drone, Jenkins, Vela, Kubernetes) is a plus. Familiarity with data warehousing concepts and technologies. Maintains technical knowledge within areas of expertise. Constant learner and team player who enjoys solving tech challenges with a global team. Hands-on experience in building complex data pipelines and flow optimizations. Able to understand the data, draw insights, make recommendations, and identify any data quality issues upfront. Experience with test-driven development and software test automation. Follows best coding practices & engineering guidelines as prescribed. Strong written and verbal communication skills with the ability to present complex technical information in a clear and concise manner to a variety of audiences. Life at Target- https://india.target.com/ Benefits- https://india.target.com/life-at-target/workplace/benefits Culture- https://india.target.com/life-at-target/belonging
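The streaming-integration and distributed-processing items above (Kafka, Spark, cloud storage) can be pictured with a short Spark Structured Streaming sketch; the broker, topic, schema and bucket below are hypothetical placeholders rather than details from the posting.

```python
# Hedged sketch: consume JSON order events from Kafka and land them as Parquet
# on cloud storage using Spark Structured Streaming.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("orders-stream").getOrCreate()

order_schema = StructType([
    StructField("order_id", StringType()),
    StructField("store_id", StringType()),
    StructField("amount", DoubleType()),
])

orders = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "orders")                      # hypothetical topic
    .load()
    .select(F.from_json(F.col("value").cast("string"), order_schema).alias("o"))
    .select("o.*")
)

query = (
    orders.writeStream
    .format("parquet")
    .option("path", "gs://example-bucket/orders/")                    # hypothetical sink
    .option("checkpointLocation", "gs://example-bucket/chk/orders/")  # required for recovery
    .start()
)
query.awaitTermination()
```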
Posted 1 week ago
3.0 - 5.0 years
32 - 40 Lacs
Pune
Work from Office
Job Title: Senior Engineer, VP Location: Pune, India Role Description The Engineer is responsible for managing or performing work across multiple areas of the bank's overall IT platform/infrastructure, including analysis, development, and administration. It may also involve taking functional oversight of engineering delivery for specific departments. Work includes: planning and developing entire engineering solutions to accomplish business goals; building reliability and resiliency into solutions with appropriate testing and reviewing throughout the delivery lifecycle; ensuring maintainability and reusability of engineering solutions; ensuring solutions are well architected and can be integrated successfully into the end-to-end business process flow; reviewing engineering plans and quality to drive re-use and improve engineering capability; and participating in industry forums to drive adoption of innovative technologies, tools and solutions in the Bank. Deutsche Bank's Corporate Bank division is a leading provider of cash management, trade finance and securities finance. We complete green-field projects that deliver the best Corporate Bank - Securities Services products in the world. Our team is diverse, international, and driven by shared focus on clean code and valued delivery. At every level, agile minds are rewarded with competitive pay, support, and opportunities to excel. You will work as part of a cross-functional agile delivery team. You will bring an innovative approach to software development, focusing on using the latest technologies and practices, as part of a relentless focus on business value. You will be someone who sees engineering as a team activity, with a predisposition to open code, open discussion and creating a supportive, collaborative environment. You will be ready to contribute to all stages of software delivery, from initial analysis right through to production support. What we'll offer you As part of our flexible scheme, here are just some of the benefits that you'll enjoy: best in class leave policy, gender neutral parental leaves, 100% reimbursement under childcare assistance benefit (gender neutral), sponsorship for industry relevant certifications and education, Employee Assistance Program for you and your family members, comprehensive hospitalization insurance for you and your dependents, accident and term life insurance, and complimentary health screening for 35 yrs.
and above. Your key responsibilities The candidate is expected to be a hands-on engineering lead involved in analysis, design, design/code reviews, coding and release activities; champion engineering best practices and guide/mentor the team to achieve high performance; work closely with business stakeholders, the Tribe Lead, Product Owner and Lead Architect to successfully deliver the business outcomes; acquire functional knowledge of the business capability being digitized/re-engineered; and demonstrate ownership, inspire others, bring innovative thinking and a growth mindset, and collaborate for success. Your skills and experience Minimum 15 years of IT industry experience in full stack development. Expert in Java, Spring Boot, NodeJS, ReactJS. Strong experience in big data processing: Apache Spark, Hadoop, BigQuery, DataProc, Dataflow etc. Strong experience in Kubernetes and the OpenShift container platform. Experience in data streaming, i.e. Kafka, Pub/Sub etc. Experience of working on public cloud, GCP preferred, AWS or Azure. Knowledge of various distributed/multi-tiered architecture styles: micro-services, data mesh, integration patterns etc. Experience with modern software product delivery practices, processes and tooling, and BizDevOps skills such as CI/CD pipelines using Jenkins, Git Actions etc. Experience leading teams and mentoring developers. Key Skills: Java, Spring Boot, NodeJS, SQL/PLSQL, ReactJS. Advantageous: prior experience in the Banking/Finance domain; having worked on hybrid cloud solutions, preferably using GCP; having worked on product development. How we'll support you Training and development to help you excel in your career. Coaching and support from experts in your team. A culture of continuous learning to aid progression. A range of flexible benefits that you can tailor to suit your needs. About us and our teams Please visit our company website for further information: https://www.db.com/company/company.htm We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
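For the data-streaming requirement above (Kafka, Pub/Sub), the sketch below shows a minimal Pub/Sub publish-and-subscribe round trip using the google-cloud-pubsub Python client; the role itself is Java/Spring Boot centric, and the project, topic and subscription names here are hypothetical.

```python
# Hedged sketch: publish one event and pull it back with a streaming subscriber.
from concurrent.futures import TimeoutError
from google.cloud import pubsub_v1

PROJECT = "example-project"  # hypothetical project id

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(PROJECT, "payment-events")  # hypothetical topic
publisher.publish(topic_path, data=b'{"payment_id": "p-1", "status": "BOOKED"}').result()

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path(PROJECT, "payment-events-sub")

def handle(message):
    # Downstream processing (validation, enrichment, persistence) would go here.
    print("received:", message.data)
    message.ack()

streaming_pull = subscriber.subscribe(subscription_path, callback=handle)
try:
    streaming_pull.result(timeout=30)  # listen briefly for demonstration purposes
except TimeoutError:
    streaming_pull.cancel()
```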
Posted 1 week ago
5.0 - 8.0 years
2 - 6 Lacs
Mumbai
Work from Office
Job Information Job Opening ID ZR_1963_JOB Date Opened 17/05/2023 Industry Technology Job Type Work Experience 5-8 years Job Title Neo4j GraphDB Developer City Mumbai Province Maharashtra Country India Postal Code 400001 Number of Positions 5 Graph Data Engineer required for a complex supply chain project. Key required skills: graph data modelling (experience with graph data models (LPG, RDF) and graph language (Cypher), exposure to various graph data modelling techniques); experience with Neo4j Aura and optimizing complex queries; experience with GCP stacks like BigQuery, GCS, Dataproc; experience in PySpark and SparkSQL is desirable; experience in exposing graph data to visualisation tools such as NeoDash, Tableau and Power BI. The Expertise You Have: Bachelor's or Master's degree in a technology related field (e.g. Engineering, Computer Science, etc.). Demonstrable experience in implementing data solutions in the graph DB space. Hands-on experience with graph databases (Neo4j (preferred), or any other). Experience tuning graph databases. Understanding of graph data model paradigms (LPG, RDF) and graph language; hands-on experience with Cypher is required. Solid understanding of graph data modelling, graph schema development, and graph data design. Relational databases experience; hands-on SQL experience is required. Desirable (optional) skills: data ingestion technologies (ETL/ELT), messaging/streaming technologies (GCP Data Fusion, Kinesis/Kafka), API and in-memory technologies. Understanding of developing highly scalable distributed systems using open-source technologies. Experience in supply chain data is desirable but not essential. Location: Pune, Mumbai, Chennai, Bangalore, Hyderabad
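As a rough illustration of the graph-modelling and Cypher work described above, the sketch below loads a tiny supplier-part graph and queries it with the official Neo4j Python driver; the connection URI, credentials and data model are hypothetical, not taken from the project.

```python
# Hedged sketch: MERGE a Supplier-SUPPLIES->Part relationship and traverse it.
from neo4j import GraphDatabase

driver = GraphDatabase.driver(
    "neo4j+s://example.databases.neo4j.io",   # hypothetical Aura instance
    auth=("neo4j", "password"),
)

def load_supplier(tx, supplier, part):
    tx.run(
        "MERGE (s:Supplier {name: $supplier}) "
        "MERGE (p:Part {sku: $part}) "
        "MERGE (s)-[:SUPPLIES]->(p)",
        supplier=supplier, part=part,
    )

def parts_for(tx, supplier):
    result = tx.run(
        "MATCH (s:Supplier {name: $supplier})-[:SUPPLIES]->(p:Part) "
        "RETURN p.sku AS sku",
        supplier=supplier,
    )
    return [record["sku"] for record in result]

with driver.session() as session:
    session.execute_write(load_supplier, "Acme Metals", "SKU-1001")
    print(session.execute_read(parts_for, "Acme Metals"))

driver.close()
```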
Posted 1 week ago
5.0 - 10.0 years
7 - 12 Lacs
Pune
Work from Office
Project Role : Cloud Services Engineer Project Role Description : Act as liaison between the client and Accenture operations teams for support and escalations. Communicate service delivery health to all stakeholders and explain any performance issues or risks. Ensure Cloud orchestration and automation capability is operating based on target SLAs with minimal downtime. Hold performance meetings to share performance and consumption data and trends. Must have skills : Managed File Transfer Good to have skills : NA Minimum 5 year(s) of experience is required Educational Qualification : 15 years full time education Summary: As a Cloud Services Engineer, you will act as a liaison between the client and Accenture operations teams for support and escalations. You will communicate service delivery health to all stakeholders and explain any performance issues or risks. Ensure Cloud orchestration and automation capability is operating based on target SLAs with minimal downtime. Hold performance meetings to share performance and consumption data and trends. Roles & Responsibilities: Expected to be an SME. Collaborate with and manage the team to perform. Responsible for team decisions. Engage with multiple teams and contribute to key decisions. Provide solutions to problems for their immediate team and across multiple teams. Ensure effective communication between client and operations teams. Analyze service delivery health and address performance issues. Conduct performance meetings to share data and trends. Professional & Technical Skills: Must Have Skills: Proficiency in Managed File Transfer. Strong understanding of cloud orchestration and automation. Experience in SLA management and performance analysis. Knowledge of IT service delivery and escalation processes. Additional Information: The candidate should have a minimum of 5 years of experience in Managed File Transfer. This position is based at our Pune office. A 15 years full time education is required. Qualifications 15 years full time education
Posted 1 week ago
5.0 - 10.0 years
12 - 17 Lacs
Hyderabad, Pune
Hybrid
Role 1 - GCP Data Engineer Must have skills / Mandatory Skills – GCP, BigQuery, Dataflow, Cloud Composer Role 2 - Big Data Engineer Must have skills / Mandatory Skills – Big Data, PySpark, Scala, Python Role 3 - GCP DevOps Engineer Must have skills / Mandatory Skills – GCP DevOps Experience Range – 5+ Years Location – Only Pune & Hyderabad; if you are applying from outside Pune or Hyderabad, you will have to relocate to Pune or Hyderabad. Work Mode – A minimum of 2 days of work from home is mandatory. Salary – 12-16 LPA Points to remember: Please fill in the Candidate Summary Sheet. Not considering more than 30 days' notice period. Highlights of this role: it is a long term role, with a high possibility of conversion within 6 months or after 6 months (if you perform well). Interview – total 2 rounds of interview (both virtual), but one face-to-face meeting is mandatory at any location - Pune/Hyderabad/Bangalore/Chennai. UAN verification will be done in the background check; any overlap in past employment will eliminate you. Continuous PF deduction for the last 4 years is mandatory. One face-to-face meeting is mandatory, otherwise we cannot onboard you. Client Company – one of the leading technology consulting firms. Payroll Company – one of the leading IT services & staffing companies (this company has a presence in India, UK, Europe, Australia, New Zealand, US, Canada, Singapore, Indonesia, and the Middle East). Do not change the subject line and do not create a new email while sharing/applying for this position; please reply on this email thread only. Role 1 - GCP Data Engineer Must have skills / Mandatory Skills – GCP, BigQuery, Dataflow, Cloud Composer About the Role: We are seeking a highly skilled and passionate GCP Data Engineer to join our growing data team. In this role, you will be instrumental in designing, building, and maintaining scalable and robust data pipelines and solutions on Google Cloud Platform (GCP). You will work closely with data scientists, analysts, and other stakeholders to translate business requirements into efficient data architectures, enabling data-driven decision-making across the organization. Qualifications: Bachelor's degree in Computer Science, Engineering, Information Technology, or a related quantitative field. Minimum 5+ years of experience (e.g., 3-8 years) as a Data Engineer, with a strong focus on Google Cloud Platform (GCP). Mandatory hands-on experience with core GCP data services: BigQuery (advanced SQL, data modeling, query optimization), Dataflow (Apache Beam, Python/Java SDK), Cloud Composer / Apache Airflow for workflow orchestration, Cloud Storage (GCS), Cloud Pub/Sub for messaging/streaming. Strong programming skills in Python (preferred) or Java/Scala for data manipulation and pipeline development. Proficiency in SQL and experience with relational and NoSQL databases. Experience with data warehousing concepts, ETL/ELT processes, and data modeling techniques. Understanding of distributed systems and big data technologies (e.g., Spark, Hadoop concepts, Kafka). Familiarity with CI/CD practices and tools. Role 2 - Big Data Engineer Must have skills / Mandatory Skills – Big Data, PySpark, Scala, Python About the Role: We are looking for an experienced and passionate Big Data Engineer to join our dynamic team. In this role, you will be responsible for designing, building, and maintaining scalable, high-performance data processing systems and pipelines capable of handling vast volumes of structured and unstructured data.
You will play a crucial role in enabling our data scientists, analysts, and business teams to derive actionable insights from complex datasets. Qualifications: Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field. Minimum 5 years of proven experience as a Big Data Engineer or in a similar role. Extensive hands-on experience with Apache Spark (PySpark, Scala) for data processing. Strong expertise in the Hadoop ecosystem (HDFS, Hive, MapReduce). Proficiency in Python and/or Scala/Java. Solid SQL skills and experience with relational databases. Experience designing and building complex ETL/ELT pipelines. Familiarity with data warehousing concepts and data modeling techniques (star schema, snowflake, data vault). Understanding of distributed computing principles. Excellent problem-solving, analytical, and communication skills. Role 3 - GCP DevOps Engineer Must have skills / Mandatory Skills – GCP DevOps We are seeking a highly motivated and experienced GCP DevOps Engineer to join our innovative engineering team. You will be responsible for designing, implementing, and maintaining robust, scalable, and secure cloud infrastructure and automation pipelines on Google Cloud Platform (GCP). This role involves working closely with development, operations, and QA teams to streamline the software delivery lifecycle, enhance system reliability, and promote a culture of continuous improvement. Qualifications: Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field. 5 years of experience in a DevOps or SRE role, with significant hands-on experience on Google Cloud Platform (GCP). Strong expertise in core GCP services relevant to DevOps: Compute Engine, GKE, Cloud SQL, Cloud Storage, VPC, Cloud Load Balancing, IAM. Proficiency with Infrastructure as Code (IaC) tools, especially Terraform. Extensive experience in designing and implementing CI/CD pipelines using tools like Cloud Build, Jenkins, or GitLab CI. Hands-on experience with containerization (Docker) and container orchestration (Kubernetes/GKE). Strong scripting skills in Python and Bash/Shell. Experience with monitoring and logging tools (Cloud Monitoring, Prometheus, Grafana, ELK stack). Solid understanding of networking concepts (TCP/IP, DNS, load balancers, VPNs) in a cloud environment. Familiarity with database concepts and experience managing cloud databases (e.g., Cloud SQL, Firestore). *** Mandatory to share: Candidate Summary Sheet *** Interested parties can share their resume at shant@harel-consulting.com along with the below details: Applying for which role (please mention the role name) – Your Name – Contact No – Email ID – Do you have a valid passport – Total Experience – Role 1: Experience in GCP – Experience in BigQuery – Experience in Dataflow – Experience in Cloud Composer – Experience in Apache Airflow – Experience in Python OR Java OR Scala and how much – Role 2: Experience in Big Data – Experience in Hive – Experience in Python OR Java OR Scala and how much – Experience in PySpark – Role 3:
Experience in GCP DevOps – Experience in Python – Current CTC – Expected CTC – What is your notice period in your current company – Are you currently working or not – If not working, when did you leave your last company – Current location – Preferred location – It is a contract-to-hire role, are you ok with that – Highest qualification – Current employer (payroll company name) – Previous employer (payroll company name) – 2nd previous employer (payroll company name) – 3rd previous employer (payroll company name) – Are you holding any offer – Are you expecting any offer – Are you open to considering a contract-to-hire role (it is a C2H role) – PF deduction is happening in the current company – PF deduction happened in the 2nd last employer – PF deduction happened in the 3rd last employer – Latest photo – If you are working with a company whose employee strength is less than 2,000 employees, it is mandatory to share the UAN service history. BR Shantpriya Harel Consulting shant@harel-consulting.com 9582718094
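To make the Dataflow/Beam requirement in Role 1 above more tangible, here is a minimal Apache Beam (Python SDK) streaming pipeline of the kind typically run on Dataflow, reading from Pub/Sub and writing to BigQuery; the project, region, topic, bucket and table are hypothetical.

```python
# Hedged sketch: Pub/Sub -> parse JSON -> BigQuery, runnable on Dataflow.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(
    streaming=True,
    project="example-project",
    region="asia-south1",
    temp_location="gs://example-bucket/tmp",
    runner="DataflowRunner",   # switch to "DirectRunner" for local testing
)

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadEvents" >> beam.io.ReadFromPubSub(
            topic="projects/example-project/topics/events")       # hypothetical topic
        | "Parse" >> beam.Map(json.loads)
        | "WriteToBQ" >> beam.io.WriteToBigQuery(
            "example-project:analytics.events",                    # hypothetical table
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
        )
    )
```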
Posted 1 week ago
6.0 - 11.0 years
18 - 25 Lacs
Hyderabad
Work from Office
SUMMARY Data Modeling Professional Location Hyderabad/Pune Experience: The ideal candidate should possess at least 6 years of relevant experience in data modeling with proficiency in SQL, Python, PySpark, Hive, ETL, Unix, Control-M (or similar scheduling tools). Key Responsibilities: Develop and configure data pipelines across various platforms and technologies. Write complex SQL queries for data analysis on databases such as SQL Server, Oracle, and Hive. Create solutions to support AI/ML models and generative AI. Work independently on specialized assignments within project deliverables. Provide solutions and tools to enhance engineering efficiencies. Design processes, systems, and operational models for end-to-end execution of data pipelines. Preferred Skills: Experience with GCP, particularly Airflow, Dataproc, and BigQuery, is advantageous. Requirements: Strong problem-solving and analytical abilities. Excellent communication and presentation skills. Ability to deliver high-quality materials against tight deadlines. Effective under pressure with rapidly changing priorities. Note: The ability to communicate efficiently at a global level is paramount. --- Minimum 6 years of experience in data modeling with SQL, Python, PySpark, Hive, ETL, Unix, Control-M (or similar scheduling tools). Proficiency in writing complex SQL queries for data analysis. Experience with GCP, particularly Airflow, Dataproc, and BigQuery, is an advantage. Strong problem-solving and analytical abilities. Excellent communication and presentation skills. Ability to work effectively under pressure with rapidly changing priorities.
Posted 2 weeks ago
2.0 - 6.0 years
7 - 11 Lacs
Bengaluru
Work from Office
As an Application Developer, you will lead IBM into the future by translating system requirements into the design and development of customized systems in an agile environment. The success of IBM is in your hands as you transform vital business needs into code and drive innovation. Your work will power IBM and its clients globally, collaborating and integrating code into enterprise systems. You will have access to the latest education, tools and technology, and a limitless career path with the world's technology leader. Come to IBM and make a global impact! IBM's Cloud Services are focused on supporting clients on their cloud journey across any platform to achieve their business goals. It encompasses Cloud Advisory, Architecture, Cloud Native Development, Application Portfolio Migration, Modernization, and Rationalization as well as Cloud Operations. Cloud Services supports all public/private/hybrid Cloud deployments: IBM Bluemix/IBM Cloud/Red Hat/AWS/Azure/Google and client private environments. Cloud Services has the best Cloud developer, architect, complex SI, Sys Ops and delivery talent, delivered through our GEO CIC Factory model. As a member of our Cloud Practice you will be responsible for defining and implementing application cloud migration, modernisation and rationalisation solutions for clients across all sectors. You will support mobilisation and help to lead the quality of our programmes and services, liaise with clients and provide consulting services including: creating cloud migration strategies; defining delivery architecture, creating the migration plans, designing the orchestration plans and more; assisting in the creation and execution of migration runbooks; evaluating source (physical, virtual and cloud) and target workloads. Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Cloud data engineers with GCP PDE certification and working experience with GCP. Building end-to-end data pipelines in GCP using Pub/Sub, BigQuery, Dataflow, Cloud Workflows/Cloud Scheduler, Cloud Run, Dataproc, and Cloud Functions. Experience in logging and monitoring of GCP services. Experience in Terraform and infrastructure automation. Expertise in the Python coding language. Develops, supports and maintains data engineering solutions on the Google Cloud ecosystem. Preferred technical and professional experience Stay updated with the latest trends and advancements in cloud technologies, frameworks, and tools. Conduct code reviews and provide constructive feedback to maintain code quality and ensure adherence to best practices. Troubleshoot and debug issues, and deploy applications to the cloud platform.
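As a small, hedged illustration of the "end-to-end data pipelines in GCP" expertise listed above, the snippet below loads newline-delimited JSON from Cloud Storage into BigQuery with the official Python client; the project, bucket and table names are hypothetical.

```python
# Hedged sketch: batch-load files from GCS into a BigQuery table and report the row count.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
    autodetect=True,
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
)

load_job = client.load_table_from_uri(
    "gs://example-bucket/exports/customers-*.json",   # hypothetical source files
    "example-project.analytics.customers",            # hypothetical destination table
    job_config=job_config,
)
load_job.result()  # wait for the load job to finish

table = client.get_table("example-project.analytics.customers")
print(f"Loaded table now has {table.num_rows} rows")
```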
Posted 2 weeks ago
5.0 - 10.0 years
15 - 30 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
EPAM has a presence across 40+ countries globally with 55,000+ professionals & numerous delivery centers. Key locations are North America, Eastern Europe, Central Europe, Western Europe, APAC, the Middle East & development centers in India (Hyderabad, Pune & Bangalore). Location: Gurgaon/Pune/Hyderabad/Bengaluru/Chennai Work Mode: Hybrid (2-3 days office in a week) Job Description: 5-14 years of experience in Big Data & data-related technologies Expert level understanding of distributed computing principles Expert level knowledge and experience in Apache Spark Hands-on programming with Python Proficiency with Hadoop v2, MapReduce, HDFS, Sqoop Experience with building stream-processing systems, using technologies such as Apache Storm or Spark Streaming Good understanding of Big Data querying tools, such as Hive and Impala Experience with integration of data from multiple data sources such as RDBMS (SQL Server, Oracle), ERP, files Good understanding of SQL queries, joins, stored procedures, relational schemas Experience with NoSQL databases, such as HBase, Cassandra, MongoDB Knowledge of ETL techniques and frameworks Performance tuning of Spark jobs Experience with native cloud data services AWS/Azure/GCP Ability to lead a team efficiently Experience with designing and implementing Big Data solutions Practitioner of AGILE methodology WE OFFER Opportunity to work on technical challenges that may impact across geographies Vast opportunities for self-development: online university, knowledge sharing opportunities globally, learning opportunities through external certifications Opportunity to share your ideas on international platforms Sponsored Tech Talks & Hackathons Possibility to relocate to any EPAM office for short and long-term projects Focused individual development Benefit package: health benefits, medical benefits, retirement benefits, paid time off, flexible benefits Forums to explore beyond-work passions (CSR, photography, painting, sports, etc.)
Posted 2 weeks ago
3.0 - 8.0 years
6 - 10 Lacs
Bengaluru
Work from Office
Project Role : Cloud Services Engineer Project Role Description : Act as liaison between the client and Accenture operations teams for support and escalations. Communicate service delivery health to all stakeholders and explain any performance issues or risks. Ensure Cloud orchestration and automation capability is operating based on target SLAs with minimal downtime. Hold performance meetings to share performance and consumption data and trends. Must have skills : SUSE Linux Administration Good to have skills : NA Minimum 3 year(s) of experience is required Educational Qualification : 15 years full time education Summary: As a Cloud Services Engineer, you will act as a liaison between the client and Accenture operations teams for support and escalations. You will communicate service delivery health to all stakeholders, explain any performance issues or risks, and ensure Cloud orchestration and automation capability is operating based on target SLAs with minimal downtime. Hold performance meetings to share performance and consumption data and trends. Roles & Responsibilities: - Expected to perform independently and become an SME. - Required active participation/contribution in team discussions. - Contribute to providing solutions to work related problems. - Proactively identify and address potential issues in Cloud services. - Collaborate with cross-functional teams to optimize Cloud orchestration processes. - Develop and implement strategies to enhance Cloud automation capabilities. - Analyze performance data to identify trends and areas for improvement. - Provide technical guidance and support to junior team members. Professional & Technical Skills: - Must Have Skills: Proficiency in SUSE Linux Administration. - Strong understanding of Cloud orchestration and automation. - Experience in managing and troubleshooting Cloud services. - Knowledge of scripting languages for automation tasks. - Hands-on experience with monitoring and alerting tools. - Good To Have Skills: Experience with DevOps practices. Additional Information: - The candidate should have a minimum of 3 years of experience in SUSE Linux Administration. - This position is based at our Bengaluru office. - A 15 years full time education is required. Qualification 15 years full time education
Posted 2 weeks ago
6.0 - 8.0 years
40 - 45 Lacs
Pune
Work from Office
Job Title - Data Platform Engineer - Tech Lead Location - Pune, India Role Description DB Technology is a global team of tech specialists, spread across multiple trading hubs and tech centers. We have a strong focus on promoting technical excellence; our engineers work at the forefront of financial services innovation using cutting-edge technologies. The DB Pune location plays a prominent role in our global network of tech centers; it is well recognized for its engineering culture and strong drive to innovate. We are committed to building a diverse workforce and to creating excellent opportunities for talented engineers and technologists. Our tech teams and business units use agile ways of working to create the best solutions for the financial markets. CB Data Services and Data Platform We are seeking an experienced Software Engineer with strong leadership skills to join our dynamic tech team. In this role, you will lead a group of engineers working on cutting-edge technologies in Hadoop, Big Data, GCP, Terraform, BigQuery, Dataproc and data management. You will be responsible for overseeing the development of robust data pipelines, ensuring data quality, and implementing efficient data management solutions. Your leadership will be critical in driving innovation, ensuring high standards in data infrastructure, and mentoring team members. Your responsibilities will include working closely with data engineers, analysts, cross-functional teams, and other stakeholders to ensure that our data platform meets the needs of our organization and supports our data-driven initiatives. Join us in building and scaling our tech solutions, including a hybrid data platform, to unlock new insights and drive business growth. If you are passionate about data engineering, we want to hear from you! Deutsche Bank's Corporate Bank division is a leading provider of cash management, trade finance and securities finance. We complete green-field projects that deliver the best Corporate Bank - Securities Services products in the world. Our team is diverse, international, and driven by shared focus on clean code and valued delivery. At every level, agile minds are rewarded with competitive pay, support, and opportunities to excel. You will work as part of a cross-functional agile delivery team. You will bring an innovative approach to software development, focusing on using the latest technologies and practices, as part of a relentless focus on business value. You will be someone who sees engineering as a team activity, with a predisposition to open code, open discussion and creating a supportive, collaborative environment. You will be ready to contribute to all stages of software delivery, from initial analysis right through to production support. What we'll offer you As part of our flexible scheme, here are just some of the benefits that you'll enjoy: best in class leave policy, gender neutral parental leaves, 100% reimbursement under childcare assistance benefit (gender neutral), sponsorship for industry relevant certifications and education, Employee Assistance Program for you and your family members, comprehensive hospitalization insurance for you and your dependents, accident and term life insurance, and complimentary health screening for 35 yrs. and above. Your key responsibilities Technical Leadership: Lead a cross-functional team of engineers in the design, development, and implementation of on-prem and cloud-based data solutions.
Provide hands-on technical guidance and mentorship to team members, fostering a culture of continuous learning and improvement. Collaborate with product management and stakeholders to define technical requirements and establish delivery priorities. Architectural and Design Capabilities: Architect and implement scalable, efficient, and reliable data management solutions to support complex data workflows and analytics. Evaluate and recommend tools, technologies, and best practices to enhance the data platform. Drive the adoption of microservices, containerization, and serverless architectures within the team. Quality Assurance: Establish and enforce best practices in coding, testing, and deployment to maintain high-quality code standards. Oversee code reviews and provide constructive feedback to promote code quality and team growth. Your skills and experience Technical Skills: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. 7+ years of experience in software engineering, with a focus on Big Data and GCP technologies such as Hadoop, PySpark, Terraform, BigQuery, DataProc and data management. Proven experience in leading software engineering teams, with a focus on mentorship, guidance, and team growth. Strong expertise in designing and implementing data pipelines, including ETL processes and real-time data processing. Hands-on experience with Hadoop ecosystem tools such as HDFS, MapReduce, Hive, Pig, and Spark. Hands-on experience with cloud platforms, particularly Google Cloud Platform (GCP), and its data management services (e.g., Terraform, BigQuery, Cloud Dataflow, Cloud Dataproc, Cloud Storage). Solid understanding of data quality management and best practices for ensuring data integrity. Familiarity with containerization and orchestration tools such as Docker and Kubernetes is a plus. Excellent problem-solving skills and the ability to troubleshoot complex systems. Strong communication skills and the ability to collaborate with both technical and non-technical stakeholders. Leadership Abilities: Proven experience in leading technical teams, with a track record of delivering complex projects on time and within scope. Ability to inspire and motivate team members, promoting a collaborative and innovative work environment. Strong problem-solving skills and the ability to make data-driven decisions under pressure. Excellent communication and collaboration skills. Proactive mindset, attention to detail, and a constant desire to improve and innovate. How we'll support you Training and development to help you excel in your career Coaching and support from experts in your team A culture of continuous learning to aid progression A range of flexible benefits that you can tailor to suit your needs About us and our teams Please visit our company website for further information: https://www.db.com/company/company.htm We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
Posted 2 weeks ago
5.0 - 7.0 years
13 - 17 Lacs
Bengaluru
Work from Office
Skilled in multiple GCP services - GCS, BigQuery, Cloud SQL, Dataflow, Pub/Sub, Cloud Run, Workflows, Composer, Error Reporting, Logs Explorer etc. Must have Python and SQL work experience; proactive, collaborative and able to respond to critical situations. Ability to analyse data for functional business requirements and interface directly with the customer. Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise 5 to 7 years of relevant experience working as a technical analyst with BigQuery on the GCP platform. Skilled in multiple GCP services - GCS, Cloud SQL, Dataflow, Pub/Sub, Cloud Run, Workflows, Composer, Error Reporting, Logs Explorer. You love collaborative environments that use agile methodologies to encourage creative design thinking and find innovative ways to develop with cutting edge technologies. Ambitious individual who can work under their own direction towards agreed targets/goals and with a creative approach to work. Preferred technical and professional experience Intuitive individual with an ability to manage change and proven time management. Proven interpersonal skills while contributing to team effort by accomplishing related results as needed. Up-to-date technical knowledge by attending educational workshops and reviewing publications.
Posted 2 weeks ago
12.0 - 15.0 years
40 - 45 Lacs
Chennai
Work from Office
Skill & Experience Strategic Planning and Direction Maintain architecture principles, guidelines and standards Project & Program Management Data Warehousing Big Data Data Analytics & Data Science for solutioning Expert in BigQuery, Dataproc, Data Fusion, Dataflow, Bigtable, Firestore, Cloud SQL, Cloud Spanner, Google Cloud Storage, Cloud Composer, Cloud Interconnect, etc. Strong experience in Big Data - data modelling, design, architecting & solutioning Understands programming languages like SQL, Python, R, Scala Good Python skills Experience with data visualisation tools such as Google Data Studio or Power BI Knowledge in A/B Testing, Statistics, Google Cloud Platform, Google BigQuery, Agile Development, DevOps, Data Engineering, ETL Data Processing Strong migration experience of production Hadoop clusters to Google Cloud Experience in designing & implementing solutions in the mentioned areas: strong Google Cloud Platform data components - BigQuery, Bigtable, Cloud SQL, Dataproc, Dataflow, Data Fusion, etc.
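As a small illustration of the Bigtable component in the skill list above, the sketch below writes and reads a single wide-column row with the google-cloud-bigtable Python client; the instance, table, row key and column family are hypothetical.

```python
# Hedged sketch: write one cell to a Bigtable row, then read it back.
from google.cloud import bigtable

client = bigtable.Client(project="example-project")
instance = client.instance("example-instance")        # hypothetical instance
table = instance.table("user_events")                 # hypothetical table

row_key = b"user#123#2024-06-01T10:00:00"
row = table.direct_row(row_key)
row.set_cell("events", b"event_type", b"page_view")   # column family "events" assumed to exist
row.commit()

read_back = table.read_row(row_key)
cell = read_back.cells["events"][b"event_type"][0]
print(cell.value)
```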
Posted 2 weeks ago
2.0 - 7.0 years
4 - 9 Lacs
Coimbatore
Work from Office
Project Role : AI / ML Engineer Project Role Description : Develops applications and systems that utilize AI tools and Cloud AI services, with a proper cloud or on-prem application pipeline of production ready quality. Be able to apply GenAI models as part of the solution. Could also include, but is not limited to, deep learning, neural networks, chatbots, and image processing. Must have skills : Google Cloud Machine Learning Services Good to have skills : GCP Dataflow, Google Pub/Sub, Google Dataproc Minimum 2 year(s) of experience is required Educational Qualification : 15 years full time education Summary: We are seeking a skilled GCP Data Engineer to join our dynamic team. The ideal candidate will design, build, and maintain scalable data pipelines and solutions on Google Cloud Platform (GCP). This role requires expertise in cloud-based data engineering and hands-on experience with GCP tools and services, ensuring efficient data integration, transformation, and storage for various business use cases. Roles & Responsibilities: Design, develop, and deploy data pipelines using GCP services such as Dataflow, BigQuery, Pub/Sub, and Cloud Storage. Optimize and monitor data workflows for performance, scalability, and reliability. Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements and implement solutions. Implement data security and governance measures, ensuring compliance with industry standards. Automate data workflows and processes for operational efficiency. Troubleshoot and resolve technical issues related to data pipelines and platforms. Document technical designs, processes, and best practices to ensure maintainability and knowledge sharing. Professional & Technical Skills: a) Must Have: Proficiency in GCP tools such as BigQuery, Dataflow, Pub/Sub, Cloud Composer, and Cloud Storage. Expertise in SQL and experience with data modeling and query optimization. Solid programming skills in Python for data processing and ETL development. Experience with CI/CD pipelines and version control systems (e.g., Git). Knowledge of data warehousing concepts, ELT/ETL processes, and real-time streaming. Strong understanding of data security, encryption, and IAM policies on GCP. b) Good to Have: Experience with Dialogflow or CCAI tools. Knowledge of machine learning pipelines and integration with AI/ML services on GCP. Certifications such as Google Professional Data Engineer or Google Cloud Architect. Additional Information: - The candidate should have a minimum of 3 years of experience in Google Cloud Machine Learning Services and overall experience of 3-5 years. - The ideal candidate will possess a strong educational background in computer science, mathematics, or a related field, along with a proven track record of delivering impactful data-driven solutions. Qualifications 15 years full time education
Posted 2 weeks ago
7.0 - 12.0 years
20 - 25 Lacs
Chennai, Bengaluru
Work from Office
We are looking for a Senior GCP Data Engineer / GCP Technical Lead with strong expertise in Google Cloud Platform (GCP), Apache Spark, and Python to join our growing data engineering team. The ideal candidate will have extensive experience working with GCP data services and should be capable of leading technical teams, designing robust data pipelines, and interacting directly with clients to gather requirements and ensure project delivery. Project Duration : 1 year and extendable Role & responsibilities Design, develop, and deploy scalable data pipelines and solutions using GCP services like DataProc and BigQuery. Lead and mentor a team of data engineers to ensure high-quality deliverables. Collaborate with cross-functional teams and client stakeholders to define technical requirements and deliver solutions aligned with business goals. Optimize data processing and transformation workflows for performance and cost-efficiency. Ensure adherence to best practices in cloud data architecture, data security, and governance. Mandatory Skills: Google Cloud Platform (GCP) especially DataProc and BigQuery Apache Spark Python Programming Preferred Skills: Experience in working with large-scale data processing frameworks. Exposure to DevOps/CI-CD practices in a cloud environment. Hands-on experience with other GCP tools like Cloud Composer, Pub/Sub, or Cloud Storage is a plus. Soft Skills: Strong communication and client interaction skills. Ability to work independently and as part of a distributed team. Excellent problem-solving and team management capabilities.
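The Dataproc/BigQuery pipeline work described above could look roughly like the sketch below, which submits a PySpark job to an existing Dataproc cluster through the official Python client; the project, region, cluster name and script location are hypothetical.

```python
# Hedged sketch: submit a PySpark job to a running Dataproc cluster and wait for it.
from google.cloud import dataproc_v1 as dataproc

PROJECT = "example-project"   # hypothetical project id
REGION = "asia-south1"        # hypothetical region
CLUSTER = "etl-cluster"       # hypothetical existing cluster

job_client = dataproc.JobControllerClient(
    client_options={"api_endpoint": f"{REGION}-dataproc.googleapis.com:443"}
)

job = {
    "placement": {"cluster_name": CLUSTER},
    "pyspark_job": {
        "main_python_file_uri": "gs://example-bucket/jobs/transform_orders.py",  # hypothetical script
    },
}

operation = job_client.submit_job_as_operation(
    request={"project_id": PROJECT, "region": REGION, "job": job}
)
result = operation.result()  # blocks until the Spark job completes
print("Job finished, driver output at:", result.driver_output_resource_uri)
```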
Posted 2 weeks ago
5.0 - 8.0 years
15 - 22 Lacs
Chennai, Bengaluru
Work from Office
• Minimum 4+ years of experience implementing data migration programs from Hadoop with Java and Spark to GCP BigQuery and Dataproc • Minimum 4+ years of experience integrating plugins for GitHub Actions into a CI/CD platform to ensure software quality
Posted 2 weeks ago
6.0 - 8.0 years
18 - 20 Lacs
Chennai
Work from Office
Google Cloud Platform: GCS, DataProc, BigQuery, Dataflow. Programming Languages: Java; scripting languages like Python, Shell Script, SQL. 5+ years of experience in IT application delivery with proven experience in agile development methodologies. 1 to 2 years of experience in Google Cloud Platform (GCS, DataProc, BigQuery, Composer, data processing like Dataflow).
Posted 2 weeks ago
5.0 - 7.0 years
15 - 20 Lacs
Chennai
Work from Office
Google Cloud Platform: GCS, DataProc, BigQuery, Dataflow. Programming Languages: Java; scripting languages like Python, Shell Script, SQL. 5+ years of experience in IT application delivery with proven experience in agile development methodologies. 1 to 2 years of experience in Google Cloud Platform (GCS, DataProc, BigQuery, Composer, data processing like Dataflow). Mandatory Key Skills: agile development, data processing, Python, Shell Script, SQL, Google Cloud Platform*, GCS*, DataProc*, BigQuery*, Dataflow*
Posted 2 weeks ago
5.0 - 7.0 years
25 - 30 Lacs
Bengaluru
Work from Office
Job Title: Senior Data Engineer / Technical Lead Location: Bangalore Employment Type: Full-time Role Summary We are seeking a highly skilled and motivated Senior Data Engineer/Technical Lead to take ownership of the end-to-end delivery of a key project involving data lake transitions, data warehouse maintenance, and enhancement initiatives. The ideal candidate will bring strong technical leadership, excellent communication skills, and hands-on expertise with modern data engineering tools and platforms. Experience in Databricks and JIRA is highly desirable. Knowledge of supply chain and finance domains is a plus, or a willingness to quickly ramp up in these areas is expected. Key Responsibilities Delivery Management Lead and manage data lake transition initiatives under the Gold framework. Oversee delivery of enhancements and defect fixes related to the enterprise data warehouse. Technical Leadership Design and develop efficient, scalable data pipelines using Python, PySpark, and SQL. Ensure adherence to coding standards, performance benchmarks, and data quality goals. Conduct performance tuning and infrastructure optimization for data solutions. Provide code reviews, mentorship, and technical guidance to the engineering team. Collaboration & Stakeholder Engagement Collaborate with business stakeholders (particularly the Laboratory Products team) to gather, interpret, and refine requirements. Communicate technical solutions and project progress clearly to both technical and non-technical audiences. Tooling and Technology Use Leverage tools such as Databricks, Informatica, AWS Glue, Google DataProc, and Airflow for ETL and data integration. Use JIRA to manage project workflows, track defects, and report progress. Documentation and Best Practices Create and review documentation including architecture, design, testing, and deployment artifacts. Define and promote reusable templates, checklists, and best practices for data engineering tasks. Domain Adaptation Apply or gain knowledge in supply chain and finance domains to enhance project outcomes and align with business needs. Skills and Qualifications Technical Proficiency Strong hands-on experience in Python, PySpark, and SQL. Expertise with ETL tools such as Informatica, AWS Glue, Databricks, and Google Cloud DataProc. Deep understanding of data warehousing solutions (e.g., Snowflake, BigQuery, Delta Lake, Lakehouse architectures). Familiarity with performance tuning, cost optimization, and data modeling best practices. Platform & Tools Proficient in working with cloud platforms like AWS, Azure, or Google Cloud. Experience in version control and configuration management practices. Working knowledge of JIRA and Agile methodologies. Certifications (Preferred but not required) Certifications in cloud technologies, ETL platforms, or a relevant domain (e.g., AWS Data Engineer, Databricks Data Engineer, Supply Chain certification). Expected Outcomes Timely and high-quality delivery of data engineering solutions. Reduction in production defects and improved pipeline performance. Increased team efficiency through reuse of components and automation. Positive stakeholder feedback and high team engagement. Consistent adherence to SLAs, security policies, and compliance guidelines.
Performance Metrics Adherence to project timelines and engineering standards Reduction in post-release defects and production issues Improvement in data pipeline efficiency and resource utilization Resolution time for pipeline failures and data issues Completion of required certifications and training Preferred Background Background or exposure to supply chain or finance domains Willingness to work during morning US East hours Ability to work independently and drive initiatives with minimal oversight Required Skills: Databricks, Data Warehousing, ETL, SQL
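A minimal sketch of the Airflow orchestration referenced above is shown below: a two-task daily DAG whose extract and load steps are stand-ins for real pipeline logic; the DAG id, schedule and callables are hypothetical.

```python
# Hedged sketch: a daily extract-then-load DAG using Airflow's PythonOperator.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**context):
    print("pull source files for", context["ds"])    # real extraction logic would go here

def load(**context):
    print("load curated tables for", context["ds"])  # real load logic would go here

with DAG(
    dag_id="daily_warehouse_load",     # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule_interval="0 2 * * *",     # 02:00 daily
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task
```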
Posted 2 weeks ago
1.0 - 5.0 years
6 - 10 Lacs
Bengaluru
Work from Office
Job Title: Data Engineer Experience: 5-8 Years Location: Delhi, Pune, Bangalore (Hyderabad & Chennai also acceptable) Time Zone: Aligned with UK Time Zone Notice Period: Immediate Joiners Only Role Overview: We are seeking experienced Data Engineers to design, develop, and optimize large-scale data processing systems. You will play a key role in building scalable, efficient, and reliable data pipelines in a cloud-native environment, leveraging your expertise in GCP, BigQuery, Dataflow, Dataproc, and more. Key Responsibilities: Design, build, and manage scalable and reliable data pipelines for real-time and batch processing. Implement robust data processing solutions using GCP services and open-source technologies. Create efficient data models and write high-performance analytics queries. Optimize pipelines for performance, scalability, and cost-efficiency. Collaborate with data scientists, analysts, and engineering teams to ensure smooth data integration and transformation. Maintain high data quality, enforce validation rules, and set up monitoring and alerting. Participate in code reviews, deployment activities, and provide production support. Technical Skills Required: Cloud Platforms: GCP (Google Cloud Platform) - mandatory Key GCP Services: Dataproc, BigQuery, Dataflow Programming Languages: Python, Java, PySpark Data Engineering Concepts: Data Ingestion, Change Data Capture (CDC), ETL/ELT pipeline design Strong understanding of distributed computing, data structures, and performance tuning Required Qualifications & Attributes: 5-8 years of hands-on experience in data engineering roles Proficiency in building and optimizing distributed data pipelines Solid grasp of data governance and security best practices in cloud environments Strong analytical and problem-solving skills Effective verbal and written communication skills Proven ability to work independently and in cross-functional teams
Posted 2 weeks ago
10.0 - 15.0 years
30 - 40 Lacs
Bhopal, Pune, Gurugram
Hybrid
Job Title: Senior Data Engineer GCP | Big Data | Airflow | dbt Company: Xebia Location: All Xebia locations Experience: 10+ Years Employment Type: Full Time Notice Period: Immediate to Max 30 Days Only Job Summary Join the digital transformation journey of one of the world’s most iconic global retail brands! As a Senior Data Engineer , you’ll be part of a dynamic Digital Technology organization, helping build modern, scalable, and reliable data products to power business decisions across the Americas. You'll work in the Operations Data Domain, focused on ingesting, processing, and optimizing high-volume data pipelines using Google Cloud Platform (GCP) and other modern tools. Key Responsibilities Design, develop, and maintain highly scalable big data pipelines (batch & streaming) Collaborate with cross-functional teams to understand data needs and deliver efficient solutions Architect robust data solutions using GCP-native services (BigQuery, Pub/Sub, Cloud Functions, etc.) Build and manage modern Data Lake/Lakehouse platforms Create frameworks and reusable components for scalable ingestion and processing Implement data governance, security, and ensure regulatory compliance Mentor junior engineers and lead an offshore team of 8+ engineers Monitor pipeline performance, troubleshoot bottlenecks, and ensure data quality Engage in code reviews, CI/CD deployments, and agile product releases Contribute to internal best practices and engineering standards Must-Have Skills & Qualifications 8+ years in data engineering with strong hands-on experience in production-grade pipelines Expertise in GCP Data Services – BigQuery, Vertex AI, Pub/Sub, etc. Proficiency in dbt (Data Build Tool) for data transformation Strong programming skills in Python, Java, or Scala Advanced SQL & NoSQL knowledge Experience with Apache Airflow for orchestration Hands-on with Git, GitHub Actions , Jenkins for CI/CD Solid understanding of data warehousing (BigQuery, Snowflake, Redshift) Exposure to tools like Hadoop, Spark, Kafka , Databricks (nice to have) Familiarity with BI tools like Tableau, Power BI, or Looker (optional) Strong leadership qualities to manage offshore engineering teams Excellent communication skills and stakeholder management experience Preferred Education Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or a related field Notice Period Requirement Only Immediate Joiners or candidates with Max 30 Days Notice Period will be considered. How to Apply If you are passionate about solving real-world data problems and want to be part of a global data-driven transformation, apply now by sending your resume to vijay.s@xebia.com with the subject line: "Sr Data Engineer Application – [Your Name]" Kindly include the following details in your email: Full Name Total Experience Current CTC Expected CTC Current Location Preferred Location Notice Period / Last Working Day Key Skills Please do not apply if you are currently in process with any other role at Xebia or have recently interviewed.
Posted 3 weeks ago
5.0 - 10.0 years
15 - 27 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Job Title: Data Engineer GCP
Company: Xebia
Location: Hybrid - Any Xebia Location
Experience: 5+ Years
Salary: As per industry standards
Job Type: Full Time
About the Role:
Xebia is hiring a seasoned Data Engineer (L4) to join a high-impact team building scalable data platforms using GCP, Databricks, and Airflow. If you thrive on architecting future-ready solutions and have strong experience in big data transformations, we'd love to hear from you.
Project Overview:
We currently manage 1000+ data pipelines using Databricks clusters for end-to-end data transformation (Raw -> Silver -> Gold), with orchestration handled via Airflow, all on Google Cloud Platform (GCP). Curated datasets are delivered through BigQuery and Databricks Notebooks. Our roadmap includes migrating to a GCP-native data processing framework optimized for Spark workloads.
Key Responsibilities:
Design and implement a GCP-native data processing framework
Analyze and plan migration of existing workloads to cloud-native architecture
Ensure data availability, integrity, and consistency
Build reusable tools and standards for the Data Engineering team
Collaborate with stakeholders and document processes thoroughly
Required Experience:
5+ years in Data Engineering with strong data architecture experience
Hands-on expertise in Databricks, Airflow, BigQuery, and PySpark
Deep knowledge of GCP services for data processing (Dataflow, Dataproc, etc.)
Familiarity with data lake table formats like Delta, Iceberg
Experience with orchestration tools (Airflow, Dagster, or similar)
Key Skills:
Python programming
Strong understanding of data lake architectures and cloud-native best practices
Excellent problem-solving and communication skills
Notice Period Requirement:
Only Immediate Joiners or Candidates with Max 30 Days Notice Period Will Be Considered
How to Apply:
Interested candidates can share their details and updated resume with vijay.s@xebia.com in the following format:
Full Name:
Total Experience (Must be 5+ years):
Current CTC:
Expected CTC:
Current Location:
Preferred Location:
Notice Period / Last Working Day (if serving notice):
Primary Skill Set:
Note: Please apply only if you have not recently applied or interviewed for any open roles at Xebia.
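The migration goal described above, moving Spark workloads off Databricks onto GCP-native services, is often prototyped by submitting the existing PySpark transformations to Dataproc. The snippet below is a minimal, hypothetical sketch of that pattern using the google-cloud-dataproc client; it is not the project's actual framework, and the project, region, cluster, and script locations are placeholders.

# Illustrative sketch only: submitting an existing PySpark "Silver -> Gold" script to Dataproc.
# Project, region, cluster, and GCS paths are hypothetical.
from google.cloud import dataproc_v1

def submit_silver_to_gold_job(project_id: str, region: str, cluster_name: str) -> None:
    # Regional endpoint is required when targeting a cluster outside the global region.
    client = dataproc_v1.JobControllerClient(
        client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
    )

    job = {
        "placement": {"cluster_name": cluster_name},
        "pyspark_job": {
            # Assumed location of the PySpark script that builds the Gold layer.
            "main_python_file_uri": "gs://example-code-bucket/jobs/silver_to_gold.py",
            "args": ["--run-date", "2024-01-01"],
        },
    }

    operation = client.submit_job_as_operation(
        request={"project_id": project_id, "region": region, "job": job}
    )
    result = operation.result()  # blocks until the Spark job finishes
    print(f"Job finished with state: {result.status.state.name}")

if __name__ == "__main__":
    submit_silver_to_gold_job("example-project", "us-central1", "example-cluster")

In a real migration this submission step would itself be wrapped in an Airflow operator (for example, a Dataproc submit-job operator) so the existing orchestration layer is preserved while the compute moves.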
Posted 3 weeks ago
1.0 - 3.0 years
10 - 15 Lacs
Kolkata, Gurugram, Bengaluru
Hybrid
Salary: 10 to 16 LPA
Exp: 1 to 3 years
Location: Gurgaon / Bangalore / Kolkata (Hybrid)
Notice: Immediate to 30 days
Key Skills: GCP, Cloud, Pub/Sub, Data Engineer
Posted 3 weeks ago
3.0 - 8.0 years
15 - 30 Lacs
Gurugram, Bengaluru
Hybrid
Salary: 15 to 30 LPA
Exp: 3 to 8 years
Location: Gurgaon / Bangalore (Hybrid)
Notice: Immediate to 30 days
Key Skills: GCP, Cloud, Pub/Sub, Data Engineer
Posted 3 weeks ago
3.0 - 6.0 years
5 - 9 Lacs
Pune
Work from Office
Data engineers are responsible for building reliable and scalable data infrastructure that enables organizations to derive meaningful insights, make data-driven decisions, and unlock the value of their data assets.
Grade Specific: The role supports the team in building and maintaining data infrastructure and systems within an organization.
Skills (competencies): Ab Initio, Agile (Software Development Framework), Apache Hadoop, AWS Airflow, AWS Athena, AWS CodePipeline, AWS EFS, AWS EMR, AWS Redshift, AWS S3, Azure ADLS Gen2, Azure Data Factory, Azure Data Lake Storage, Azure Databricks, Azure Event Hub, Azure Stream Analytics, Azure Synapse, Bitbucket, Change Management, Client Centricity, Collaboration, Continuous Integration and Continuous Delivery (CI/CD), Data Architecture Patterns, Data Format Analysis, Data Governance, Data Modeling, Data Validation, Data Vault Modeling, Database Schema Design, Decision-Making, DevOps, Dimensional Modeling, GCP Bigtable, GCP BigQuery, GCP Cloud Storage, GCP Dataflow, GCP Dataproc, Git, Google Bigtable, Google Dataproc, Greenplum, HQL, IBM DataStage, IBM DB2, Industry Standard Data Modeling (FSLDM), Industry Standard Data Modeling (IBM FSDM), Influencing, Informatica IICS, Inmon methodology, JavaScript, Jenkins, Kimball, Linux - Red Hat, Negotiation, Netezza, NewSQL, Oracle Exadata, Performance Tuning, Perl, Platform Update Management, Project Management, PySpark, Python, R, RDD Optimization, CentOS, SAS, Scala, Spark Shell Script, Snowflake, Spark, Spark Code Optimization, SQL, Stakeholder Management, Sun Solaris, Synapse, Talend, Teradata, Time Management, Ubuntu, Vendor Management
Posted 3 weeks ago